Different uses of ChatGPT

in All Saints
I am wondering about what sort of things people use ChatGPT for, and what you have found helpful and what you have found not so helpful. I wasn't interested in using it at first, but lately I have seen some very varied and interesting ways people use it - for help with emails, organisation, tidying their home, processing feelings, idea generation, communication profile, and all sorts - so I've started using it too.
To begin with, I just wanted help with tidying, but I like to experiment, so I started asking ChatGPT all sorts, out of curiosity to see how it would respond. And I have found it quite versatile in some ways - for instance, it's helped me understand my emotional responses to opera arias, compared different singers, and explored how the music and the way of singing create different emotions.
Something that is not so helpful is when it offers to check in with me after ten minutes (if I'm using it to help me with tidying, for instance) and then doesn't - and when I ask why, it says it can't spontaneously do that: it has no timer system, and it doesn't even know what time it is! So far, it has offered several times to check in with me, and I have to keep telling it not to offer what it can't do!
So I'm curious about other people's experiences - whether there are some good uses for ChatGPT that I haven't thought of, and pitfalls to avoid.
Comments
Well, I am using it for help with the executive dysfunction aspect of my neurodivergence, as a tool for organisation and motivation. My diagnosis is autism but I would probably meet the diagnostic criteria for ADHD too if I were to pursue a diagnosis.
Today I have used it to break through inertia, but I had to tell it really explicitly to challenge me more, to stop using empty affirmations and gushing, soft praise and encouragement all the time, and to be firm and challenging and name my behaviour. And it did - it told me I was looping and needed to make a decision!
You could ask it how people use it for ADHD and it will give examples.
That is exactly why I made the thread. I keep experimenting (from curiosity rather than trust!) and discovering by chance that it can do all sorts of things that never would have occurred to me. So I figured others might have also found uses that wouldn't have occurred to me and we could pool them.
...but here I am, on the Magick Interweb, anyway...
I did this (AI did this!) to make a sign for the door to welcome guests for Friday's party - and to warn them to please ignore the dogs or they'd be over-enthusiastic in their greetings.
https://photos.app.goo.gl/ifR5PSy4bmgsVWUK6
Ditto. And I have concerns about the morality of using a large language model trained on texts and images without the creators' permission.
I've never used it - yet.
I have done that too. Methinks I should use it to smooth out my responses on Ship of Fools.
How do you know that the explanation is accurate?
Both of these.
If you go to ChatGPT or another large language model (note the "language"!) and ask it for FACTS (such as medical facts, geography, history, etc.), you are taking dangerous chances. Because ChatGPT is not capable of distinguishing between truth and falsehood, and it will cheerfully tell you all sorts of bullshit if your query happens to trigger the wrong processes. It is NOT a search engine--it's basically spicy auto-correct, as my son puts it.

If there are lies in its training material (and there most certainly are), it will happily reproduce them if your words trigger that particular material to appear, and it will have No.Freaking.Idea. that it has just told you some ridiculous thing (such as telling you that the chickenpox vaccine causes cancer). It's a language model, not a scientific research tool.

To put it another way, it's a parrot. It will repeat what it's heard elsewhere, and never mind if the output is complete baloney. It can't tell--how could it, a program is only as good as its programming, and "garbage in, garbage out" still applies. This isn't a sentient human being.
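(For anyone who'd like to see the "parrot" point made concrete, here is a tiny toy text generator in Python. It is nothing like the real machinery under ChatGPT's hood, and the "training text" below is invented purely for illustration, but it shows how a system that can only recombine what it has seen will repeat a lie just as cheerfully as a truth.)

import random
from collections import defaultdict

# Toy "parrot": a bigram model that can only recombine text it has seen.
# A drastic simplification of a real language model, but it illustrates
# "garbage in, garbage out": a falsehood in the (made-up) training text
# comes back out with no notion that it is false.
training_text = (
    "the chickenpox vaccine is safe and effective . "
    "the chickenpox vaccine causes cancer ."  # an invented lie in the data
).split()

next_words = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    next_words[current].append(following)

def generate(start, max_words=8):
    word, output = start, [start]
    for _ in range(max_words):
        if word not in next_words:
            break  # no continuation ever seen; the parrot falls silent
        word = random.choice(next_words[word])  # pick any seen continuation
        output.append(word)
    return " ".join(output)

print(generate("the"))  # sometimes the true claim, sometimes the lie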
So if you want medical advice, go find a proper search engine. Consult properly checked material from a trustworthy human source, like the Mayo Clinic. As somebody said somewhere, don't trust yourself to a thing when you can't see where it keeps its brain. That's dangerous.
So what could you use ChatGPT for safely? Well, I suppose anything where you aren't interested in facts but would like it to smooth out your language (so, maybe in writing or speaking). That would be in its bailiwick. Or you could use it for non-fact-based pursuits, like chatting to it, or directing it to create / pull up images for you (though there you hit the moral problem of all the authors/artists/etc. who have had their work stolen from them without consent or compensation).
But unfortunately, human beings are pretty bad at telling the difference between a trustworthy source and one that sounds confident and trustworthy, but isn't really. The number of folks who put their trust in these things! I think it's because to them, computers = technology, and technology is just another word for "good, beneficial, knows more than me."
I wonder if anyone has used it for help with learning languages? It gave me a very thorough explanation in answer to a question I had about Italian. I was also having a conversation with an Italian friend (who uses it a lot for English) about the question, and she wasn't sure, so I showed her what ChatGPT said and we carried on the discussion from there. So it's of course possible to use it discerningly as part of a wider conversation. I wouldn't imagine intelligent people are just accepting what it says unquestioningly, any more than you'd automatically accept all the things a friend or colleague or Google tells you.
The problem with using it for any kind of generative task is that the language it produces is often strangely flat and bland (as others have said upthread). That said, it's not the only way of using it as a learning tool, and in some circumstances it can help with testing your own knowledge of a subject as you learn it:
https://www.oneusefulthing.org/p/assigning-ai-seven-ways-of-using
https://www.ox.ac.uk/students/academic/guidance/skills/ai-study
When it comes to content creation there's evidence emerging that it can actually hurt the learning process:
https://www.media.mit.edu/publications/your-brain-on-chatgpt/
And at the far end of things there are some reports of people having psychotic episodes after getting sucked in to using ChatGPT as a companion of sorts:
https://futurism.com/commitment-jail-chatgpt-psychosis
Agreed!
Wait, isn’t using AI in one’s posts disallowed on the Ship?
Using AI to entirely compose a post, or as a source in serious discussion, is not permitted. Using it as a spellcheck or stylistic aid we can probably live with - though if people would like a discussion thread on this issue in Styx, please do write an op. The primary concerns are that we don't want posters to plagiarise other people's work, and we don't want people to cite sources that are inaccurate, can't be verified, or can't be replicated/found when someone else looks for them.
[/tangent]
Or, even better, not show you those you'd already seen, so you really did only see the NEW jobs and not ones you'd seen twice a day for the last month...
I'd be all over it like a rash!
It's a real concern amongst artists at the moment, who are losing work on things like book covers because the publishers are using AI to create the images, sometimes from the very work that they are no longer paying the artists to create.
https://photos.app.goo.gl/SEjU6AVuNAJc83j98
I submitted my first draft to AI for editing. Within seconds, it returned a revised version that corrected some factual inaccuracies. For example, I had stated that a federal grant could cover 40% of the cost. The AI identified the actual program and clarified that it would cover 30%—still a significant portion. I had also referenced a third-party partnership model where a company installs and owns the equipment, and the church receives a fixed return for the electricity the panels would produce. The AI provided the correct name for this model and cited a local school district as an example.
The AI also expanded on my suggestion to form a subcommittee, offering six specific areas the group could investigate.
That was the second draft.
Next, the AI asked if I wanted to include examples of churches that had already implemented solar. I said yes. Initially, it listed two East Coast congregations, but when I asked for examples from the Intermountain Northwest—our Synod’s region—it corrected itself. It provided two churches: one ELCA and one LCMS. The LCMS congregation, located on the main route between Spokane and Pullman, is well known and reportedly willing to offer technical advice to other congregations considering solar. I decided to include both examples.
That became the third draft.
I accepted the third draft. The AI then asked if I preferred the letter in PDF or Word format. I chose Word. Two seconds later, the final version was ready—complete with footnotes for the sources it had cited.
The entire process took about 30 minutes. Had I done it all on my own, it likely would have taken a full day or more.
Even this note has gone through AI for final editing. I don’t believe this violates any SOF restrictions—it simply helps me express my ideas more clearly.
Less research has been done into the reasons why, but one explanation is that human memory is highly contextual. It evolved in an environment where you had to remember things like "the blackberry patch is by those tall pine trees near the bend in the river with the big rock, and they'll be ripe when the stars look like so and it's been hot enough to smell leaves baking in the sun". When people who had read material in book form were surveyed, their memories were often associated with the physical attributes of the book itself, or the position of text on the page, or the quality of light on the page, and so on.
Bringing this back to generative AI: I think something analogous is going on. By default these tools will generate text by a kind of averaging/flattening process which picks from a selection of 'most likely' outputs. But it is precisely each person's particular and consistent deviation from that average which makes up their individual style.
I suspect I could identify most of my close friends and family from relatively short transcriptions of their verbal speech. Conversely, trying to wade through AI-generated prose is an exercise in frustration and persistence. It's like trying to get through one of those technical texts which use only the passive voice and where all the edges of a language are sheared off.
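(For the technically minded, here is a rough Python sketch of that flattening. The word probabilities are invented for illustration, not taken from any real model, but they show the gist: always taking the single most likely word pulls every writer toward the same "average" prose, while sampling at a higher temperature lets the rarer, more individual choices back in.)

import random

# Invented probabilities for the next word at one point in a sentence.
# A distinctive writer might favour the rarer options; default decoding
# pulls the output toward the high-probability "average" choice.
choices = {"said": 0.70, "remarked": 0.20, "thundered": 0.10}

def greedy(dist):
    # Flattening: always take the single most likely word.
    return max(dist, key=dist.get)

def sample(dist, temperature=1.0):
    # Raising probabilities to 1/temperature and renormalising is the
    # usual temperature trick: higher temperature flattens the
    # distribution, so rarer, more "individual" words get picked more often.
    weights = [p ** (1.0 / temperature) for p in dist.values()]
    return random.choices(list(dist), weights=weights)[0]

print(greedy(choices))                   # always "said"
print(sample(choices, temperature=2.0))  # sometimes the rarer words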
I know this is true of me - I always remember things like this. It will be "I remember reading about this, and want to find the reference. It was in the top right corner of a right-hand page." and I'll probably remember something about the typeface.
This is a bit of a tangent, but I read a lot on my Kindle and remember the books in detail. And I do often remember where certain lines were on the page of the screen, and the font, and where I've highlighted it, and the setting I was reading in, etc., but if I want to find a reference, I just do a search. My observation of the research is that it was done on people who were new to reading on a Kindle, and I would say there is always an adjustment when one is experiencing a new type of medium. Like writers in the past saying one should never use a typewriter/word processor, because real creativity comes only with pen and paper!
To a point I do as well, but the position of text is often not consistent (sometimes just going forwards and then back can move it).
AFAICR the biggest evidence was from a meta-study, and this was not true of a significant minority of the underlying studies - and at least at this point, e-readers are relatively old hat.
[There are similar results for people who read on screen, and that technology is even older]
It's a good tangent, though!
Unfortunately, search is pretty sucky. If I can't remember the thing I want to reference, I probably can't remember enough key words well enough to pick it up immediately in a search. It's possible that the brave new AI world will make this better, but for me, search is better than using a physical book's index, but nowhere near as good as my spatial memory.
And search doesn't help as much if part of the question is "there are a dozen books on my desk, and I'm looking for a phrase from one of them".
LC,
this is excellent, and I've pinched it for my diary to share (in conversation) with others.
As to the wretched ChatGPT, there is concern it is being used for reviews of scientific papers. Had I found one of my undergrad students cheating by using it in an essay, they would have received an 'F'.
I would prefer to read Gramps49's posts unedited, as it's him!
Perish the thought of ChatGPT generating and editing sermons!
But then I'm old-fashioned, I suppose. My poetry might be awful, but at least it's MINE!
As RockyRoger wrote, it (to me) stripped their personalities out. It all kind of sounds the same - artificial (I know...) in some sense I couldn't put my finger on.
That said, I used it to generate some ideas for assessments I had when studying recently. I asked for ideas when I was stuck but left it at that and filled in the blanks with my own research and wrote the response myself.
AI sermons may be very confusing given the wide range of theological opinion. I wonder if it may veer from a Calvinist viewpoint to a Catholic one midstream! "You know we are utterly depraved but Mary is Queen of Heaven".
My online university has had a policy for several years on how students are allowed to use AI in essays, for instance, for improving sentence structure or to explain concepts (the latter of which they would then need to re-write in their own words and reference to the AI). The university also uses AI detection software. So I treat AI in my students’ essays as I would any other source of information in terms of reliability, referencing of evidence and plagiarism.
It is generally easy to spot AI in essays and I use the normal structured marking criteria for grading (and refer students to the academic conduct committee for investigation if an essay is clearly written by unacknowledged AI). In most cases students will get a poorer grade if they use AI, as they need to refer to our own study materials within assignments and correctly cite their sources of information, whether internal or external sources or AI. If they cite AI as a reference within an essay, I question in my feedback how they know the accuracy of what they have written and the sources that informed that discussion; we want evidence-based essays. Sometimes students have clearly used it to write their summarising conclusion having fed their essay in (not allowed under the AI policy), and there are often mistakes in this as it fills in gaps in its knowledge.
Oddly enough, AI is easy to shock. Not really, I know—it has no human heart or brain—but apparently enough of its training material comes from prissy human beings that you can get virtual horror by feeding it the "wrong" query. As I did just yesterday, when I asked it if Jesus was ticklish. Or another time when I asked it to interpret that infamous verse from Ezekiel 23:20. I felt rather guilty about the ticklish inquiry—it just seemed so shocked at the idea.
'Oh blessed may that trooper be,
When riding on his naggie,
Takes their wee bairns by t' toes
And dings them on the craggie'.
To the point of AI not encouraging people to think for themselves: lately it seems AI will respond to an inquiry I have and then ask me what I think. Once in a while I can even stump it. I just asked my AI a question very common for people my age and asked it to do a deep think. At first it started to respond, then stopped with a message: sorry, I cannot respond to this question.
Talk about prudish.
How did it express its shock @Lamb Chopped?
Those are of course not its exact words; I'm trying to reproduce the tone. But I had to laugh - it sounded like I'd accused Jesus of murder or something.
Ah, that's a shame. I have the opposite, where I remember exact words, so the search works better for me than flipping through a physical book. Also, sometimes when a character is introduced near the beginning and then referred to quite a bit later, I have forgotten who they are, and that's where I really find the search feature handy. I can find the exact moment they were introduced.
Do you have a link to this meta-study? I wonder how results vary according to what a person grew up with. (Though this whole topic probably should be a new thread!)
Can't remember which one exactly, but it may have been this:
https://journals.sagepub.com/doi/10.3102/00346543231216463
...until I started thinking about a machine using human conversational cues to remind a user that it's not human. Is it a novel example of conversational duality (or maybe duplicity) - the meaning of the words saying one thing, the way it uses language saying another? And why would you program a machine to do this - whose interests does it serve?
It is in the nature of technology to change the human beings that use it, and it appears that the way that AI converses is already starting to have an effect on human language. From Futurism: