Using AI in Ship forums
As a spin-off from a couple of other threads, at least one Shipmate seems to have been using AI to help write/edit their content, but accidentally copied & pasted the prompts. Perhaps other Shipmates have done so, but without the clumsy touch.
Without focusing on the Shipmate, but rather on the issue, what are our views about using AI to either generate or refine posts on the Ship's forums?
Obviously, it was not a possibility back when the Ship first set sail and the 10 Commandments were uttered.
I personally take a very dim view of it, but if one does deem it necessary, it should be disclosed.
Comments
Thank you for your response @Caissa.
Even as I enter a response here, predictive text is being suggested. It is the way things work now. I have a bit of dyslexia. I have never been able to spell correctly. Sometimes I just cannot understand what a person is saying. My mind wanders a lot. So I will ask AI to explain a response, as in the case of @Gamma Gamaliel's expressed concern. I will then write a note as to how I would like to respond to the post, and AI helps me write something along the lines of what I wanted to say. As I said, it is much like having a personal secretary come up with the final draft.
At the very least, if you are using generative AI, you should say so in the post.
I do not wish to be unkind, @Gramps49, but if you don't understand the post, how do you know the summary ChatGPT has given you is correct?
ETA: I found it: Doublethink wrote: »
We haven’t previously had a policy on referencing AI such as ChatGPT. However, as an interim ruling (we will discuss this further), AI programs should not be used as a source for serious discussion such as Purgatory or Epiphanies. (If used on any other forum for entertainment, please clearly indicate its use and ideally what you told it to get the output you are posting.)
The reason for this is that the AI is simply using an algorithm to scrape the internet for content - there is no guarantee anything posted is factually accurate, and we would prefer not to potentially spread misinformation.
Doublethink, Admin
I appreciate the issue of accessibility, but I don't buy it in recreational contexts as opposed to practical ones. I've seen similar arguments to justify AI-generated "art" and it's bollocks. There is no "right" to participate in a discussion if you're not actually up to doing so.
The problem, @Gramps49 , is that the posts read like LLM output, not your own thoughts. I'd much rather have your unvarnished thoughts than have them polished up into something that may or may not resemble them. If a shipmate had a personal secretary I'd think it extremely odd and not a little insulting for them to run their posts by them rather than engaging directly. If you're struggling to assemble thoughts in a particular thread it's ok to say that or simply to withdraw.
People are not as dazzled by quantity as LLM proponents seem to think.
That sounds incredibly ableist although I am sure that was not your intent.
Agree. I'd add that the obvious and important difference between using AI and a personal secretary is that the secretary is a human being. At least we'd still be conversing with a person, if not actually the person posting, if someone were using a secretary.
It would perhaps be better phrased as: you are free to participate in a discussion to the extent of your capacity, just as you are free to create art to the extent of your skills. Using LLM output in a discussion is no more valid than submitting AI "art" to an exhibition. It's not discriminatory to expect that people engage on the basis of their own efforts, not those of a content theft amalgamator.
On the old Ship I remember the time when someone would cite a Wikipedia article about a subject and would be called out for it. Wikipedia is not accurate, they said; it does not cite its sources; it is clunky, and so on. But now people readily cite it here. While it does get some facts wrong, it submits itself to peer review, and sources are cited. Its technology has advanced to the point that people can accept it more.
AI is not the end all of any response, but it is a tool for understanding and crafting a response.
This isn't an academic institution, but most such organisations have developed policies regarding the use of LLMs and AI. This is the current position of my university which includes a lot of wisdom that could potentially be relevant in the less formal situation of online discussion as well as assessed work by students. In particular, I'd say the guidance to question the outputs of AI and ensure that what's finally submitted is your own thoughts is very apt.
About scammers on the Seeing is not believing thread
This post about feminized AI assistants, ironically enough, on the Brave New World thread
The OP you "wrote" for the thread on whether we're entering a new dark age
I get the impression that your concern about self-presentation has made you emphasize presentation to the detriment of self.
Presumably @Gramps49 has always been dyslexic and still managed to post prior to LLMs becoming widely available. What we're seeing goes way beyond just checking you've read or typed something correctly and into posting actual LLM generated content. Nobody's "missing" it, it's just not relevant to the use case being discussed.
ETA: I think we cannot make a hard and fast rule about Generative AI. There are many, many different tools that use a form of Generative AI.
Having something write your posts for you is not an "accommodation" and in an academic context would be considered misconduct.
There comes a point, though, where it goes beyond assistance and becomes replacement, and that is my concern. What @Gramps49 uses to help him formulate his posts is none of my business, so long as they are actually his posts; but right now a number of his posts read as verbatim LLM output. If I want to read AI slop I'll prod an LLM myself; I have no interest in second-hand slop.
No, he's got whole posts he didn't actually write. And since these are just discussion board posts, all the more reason he should just write things himself.
Writing a post and then getting the grammar and spelling checked is one thing and I can see that for some people it could be helpful. Expanding a couple of sentences into several paragraphs is quite another.
And if you weren't interested in the actual conversation (i.e. to troll), it would provide massive leverage. I'm a bit surprised we haven't really seen that. (Of course, they would ignore any rules against AI.)
There is a time and season. I'd definitely be curious how an AI 'sermon' goes and what it reveals under analysis. I'm not sure how we'd manage that.
As a general rule (not directed anywhere) for posting...
I would say "do unto others" and never rely on "what they don't know, won't hurt them" is a good start.
If you would be unhappy were I doing what you are doing, then it's probably a bad idea.
And that will neither teach them anything nor result in a post with any appreciable personal content.
In the responses I posted
The one dealing with Whimsical Christian. I had set down a three-sentence thought about where I wanted to go. I was going to use another biblical citation, but AI told me the citation would not work as I had intended. I then suggested Matthew 18 and AI showed the pros and cons of the citation. I then settled on making the point that Whimsical Christian had made certain mistakes in the personal message that prompted the thread. It actually took about ten minutes to get the message down to where I wanted it.
With Gamma Gamaliel's response it took somewhat longer. After reading about three of his paragraphs I was lost--not because he was a poor writer--but because I am a poor reader. Many a time he has asked me to read for comprehension. I asked AI for a short summary of what he was saying. Once I saw what he was driving at according to AI, I re-read the post and picked up on the grief Gamma Gamaliel had expressed. I knew I wanted to keep the tone even, and I thought of including the story from John to show how the resurrection event for me was not in the empty tomb but when Jesus appeared to the disciples behind closed doors. I also wanted to include how the story of the road to Emmaus was more important to me than the empty tomb. AI offered three possible responses using my ideas. I decided the Emmaus story was not necessary and dropped it. When I settled on what I wanted to say, I wrote it out, then AI did the final edit.
When you post something, or write a report, or whatever else you're doing, you claim and assert responsibility for its content. "My LLM made a mistake" is not an excuse: if you are, for example, a lawyer using an LLM to prepare a legal submission, and your LLM invents fictitious legal cases to cite, this should be no different from if you deliberately lied in your submission.
Have you considered using text-to-speech to assist with reading?
But I found it made me sound less 'me'.
More knowledgeable, but less real.
Interim ruling
Whilst this discussion continues, any post created with generative AI involvement - I don’t mean spell check or predictive text, I mean software that creates all or part of the conceptual content - should contain the following sentence at the bottom of the post.
Per previous ruling on rolling policy update, AI should not be used for sourcing in serious discussion.
Doublethink, Admin
But I am not great at the abstract argument, I need things tied into concrete examples. There are whole realms of discourse where I sort of know what's being said, but the terms are somehow ungraspable. There's every reason to think AI would only increase the nebulosity.
No, I'll go with Yeats -
...lie down where all ladders start
In the foul rag and bone shop of the heart
But I think Caissa has a valid general point, in that this is a technology that enables people with impaired abilities to participate in this medium. And some of us do not find this an easy medium in which to operate. For me, this calls more for a judgment call than a hard and fast rule.
A completely inappropriate analogy - Wikipedia requires its entries to be written and edited by human beings.
The Paranoid Android. If AI ever becomes sentient, I wonder if it too would be depressed for the lack of a human body?
(As an example of AI hallucinating, Google AI has just told me that some people consider Stanislaw Lem to be the greatest sci-fi author since Douglas Adams!)
I'm fine with using AI to research stuff, but you still have to see if the sources are any good and they've interpreted them correctly.
It's an excellent tool, but it still makes mistakes, so the responsibility falls on the person to make sure it hasn't got it wrong.
We've been fine using Google for decades. And that can throw up rubbish.
It's still up to the person to sort through the rubbish, whether it's Google or AI.
Google Search has been around for over 25 years.
As I was referenced by @Gramps49, I'll put in a brief appearance.
Please don't take this the wrong way, but I prefer the 'real' Gramps49 to the AI-assisted version, although I have every sympathy with its use if it helps with dyslexia.
Other posters have made a good case for its use in other contexts.
I will also admit that my loquacity leads me to post long and convoluted screeds and that won't help.
I will aim for concision in future.
I think the interim ruling makes sense.
I completely understand using forms of AI for assistance. But I’m afraid I don’t want to read an AI post or an AI opinion.
Yikes! I knew about some but not all of this.
I miss “Don’t be evil.” How much Google tried to follow it before, I don’t know.
https://time.com/4060575/alphabet-google-dont-be-evil/
If one isn't skilled enough to redact the AI's output on their own, the idea of such a person using AI to replace their own writing is unsettling.
It's actually easier, I think, to write your own words than to use an AI to put out words and then edit them appropriately. Editing, as I'm aware, is real work.
This article in the Guardian puts it quite well.
Many academic texts from publishers based in the UK or Europe use a 'vendor' like Springer, and it is cost-effective to have content editing as well as reference checking and indexes done by freelance editors in Manila, Mumbai, Puducherry, Cape Town or Harare, with well-enough educated English-speakers who rely increasingly on AI. I'm not going to go into the use of DeepL, the Chinese DeepSeek or Anthropic's Claude for translation of texts from French-speaking Africa or Asia, except to say I used to be able to figure out dialect continuums and recognise consistency of tone, regional inflections, slang and so on. Now I am often at a loss, because AI harvests colloquial text but makes it sound much smoother and more fluent than a human author could produce.
I can understand the attraction of doing what Gramps49 is doing. But for me, in the context of an informal discussion forum for human beings, it would be completely self-defeating. The question I'm now asking myself is whether I can live with other people doing it.
It's a good question.
But we'll have to, I think, as it will be happening more and more.
It annoys me more with images, even photographs. They are so bland. They absolutely lack the human touch.
On a forum I expect to engage with human beings and their thoughts. Using AI tools to correct spelling and grammar is fine, and long established, but what is the point of posting an AI’s summary on a forum? As I say to my students when they quote large chunks of text, using other people’s words (or AI’s) doesn’t tell me anything about your knowledge and understanding.
It is all rather horrifying, this data mining.
Didn't know the stuff about google and court and monopolies. Where are they now?
Nothing wrong with paying phone makers and browser developers to make their search engine a default but a monopoly should certainly ring alarm bells.
My friend does the same work as you, and she agrees.
I think there will come a time when all assessment is in person or by video call.
That's because in this context reading is thinking and writing is also thinking. The whole point of these forums is to engage with the 'real work' of other people. Thinking through someone else's post and working out your response to it is very different from being presented with 3/4 plausible responses and picking the one you like best.
It made me realise what Shipmates like @Heavenlyannie and @MaryLouise are having to contend with professionally.
To be fair, it was simply a summary of the gist of some documents I was trying to find and some rough background context, more of a springboard for further investigation than an 'end result', as it were.
But it unnerved me to some extent.
I don't want to see @Gramps49 hauled over the coals for using AI as an aid but I'd also like to see him stop posting undigested chunks of it in his posts.
I'll accept my post went on and on and on - I'm too wordy.
But the gist was clear.
I took exception to his use of the term 'bunch of bones' to refer to the bodily remains of Christ on the grounds that this sounded disrespectful and sacrilegious.
There was also a 'rawness' in my response due to bereavement and the emphasis my particular Christian tradition puts on mortal remains and 'matter' more generally.
Some Shipmates responded sensitively to that. Others less so. They know who they are.
I can't see that AI assistance was necessary in responding to my post - or rebutting it if people thought that necessary.
That said, if Gramps49 felt the need to use AI to assist with dyslexia then fine.
But I really don't like the idea of AI generated responses here aboard Ship, or anywhere else for that matter.