Using AI in Ship forums
As a spin-off from a couple of other threads, at least one Shipmate seems to have been using AI to help write/edit their content, but accidentally copied & pasted the prompts. Perhaps other Shipmates have done so, but without the clumsy touch.
Without focusing on the Shipmate, but rather on the issue, what are our views about using AI to either generate or refine posts on the Ship's forums?
Obviously, it was not a possibility back when the Ship first set sail and the 10 Commandments were uttered.
I personally take a very dim view of it, but if one does deem it necessary, it should be disclosed.
Comments
Thank you for your response @Caissa.
Even as I enter a response here, predictive text is being suggested. It is the way things work now. I have a bit of dyslexia. I have never been able to spell correctly. Sometimes I just cannot understand what a person is saying. My mind wanders a lot. So I will ask AI to explain a response, as in the case of @Gamma Gamaliel's expressed concern. I will then write a note about how I would like to respond to the post, and AI helps me write something along the lines of what I wanted to say. As I said, it is much like having a personal secretary come up with the final draft.
At the very least, if you are using generative AI, you should say so in the post.
I do not wish to be unkind, but @Gramps49, if you don't understand the post, how do you know the summary ChatGPT has given you is correct?
ETA: I found it: Doublethink wrote:
We haven’t previously had a policy on referencing AI such as ChatGPT. However, as an interim ruling (we will discuss this further), AI programs should not be used as a source for serious discussion such as Purgatory or Epiphanies. (If used on any other forum for entertainment, please clearly indicate its use and ideally what you told it to get the output you are posting.)
The reason for this is that the AI is simply using an algorithm to scrape the internet for content; there is no guarantee anything posted is factually accurate, and we would prefer not to potentially spread misinformation.
Doublethink, Admin
I appreciate the issue of accessibility but I don't buy it in recreational contexts, rather than practical ones. I've seen similar arguments to justify AI-generated "art" and it's bollocks. There is no "right" to participate in a discussion if you're not actually up to doing so.
The problem, @Gramps49 , is that the posts read like LLM output, not your own thoughts. I'd much rather have your unvarnished thoughts than have them polished up into something that may or may not resemble them. If a shipmate had a personal secretary I'd think it extremely odd and not a little insulting for them to run their posts by them rather than engaging directly. If you're struggling to assemble thoughts in a particular thread it's ok to say that or simply to withdraw.
People are not as dazzled by quantity as LLM proponents seem to think.
That sounds incredibly ableist although I am sure that was not your intent.
Agree. I'd add that the obvious and important difference between using AI and a personal secretary is that the secretary is a human being. At least we'd still be conversing with a person, if not the person actually posting, if someone were using a secretary.
It would perhaps be better phrased as: you are free to participate in a discussion to the extent of your capacity, just as you are free to create art to the extent of your skills. Using LLM output in a discussion is no more valid than submitting AI "art" to an exhibition. It's not discriminatory to expect that people engage on the basis of their own efforts, not those of a content theft amalgamator.
On the old Ship I remember the time when someone would cite a Wikipedia article about a subject and would be called out for it. Wikipedia is not accurate, they said, it does not cite its sources, it is clunky, and so on. But now people readily cite it here. While it does get some facts wrong, it submits itself to peer review, and sources are cited. Its technology has advanced to the point that people can accept it more.
AI is not the be-all and end-all of any response, but it is a tool for understanding and crafting a response.
This isn't an academic institution, but most such organisations have developed policies regarding the use of LLMs and AI. This is the current position of my university which includes a lot of wisdom that could potentially be relevant in the less formal situation of online discussion as well as assessed work by students. In particular, I'd say the guidance to question the outputs of AI and ensure that what's finally submitted is your own thoughts is very apt.
About scammers on the Seeing is not believing thread
This post about feminized AI assistants, ironically enough, on the Brave New World thread
The OP you "wrote" for the thread on whether we're entering a new dark age
I get the impression that your concern about self-presentation has made you emphasize presentation to the detriment of self.
Presumably @Gramps49 has always been dyslexic and still managed to post prior to LLMs becoming widely available. What we're seeing goes way beyond just checking you've read or typed something correctly and into posting actual LLM generated content. Nobody's "missing" it, it's just not relevant to the use case being discussed.
ETA: I think we cannot make a hard and fast rule about Generative AI. There are many, many different tools that use a form of Generative AI.
Having something write your posts for you is not an "accommodation" and in an academic context would be considered misconduct.
There comes a point, though, where it goes beyond assistance and becomes replacement, and that is my concern. What @Gramps49 uses to help him formulate his posts is none of my business, so long as they are actually his posts, but right now a number of his posts read as verbatim LLM output. If I want to read AI slop I'll prod an LLM myself; I have no interest in second-hand slop.
No, he's got whole posts he didn't actually write. And since these are just discussion board posts, all the more reason he should just write things himself.
Writing a post and then getting the grammar and spelling checked is one thing and I can see that for some people it could be helpful. Expanding a couple of sentences into several paragraphs is quite another.
And if you weren't interested in the actual conversation (i.e. you wanted to troll), it would provide massive leverage. I'm a bit surprised we haven't really seen that. (Of course, they would ignore any rules against AI.)
There is a time and season. I'd definitely be curious how an AI 'sermon' goes and what it reveals under analysis. I'm not sure how we'd manage that.
As a general rule (not directed anywhere) for posting...
I would say that "do unto others", and never relying on "what they don't know won't hurt them", is a good start.
If you would be unhappy if I were doing what you are doing, then it's probably a bad idea.
And that will neither teach them anything nor result in a post with any appreciable personal content.
In the responses I posted
The one dealing with Whimsical Christian: I had set down a three-sentence thought about where I wanted to go. I was going to use another biblical citation, but AI told me the citation would not work as I had intended. I then suggested Matthew 18, and AI showed the pros and cons of the citation. I then settled on making the point that Whimsical Christian had made certain mistakes in the personal message that prompted the thread. It actually took about ten minutes to get the message down to where I wanted it.
With Gamma Gamaliel's response it took somewhat longer. After reading about three of his paragraphs I was lost, not because he is a poor writer but because I am a poor reader. Many a time he has asked me to read for comprehension. I asked AI for a short summary of what he was saying. Once I saw what he was driving at according to AI, I re-read the post and picked up on the grief Gamma Gamaliel had expressed. I knew I wanted to keep the tone even, and I thought of including the story from John to show how the resurrection event for me was not in the empty tomb but when Jesus appeared to the disciples behind closed doors. I also wanted to suggest that the story of the road to Emmaus was more important than the empty tomb. AI offered three possible responses using my ideas. I decided the Emmaus story was not necessary and dropped it. When I settled on what I wanted to say, I wrote it, and then AI did the final edit.
When you post something, or write a report, or whatever else you're doing, you claim and assert responsibility for its content. "My LLM made a mistake" is not an excuse: if you are, for example, a lawyer using an LLM to prepare a legal submission, and your LLM invents fictitious legal cases to cite, this should be no different from deliberately lying in your submission.
Have you considered using text-to-speech to assist with reading?
But I found it made me sound less 'me'.
More knowledgeable, but less real.
Interim ruling
Whilst this discussion continues, any post created with generative AI involvement (I don't mean spell check or predictive text; I mean software that creates all or part of the conceptual content) should contain the following sentence at the bottom of the post.
Per previous ruling on rolling policy update, AI should not be used for sourcing in serious discussion.
Doublethink, Admin