Using AI in Ship forums

Sipech Shipmate
As a spin-off from a couple of other threads, at least one Shipmate seems to have been using AI to help write/edit their content, but accidentally copied & pasted the prompts. Perhaps other Shipmates have done so, but without the clumsy touch.

Without focusing on the Shipmate, but rather on the issue, what are our views about using AI to either generate or refine posts on the Ship's forums?

Obviously, it was not a possibility back when the Ship first set sail and the 10 Commandments were uttered.

I personally take a very dim view of it, but if one does deem it necessary, it should be disclosed.

Comments

  • Gramps49 Shipmate
    You can name me, I do not mind. As I said in my response on the Whimsical thread, I was working on two different responses to different people in different threads. Sometimes I resort to AI to help keep my thoughts organized. Quite a while ago we talked about this issue. I cannot remember who it was that said her husband often writes letters to the editor and uses AI for editing purposes. That is the same vein in which I was using AI. My words were coming out in mixed order, with hanging phrases and misspellings. I cannot afford a personal secretary to keep my responses concise and to the point. AI helps me keep things in line.
  • Caissa Shipmate
    Every time we use predictive text in a Word document we are relying on Generative AI. Using Generative AI for editing and tone is no different than when I ask one of my staff to look at something I have written for tone or wordsmithing. Generative AI has become the new technological moral crisis. The written word, the printing press and the internet have all pre-dated it as moral technological crises. This one shall pass as well. Would Grammarly, which operates as a form of Generative AI, be considered off limits? Many Generative AI tools are important in the lives of people with a disability, a population with which I work. End of rant.
  • Gramps49 Shipmate
    Caissa wrote: »
    Every time we use predictive text in a Word document we are relying on Generative AI. Using Generative AI for editing and tone is no different than when I ask one of my staff to look at something I have written for tone or wordsmithing. Generative AI has become the new technological moral crisis. The written word, the printing press and the internet have all pre-dated it as moral technological crises. This one shall pass as well. Would Grammarly, which operates as a form of Generative AI, be considered off limits? Many Generative AI tools are important in the lives of people with a disability, a population with which I work. End of rant.

    Thank you for your response @Caissa.

    Even as I enter a response here, predictive text is being suggested. It is the way things work now. I have a bit of dyslexia. I have never been able to spell correctly. Sometimes I just cannot understand what a person is saying. My mind wanders a lot. So I will ask AI to explain a response, as in the case of @Gamma Gamaliel's expressed concern. I will then write a note as to how I would like to respond to the post, and AI helps me write something along the lines of what I wanted to say. As I said, it is much like having a personal secretary come up with the final draft.
  • Doublethink Admin, 8th Day Host
    edited April 17
    There is a statement of our current policy on generative AI on the rolling policy update thread.

    At the very least, if you are using generative AI, you should say so in the post.

    I do not wish to be unkind, but @Gramps49, if you don't understand the post, how do you know that the summary ChatGPT has given you is correct?
  • Caissa Shipmate
    edited April 17
    How is generative AI being defined in the rolling policy update? As I say above, predictive text and Grammarly are both commonly used forms of Generative AI.

    ETA: I found it: Doublethink wrote: »

    We haven’t previously had a policy on referencing AI such as ChatGPT. However, as an interim ruling (we will discuss this further), AI programs should not be used as a source for serious discussion such as Purgatory or Epiphanies. (If used on any other forum for entertainment, please clearly indicate its use and ideally what you told it to get the output you are posting.)

    The reason for this is that the AI is simply using an algorithm to scrape the internet for content - there is no guarantee anything posted is factually accurate, and we would prefer not to potentially spread misinformation.

    Doublethink, Admin
  • Arethosemyfeet Shipmate, Heaven Host
    I turn off predictive text on my devices.

    I appreciate the issue of accessibility but I don't buy it in recreational contexts, rather than practical ones. I've seen similar arguments to justify AI-generated "art" and it's bollocks. There is no "right" to participate in a discussion if you're not actually up to doing so.

    The problem, @Gramps49 , is that the posts read like LLM output, not your own thoughts. I'd much rather have your unvarnished thoughts than have them polished up into something that may or may not resemble them. If a shipmate had a personal secretary, I'd think it extremely odd and not a little insulting for them to run their posts past the secretary rather than engaging directly. If you're struggling to assemble thoughts in a particular thread, it's ok to say that or simply to withdraw.
  • RooK Shipmate
    Using advanced editing tools to improve access and avoid miscommunication feels entirely reasonable. However, engaging with a minimal prompt and copy/pasting statistically standardized text devoid of understanding seems to be rather missing the point of "discussion".

    People are not as dazzled by quantity as LLM proponents seem to think.
  • RooK Shipmate
    Note: I use a lot of AI - for coding. Because I'm very rusty at coding, because there are qualitative outputs to measure, and because there is no harm in having unnecessary gibberish in the machinery as long as the outputs work. The comparative lack of meaningful evaluation of swaths of discussion text at generation is relevant.
  • Caissa Shipmate
    Arethosemyfeet wrote "There is no "right" to participate in a discussion if you're not actually up to doing so."

    That sounds incredibly ableist although I am sure that was not your intent.
  • Ruth Shipmate
    The problem, @Gramps49 , is that the posts read like LLM output, not your own thoughts.
    Indeed. After reading his pre-AI posts for years, it's easy for me to see now when he's using AI. They don't sound like him.
    I'd much rather have your unvarnished thoughts than have them polished up into something that may or may not resemble them. If a shipmate had a personal secretary I'd think it extremely odd and not a little insulting for them to run their posts by them rather than engaging directly. If you're struggling to assemble thoughts in a particular thread it's ok to say that or simply to withdraw.
    Agree. I'd add that the obvious and important difference between using AI and a personal secretary is that the secretary is a human being. At least we'd still be conversing with a person, if not actually the person posting, if someone were using a secretary.
  • Arethosemyfeet Shipmate, Heaven Host
    Caissa wrote: »
    Arethosemyfeet wrote "There is no "right" to participate in a discussion if you're not actually up to doing so."

    That sounds incredibly ableist although I am sure that was not your intent.

    It would perhaps be better phrased as: you are free to participate in a discussion to the extent of your capacity, just as you are free to create art to the extent of your skills. Using LLM output in a discussion is no more valid than submitting AI "art" to an exhibition. It's not discriminatory to expect that people engage on the basis of their own efforts, not those of a content theft amalgamator.
  • Gramps49 Shipmate
    To the question of how I tell whether an AI summary is accurate: two ways. I first read the post. Then I ask AI what the post is trying to say, then I re-read the post to see if the AI summary is good enough. In the case of @Gamma Gamaliel's post, I missed what he was saying about visiting his wife's grave. I asked AI what Gamma Gamaliel was saying. AI explained how the Incarnation was so important to him. I re-read the post with that insight. In re-reading the post I sensed the emotion he has about visiting his wife's grave. I also thought of using the text from John about the importance of Jesus appearing behind closed doors. I asked AI to help me word a response that included the recognition of the importance of the Incarnation, his stated feelings when visiting his wife's grave, and a response based on John's story. Then with a few minor edits I felt it was ready to publish.

    On the old Ship I remember the time when someone would cite a Wikipedia article about a subject and would be called out for it. Wikipedia is not accurate, they said; it does not cite its sources; it is clunky; and so on. But now people readily cite it here. While it does get some facts wrong, it submits itself to peer review, and sources are cited. Its technology has advanced to the point where people can accept it more.

    AI is not the be-all and end-all of any response, but it is a tool for understanding and crafting a response.
  • Alan Cresswell Admin, 8th Day Host
    If anyone struggles to understand what someone else has said, I've never known anyone object to a question asking for clarity (after all, if we've been misunderstood that means people are reacting to something we didn't intend to say, and that makes any discussion meaningless). It does take more time, but good discussion shouldn't be rushed anyway. I can't see how anyone can be sure that an LLM understands what someone is saying any better, and it may still result in responding to something that wasn't intended.

    This isn't an academic institution, but most such organisations have developed policies regarding the use of LLMs and AI. This is the current position of my university, which includes a lot of wisdom that could potentially be relevant in the less formal situation of online discussion as well as assessed work by students. In particular, I'd say the guidance to question the outputs of AI and ensure that what's finally submitted is your own thoughts is very apt.
  • Ruth Shipmate
    edited April 17
    Why go through all that, @Gramps49? Why not just re-read someone's post several times, write a response, and then re-read that several times while you edit? (The Preview button is extremely handy; I use it multiple times for many posts, including this one.) Re-reading and figuring things out for yourself would be way better for your brain, and we would know we're talking to a human being. Your posts have become increasingly artificial to the point where I'm not convinced that you have provided all of the ideas yourself, never mind actually written the posts. I don't believe for a moment that posts like these are really coming out of your mind, because you've never been this organized, and you don't know all this stuff:

    About scammers on the Seeing is not believing thread
    This post about feminized AI assistants, ironically enough, on the Brave New World thread
    The OP you "wrote" for the thread on whether we're entering a new dark age

    I get the impression that your concern about self-presentation has made you emphasize presentation to the detriment of self.
  • Caissa Shipmate
    What part of Gramps49 stating he has dyslexia are people missing? Various technologies, including Generative AI, can be important compensatory tools for individuals with such a diagnosis.
  • Arethosemyfeet Shipmate, Heaven Host
    Caissa wrote: »
    What part of Gramps49 stating he has dyslexia are people missing? Various technologies, including Generative AI, can be important compensatory tools for individuals with such a diagnosis.

    Presumably @Gramps49 has always been dyslexic and still managed to post prior to LLMs becoming widely available. What we're seeing goes way beyond just checking you've read or typed something correctly and into posting actual LLM-generated content. Nobody's "missing" it; it's just not relevant to the use case being discussed.
  • Caissa Shipmate
    edited April 17
    I am glad you are an expert on the functional limitations of Gramps49's diagnosis and what accommodations best address it. Technology evolves. Your dismissal sounds like some of the professors who want to deny students reasonable academic accommodations.
    ETA: I think we cannot make a hard and fast rule about Generative AI. There are many, many different tools that use a form of Generative AI.
  • Arethosemyfeet Shipmate, Heaven Host
    Caissa wrote: »
    I am glad you are an expert on the functional limitations of Gramps49's diagnosis and what accommodations best address it.

    Having something write your posts for you is not an "accommodation" and in an academic context would be considered misconduct.
  • Caissa Shipmate
    Agreed; however, we are talking about discussion board posts. If you look at his first post in this thread, Gramps49 describes some of the functional limitations of his disability and how Generative AI assists him in compensating for them.
  • Arethosemyfeet Shipmate, Heaven Host
    Caissa wrote: »
    Agreed; however, we are talking about discussion board posts. If you look at his first post in this thread, Gramps49 describes some of the functional limitations of his disability and how Generative AI assists him in compensating for them.

    There comes a point, though, where it goes beyond assistance and becomes replacement, and that is my concern. What @Gramps49 uses to help him formulate his posts is none of my business, so long as they are actually his posts, but right now a number of his posts read as verbatim LLM output. If I want to read AI slop I'll prod an LLM myself; I have no interest in second-hand slop.
  • Ruth Shipmate
    Caissa wrote: »
    I am glad you are an expert on the functional limitations of Gramps49's diagnosis and what accommodations best address it.
    You don't even know if he has a diagnosis! It's not like they were testing a lot of kids for dyslexia in the US when he was young; legislation requiring equal education for "handicapped" children passed in 1975, and it was only after that that people running schools even started to think about dyslexic kids. It's possible that he was diagnosed as an adult. It's also possible that he's self-diagnosed -- I've seen a fair number of adults who aren't strong spellers say they must be dyslexic, when it would in fact be dysgraphia if their neurology comes into it at all, and it might just be that spelling in English is a pain in the ass.
    Caissa wrote: »
    Agreed; however, we are talking about discussion board posts. If you look at his first post in this thread, Gramps49 describes some of the functional limitations of his disability and how Generative AI assists him in compensating for them.

    No, he's got whole posts he didn't actually write. And since these are just discussion board posts, all the more reason he should just write things himself.
  • It would be really easy for us to wind up with a discussion board full of AI responding to AI, instead of people to people.
  • Dafyd Hell Host
    I note also that if you enter a prompt of a couple of sentences and get the AI to write a couple of paragraphs this is much quicker than writing a couple of paragraphs oneself. Therefore it lends itself to dominating threads and boards with posts reflecting one's rough opinions.

    Writing a post and then getting the grammar and spelling checked is one thing and I can see that for some people it could be helpful. Expanding a couple of sentences into several paragraphs is quite another.
  • Gramps49 Shipmate
    I like the way the University of Glasgow tells its students how to use AI. I try to use it as the university suggests:
    AI can help you refine your wording
    Think of AI here as a kind of conversation partner: you can ask it to rework some writing to enhance clarity. You can ask it to refine the output to make the text simpler, more complex, longer, shorter, and so on.
    Do not enter entire essays or paragraphs; use the method to help improve the language of words, phrases or individual sentences. You must ask the tool to offer you an explanation of why it has made the suggestions that it has so that you can use this advice to learn how to improve your own writing the first time around in the future. AI tools commonly use more adjectives or adverbs than would be suitable for an academic essay, even if your prompt asks specifically for a response in an academic tone, so make sure that you are happy with any suggestions before adapting them into your own piece of work. Again: do not create entire paragraphs from individually AI-constructed sentences; the work must be your own.
    Importantly, however, you must remember that AI uses predictions of the most likely next word based on what has been found in the material used to train whichever tool you are using. This is all it is doing: it simulates human responses, and it does not have the true intelligence capacity to understand why things might be right, might be wrong, might be stronger, or might be weaker.
  • chrisstiles Hell Host
    But that does not describe the post that triggered this topic.
  • jay_emm Kerygmania Host
    Dafyd wrote: »
    I note also that if you enter a prompt of a couple of sentences and get the AI to write a couple of paragraphs this is much quicker than writing a couple of paragraphs oneself. Therefore it lends itself to dominating threads and boards with posts reflecting one's rough opinions.

    Writing a post and then getting the grammar and spelling checked is one thing and I can see that for some people it could be helpful. Expanding a couple of sentences into several paragraphs is quite another.

    And if you weren't interested in the actual conversation (i.e. wanted to troll), it would provide massive leverage. I'm a bit surprised we haven't really seen that. (Of course they would ignore any rules against AI.)

    There is a time and season. I'd definitely be curious how an AI 'sermon' goes and what it reveals under analysis. I'm not sure how we'd manage that.

    As a general rule (not directed anywhere) for posting...
    I would say that "do unto others", and never relying on "what they don't know won't hurt them", is a good start.
    If you would be unhappy if I were doing what you are doing, then it's probably a bad idea.
  • The trouble with the U. of Glasgow rules for AI usage is there's nobody who's actually going to do that. I mean, go through a whole essay (or post) line by line, asking AI to improve it? How many lines would that be? How many queries? No, they're going to do just what Gramps did, judging by the prompts he inadvertently supplied us with. They're going to lay out a single query for the entire post/essay and call it good.

    And that will neither teach them anything nor result in a post with any appreciable personal content.
  • Gramps49 Shipmate
    The trouble with the U. of Glasgow rules for AI usage is there's nobody who's actually going to do that. I mean, go through a whole essay (or post) line by line, asking AI to improve it? How many lines would that be? How many queries? No, they're going to do just what Gramps did, judging by the prompts he inadvertently supplied us with. They're going to lay out a single query for the entire post/essay and call it good.

    And that will neither teach them anything nor result in a post with any appreciable personal content.


    In the responses I posted:

    The one dealing with Whimsical Christian: I had set down a three-sentence thought about where I wanted to go. I was going to use another biblical citation, but AI told me the citation would not work as I had intended. I then suggested Matthew 18, and AI showed the pros and cons of that citation. I then settled on making the point that Whimsical Christian had made certain mistakes in the personal message that prompted the thread. It actually took about ten minutes to get the message down to where I wanted it.

    With Gamma Gamaliel's response it took somewhat longer. After reading about three of his paragraphs I was lost--not because he was a poor writer--but because I am a poor reader. There have been many a time he has asked me to read for comprehension. I asked AI for a short summary of what he was saying. Once I saw what he was driving at according to AI, I re-read the post and picked up on the grief Gamma Gamaliel had expressed. I knew I wanted to keep the tone even, and I thought of including the story from John to show how the resurrection event for me was not in the empty tomb but when Jesus appeared to the disciples behind closed doors. I also wanted to suggest that the story of the road to Emmaus was more important than the empty tomb. AI offered three possible responses using my ideas. I decided the Emmaus story was not necessary and dropped it. When I settled on what I wanted to say, I wrote it, then AI did the final edit.
  • In particular, I'd say the guidance to question the outputs of AI and ensure that what's finally submitted is your own thoughts is very apt.

    When you post something, or write a report, or whatever else you're doing, you claim and assert responsibility for its content. "My LLM made a mistake" is not an excuse: if you are, for example, a lawyer using an LLM to prepare a legal submission, and your LLM invents fictitious legal cases to cite, this should be no different from deliberately lying in your submission.
  • Arethosemyfeet Shipmate, Heaven Host
    Gramps49 wrote: »
    The trouble with the U. of Glasgow rules for AI usage is there's nobody who's actually going to do that. I mean, go through a whole essay (or post) line by line, asking AI to improve it? How many lines would that be? How many queries? No, they're going to do just what Gramps did, judging by the prompts he inadvertently supplied us with. They're going to lay out a single query for the entire post/essay and call it good.

    And that will neither teach them anything nor result in a post with any appreciable personal content.
    After reading about three of his paragraphs I was lost--not because he was a poor writer--but because I am a poor reader.

    Have you considered using text-to-speech to assist with reading?
  • Boogie Heaven Host
    I've tried it - not on the Ship - just out of interest.

    But I found it made me sound less 'me'.

    More knowledgeable, but less real.
  • Doublethink Admin, 8th Day Host
    edited 6:14AM
    [Admin]

    Interim ruling

    Whilst this discussion continues, any post created with generative AI involvement - I don’t mean spell check or predictive text, I mean software that creates all or part of the conceptual content - should contain the following sentence at the bottom of the post:
    This post was created with the assistance of generative AI: [insert name of tool]

    Per previous ruling on rolling policy update, AI should not be used for sourcing in serious discussion.

    Doublethink, Admin

    [/Admin]