Using AI in Ship forums

Sipech Shipmate
As a spin-off from a couple of other threads, at least one Shipmate seems to have been using AI to help write/edit their content, but accidentally copied & pasted the prompts. Perhaps other Shipmates have done so, but without the clumsy touch.

Without focusing on the Shipmate, but rather on the issue, what are our views about using AI to either generate or refine posts on the Ship's forums?

Obviously, it was not a possibility back when the Ship first set sail and the 10 Commandments were uttered.

I personally take a very dim view of it, but if one does deem it necessary, it should be disclosed.

Comments

  • Gramps49 Shipmate
    You can name me, I do not mind. As I said in my response on the Whimsical thread, I was working on two different responses to different people in different threads. Sometimes I resort to AI to help keep my thoughts organized. Quite a while ago we talked about this issue. I cannot remember who it was that said her husband oftentimes writes letters to the editor and uses AI for editing purposes. That is the same vein in which I was using AI. My words were coming out in mixed order, with hanging phrases and misspellings. I cannot afford a personal secretary to keep my responses concise and to the point. AI helps me keep things in line.
  • Caissa Shipmate
    Every time we use predictive text in a Word document we are relying on Generative AI. Using Generative AI for editing and tone is no different than when I ask one of my staff to look at something I have written for tone or wordsmithing. Generative AI has become the new technological moral crisis. The written word, the printing press and the internet have all pre-dated it as moral technological crises. This one shall pass as well. Would Grammarly, which operates as a form of Generative AI, be considered off limits? Many Generative AI tools are important in the lives of people with a disability, a population with which I work. End of rant.
  • Gramps49 Shipmate
    Caissa wrote: »
    Every time we use predictive text in a Word document we are relying on Generative AI. Using Generative AI for editing and tone is no different than when I ask one of my staff to look at something I have written for tone or wordsmithing. Generative AI has become the new technological moral crisis. The written word, the printing press and the internet have all pre-dated it as moral technological crises. This one shall pass as well. Would Grammarly, which operates as a form of Generative AI, be considered off limits? Many Generative AI tools are important in the lives of people with a disability, a population with which I work. End of rant.

    Thank you for your response @Caissa.

    Even as I enter a response here, predictive text is being suggested. It is the way things are working now. I have a bit of dyslexia. I have never been able to spell correctly. Sometimes I just cannot understand what a person is saying. My mind wanders a lot. So I will ask AI to explain a response, as in the case of @Gamma Gamaliel's expressed concern. I will then write a note as to how I would like to respond to the post, and AI will help me write something along the lines of what I want to say. As I said, it is much like having a personal secretary coming up with the final draft.
  • Doublethink Admin, 8th Day Host
    edited April 17
    There is a statement of our current policy on generative AI on the rolling policy update thread.

    At the very least if you are using generative ai, you should say so in the post.

    I do not wish to be unkind, but @Gramps49, if you don't understand the post, how do you know what ChatGPT has given you in summary is correct?
  • Caissa Shipmate
    edited April 17
    How is generative AI being defined in the rolling policy update? As I say above, predictive text and Grammarly are both commonly used forms of Generative AI.

    ETA: I found it: Doublethink wrote: »

    We haven’t previously had a policy on referencing AI such as ChatGPT. However, as an interim ruling (we will discuss this further), AI programs should not be used as a source for serious discussion such as Purgatory or Epiphanies. (Used on any other forum for entertainment, please clearly indicate its use and ideally what you told it to get the output you are posting.)

    The reason for this is that the AI is simply using an algorithm to scrape the internet for content - there is no guarantee anything posted is factually accurate and we would prefer not to potentially spread misinformation.

    Doublethink, Admin
  • Arethosemyfeet Shipmate, Heaven Host
    I turn off predicted text on my devices.

    I appreciate the issue of accessibility but I don't buy it in recreational contexts, rather than practical ones. I've seen similar arguments to justify AI-generated "art" and it's bollocks. There is no "right" to participate in a discussion if you're not actually up to doing so.

    The problem, @Gramps49 , is that the posts read like LLM output, not your own thoughts. I'd much rather have your unvarnished thoughts than have them polished up into something that may or may not resemble them. If a shipmate had a personal secretary I'd think it extremely odd and not a little insulting for them to run their posts by them rather than engaging directly. If you're struggling to assemble thoughts in a particular thread it's ok to say that or simply to withdraw.
  • RooK Shipmate
    Using advanced editing tools to improve access and avoid miscommunication feels entirely reasonable. However, engaging with a minimal prompt and copy/pasting statistically standardized text devoid of understanding seems to be rather missing the point of "discussion".

    People are not as dazzled by quantity as LLM proponents seem to think.
  • RooK Shipmate
    Note: I use a lot of AI - for coding. Because I'm bad (very rusty) at coding, and because there are qualitative outputs to measure, and there is no harm in having unnecessary gibberish in the machinery as long as the outputs work. The comparative lack of meaningful evaluation of swaths of discussion text at generation is relevant.
  • Caissa Shipmate
    Arethosemyfeet wrote "There is no "right" to participate in a discussion if you're not actually up to doing so."

    That sounds incredibly ableist although I am sure that was not your intent.
  • Ruth Shipmate
    The problem, @Gramps49 , is that the posts read like LLM output, not your own thoughts.
    Indeed. After reading his pre-AI posts for years, it's easy for me to see now when he's using AI. They don't sound like him.
    I'd much rather have your unvarnished thoughts than have them polished up into something that may or may not resemble them. If a shipmate had a personal secretary I'd think it extremely odd and not a little insulting for them to run their posts by them rather than engaging directly. If you're struggling to assemble thoughts in a particular thread it's ok to say that or simply to withdraw.
    Agree. I'd add that the obvious and important difference between using AI and a personal secretary is that the secretary is a human being. At least we'd be still be conversing with a person, if not actually the person posting, if someone were using a secretary.
  • Arethosemyfeet Shipmate, Heaven Host
    Caissa wrote: »
    Arethosemyfeet wrote "There is no "right" to participate in a discussion if you're not actually up to doing so."

    That sounds incredibly ableist although I am sure that was not your intent.

    It would perhaps be better phrased as: you are free to participate in a discussion to the extent of your capacity, just as you are free to create art to the extent of your skills. Using LLM output in a discussion is no more valid than submitting AI "art" to an exhibition. It's not discriminatory to expect that people engage on the basis of their own efforts, not those of a content theft amalgamator.
  • Gramps49 Shipmate
    To the question of how I tell whether an AI summary is accurate: two ways. I first read the post. Then I ask AI what the post is trying to say, then I re-read the post to see if the AI summary is good enough. In the case of @Gamma Gamaliel's post, I missed what he was saying about visiting his wife's grave. I asked AI what Gamma Gamaliel was saying. AI explained how the Incarnation was so important to him. I re-read the post with that insight. In re-reading the post I sensed the emotion he has about visiting his wife's grave. I also thought of using the text from John about the importance of Jesus appearing behind closed doors. I asked AI to help me word a response that included recognition of the importance of the Incarnation, his stated feelings when visiting his wife's grave, and a response based on John's story. Then with a few minor edits I felt it was ready to publish.

    On the old Ship I remember the time when someone would cite a Wikipedia article about a subject and would be called out for it. Wikipedia is not accurate, they said; it does not cite its sources; it is clunky; and so on. But now people readily cite it here. While it does get some facts wrong, it submits itself to peer review, and sources are cited. Its technology has advanced to the point that people can accept it more.

    AI is not the end all of any response, but it is a tool for understanding and crafting a response.
  • Alan Cresswell Admin, 8th Day Host
    If anyone struggles to understand what someone else has said, I've never known anyone object to a question asking for clarity (after all, if we've been misunderstood that means people are reacting to something we didn't intend to say, and that makes any discussion meaningless). It does take more time, but good discussion shouldn't be rushed anyway. I can't see how anyone can be sure that an LLM understands what someone is saying any better, and it may still result in responding to something that wasn't intended.

    This isn't an academic institution, but most such organisations have developed policies regarding the use of LLMs and AI. This is the current position of my university, which includes a lot of wisdom that could be relevant in the less formal situation of online discussion as well as in assessed work by students. In particular, I'd say the guidance to question the outputs of AI and ensure that what's finally submitted is your own thoughts is very apt.
  • Ruth Shipmate
    edited April 17
    Why go through all that, @Gramps49? Why not just re-read someone's post several times, write a response, and then re-read that several times while you edit? (The Preview button is extremely handy; I use it multiple times for many posts, including this one.) Re-reading and figuring things out for yourself would be way better for your brain, and we would know we're talking to a human being. Your posts have become increasingly artificial, to the point where I'm not convinced that you have provided all of the ideas yourself, never mind actually written the posts. I don't believe for a moment that posts like these are really coming out of your mind, because you've never been this organized, and you don't know all this stuff:

    About scammers on the Seeing is not believing thread
    This post about feminized AI assistants, ironically enough, on the Brave New World thread
    The OP you "wrote" for the thread on whether we're entering a new dark age

    I get the impression that your concern about self-presentation has made you emphasize presentation to the detriment of self.
  • Caissa Shipmate
    What part of Gramps49 stating he has dyslexia are people missing? Various technologies, including Generative AI, can be important compensatory tools for individuals with such a diagnosis.
  • Arethosemyfeet Shipmate, Heaven Host
    Caissa wrote: »
    What part of Gramps49 stating he has dyslexia are people missing? Various technologies, including Generative AI, can be important compensatory tools for individuals with such a diagnosis.

    Presumably @Gramps49 has always been dyslexic and still managed to post prior to LLMs becoming widely available. What we're seeing goes way beyond just checking you've read or typed something correctly and into posting actual LLM generated content. Nobody's "missing" it, it's just not relevant to the use case being discussed.
  • Caissa Shipmate
    edited April 17
    I am glad you are an expert on the functional limitations of Gramps49's diagnosis and what accommodations best address it. Technology evolves. Your dismissal sounds like some of the professors who want to deny students reasonable academic accommodations.
    ETA: I think we cannot make a hard and fast rule about Generative AI. There are many, many different tools that use a form of Generative AI.
  • Arethosemyfeet Shipmate, Heaven Host
    Caissa wrote: »
    I am glad you are an expert on the functional limitations of Gramps49's diagnosis and what accommodations best address it.

    Having something write your posts for you is not an "accommodation" and in an academic context would be considered misconduct.
  • Caissa Shipmate
    Agreed; however, we are talking about a discussion board post. If you look at his first post in this thread, Gramps49 describes some of the functional limitations of his disability and how Generative AI assists him in compensating for them.
  • Arethosemyfeet Shipmate, Heaven Host
    Caissa wrote: »
    Agreed; however, we are talking about a discussion board post. If you look at his first post in this thread, Gramps49 describes some of the functional limitations of his disability and how Generative AI assists him in compensating for them.

    There comes a point, though, where it goes beyond assistance and becomes replacement, and that is my concern. What @Gramps49 uses to help him formulate his posts is none of my business, so long as they are actually his posts, but right now a number of his posts read as verbatim LLM output. If I want to read AI slop I'll prod an LLM myself; I have no interest in second-hand slop.
  • Ruth Shipmate
    Caissa wrote: »
    I am glad you are an expert on the functional limitations of Gramps49's diagnosis and what accommodations best address it.
    You don't even know if he has a diagnosis! It's not like they were testing a lot of kids for dyslexia in the US when he was young; legislation requiring equal education for "handicapped" children passed in 1975, and it was only after that that people running schools even started to think about dyslexic kids. It's possible that he was diagnosed as an adult. It's also possible that he's self-diagnosed -- I've seen a fair number of adults who aren't strong spellers say they must be dyslexic, when it would in fact be dysgraphia if their neurology comes into it at all, and it might just be that spelling in English is a pain in the ass.
    Caissa wrote: »
    Agreed, however, we are talking about a discussion board post If you look at his first post in this thread, Gramps49 describes some of the functional limitations of his disability and how Generative AI assists him in compensating for them.

    No, he's got whole posts he didn't actually write. And since these are just discussion board posts, all the more reason he should just write things himself.
  • It would be really easy for us to wind up with a discussion board full of AI responding to AI, instead of people to people.
  • Dafyd Hell Host
    I note also that if you enter a prompt of a couple of sentences and get the AI to write a couple of paragraphs this is much quicker than writing a couple of paragraphs oneself. Therefore it lends itself to dominating threads and boards with posts reflecting one's rough opinions.

    Writing a post and then getting the grammar and spelling checked is one thing and I can see that for some people it could be helpful. Expanding a couple of sentences into several paragraphs is quite another.
  • Gramps49 Shipmate
    I like the way the University of Glasgow tells its students how to use AI. I try to use it as the university suggests:
    AI can help you refine your wording
    Think of AI here as a kind of conversation partner: you can ask it to rework some writing to enhance clarity. You can ask it to refine the output to make the text simpler, more complex, longer, shorter, and so on.
    Do not enter entire essays or paragraphs; use the method to help improve the language of words, phrases or individual sentences. You must ask the tool to offer you an explanation of why it has made the suggestions that it has so that you can use this advice to learn how to improve your own writing the first time around in the future. AI tools commonly use more adjectives or adverbs than would be suitable for an academic essay, even if your prompt asks specifically for a response in an academic tone, so make sure that you are happy with any suggestions before adapting them into your own piece of work. Again: do not create entire paragraphs from individually AI-constructed sentences; the work must be your own.
    Importantly, however, you must remember that AI uses predictions of the most likely next word based on what has been found in the material used to train whichever tool you are using. This is all it is doing: it simulates human responses, and it does not have the true intelligence capacity to understand why things might be right, might be wrong, might be stronger, or might be weaker.
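    The "predictions of the most likely next word" in that guidance can be made concrete with a deliberately tiny sketch (a toy bigram counter in Python; the corpus and function names here are invented for illustration, and real LLMs are vastly more sophisticated, but the predict-from-training-text principle is the same):

    ```python
    from collections import defaultdict

    # Toy "language model": count which word follows which in a tiny
    # training corpus, then predict the most frequent follower.
    corpus = "the cat sat on the mat and the cat ate the fish".split()

    follower_counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        follower_counts[prev][nxt] += 1

    def predict_next(word):
        """Return the most common word to follow `word` in the corpus,
        or None if `word` never precedes anything there."""
        followers = follower_counts.get(word)
        if not followers:
            return None
        return max(followers, key=followers.get)

    print(predict_next("the"))  # "cat" follows "the" most often in this corpus
    ```

    As the quoted guidance says, nothing here "understands" the sentence; the output is purely a statistical echo of whatever text the model was trained on.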
  • chrisstiles Hell Host
    But that does not describe the post that triggered this topic.
  • jay_emm Kerygmania Host
    Dafyd wrote: »
    I note also that if you enter a prompt of a couple of sentences and get the AI to write a couple of paragraphs this is much quicker than writing a couple of paragraphs oneself. Therefore it lends itself to dominating threads and boards with posts reflecting one's rough opinions.

    Writing a post and then getting the grammar and spelling checked is one thing and I can see that for some people it could be helpful. Expanding a couple of sentences into several paragraphs is quite another.

    And if you weren't interested in the actual conversation (i.e. to troll), it would provide massive leverage. I'm a bit surprised we haven't really seen that. (Of course they would ignore any rules against AI)

    There is a time and season. I'd definitely be curious how an AI 'sermon' goes and what it reveals under analysis. I'm not sure how we'd manage that.

    As a general rule (not directed anywhere) for posting...
    I would say "do unto others", and never relying on "what they don't know won't hurt them", is a good start.
    If you would be unhappy were I doing what you are doing, then it's probably a bad idea.
  • The trouble with the U. of Glasgow rules for AI usage is there's nobody who's actually going to do that. I mean, go through a whole essay (or post) line by line, asking AI to improve it? How many lines would that be? How many queries? No, they're going to do just what Gramps did, judging by the prompts he inadvertently supplied us with. They're going to lay out a single query for the entire post/essay and call it good.

    And that will neither teach them anything nor result in a post with any appreciable personal content.
  • Gramps49 Shipmate
    The trouble with the U. of Glasgow rules for AI usage is there's nobody who's actually going to do that. I mean, go through a whole essay (or post) line by line, asking AI to improve it? How many lines would that be? How many queries? No, they're going to do just what Gramps did, judging by the prompts he inadvertently supplied us with. They're going to lay out a single query for the entire post/essay and call it good.

    And that will neither teach them anything nor result in a post with any appreciable personal content.


    In the responses I posted:

    The one dealing with Whimsical Christian: I had set down a three-sentence thought about where I wanted to go. I was going to use another biblical citation, but AI told me the citation would not work as I had intended. I then suggested Matthew 18 and AI showed the pros and cons of the citation. I then settled on making the point that Whimsical Christian had made certain mistakes in the personal message that prompted the thread. It actually took about ten minutes to get the message down to where I wanted it.

    With Gamma Gamaliel's response it took somewhat longer. After reading about three of his paragraphs I was lost--not because he was a poor writer--but because I am a poor reader. There have been many a time he has asked me to read for comprehension. I asked AI for a short summary of what he was saying. Once I saw what he was driving at according to AI, I re-read the post and I picked up on the grief Gamma Gamaliel had expressed. I knew I wanted to keep the tone even, and I thought of including the story from John to show how the resurrection event for me was not in the empty tomb but when Jesus appeared to the disciples behind closed doors. I also wanted to suggest that the story of the road to Emmaus was more important than the empty tomb. AI offered three possible responses using my ideas. I decided the Emmaus story was not necessary and dropped it. When I settled on what I wanted to say, I wrote what I wanted to say, then AI did the final edit.
  • In particular, I'd say the guidance to question the outputs of AI and ensure that what's finally submitted is your own thoughts is very apt.

    When you post something, or write a report, or whatever else you're doing, you claim and assert responsibility for its content. "My LLM made a mistake" is not an excuse: if you are, for example, a lawyer using an LLM to prepare a legal submission, and your LLM invents fictitious legal cases to cite, this should be no different from if you had deliberately lied in your submission.
  • Arethosemyfeet Shipmate, Heaven Host
    Gramps49 wrote: »
    The trouble with the U. of Glasgow rules for AI usage is there's nobody who's actually going to do that. I mean, go through a whole essay (or post) line by line, asking AI to improve it? How many lines would that be? How many queries? No, they're going to do just what Gramps did, judging by the prompts he inadvertently supplied us with. They're going to lay out a single query for the entire post/essay and call it good.

    And that will neither teach them anything nor result in a post with any appreciable personal content.
    After reading about three of his paragraphs I was lost--not because he was a poor writer--but because I am a poor reader.

    Have you considered using text-to-speech to assist with reading?
  • Boogie Heaven Host
    I've tried it - not on the Ship - just out of interest.

    But I found it made me sound less 'me'.

    More knowledgeable, but less real.
  • Doublethink Admin, 8th Day Host
    edited April 18
    [Admin]

    Interim ruling

    Whilst this discussion continues, any post created with generative AI involvement - I don’t mean spell check or predictive text, I mean software that creates all or part of the conceptual content - should contain the following sentence at the bottom of the post.
    This post was created with the assistance of generative AI: [insert name of tool]

    Per previous ruling on rolling policy update, AI should not be used for sourcing in serious discussion.

    Doublethink, Admin

    [/Admin]
  • Firenze Shipmate, Host Emeritus
    I've always prided myself on being the literary stylist, me. I have even shunned emojis (if you want to convey a tone, nuance etc USE WORDS).

    But I am not great at the abstract argument, I need things tied into concrete examples. There are whole realms of discourse where I sort of know what's being said, but the terms are somehow ungraspable. There's every reason to think AI would only increase the nebulosity.

    No, I'll go with Yeats -

    ...lie down where all ladders start
    In the foul rag and bone shop of the heart
  • pease Tech Admin
    edited April 18
    For a while now, I've thought of the Gramps49 account as being operated by two different entities - "Gramps49", and "AI-assisted Gramps49". They're not hard to distinguish - one of them sounds like a human being with a recognisable posting style, the other sounds increasingly like an automated system, with most of the human elements removed. I want to interact with human beings, not automated systems. I've had "discussions" with LLMs - I'm still reminded of what Douglas Adams wrote about the Sirius Cybernetics Corporation.

    But I think Caissa has a valid general point, in that this is a technology that enables people with impaired abilities to participate in this medium. And some of us do not find this an easy medium in which to operate. For me, this calls more for a judgment call than a hard and fast rule.
    Gramps49 wrote: »
    On the old Ship I remember the time when someone would cite a Wikipedia article about a subject and would be called out for it. Wikipedia is not accurate, they said; it does not cite its sources; it is clunky; and so on. But now people readily cite it here. While it does get some facts wrong, it submits itself to peer review, and sources are cited. Its technology has advanced to the point that people can accept it more.
    A completely inappropriate analogy - Wikipedia requires its entries to be written and edited by human beings.
  • Boogie Heaven Host
    Yes @pease Douglas Adams predicted a lot of AI conundrums. Often from the point of view of the robot.

    The Paranoid Android. If AI ever becomes sentient, I wonder if it too would be depressed for the lack of a human body?
  • Stanislaw Lem wrote some great science fiction in the 1960s about robots and AI, and Douglas Adams was clearly influenced by him and occasionally ‘borrowed’ his ideas.
    (As an example of AI hallucinating, Google AI has just told me that some people consider Stanislaw Lem to be the greatest sci-fi author since Douglas Adams!)
  • I'm fine with using AI for editing writing. But you still have to look over it like a spell check to make sure it's saying what you're wanting it to say.

    I'm fine with using AI to research stuff, but you still have to see if the sources are any good and they've interpreted them correctly.

    It's an excellent tool, but it still makes mistakes, so the responsibility falls on the person to make sure it hasn't got it wrong.

    We've been fine using Google for decades. And that can throw up rubbish.

    It's still up to the person to sort through the rubbish, whether it's Google or AI.

  • pease Tech Admin
    edited April 18
    We've been fine using Google for decades.
    That depends on your definition of "fine". (Have you heard of Surveillance Capitalism?)
    Google Search has been around for nearly 30 years:
    In 2007, a group of researchers observed a tendency for users to rely exclusively on Google Search for finding information, writing that "With the Google interface the user gets the impression that the search results imply a kind of totality. In fact, one only sees a small part of what one could see if one also integrates other research tools."

    In 2011, Google Search query results were shown by Internet activist Eli Pariser to be tailored to users, effectively isolating users in what he defined as a filter bubble. Pariser holds algorithms used in search engines such as Google Search responsible for catering "a personal ecosystem of information".

    In 2012, the US Federal Trade Commission fined Google US$22.5 million for breaching its agreement not to violate the privacy of users of Apple's Safari web browser.

    In August 2024, a US judge in Virginia ruled that Google held an illegal monopoly over Internet search and search advertising. The court found that Google maintained its market dominance by paying large amounts to phone-makers and browser-developers to make Google its default search engine.
  • Goodness me!

    As I was referenced by @Gramps49, I'll put in a brief appearance.

    Please don't take this the wrong way, but I prefer the 'real' Gramps49 to the AI-assisted version, although I have every sympathy with its use if it helps with dyslexia.

    Other posters have made a good case for its use in other contexts.

    I will also admit that my loquacity leads me to post long and convoluted screeds and that won't help.

    I will aim for concision in future.

    I think the interim ruling makes sense.
  • Nick Tamen Shipmate
    Ruth wrote: »
    The problem, @Gramps49 , is that the posts read like LLM output, not your own thoughts.
    Indeed. After reading his pre-AI posts for years, it's easy for me to see now when he's using AI. They don't sound like him.
    This is my take as well. And I’ll confess that when I see that a post reads as though it was written by AI and not by @Gramps49, I tend to stop reading and move on to the next post.

    I completely understand using forms of AI for assistance. But I’m afraid I don’t want to read an AI post or an AI opinion.


  • ChastMastr Shipmate
    pease wrote: »
    We've been fine using Google for decades.
    That depends on your definition of "fine". (Have you heard of Surveillance Capitalism?)
    Google Search has been around for nearly 30 years:
    In 2007, a group of researchers observed a tendency for users to rely exclusively on Google Search for finding information, writing that "With the Google interface the user gets the impression that the search results imply a kind of totality. In fact, one only sees a small part of what one could see if one also integrates other research tools."

    In 2011, Google Search query results were shown by Internet activist Eli Pariser to be tailored to users, effectively isolating users in what he defined as a filter bubble. Pariser holds algorithms used in search engines such as Google Search responsible for catering "a personal ecosystem of information".

    In 2012, the US Federal Trade Commission fined Google US$22.5 million for breaching its agreement not to violate the privacy of users of Apple's Safari web browser.

    In August 2024, a US judge in Virginia ruled that Google held an illegal monopoly over Internet search and search advertising. The court found that Google maintained its market dominance by paying large amounts to phone-makers and browser-developers to make Google its default search engine.

    Yikes! I knew about some but not all of this.

    I miss “Don’t be evil.” How much Google tried to follow it before, I don’t know.

    https://time.com/4060575/alphabet-google-dont-be-evil/
  • Bullfrog Shipmate
    edited April 19
    The idea of using AI as "adaptive equipment" intrigues me.

    If someone isn't skilled enough to redact the AI's output on their own, the idea of such a person using AI to replace their own writing is unsettling.

    It's actually easier, I think, to write your own words than to use an AI to put out words and then edit them appropriately. Editing, as I'm aware, is real work.
  • MaryLouise Shipmate, Host Emeritus
    edited April 19
    After a couple of years looking at supposedly edited texts sent to me by small academic publishers that have ended up as AI transcriptions (and I'm not talking about grammar or a few instances of predictive text, but pages and pages of arguments or descriptions), the rhetorical devices and stylistic quirks of AI 'reasoning' or definitions tend to jump out at me like sore thumbs.

    This article in the Guardian puts it quite well.

    Many academic texts from publishers based in the UK or Europe use a 'vendor' like Springer, and it is cost-effective to have content editing as well as reference checking and indexes done by freelance editors in Manila, Mumbai, Puducherry, Cape Town or Harare, with well-enough educated English-speakers who rely increasingly on AI. I'm not going to go into the use of DeepL, the Chinese DeepSeek or Anthropic's Claude for translation of texts from French-speaking Africa or Asia, except to say I used to be able to figure out dialect continuums and recognise consistency of tone and regional inflections, slang etc. Now I am often at a loss, because AI harvests colloquial text but makes it sound much smoother and more fluent than a human author could produce.
  • pease Tech Admin
    edited April 19
    Bullfrog wrote: »
    It's actually easier, I think, to write your own words than to use an AI to put out words and then edit them appropriately. Editing, as I'm aware, is real work.
    Maybe for you. For me, coming up with my own words in the first place is also real work.

    I can understand the attraction of doing what Gramps49 is doing. But for me, in the context of an informal discussion forum for human beings, it would be completely self-defeating. The question I'm now asking myself is whether I can live with other people doing it.
  • Boogie Heaven Host
    pease wrote: »
    Bullfrog wrote: »
    It's actually easier, I think, to write your own words than to use an AI to put out words and then edit them appropriately. Editing, as I'm aware, is real work.
    Maybe for you. For me, coming up with my own words in the first place is also real work.

    I can understand the attraction of doing what Gramps49 is doing. But for me, in the context of an informal discussion forum for human beings, it would be completely self-defeating. The question I'm now asking myself is whether I can live with other people doing it.

    It's a good question.

    But we'll have to, I think, as it will be happening more and more.

    It annoys me more with images, even photographs. They are so bland. They absolutely lack the human touch.
  • I spend much of my working life marking undergraduate essays and those written with AI assistance are usually obvious and quite bland. They also don’t score well as they rely on description and lack critical analysis, as well as accurate referencing.
    On a forum I expect to engage with human beings and their thoughts. Using AI tools to correct spelling and grammar is fine, and long established, but what is the point of posting an AI’s summary on a forum? As I say to my students when they quote large chunks of text, using other people’s words (or AI’s) doesn’t tell me anything about your knowledge and understanding.
  • pease wrote: »
    We've been fine using google for decades.
    That depends on your definition of "fine". (Have you heard of Surveillance Capitalism?)
    Google Search has been around for 30 years:
    In 2007, a group of researchers observed a tendency for users to rely exclusively on Google Search for finding information, writing that "With the Google interface the user gets the impression that the search results imply a kind of totality. In fact, one only sees a small part of what one could see if one also integrates other research tools."

    In 2011, Google Search query results were shown by Internet activist Eli Pariser to be tailored to users, effectively isolating users in what he defined as a filter bubble. Pariser holds algorithms used in search engines such as Google Search responsible for catering "a personal ecosystem of information".

    In 2012, the US Federal Trade Commission fined Google US$22.5 million for violating their agreement not to violate the privacy of users of Apple's Safari web browser.

    In August 2024, a US judge in Virginia ruled that Google held an illegal monopoly over Internet search and search advertising. The court found that Google maintained its market dominance by paying large amounts to phone-makers and browser-developers to make Google its default search engine.

    It is all rather horrifying, this data mining.

    Didn't know the stuff about Google and the court case and monopolies. Where are they now?

    Nothing wrong with paying phone makers and browser developers to make their search engine a default, but a monopoly should certainly ring alarm bells.
  • Boogie Heaven Host
    I spend much of my working life marking undergraduate essays and those written with AI assistance are usually obvious and quite bland. They also don’t score well as they rely on description and lack critical analysis, as well as accurate referencing.

    My friend does the same work as you, and she agrees.

    I think there will come a time when all assessment is in person or by video call.
  • chrisstiles Hell Host
    pease wrote: »
    Bullfrog wrote: »
    It's actually easier, I think, to write your own words than to use an AI to put out words and then edit them appropriately. Editing, as I'm aware, is real work.
    Maybe for you. For me, coming up with my own words in the first place is also real work.

    That's because in this context reading is thinking and writing is also thinking. The whole point of these forums is to engage with the 'real work' of other people. Thinking through someone else's post and working out your response to it is very different from being presented with three or four plausible responses and picking the one you like best.
    To my initial horror, my academic brother-in-law sent me an AI-generated response about something he could find online using his professorial privileges and which I couldn't.

    It made me realise what Shipmates like @Heavenlyannie and @MaryLouise are having to contend with professionally.

    To be fair, it was simply a summary of the gist of some documents I was trying to find and some rough background context, more of a springboard for further investigation than an 'end result', as it were.

    But it unnerved me to some extent.

    I don't want to see @Gramps49 hauled over the coals for using AI as an aid but I'd also like to see him stop posting undigested chunks of it in his posts.

    I'll accept my post went on and on and on - I'm too wordy.

    But the gist was clear.

    I took exception to his use of the term 'bunch of bones' to refer to the bodily remains of Christ on the grounds that this sounded disrespectful and sacrilegious.

    There was also a 'rawness' in my response due to bereavement and the emphasis my particular Christian tradition puts on mortal remains and 'matter' more generally.

    Some Shipmates responded sensitively to that. Others less so. They know who they are.

    I can't see that AI assistance was necessary in responding to my post - or rebutting it if people thought that necessary.

    That said, if Gramps49 felt the need to use AI to assist with dyslexia then fine.

    But I really don't like the idea of AI generated responses here aboard Ship, or anywhere else for that matter.