Brave New World, the AI Version.


Comments

  • chrisstileschrisstiles Hell Host
    Hugal wrote: »
    How exactly is it akin to praying in tongues?

    I can understand that someone might want to use an ai to pray in the hope it will allow them to articulate what they mean better than they currently can themselves

    Praying in tongues is not a current 'tool' I would use, but in my more charismatic younger days this would describe something of the motivation to do so.

    The big difference is that when using tongues you don't actually know what they are articulating - which is why when used publicly they are accompanied by their interpretation, and when used privately "consolation" operates at a different register to the cognitive.

    Which is why I used 'sub-liturgical' above, it's something like reading out a liturgical prayer, except the composition operates outside the historical church and without prayerful consideration.

    It's kitsch.
  • HugalHugal Shipmate
    Hugal wrote: »
    How exactly is it akin to praying in tongues?

    It was this sentence from Doublethink that brought the analogy to my mind

    I can understand that someone might want to use an ai to pray in the hope it will allow them to articulate what they mean better than they currently can themselves

    Praying in tongues is not a current 'tool' I would use, but in my more charismatic younger days this would describe something of the motivation to do so.

    As someone who uses tongues I see what you mean. As pointed out it is more complicated than that but could be one way of looking at it.
  • CaissaCaissa Shipmate
    An article in Nature reports on a survey of 5,000 scientists about their attitudes to using AI in various scenarios and stages of scientific writing. The short version of the results is that the scientific community holds diverse opinions. https://www.nature.com/articles/d41586-025-01463-8?
  • Lamb ChoppedLamb Chopped Shipmate
    pease wrote: »
    Interestingly (at least to me), though it’s scraped tons of my phrasing and played kaleidoscope games with it, it has managed to completely miss out on the resurrection of Jesus—and that’s tied in to everything i write. How did it miss the resurrection?
    One rather mundane answer is that there are lots of expositions of Ezekiel 23:20 out there. Most of them mention betrayal, few of them mention the resurrection. Why would an averaging machine do any different?

    It depends on what it’s averaging. I really doubt somehow that there are plenty of devotions based on this particular text available for the scraping. There are certainly plenty of my own on other topics, but the resurrection is at least mentioned in them all—sometimes glancingly, sometimes at length. I would have expected an averaging process to have noticed such a feature. But apparently not.
    It may be something I need to improve in my writing, if AI can miss it still.
  • chrisstileschrisstiles Hell Host
    pease wrote: »
    Interestingly (at least to me), though it’s scraped tons of my phrasing and played kaleidoscope games with it, it has managed to completely miss out on the resurrection of Jesus—and that’s tied in to everything i write. How did it miss the resurrection?
    One rather mundane answer is that there are lots of expositions of Ezekiel 23:20 out there. Most of them mention betrayal, few of them mention the resurrection. Why would an averaging machine do any different?

    It depends on what it’s averaging. I really doubt somehow that there are plenty of devotions based on this particular text available for the scraping.

    This is one of those cases where people - in general - have a major problem with scale. What exactly do you mean here? There's some indication that OpenAI - at the very least - used much of Project Gutenberg, there's circumstantial evidence that they did a lot more:

    https://qz.com/shadow-libraries-are-at-the-heart-of-the-mounting-cop-1850621671

    So available for scraping, on your evidence of a Google search? Maybe not. Available on the internet? Maybe. How about available in a corpus of Lutheran, Reformed etc. writings from several centuries, plus a good selection of near-current literature?

    At that point the very specificity of the query is going to drive it down certain pathways.
  • RuthRuth Shipmate
    pease wrote: »
    That last sentence rather underlines the thinking behind this decision. If there was lots of money to be made from the commercial exploitation of the internet, the US government wanted it to be made by US companies.

    Of course the powers that be think about the money in advance. They don't consider other things, like people's well-being.
  • Lamb ChoppedLamb Chopped Shipmate
    pease wrote: »
    Interestingly (at least to me), though it’s scraped tons of my phrasing and played kaleidoscope games with it, it has managed to completely miss out on the resurrection of Jesus—and that’s tied in to everything i write. How did it miss the resurrection?
    One rather mundane answer is that there are lots of expositions of Ezekiel 23:20 out there. Most of them mention betrayal, few of them mention the resurrection. Why would an averaging machine do any different?

    It depends on what it’s averaging. I really doubt somehow that there are plenty of devotions based on this particular text available for the scraping.

    This is one of those cases where people - in general - have a major problem with scale. What exactly do you mean here? There's some indication that OpenAI - at the very least - used much of Project Gutenberg, there's circumstantial evidence that they did a lot more:

    https://qz.com/shadow-libraries-are-at-the-heart-of-the-mounting-cop-1850621671

    So available for scraping, on your evidence of a Google search? Maybe not. Available on the internet? Maybe. How about available in a corpus of Lutheran, Reformed etc. writings from several centuries, plus a good selection of near-current literature?

    At that point the very specificity of the query is going to drive it down certain pathways.

    I'd have to out myself to answer this, so I'll end it here.
  • chrisstileschrisstiles Hell Host
    edited May 15
    pease wrote: »
    Interestingly (at least to me), though it’s scraped tons of my phrasing and played kaleidoscope games with it, it has managed to completely miss out on the resurrection of Jesus—and that’s tied in to everything i write. How did it miss the resurrection?
    One rather mundane answer is that there are lots of expositions of Ezekiel 23:20 out there. Most of them mention betrayal, few of them mention the resurrection. Why would an averaging machine do any different?

    It depends on what it’s averaging. I really doubt somehow that there are plenty of devotions based on this particular text available for the scraping.

    This is one of those cases where people - in general - have a major problem with scale. What exactly do you mean here? There's some indication that OpenAI - at the very least - used much of Project Gutenberg, there's circumstantial evidence that they did a lot more:

    https://qz.com/shadow-libraries-are-at-the-heart-of-the-mounting-cop-1850621671

    So available for scraping, on your evidence of a Google search? Maybe not. Available on the internet? Maybe. How about available in a corpus of Lutheran, Reformed etc. writings from several centuries, plus a good selection of near-current literature?

    At that point the very specificity of the query is going to drive it down certain pathways.

    I'd have to out myself to answer this, so I'll end it here.

    We don't really need to know the specific prompt in order to draw some inferences. Suffice to say that one concept at the heart of these systems is how texts are represented, and how they judge texts to be similar (where similarity is judged on distance between two points in an n-dimensional space). If you want to read up this is a good introduction (requires some basic maths intuition):

    https://stackoverflow.blog/2023/11/09/an-intuitive-introduction-to-text-embeddings/
    https://www.datacamp.com/blog/what-is-text-embedding-ai

    So it's processed a whole bunch of things including your devotionals, Lutheran commentaries down the centuries, Lutheran devotionals likewise, Luther's Table Talk, Continental Reformed commentaries, the stories of Bo Giertz etc. (just restricting it to things that may be nearby in terms of similarity scores).

    It also has specific handling for requests for something to be 'in the style of', so assuming your work is represented inside the system in a way attributable back to your name, it's taken some of your work, reused the phrasing from it, and then fitted in an interpretation drawn from the material that's most "similar" (in the sense above) to your work.

    I'm hand-waving and oversimplifying here (partly because concepts aren't really modelled inside the neural network in the way they would be in our brains), but hopefully you get the idea.
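    A minimal sketch of the distance idea above: each text becomes a vector of numbers, and "similarity" is the cosine of the angle between two vectors. The three-dimensional vectors here are invented for illustration (real embedding models use hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: cosine of the angle between them."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional "embeddings" of three texts.
devotion = [0.9, 0.1, 0.3]
commentary = [0.8, 0.2, 0.4]
recipe = [0.1, 0.9, 0.0]

print(cosine_similarity(devotion, commentary))  # close to 1: "nearby" texts
print(cosine_similarity(devotion, recipe))      # much lower: unrelated texts
```

    With realistic embeddings, a query about one specific passage lands near whatever corpus texts sit closest in that space, which is the "driving it down certain pathways" effect described above.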
  • peasepease Tech Admin
    edited May 15
    pease wrote: »
    One rather mundane answer is that there are lots of expositions of Ezekiel 23:20 out there. Most of them mention betrayal, few of them mention the resurrection. Why would an averaging machine do any different?
    It depends on what it’s averaging. I really doubt somehow that there are plenty of devotions based on this particular text available for the scraping. There are certainly plenty of my own on other topics, but the resurrection is at least mentioned in them all—sometimes glancingly, sometimes at length. I would have expected an averaging process to have noticed such a feature. But apparently not.
    It may be something I need to improve in my writing, if AI can miss it still.
    It's not that AI missed it; it's that it didn't know it needed to look for it in the first place. AI doesn't "know" what the words "devotion", "style" and "topic" mean. It has just learned, through billions of iterations (including copious human feedback), how to generate text in response to those terms when it sees them in a user request. The words, order of words, and punctuation (etc.) in its response are determined first by all the English-language texts it has scraped, then by those that relate to requests involving generating "a devotion", "in the style of" and "on the topic of". And then by whatever of your texts it has access to, and (because the AI's developers have spent a lot of time on this feature) how those affect its response to the "in the style of" part of the request. It doesn't make any attempt to analyse your texts as a human being would, or to identify and apply common themes.

    In short, if you want it to include something about the resurrection, you need to tell it to include something about the resurrection. You could also try telling it to analyse your texts to identify common themes, and include them in its response, and see what happens.
  • The_RivThe_Riv Shipmate
    As much as I can, I'm letting A.I. pass me by. Our daughter is keen to try to stay abreast of it as far as seems necessary to or advantageous for her, and my son seems unimpressed, though he seems unimpressed about most things right now, but I'm very glad to let it go. I'll be working for a while yet, but I do plan on as analogue a retirement as possible.
  • Gramps49Gramps49 Shipmate
    The_Riv wrote: »
    As much as I can, I'm letting A.I. pass me by. Our daughter is keen to try to stay abreast of it as far as seems necessary to or advantageous for her, and my son seems unimpressed, though he seems unimpressed about most things right now, but I'm very glad to let it go. I'll be working for a while yet, but I do plan on as analogue a retirement as possible.

    I can't see how you can let AI pass you by. Every new browser upgrade depends on it. Unless you are still using the BlackBerry Obama favored when he became president, every new smartphone released in the last couple of years is loaded with an AI search feature.
  • stetsonstetson Shipmate

    Gramps49 wrote: »
    The_Riv wrote: »
    As much as I can, I'm letting A.I. pass me by. Our daughter is keen to try to stay abreast of it as far as seems necessary to or advantageous for her, and my son seems unimpressed, though he seems unimpressed about most things right now, but I'm very glad to let it go. I'll be working for a while yet, but I do plan on as analogue a retirement as possible.

    I can't see how you can let AI pass you by. Every new browser upgrade depends on it. Unless you are still using the BlackBerry Obama favored when he became president, every new smartphone released in the last couple of years is loaded with an AI search feature.

    Well, my pandemic-era smartphone has a shitload of features I don't use. Is there something about the AI features that compels you to use them?

    The only time I see AI-derived content of my own accord is in the AI boxes that now come up when you do a search on Google. I usually read them, but I wouldn't make a decision or even try to settle a friendly argument using them alone.
  • peasepease Tech Admin
    Ruth wrote: »
    pease wrote: »
    That last sentence rather underlines the thinking behind this decision. If there was lots of money to be made from the commercial exploitation of the internet, the US government wanted it to be made by US companies.
    Of course the powers that be think about the money in advance. They don't consider other things, like people's well-being.
    It's sobering to consider just how much harm companies like Meta have been willing to facilitate in pursuit of their commercial exploitation of fear and hate in user-generated content. It could all have been very different. (But not in a dystopian Brave New World.)

    It seems to me that generative AI (which is mainly what we're talking about) isn't so much a tool in search of a problem as a product in search of monetisable services.
    Gramps49 wrote: »
    I can't see how you can let AI pass you by. Every new browser upgrade depends on it. Unless you are still using the BlackBerry Obama favored when he became president, every new smartphone released in the last couple of years is loaded with an AI search feature.
    The obtrusive inclusion of generative AI features on our devices is to persuade us to use those services. What's rather less obvious is the way in which we are all paying for those services.

    Meanwhile, AI is already being sold to the governments, institutions and businesses which mediate our lives, and is being used to make automated (and semi-automated) decisions about us.
  • RuthRuth Shipmate
    The_Riv wrote: »
    I'll be working for a while yet, but I do plan on as analogue a retirement as possible.
    I hear that, and I love not having to stare at a computer most of the day now that I'm retired. But an all-analogue life seems difficult to achieve today, and probably not a good idea anyway. At the church I worked for, I noticed that the members in early retirement in the early 2000s who refused to buy computers, go online, get email accounts, etc, tended to be a lot crankier 20 years later than the members who had kept up with basic technological changes.

    I don't see a use case for AI right now in anything I'm doing in retirement, but if my nephew or any of my younger friends tells me there's something I need to get on board with, I plan to take that seriously.
  • I don’t actively use AI but as an online university lecturer I am regularly exposed to it; it is generally easy to spot and seldom improves students’ grades for a whole variety of reasons.
    Working online has broadened my horizons so much that I intend to work part-time well into my retirement so I can easily access the history databases available via my university. If AI becomes useful for this I will use it but at the moment I can’t see it adding anything of value to my knowledge base.
  • The_RivThe_Riv Shipmate
    Gramps49 wrote: »
    The_Riv wrote: »
    As much as I can, I'm letting A.I. pass me by. Our daughter is keen to try to stay abreast of it as far as seems necessary to or advantageous for her, and my son seems unimpressed, though he seems unimpressed about most things right now, but I'm very glad to let it go. I'll be working for a while yet, but I do plan on as analogue a retirement as possible.

    I can't see how you can let AI pass you by. Every new browser upgrade depends on it. Unless you are still using the BlackBerry Obama favored when he became president, every new smartphone released in the last couple of years is loaded with an AI search feature.

    Oh, that’s okay. SoF and a number of music and cycling accounts represent most of my online activity, along with mostly work-related email and texts. I’m buying books, CDs and vinyl, and already have a flip phone identified for when my current iPhone is no longer supported. I’m thoroughly uninterested in AI.
    Ruth wrote: »
    The_Riv wrote: »
    I'll be working for a while yet, but I do plan on as analogue a retirement as possible.

    I don't see a use case for AI right now in anything I'm doing in retirement, but if my nephew or any of my younger friends tells me there's something I need to get on board with, I plan to take that seriously.

    People said that about FB, Twitter, Instagram, etc., and all of that turned out to have terribly toxic realities. People said that computers and automation were going to make us hyper efficient and free. Can’t say that’s worked out as predicted either. So, I dunno. I’m probably becoming a bit of a curmudgeon, but okay.
  • W HyattW Hyatt Shipmate
    I found this article (gift link) titled "The Day Grok Lost Its Mind" interesting in relation to this thread. It discusses how AI companies try to control and direct the output of a fully trained LLM.

    What interests me in particular about AI is how willing many people seem to be to think it is, or could become, self-aware like a person.

    What concerns me is that there seems to be a general inclination to respond to it as though it actually understands what it comes up with.

    What scares me is how readily corporations and governments seem to trust it for making decisions.

    Achievements in AI are truly amazing, but AI does not think the way a person does.
  • ArethosemyfeetArethosemyfeet Shipmate, Heaven Host
    W Hyatt wrote: »

    Achievements in AI are truly amazing, but AI does not think the way a person does.

    I wouldn't be so sure - we've all met individuals capable of spewing out plausible sounding bullshit while demonstrating no thought or understanding of the topic at hand.
  • W HyattW Hyatt Shipmate
    Yes, but that just serves to illustrate my point: we can expect a person to be aware of what they are doing.
  • DalSegnoDalSegno Shipmate
    W Hyatt wrote: »

    Achievements in AI are truly amazing, but AI does not think the way a person does.

    I wouldn't be so sure - we've all met individuals capable of spewing out plausible sounding bullshit while demonstrating no thought or understanding of the topic at hand.

    A large language model (LLM), like ChatGPT, is simply stringing one word after another, at each step picking a word that is probable as the next one given the context of what has come before. The fact that it produces material that appears sensible is amazing.

    The problem is that human beings anthropomorphise absolutely anything. We therefore impute to the LLM a whole bunch of things that are not true. It is not thinking. It knows nothing about the topics it is so eloquently writing about. I like to compare this to my cat. My cat does a whole bunch of stuff that is instinctive, but I like to impute human-like thinking to its instinctive behaviour.

    So when a human spews out plausible-sounding bullshit, we know that there is a mind behind it. When an LLM spews out plausible-sounding bullshit, there is no mind behind it, there is just a probability machine churning out the most plausible sequence of words given the context.
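    The "probability machine" can be caricatured with a tiny bigram model: it only ever looks at the previous word, whereas a real LLM conditions on a long context through a learned neural network, but the generation loop is the same in spirit. The training text is invented for illustration:

```python
import random
from collections import Counter, defaultdict

# Toy "probability machine": count which word follows which in a training text,
# then generate by repeatedly sampling a likely next word. No understanding anywhere.
training_text = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    counts = follows[prev]
    if not counts:  # nothing ever followed this word in training
        return None
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# Generate a plausible-looking word sequence, one sampled word at a time.
word, sentence = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))
```

    Every word it emits was "plausible given the context", yet nothing in the program knows what a cat or a mat is.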
  • ArethosemyfeetArethosemyfeet Shipmate, Heaven Host
    DalSegno wrote: »

    So when a human spews out plausible sounding bullshit, we know that there is a mind behind it. When an LLM spews out plausible sounding bullshit, there is no mind behind it, there is just a probabilty machine churning out the most plausible sequence of words given the context.

    What I'm suggesting, only partially facetiously, is that the evidence to support a mind behind the human generated bullshit is somewhat limited.