AI: Going backwards from the Enlightenment

This is pretty crude thinking on my part so I am looking to this thread to refine and change it.
The popular notion is that the basis for the acceptance of knowledge changed with the Enlightenment, from building on traditional wisdom to looking for experiential verification. That is crude and I hope others will refine it, but it will do for the OP.
AI works by using large language models, which means it uses texts that already exist. It does not reason, but puts together facts as they appear in those texts, and it has no external reference outside the texts. The model works by favouring whatever there is most of.
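To make "favouring whatever there is most of" concrete, here is a minimal toy sketch in Python (the three-sentence corpus and the bigram-counting approach are invented for illustration; real large language models are vastly more sophisticated). The "model" simply counts which word most often follows each word in its training text and then always emits the most common continuation.

```python
# Toy illustration only: a bigram counter that always picks the most frequent
# continuation it saw in training. The corpus below is invented.
from collections import Counter, defaultdict

corpus = (
    "the sun rises in the east . the sun sets in the west . "
    "the moon rises in the east ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word, steps=6):
    """Repeatedly emit the continuation seen most often in training."""
    out = [word]
    for _ in range(steps):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # favour the majority
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # echoes whatever pattern dominated the corpus
```

Whatever the corpus contained most of is what comes back out; nothing in the process checks the output against the world.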
The most pervasive forms of AI are those that have scraped, and continue to scrape, the Internet. These can be, and are, used to create new content for the Internet. The rate at which AI can produce "new" material far outstrips the rate at which humans can verify and produce new material. Inevitably, AI will eventually be producing the vast majority of the material AI is trained on.
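A rough toy sketch of that feedback loop (all numbers invented; the squared weighting below just stands in for "favouring whatever there is most of" and is not any real training procedure): a "model" learns word frequencies from a corpus, generates a new corpus by sampling with a bias toward the majority, and the next model is then trained on that output. The minority material shrinks generation by generation.

```python
# Toy feedback loop: train on a corpus, generate a new corpus from the model,
# retrain on the generated corpus, repeat. Numbers are invented.
import random
from collections import Counter

random.seed(0)
corpus = ["common"] * 90 + ["rare"] * 10  # human-written mix: 10% is "rare"

for generation in range(1, 6):
    counts = Counter(corpus)
    words = list(counts)
    weights = [counts[w] ** 2 for w in words]  # over-weight the majority
    # Sample a "new" corpus from the model, then train the next model on it.
    corpus = random.choices(words, weights=weights, k=100)
    print(f"generation {generation}: {dict(Counter(corpus))}")
```

Within a few generations the "rare" material has effectively vanished from the training data.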
With our ever-increasing reliance on AI we are de facto entering an age in which the process of knowledge acquisition is that of the pre-Enlightenment.
Comments
The nonsense that AI often generates does demonstrate the value of the Enlightenment - there really is no substitute for actually testing stuff and finding out for real.
Well, at least this incarnation of the technology being packaged as 'AI' uses large language models.
Both these things are true; and at least some of the 'surprising' results that people tout are caused by the AI ingesting, during training, far more material of a certain type than a human could in a lifetime (think of those stories where the AI 'says' it wants to break out and destroy its owners etc). Whether this is of use or not is another issue entirely.
And while I do sometimes lean a bit into the pomo frame of things, and I do appreciate the tools of deconstruction, used wisely, I can appreciate that fear.
But it is ironic as all hell to see the current ascendant right wing going all in for letting a mindless software program basically generate the same postmodern nightmare that they claimed they were so scared of in my youth.
It's like the anti-relativist fundamentalists just went and turned themselves inside out in a matter of 20 years. It's absolutely wild.
I had a professor in undergrad who was a fervent environmentalist and a preacher's kid, and he told us about a conversation he had with a woman. She explained to him that she completely understood climate change on a "knowing" level, but for her and other people, "belief" feels a lot more powerful than "knowledge." And people really like feeling powerful.
I think this explains some of the appeal of anti-science conspiracy theories. There's something entrapping about empiricism, like playing a game when the plays are predictable enough that you know you're screwed. And you can feel it in your bones. But you hate that feeling, so you'll fight it even to your last breath... poignant as you're lying on the hospital bed gasping for air as COVID-19 finishes you off. I think that story illustrates it all too well.
So, "live free or die" can take some pretty dark turns sometimes. I'm not an atheist but sometimes I can see how this drives people to logical positivism. I'd rather just play video games and read fantasy novels, that way my power fantasies are very clearly fantastic, no need to confuse them with reality.
Sigh, I do feel some regret typing this around anyone who feels described by this relatable content. I try not to be a jerk. But I'm also sick and tired of watching the world take abuse from people who can't handle the pain of reality.
Which might explain why I find these people so infuriating.
A genuine commitment to objective truth, rather than a merely rhetorical deployment of the words, means believing that other people's perspectives on their own lives are more likely to be valid than one's own perspective on their lives.
Karl Popper made the point that nothing can really be 'proved', as objections and different points of view can always be invented and considered.
For him, a fact or theory with any claim to objectivity must be subject to disproof. Even this is a slippery, though perhaps less slippery concept. I was a career research scientist and tutor and tried to teach the scientific method to all my students. As a Popperian I would ask students, "Can you think of an experiment that would disprove the hypothesis?" And, broadening the argument, "Under what circumstances would you question such and such (perhaps fondly held) belief?"
I really believe Mrs RR loves me. This is emotional truth for me and what I live by. I wouldn't want to think of a situation that would test this to the limits!
I don't think the two situations are equivalent; there is a kind of direction to post-modern deconstruction, whereas LLMs are producing a somewhat controlled logorrhea - a better parallel might be 'automatic writing' (which leads to an entirely separate bugaboo).
The slipperiness I first came across was that it led to needing a "proof of disproof".
The most informative book on scientific method(s) (including Feyerabend's Against Method) is A. F. Chalmers' What Is This Thing Called Science?, which I first read years ago in its first edition, but which is now completely changed in its 4th edition, which I need to read again.
I think building on traditional wisdom in the pre-Enlightenment is vastly better than relying on AI.
I do not think that it leads to logorrhea; I think it creates a simulacrum of reality that is shaped by finance and populism. In so doing it has removed the search for truth from the agenda. I actually read the post-modern debate differently and see this as what they were warning would happen.
For those hoping that building logical processes into the algorithms will re-introduce truth: logic is unfortunately only able to produce truths that are contained within the premises it starts with. The ancients were very good at logic, and logic had the Sun going around the Earth because of the premises it started with.
Truth can only be approached when there is something outside the system that can correct it. Over-reliance on any self-perpetuating system will therefore fail.
Golly, I read these many years ago and rated Chalmers' book very highly indeed. I didn't know there was a fourth edition. Oh dear! Now retired, I don't think I've enough time to read it again.
One (serious) question: has any chatbot thingy come up with a good joke - or any joke - yet?
I suppose what I'm asking is, can such programs show human type creativity?
I'm reminded of the old James Thurber cartoon of two ladies chatting at a party, looking at an obviously 'egghead' type standing awkwardly apart. One lady says to the other: "Oh, him? No good talking to him, he's a scientist. He doesn't know anything except facts."
Except that IME scientists, engineers and related types often have a strong sense of humour - albeit often rather offbeat.
I'd pose things rather differently myself - STEM types also like and take an interest in the arts; I'm generally far more taken aback by scientific illiteracy and lack of interest in science amongst arts/humanities types than by philistinism amongst scientists. YMMV. But I don't count myself that unusual in being a computer engineer by day and a musician and TTRPG player and designer by night. And that aquarium is not a science project either.
I don't think these two things are in conflict. Locally that's exactly what it leads to, because the primary considerations become the prompt and the model's own propensity to create text. On a non-local level what starts to dominate is the beige quality of the language involved and the model's tendency to context-drift. If it is a simulacrum, it's one with plenty of visible seams that will also put you to sleep.
This is an argument that can be developed further, but on its own it has a fairly ready rebuttal - it's perfectly possible to introduce new truths, in the shape of new data about the physical world, into the process.
Frankie: "A simple one would suffice."
"A simple one!" wailed Arthur.
"Yeah," said Zaphod with a sudden evil grin, "you'd just have to program it to say 'What?' and 'I don't understand' and 'Where's the tea?' - who'd know the difference?"
These are the forms of AI the general public is most familiar with, sure. But there are reasoning models of AI that are different from general-purpose large language models. There are well-informed people who think we will see AGI, artificial general intelligence, within the next few years: the people who run the various big outfits that work on AI don't agree on how long it will take, but they all give near-term timeframes. Lots of independent people agree with them. (NY Times reporting on all this here, gift link.)
So I think the real question is, what are we going to do when there are AI models who can do a lot of the work we think only people can do?
[A separate issue, but people have been using experimental verification for at least as long as we've had agriculture -- traditional wisdom didn't teach people how to develop grains with bigger seeds.]
That's what the article claims, and in itself it is mostly an exercise in boosterism; of course everyone working at the AI outfits claims a revolution is around the corner - their future wealth rather depends on it.
Yes like:
"lots of independent experts — including Geoffrey Hinton and Yoshua Bengio, two of the world’s most influential A.I. researchers, and Ben Buchanan, who was the Biden administration’s top A.I. expert — are saying similar things. So are a host of other prominent economists, mathematicians and national security officials."
So, yet more AI experts, and then a number of people in unrelated fields. You know who is missing from that list? Neuroscientists. Philosophers of mind. Anyone who actually studies intelligence.
The idea that we are going to replicate a feat we barely understand - mostly accidentally - seems somewhat optimistic. In some ways I can understand it, it's tech's last roll of the dice, after all other ideas have been played out, and an exercise in justifying the latest bubble (as well as avoiding the consequences of large scale copyright infringement). So we have the CEO of one of the AI companies claiming that we are going to have artificial super-intelligence in a year or two ("believed we were a year or two away from having “a very large number of A.I. systems that are much smarter than humans at almost everything.”").
It's worth looking at the financial aspects of the bets the largest companies are placing:
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-a-financial-definition-of-agi-report/
Why have a financial definition rather than a technical one, unless you feel a technical one is unlikely to be achievable?
I think the real question is how we face down the coming onslaught from the capitalist class trying to use the promise of AI to drive down pay and working conditions for everyone else.
Are such people saying the AI experts are wrong?
This reminds me of what my spouse's stepdad said, he's a retired science prof.
Models like AI aren't particularly new in science, and for certainly limited tasks, we've known for years that they are extremely useful and can certainly accomplish things that humans could not.
But you have to limit them to those tasks. The more generalized "Let's put AI into everything" that we're seeing now sounds like what he'd call "garbage in, garbage out." Without a strictly limited task, AIs can basically get deranged very quickly.
My fear would be that AI would be used as a cheap "alternative" by humans who didn't want to invest in doing things right. So, no, it's not as good as human labor at doing a task, but it'll be very good for saving money and doing a half-assed job for a task master who simply does not care about the outcome!
I also suspect that there may be a sudden explosion of "AI in everything" and then a recession as we figure out that, actually, AI shouldn't be in a lot of things because people don't want it there. That's a little scary if you consider it going into government, where it's harder to change things once they're set into place. Once again, this American faction insists that government can't do anything right and - once elected - strives to prove it for all space and time.
Yeah, they have been for a while, although it's become a bit hard to find via Google because there's so much pro-AI work, and much of it is in academic papers behind paywalls. Linguists like Pinker and Emily Bender have commented on the LLM approach and its inherent limitations, and philosophers have made a number of critiques, starting with Dreyfus and John Searle and, more recently, Daniel Dennett (though he mostly focused on the destructive consequences of fakery).
Yeah, I think this is the more immediate danger, especially for any task that can be reduced to a degenerate form of data processing.
So there are two debates here; I am sorry for mixing the two up earlier.
Yes, uncanny valley and several errors, not to mention making things up. We are not there yet.
I remember a time some years back when a common criticism of AI was that it was all just algorithms. Someone would say "a computer can never play chess" and then someone else would invent a chess-playing program, and the first person would say, "oh, that's just a lot of algorithms." I think there have been several iterations of this since then.
Nowadays the question is whether there is anything more to human consciousness than a collection of very sophisticated algorithms. (I would say there is more, but who am I?)
Ezra Klein, who does, said on his podcast a few weeks ago:
Just because we're seeing a lot of AI-generated crap on the internet doesn't mean there isn't anything meaningful happening anywhere.
The article says why they have a financial definition:
This doesn't support the idea that a technical one is not likely to be achievable.
Maybe. Depends on whether people buy it. Depends on what kind of writing we're talking about. I could see technical writing being something you'd have a machine do.
There was a time when every scrap of clothing came from cloth woven or knitted by hand. Some of it was beautiful, but I don't believe for a moment that all of it was, or that all of it was serviceable, or that everyone who spent time knitting socks enjoyed it. And barring civilizational collapse, we're not going back to a time when cloth was all handmade. I'm fine buying and wearing socks made by machines, and I suspect I'll be fine with a lot of the writing machines do.
If you want to go on to argue about Microsoft's reason for wording the agreement so that it can use OpenAI's models for 'a decade or more', then that sounds like they don't believe the technical timelines above. It took Google around a decade and a half to get to $100bn in cumulative profits; are we really saying that being able to create better-than-human intelligences - at scale - is only going to reap the same economic rewards as a search engine?
As you asked; I have paid access and have used Anthropic, Gemini and OpenAI, the latter two via both public and private instances. I have recently got access to Deepseek, but haven't really made much use of it so far.
I think I'd find it more surprising if this weren't the case - most code is rather predictable and repetitive. And it's already in an easily-digestible ready-to-eat format.
The analogy that comes to mind are aeroplane autopilots, though I suspect more thought goes into those. Either way, the human beings are there to keep an eye on things and handle the unpredictable stuff. (But people care rather less about coders dozing off.)
(Unless AI-generated code is an exception to the rule that all code has to be debugged - and I doubt it.)
Is that the dark web? If so, please stay safe…
For those who are naive: all AI uses large amounts of data to "train" the model. Most of the popular AI bots available have gathered their data from large portions of the web/internet, including the less salubrious areas. This includes not just "adult" content but pirated copies of copyrighted material.
I am at present avoiding AI, basically because some of my experiences of it are making me feel sick. I mean that almost literally, it is a similar sensation to mild vertigo. I cannot quite fathom what is causing the feeling but I am steering clear.
I relate to that—when I first explored one of those generative text things, I played with it for a couple of hours, and… I felt like what you describe. Like when I looked at regular text I felt weird too for a while. Like I didn’t feel like any text was “real” at all, and it was horrible. I don’t like the uncanny valley vibe I get from visual AI either.
Oh this?
https://en.wikipedia.org/wiki/Darknet