AI - dystopia or utopia?
Or somewhere in-between?
While offering advancements in automation and convenience, AI poses significant risks to society. One major concern is job displacement, as AI systems replace human workers in various industries, leading to widespread unemployment and economic inequality. Additionally, AI lacks human empathy and moral judgment, making it unsuitable for critical decision-making in areas like healthcare, justice, and governance. Furthermore, the increasing reliance on AI systems can exacerbate privacy issues, as these technologies collect vast amounts of personal data. Lastly, there is the looming threat of AI being misused for malicious purposes, creating new vulnerabilities and security risks.
AI has the potential to revolutionize many aspects of society, offering solutions to complex challenges. It can enhance productivity by automating routine tasks, allowing humans to focus on creative and strategic endeavors. In healthcare, AI helps with faster diagnosis, personalized treatment, and drug discovery. In education, it enables personalized learning, adapting to individual needs. AI also plays a critical role in environmental monitoring, aiding in sustainable practices. By advancing fields like robotics, transportation, and data analysis, AI has the power to improve quality of life, drive economic growth, and address global challenges more efficiently.
I used AI to write these short arguments - I asked for 100 words in favour and 100 words against. It took ten seconds, quicker than I could type it. Of course, it's not in my style of writing - but I'm sure that wouldn't take long to pick up.
Like cars or the Internet, AI will soon be everywhere.
What do you think?
What have you noticed?
How do you use AI now?
Comments
Actually, that's not necessarily the case. It would require a reasonable-to-large amount of your work for it to be able to augment its language model appropriately (at least at this time).
ISTR use of it was discouraged on the board.
I have Copilot bundled into Office 365 at work; so far it's been mainly useful in summarising meetings when I join them late (generally due to being double-booked).
Separately I have access to OpenAI and Claude, which are both occasionally useful in a more technical context. I find you need to be very specific about what you are providing and what you expect in order to get the most out of them, to the point where you need a reasonable amount of domain expertise - otherwise the chances of getting something useless go up exponentially. I find that my use is low enough that the credits I purchase last for months at a time; it's certainly a fraction of the cost of my Copilot subscription.
Offline LLMs - many of them open source - are nearly as good as many of the models I've mentioned.
I think the biggest issue is a financial one: I just don't see the potential uses of these technologies as justifying the valuations these companies currently have, and I expect a correction at some point.
In one sense this has been around for centuries - the Luddites, for instance, or more recently the typesetters for newspapers. Both were reactions to advances in technology. AI, though, encroaches on people's personal information and rights, which is much more dangerous.
In some instances AI can produce reasonable outputs. The arguments above for and against AI are not unreasonable, because in this case the input it's using is a vast number of comment pieces in newspapers, blogs, social media etc where people have described their hopes and fears, worries and expectations, and the occasional extreme unsubstantiated opinion is filtered out by everything else (I'd also note that quite a few of those opinion pieces, including the OP of this thread, include AI generated text as examples and that AI generated text will go into the input of future requests for similar text which creates interesting options for recursive feedback). But, I'd guess everyone who has read the OP would have an interest in AI (possible exceptions for our hard working hosts) and could probably write very similar for and against positions without using a Large Language Model (LLM) to do so.
The more specific and niche the request the less relevant material available and hence the greater chances of a single extreme position dominating the output. The reasonable concerns over copyright and personal information can make things worse - publishers who hold high standards are much more likely to maintain standards on personal information and more likely to restrict access to their outputs (eg: many scientific journal articles would be behind a pay wall), the effect of which is to reduce the volume of high quality material available to an LLM and provide greater emphasis to lower quality output written by people with little or no expertise on social media or blogs.
Well, even with machine learning systems of this kind, the problem is one of working out *why* the machine is making the determination it does, they aren't necessarily guaranteed to save expert time.
There might be a need to re-examine what testing and exams are for.
Yes, I think this is what needs to happen at all levels of education. I've been an English teacher for most of my adult life, and I'm glad I'm not teaching now because it would require such a massive re-thinking of how we teach and especially how we evaluate. Right now the emphasis among educators seems to be largely on "preventing" and "catching" students who "cheat" by using something like ChatGPT to do their assignments. But that's a bandaid solution that doesn't address the real issues.
Completing a multiple-choice exam in a course like high school English "prevents cheating" (at least, the chatbot kind of cheating), but it doesn't give the student anything like the language skills or mastery of the material that they will get by reading a piece of literature and writing a thoughtful, well-researched essay in response to it. But if the student can use ChatGPT to write an essay good enough to "fool" the teacher, what's the value of that essay assignment? It's obviously not the end product, because that can be produced with no effort and no learning on the student's part. (Just as, pre-ChatGPT, the same result could be produced by plagiarizing someone else's essay -- AI tools have only exacerbated an existing problem).
So, if we still think those skills are important for a student to develop, how do we teach them? And how do we evaluate them, other than by looking solely at a finished product?
These are bigger questions that I know are being addressed at some level in education, but not as quickly or thoroughly as they need to be in order to deal with the threats posed by new technology.
As an exercise, I asked Google Gemini two very closely-related questions (I asked for Lorentz beta and gamma for a particular particle with a particular momentum). It produced a page of plausible-looking computation for each question. It got beta right, but its calculation for gamma was wrong by about 50%. If I didn't know that it was telling me a stupid number, I probably wouldn't have noticed its error.
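For what it's worth, the relations are easy enough to check yourself with a few lines of code, which is how I spotted the error. A minimal sketch in Python - the proton and the 2 GeV/c momentum below are purely illustrative assumptions, not the actual particle and momentum I asked Gemini about:

```python
import math

# Illustrative values only: a proton with momentum 2.0 GeV/c
# (natural units; masses, momenta and energies in GeV).
m = 0.938272   # proton rest mass, GeV/c^2
p = 2.0        # momentum, GeV/c

gamma = math.sqrt(1.0 + (p / m) ** 2)   # gamma = sqrt(1 + (p / m c)^2)
beta = (p / m) / gamma                  # beta  = (p / m c) / gamma
E = gamma * m                           # total energy, E = gamma m c^2

print(f"beta = {beta:.4f}, gamma = {gamma:.4f}, E = {E:.4f} GeV")
# beta = 0.9053, gamma = 2.3545, E = 2.2092 GeV
```

If a chatbot's gamma disagrees with sqrt(1 + (p/mc)^2) by anything like 50%, something has gone badly wrong somewhere in its "working".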
They're also terrible for students with anxiety issues. Ask me how I know!!!
How does a student using an LLM to write an essay differ from a student who purchases an essay on one of those "we'll do your homework for you" websites, or one who submits their elder sibling's essay from last year, or otherwise submits someone else's work as their own?
I would argue that it doesn't.
It's cheaper and easier to ask ChatGPT than to do the other things, but I don't think it's fundamentally different.
From a pedagogical point of view, I like asking for intermediate steps on this kind of exercise - show me your outline, and your research, and your incremental progress. This also makes it harder to pass off some AI's work as your own (not impossible, but it requires a couple more steps than just pasting the question into ChatGPT).
Point out that it got the answer wrong, and it'll obligingly 'correct it'.
This is to my point above that you need to be a domain expert already in order to use GenAI systems like these for actual work.
Further, the affect adopted by these LLMs is one of being obliging and polite, which in turn triggers a bunch of cognitive biases when we evaluate the output.
It's wrong to think of these as exceptional cases (AI 'hallucinations'); generating output of this kind is simply what these systems *do*.
We've had massive engineering errors caused by humans using the wrong units (hello, Mars Climate Orbiter). I wonder when we'll have the first high profile failure caused by "I asked an AI for this computation, and it told me a load of nonsense"?
Yes, there are still many concerns about AI. As pointed out upthread, though, it can be a helpful tool in summarizing a lengthy amount of data. This is just a small example. I am known to write letters to the editor of the local papers. One paper allows for 300 words per letter, another paper allows for just 250 words per letter. I can pretty well stay under the 300, but it can be a bear to pare it down to 250. But I can enter the whole letter into AI and ask it to edit the letter to 250. Less than a second later, it filters out the filler words.
Here is an edited ChatGPT version of the above paragraph:
This can also be used in writing sermons. I try hard to keep my sermons down to 15 minutes. I know how many words I can speak in a minute. If I am finding my rough draft is over 15 minutes, I can ask ChatGPT to help me pare it down to x number of words. Boom, it's done.
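For anyone who would rather script this than paste a draft into the chat window each time, the same trim-to-a-word-count request can be made through the API. A rough sketch using the OpenAI Python library - the model name, file name and word limit here are placeholders, not anything the posters above actually used:

```python
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable

# Placeholder file name; read in your draft letter or sermon.
with open("letter.txt") as f:
    draft = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model; any chat model will do
    messages=[
        {"role": "system",
         "content": "You are an editor. Trim the text to the requested length "
                    "while keeping the writer's voice and meaning."},
        {"role": "user",
         "content": f"Edit this down to at most 250 words:\n\n{draft}"},
    ],
)

print(response.choices[0].message.content)
```

It's worth re-reading whatever comes back, though - the trimmed version tends to be flatter in tone than the original.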
The problem with both of these applications is that to the extent they summarise, they also tend to flatten language. It's a fairly subtle effect, but if you read a page or so of their output you'll have experienced the phenomenon: it's perfectly functional prose that is ultimately not particularly memorable.
For complex reasons, the direction in which current AI training is going is likely to make this issue worse rather than better.
It doesn't; that's why I made that point in my post - it's fundamentally no different from plagiarism except that it's even easier for students to access.
Many of the points/suggestions raised here about teaching and evaluation are the same ones I have been thinking of, in wondering what I would be doing if I were still in the classroom. I would hate to see the fear of AI-based plagiarism leading to a heavier reliance on high-stakes testing - I was about to make the same point Marvin did about anxiety. Most of the students I taught in my last two decades of teaching had high anxiety (because of the setting I was teaching in -- it was an adult-ed program, and a lot of them had left regular high school because their anxiety made it impossible for them to attend).
One of the other concerns with tools like ChatGPT is that it will simply make stuff up when it doesn't know. I tested it out with a few standard type high school English essay questions. If you're asking it about a well-known and widely used text, it will spit out five-paragraph essays on To Kill a Mockingbird or Lord of the Flies all day. Ask about something published in the last few years, or something published by a small press or lesser-known author (I tested it on my own books, with hilarious results) -- it won't say it doesn't have the information; it will confidently generate completely false character names and plot points, which sound plausible but have no connection to the actual book.
The funniest result of this so far was the student who confidently told one of my co-workers that the play Julius Caesar was written by Shakespeare for Julius Caesar and performed in his presence in Rome (Act 3 must have been quite a shock to Caesar). When her instructor patiently explained the historic timeline which made this impossible, the student continued to confidently assert that she was right, because ChatGPT had told her so.
Not to mention water. Musk's xAI has been accused of trying to bridge the energy supply gap by running unlicensed gas turbines next to their data centre:
https://www.reuters.com/business/environment/musks-xai-operating-gas-turbines-without-permits-data-center-environmental-group-2024-08-28/
I also remember how often and loudly we were told how personal computers were going to make us so much more efficient, and facilitate Work being moved more to the side so we could enjoy more of life. That did not happen, and arguably personal computing (now including social media and online gaming and betting) made things worse re: Work-Life balance, and then some. There's no reason to assume that AI won't just be a reiteration of that, and perhaps even more severely. I'm not one of these "Cyberdyne Systems is Skynet" types, but I'm more leery than hopeful about AI.
BTW, you realize that the paragraph ChatGPT edited for you is only six words shorter than your original? And, as @chrisstiles says, for the sake of losing those six words, the edited version has replaced your distinctive voice (e.g., “it can be a bear to pare it down”) with text that has all of the personality of the instructions on a shampoo bottle.
Seriously. Mind, so are the oral presentations that have been suggested here as a way to avoid cheating.
Also, because AI is bad with facts, most people who are not professional factcheckers or are otherwise experts on the topic in question cannot accurately use generative AI to write content. And if they do, it will still need an editor because while gen AI is excellent at smooth writing, that's not all there is to good writing. Good writing is targeted to its audience. Good writing uses a specific tone chosen based on context, audience, timing, expectations, and goals, among others. Gen AI seems to aim for the same tone, always.
In fact, I think that just as many of us learned to tune out banner ads, many of us will learn to tune out the bland style of AI writing. I think that if gen AI content does become used on a mass level, then people will (within a generation) learn to see and ignore it.
My spouse, @Bullfrog , compares it to Budweiser. It's very good at producing mediocrity in fast quantities.
*I am leaving aside whether it's ethical throughout this response. That is a different question and has multiple facets.
Pardon me if I ramble a bit...
I might also note that there are some places where - commercially - it might be very advantageous to pour out mediocrity onto a page and distribute it, kind of like how there's an industry for producing catchy, formulaic advertising jingles (another job that could be replaced by AI.) If preaching is just the ordinary filling of ritual space, then sometimes it's not a bad thing to preach a mediocre sermon occasionally. It depends on the needs of the situation. Not every written work has to be given the effort of a professional novelist.
Of course, if the purpose of the writing is to demonstrate the cognitive power or understanding of the author, then AI is clearly a dishonest abuse of the system, just as using a calculator would be cheating if you're being tested on your ability to mentally do math.
Hm, something funny here about how literature is more personal than arithmetic. But if you treat written communication as something akin to a mathematical exercise, maybe AI makes more sense.
This reminds me of another great metaphor: a seminary professor told us he didn't want "Minestrone" essays, minestrone being a soup made of "whatever comes to hand in the pantry" in Italian tradition. If you're trying to say something personal and/or important, you need to give your work more care than that.
And that's the rub with higher ed. A degree is the end, and the writing is just the means to the end. So if you're not careful, you denigrate the writing because it's only a means to getting a slip of paper that opens certain professional doors for you IRL.
The "consumer AI" products (including stand-alone products and add-ons to other applications) aimed at the general public are accessible to vast numbers of people, and whether those are accessed through subscription payments or from sites which include advertising there's a large revenue stream attached to them (even if each individual doesn't pay very much - maybe no more than the bother of dismissing ads every so often).
At the other end of the scale from the inequality arising from lack of access to AI at a national level, there is the inequality that affects individuals subject to AI-based decision-making, arising from biases embedded in AI systems themselves (typically through the use of biased training datasets, compounded by lack of diversity in AI development teams).
For example: Gender bias in AI-based decision-making systems: a systematic literature review
It doesn't differ. They are both cheating.
When I found myself marking an essay which had been purchased, I reported it. What happened subsequently was above my pay grade, but I know it would not have gone well for the student, as it was clearly academic misconduct.
My thought process whilst marking went like this:
Wow! This is a great essay! So much extra reading, beyond the set reading list! That's odd, the student has found some great sources, but hasn't cited anything from the set reading list? In fact, there's no evidence in this essay that the student has read the set core text. But of course, they must have read the core text! This paragraph is great! It's well beyond the basic curriculum! In fact it's so far beyond the curriculum, I'm going to have to google stuff myself before I can mark it. *googles phrase* Why is the second hit on google an essay writing company? *penny drops*
Now maybe I have marked dozens of purchased essays without noticing, but I'm guessing not. What sunk this essay was that it was a generic essay on the topic, rather than one which cited the specific sources provided to the student. From the outset, the essay had a different "feel" to the twenty-odd essays I had already marked. It was bad luck for the student that the phrase I googled took me straight to the company they had used, but that there was something "different" about the essay was obvious from the first couple of paragraphs.
As mentioned, an essay written by AI is no different to one bought from an essay mill and I’ve seen plenty of those; at least one of my students every year is caught by the university software. Usually the bought essays are submitted by previous students for ‘credits’ to get access to other essays - also an academic conduct offence. The usual clue here is that the essay doesn’t quite match the new question set but obviously the software we use identifies most of the offences.
So I don’t really consider managing the use of AI for essay writing much different from managing any other attempts at plagiarism. My university has always embraced technology, we are experts in online teaching, and our AI policy has been in place at least 18 months. We expect students to reference their use of AI, beyond simple grammatical improvements, and, where appropriate, provide appendices explaining how they made any use of AI their own work. Individual modules have also issued guidance.
In a recent interview, AARP asked Bill Gates this question:
Published interview here..
If Bill Gates is beginning to use it for medical terms, most likely it can be used as a great assistant.
We have an elderly woman in church who went totally deaf a few years ago. Her daughter loaded an AI translator on her phone so she can follow along with conversations and the sermon. She says she really likes it. Helps keep her in contact with her friends. My new Pixel AI, called Gemini, says it can do that even for me.
But even the most powerful AI-empowered voice recognition would leave Scots stuck in the lift.
Guess what? Google Translate, Open L, and Semantic all claim their AI can translate Scottish into English and respect the nuances of the language and cultural differences. The train has already left the station.
Microsoft is one of the larger investors in OpenAI.
So? It does not surprise me he would promote AI, but he does give a good example of how AI can be helpful to older people trying to understand what their doctor is saying.
Yeah, I'm not sure I'd assume Bill Gates is competent outside of his own field, which is being so spectacularly rich that he's basically beyond accountability.
He might be one of the better ones, but that category of human is objectively terrifying, even if they mean well. Pardon the video game reference, but Miquella became a villain in Elden Ring for a good reason.
I'm not even sure of that; I suspect it's just that Microsoft's predatory behaviour was reined in somewhat in the 90s and since then there are bigger bogeymen.
Google Translate does not have Scots as a language - I just checked. It has Scots Gaelic and it really struggles with some of the constructions in that - I often have to use my knowledge of the language to work out what it's getting at. It's far better than nothing for me as a learner, reading social media posts and news articles in Gaelic, but for anything of any importance you'd need a human expert in the language.
Scots and Scots Gaelic are separate languages.
Update - I just tried pasting Scots into Google translate. It doesn't recognise it and doesn't translate it.
(Doric is Aberdeenshire and North East Scots)
(I might have known it wouldn’t know Doric. Spellcheck doesn’t like it either!)
Ye ken I could work oot the Scots wi'oot the English translation
It's quite good translating Welsh to English but shockingly bad the other way.
https://www.bbc.com/news/articles/cq5ggew08eyo