AI - dystopia or utopia?

Boogie, Heaven Host
Or somewhere in-between?

While offering advancements in automation and convenience, AI poses significant risks to society. One major concern is job displacement, as AI systems replace human workers in various industries, leading to widespread unemployment and economic inequality. Additionally, AI lacks human empathy and moral judgment, making it unsuitable for critical decision-making in areas like healthcare, justice, and governance. Furthermore, the increasing reliance on AI systems can exacerbate privacy issues, as these technologies collect vast amounts of personal data. Lastly, there is the looming threat of AI being misused for malicious purposes, creating new vulnerabilities and security risks.

AI has the potential to revolutionize many aspects of society, offering solutions to complex challenges. It can enhance productivity by automating routine tasks, allowing humans to focus on creative and strategic endeavors. In healthcare, AI helps with faster diagnosis, personalized treatment, and drug discovery. In education, it enables personalized learning, adapting to individual needs. AI also plays a critical role in environmental monitoring, aiding in sustainable practices. By advancing fields like robotics, transportation, and data analysis, AI has the power to improve quality of life, drive economic growth, and address global challenges more efficiently.

I used AI to write these short arguments - I asked for 100 words in favour and 100 words against. It took ten seconds, quicker than I could type it. Of course, it's not in my style of writing - but I'm sure that wouldn't take long to pick up.

Like cars or the Internet, AI will soon be everywhere.

What do you think?

What have you noticed?

How do you use AI now?

Comments

  • Boogie wrote: »
    I used AI to write these short arguments - I asked for 100 words in favour and 100 words against. It took ten seconds, quicker than I could type it. Of course, it's not in my style of writing - but I'm sure that wouldn't take long to pick up.

    Actually, that's not necessarily the case. It would require a reasonable-to-large amount of your work for it to be able to adapt its language model appropriately (at least at this time).
    What have you noticed?

    ISTR use of it was discouraged on the board.

    I have Copilot bundled into Office 365 at work; so far it's been mainly useful in summarising meetings when I join them late (generally due to being double-booked).

    Separately I have access to OpenAI and Claude, which are both occasionally useful in a more technical context. I find you need to be very specific about what you are providing and what you expect in order to get the most out of them, to the point where you need a reasonable amount of domain expertise - otherwise the chances of getting something useless go up exponentially. I find that my use is low enough that the credits I purchase last for months at a time; it's certainly a fraction of the cost of my Copilot subscription.

    Offline LLMs - many of them open source - are nearly as good as many of the models I've mentioned.

    I think the biggest issue is a financial one: I just don't see the potential uses of these technologies as justifying the valuations these companies currently have, and I expect a correction at some point.
  • Hugal, Shipmate
    There is also the rights problem: copyright, and rights to your own image and data. Does AI infringe these? It is trained on data from all sorts of places.
    In one sense this has been around for centuries - the Luddites, for instance, or more recently the newspaper typesetters; both upheavals were due to advances in technology. AI, though, encroaches on people’s personal information and rights, which is much more dangerous.
  • Alan Cresswell, Admin, 8th Day Host
    It should be remembered that AI is a very broad term, and it may not be helpful to discuss AI as though it's a single thing. The sort of AI that scans through medical scans to highlight potential issues, passing those to a doctor to assess, is a tool that massively increases the usefulness of scans simply because they all get looked at by the machine and leaves the human expert only the task of reviewing anything important. The generative AI that takes a large input of material and generates something else, something like ChatGPT, is a different issue because here there is limited control of the input material. A medical diagnosis tool is trained on data sets provided by medical experts with outputs reviewed by medical experts; ChatGPT scours anything online it can access and produces something rarely checked by an expert.

    In some instances AI can produce reasonable outputs. The arguments above for and against AI are not unreasonable, because in this case the input it's using is a vast number of comment pieces in newspapers, blogs, social media etc. where people have described their hopes and fears, worries and expectations, and the occasional extreme unsubstantiated opinion is filtered out by everything else (I'd also note that quite a few of those opinion pieces, including the OP of this thread, include AI-generated text as examples, and that AI-generated text will go into the input of future requests for similar text, which creates interesting options for recursive feedback). But I'd guess everyone who has read the OP would have an interest in AI (possible exceptions for our hard-working hosts) and could probably write very similar for and against positions without using a Large Language Model (LLM) to do so.

    The more specific and niche the request, the less relevant material is available, and hence the greater the chance of a single extreme position dominating the output. The reasonable concerns over copyright and personal information can make things worse - publishers who hold high standards are much more likely to maintain standards on personal information and more likely to restrict access to their outputs (eg: many scientific journal articles would be behind a paywall), the effect of which is to reduce the volume of high quality material available to an LLM and give greater emphasis to lower quality output written by people with little or no expertise on social media or blogs.
  • It should be remembered that AI is a very broad term, and it may not be helpful to discuss AI as though it's a single thing. The sort of AI that scans through medical scans to highlight potential issues, passing those to a doctor to assess, is a tool that massively increases the usefulness of scans simply because they all get looked at by the machine and leaves the human expert only the task of reviewing anything important.

    Well, even with machine learning systems of this kind, the problem is one of working out *why* the machine is making the determination it does; they aren't necessarily guaranteed to save expert time.
  • Boogie, Heaven Host
    I can see the time when all assessments at university are written or oral exams due to AI-generated coursework.
  • Boogie wrote: »
    I can see the time when all assessments at university are written or oral exams due to AI-generated coursework.

    There might be a need to re-examine what testing and exams are for.
  • Trudy, Shipmate, Host Emeritus
    Boogie wrote: »
    I can see the time when all assessments at university are written or oral exams due to AI-generated coursework.

    There might be a need to re-examine what testing and exams are for.

    Yes, I think this is what needs to happen at all levels of education. I've been an English teacher for most of my adult life, and I'm glad I'm not teaching now because it would require such a massive re-thinking of how we teach and especially how we evaluate. Right now the emphasis among educators seems to be largely on "preventing" students from "cheating" by using something like ChatGPT to do their assignments, and on "catching" those who do. But that's a band-aid solution that doesn't address the real issues.

    Completing a multiple-choice exam in a course like high school English "prevents cheating" (at least, the chatbot kind of cheating), but it doesn't give the student anything like the language skills or mastery of the material that they will get by reading a piece of literature and writing a thoughtful, well-researched essay in response to it. But if the student can use ChatGPT to write an essay good enough to "fool" the teacher, what's the value of that essay assignment? It's obviously not the end product, because that can be produced with no effort and no learning on the student's part. (Just as, pre-ChatGPT, the same result could be produced by plagiarizing someone else's essay -- AI tools have only exacerbated an existing problem).

    So, if we still think those skills are important for a student to develop, how do we teach them? And how do we evaluate them, other than by looking solely at a finished product?

    These are bigger questions that I know are being addressed at some level in education, but not as quickly or thoroughly as they need to be in order to deal with the threats posed by new technology.
  • Alan Cresswell, Admin, 8th Day Host
    I think that most options for determining how well students have understood a topic and can put it into practice (which IMO is the main purpose of assessment) that are not prone to cheating require a lot more work from teachers. Giving a presentation to the class and answering questions would work (after accounting for greater or lesser ability at speaking in public), but it takes time to a) come up with suitable assignments and b) listen to all the talks. Plus, if these are talks to the whole class (rather than just to a small number of assessors) then everyone in the class would need a different assignment (which adds complications in maintaining parity of assessment). The traditional exam, where the answers are written out in an isolated space without access to the internet, isn't perfect (often these only assess ability to remember rather than necessarily understand).
  • In some instances AI can produce reasonable outputs.

    As an exercise, I asked Google Gemini two very closely-related questions (I asked for Lorentz beta and gamma for a particular particle with a particular momentum). It produced a page of plausible-looking computation for each question. It got beta right, but its calculation for gamma was wrong by about 50%. If I didn't know that it was telling me a stupid number, I probably wouldn't have noticed its error.
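    A quick way to sanity-check output like this by hand (a minimal sketch in natural units, with an illustrative particle rather than whichever one was actually tested): the quantities are linked, so an error in one shows up in the others.

    \[ E = \sqrt{p^2 + m^2}, \qquad \beta = \frac{p}{E}, \qquad \gamma = \frac{E}{m}, \qquad \beta\gamma = \frac{p}{m} \]

    For a hypothetical proton (m ≈ 0.938 GeV) with p = 1 GeV, this gives E ≈ 1.371 GeV, β ≈ 0.729 and γ ≈ 1.462. Since βγ must equal p/m ≈ 1.066, a correct β alongside a γ that's off by 50% fails the cross-check immediately.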
  • The traditional exam, where the answers are written out in an isolated space without access to the internet, isn't perfect (often these only assess ability to remember rather than necessarily understand).

    They're also terrible for students with anxiety issues. Ask me how I know!!!
  • Trudy wrote: »
    But if the student can use ChatGPT to write an essay good enough to "fool" the teacher, what's the value of that essay assignment?

    How does a student using an LLM to write an essay differ from a student who purchases an essay on one of those "we'll do your homework for you" websites, or one who submits their elder sibling's essay from last year, or otherwise submits someone else's work as their own?

    I would argue that it doesn't.

    It's cheaper and easier to ask ChatGPT than to do the other things, but I don't think it's fundamentally different.

    From a pedagogical point of view, I like asking for intermediate steps on this kind of exercise - show me your outline, and your research, and your incremental progress. This also makes it harder to pass off some AI's work as your own (not impossible, but it requires a couple more steps than just pasting the question into ChatGPT).
  • In some instances AI can produce reasonable outputs.

    As an exercise, I asked Google Gemini two very closely-related questions (I asked for Lorentz beta and gamma for a particular particle with a particular momentum). It produced a page of plausible-looking computation for each question. It got beta right, but its calculation for gamma was wrong by about 50%. If I didn't know that it was telling me a stupid number, I probably wouldn't have noticed its error.

    Point out that it got the answer wrong, and it'll obligingly 'correct it'.

    This is to my point above that you need to be a domain expert already in order to use GenAI systems like these for actual work.

    Further, the affect adopted by these LLMs is one of being obliging and polite, which in turn leads to a bunch of cognitive biases when we evaluate the output.

    It's wrong to think of these as exceptional cases (AI 'hallucinations'); generating output of this kind is simply what these systems *do*.
  • This is to my point above that you need to be a domain expert already in order to use GenAI systems like these for actual work.

    We've had massive engineering errors caused by humans using the wrong units (hello, Mars Climate Orbiter). I wonder when we'll have the first high profile failure caused by "I asked an AI for this computation, and it told me a load of nonsense"?
  • Gramps49, Shipmate
    I remember that, not too long ago, Wikipedia was discouraged as a source of information. Now it is often used, maybe not as a primary source, but as a way of finding where to look for a source.

    Yes, there are still many concerns about AI. As pointed out upthread, though, it can be a helpful tool in summarizing a lengthy amount of data. This is just a small example. I am known to write letters to the editor of the local papers. One paper allows for 300 words per letter, another paper allows for just 250 words per letter. I can pretty well stay under the 300, but it can be a bear to pare it down to 250. But I can enter the whole letter into AI and ask it to edit the letter to 250. Less than a second later, it filters out the filler words.

    Here is an edited ChatGPT version of the above paragraph:
    Yes, there are still many concerns about AI. However, as mentioned earlier, it can be a useful tool for summarizing large amounts of data. This is just one example. I often write letters to the editor of local newspapers. One paper allows up to 300 words per letter, while another limits it to just 250. I can usually stay under the 300-word limit, but trimming it down to 250 can be challenging. Thankfully, I can input the entire letter into an AI tool and ask it to edit the text to fit the 250-word limit. In less than a second, it efficiently removes unnecessary words.

    This can also be used in writing sermons. I try hard to keep my sermons down to 15 minutes. I know how many words I can speak in a minute. If I am finding my rough draft is over 15 minutes, I can ask ChatGPT to help me pare it down to x number of words. Boom, it's done.

  • Gramps49 wrote: »
    I can pretty well stay under the 300, but it can be a bear to pare it down to 250. But I can enter the whole letter into AI and ask it to edit the letter to 250. Less than a second later, it filters out the filler words.

    This can also be used in writing sermons. I try hard to keep my sermons down to 15 minutes. I know how many words I can speak in a minute. If I am finding my rough draft is over 15 minutes, I can ask ChatGPT to help me pare it down to x number of words. Boom, it's done.

    The problem with both of these applications is that to the extent they summarise, they also tend to flatten language. It's a fairly subtle effect, but if you read a page or so of their output you'll have experienced the phenomenon: it's perfectly functional prose that is ultimately not particularly memorable.

    For complex reasons, the direction in which current AI training is going is likely to make this issue worse rather than better.
  • Caissa, Shipmate
    I teach a course to first year students designed to give them the tools to be successful university students. The essay is still one of the main means of communication in academia. We build up to the final essay with a smaller goal assignment, a critical thinking assignment and a source evaluation assignment. The sources they find and use in the source evaluation assignment need to be integrated into the final essay. I have standard sit-down tests during the term designed to use some of the test-taking tools I introduce them to. For the final, I give them a take-home exam with application questions. I hope my students do not use generative AI. It's quite possible some of them do. Ultimately, they are going to have to develop critical thinking and writing skills at some point. Often, the kind of generative AI use you can get away with is as much work as doing it yourself, if not more.
  • HarryCH, Shipmate
    The answer to the title question is "yes". As for the last question in the OP: most of us are using or interacting with AI already (and have for some time), often without knowing it.
  • Trudy, Shipmate, Host Emeritus
    Trudy wrote: »
    But if the student can use ChatGPT to write an essay good enough to "fool" the teacher, what's the value of that essay assignment?

    How does a student using an LLM to write an essay differ from a student who purchases an essay on one of those "we'll do your homework for you" websites, or one who submits their elder sibling's essay from last year, or otherwise submits someone else's work as their own?

    I would argue that it doesn't.

    It doesn't; that's why I made that point in my post - it's fundamentally no different from plagiarism except that it's even easier for students to access.

    Many of the points/suggestions raised here about teaching and evaluation are the same ones I have been thinking of, in wondering what I would be doing if I were still in the classroom. I would hate to see the fear of AI-based plagiarism leading to a heavier reliance on high-stakes testing - I was about to make the same point Marvin did about anxiety. Most of the students I taught in my last two decades of teaching had high anxiety (because of the setting I was teaching in -- it was an adult-ed program, and a lot of them had left regular high school because their anxiety made it impossible for them to attend).

    One of the other concerns with a tool like ChatGPT is that it will simply make stuff up when it doesn't know. I tested it out with a few standard high school English essay questions. If you're asking it about a well-known and widely used text, it will spit out five-paragraph essays on To Kill a Mockingbird or Lord of the Flies all day. Ask about something published in the last few years, or something published by a small press or lesser-known author (I tested it on my own books, with hilarious results) -- it won't say it doesn't have the information; it will confidently generate completely false character names and plot points, which sound plausible but have no connection to the actual book.

    The funniest result of this so far was the student who confidently told one of my co-workers that the play Julius Caesar was written by Shakespeare for Julius Caesar and performed in his presence in Rome (Act 3 must have been quite a shock to Caesar). When her instructor patiently explained the historic timeline which made this impossible, the student continued to confidently assert that she was right, because ChatGPT had told her so.
  • Hugal, Shipmate
    This reminds me of the spellcheck problems. We overcame those, but this is a different beast.
  • la vie en rouge, Purgatory Host, Circus Host
    I recently proofread (i.e. corrected the English) a learned report by some very intelligent people on a subject I had not even considered: the effects of AI on global inequality. AI is extremely expensive to develop and mostly in the hands of a small number of companies based in high income countries. Its expansion is not necessarily an unmitigated good for low and middle income countries.
  • Alan Cresswell, Admin, 8th Day Host
    AI is also a massive energy user, which is putting a strain on electricity supplies in places where the computing power is located, to the extent that some providers of these services are contemplating building nuclear power stations just to supply their own needs.
  • AI is also a massive energy user

    Not to mention water. Musk's xAI has been accused of trying to bridge the energy supply gap by running unlicensed gas turbines next to their data centre:

    https://www.reuters.com/business/environment/musks-xai-operating-gas-turbines-without-permits-data-center-environmental-group-2024-08-28/
  • Caissa, Shipmate
    One minute ago I received an email inviting me to a 60-minute workshop tomorrow on ChatGPT for Excel. Am I being chased around the internet?
  • The_Riv, Shipmate
    edited January 13
    I tend to think that people my age and older (I'm 54) are more prone to be wary of and look for pitfalls in something as potentially sweeping as AI. Those of us with ample pre-Internet memories may be the wrong generation to ask about such a broad judgment as the OP requests. My life is spent in the Humanities, so there is, I think, something hunkered down within me (akin to what Philip Larkin describes in "Toads") that's generally opposed to AI on that score. We need, IMO, as a species and a society, more humanity, not less, and Culture is far, far more than information.

    I also remember how often and loudly we were told how personal computers were going to make us so much more efficient, and facilitate Work being moved more to the side so we could enjoy more of life. That did not happen, and arguably personal computing (now including social media and online gaming and betting) made things worse re: Work-Life balance, and then some. There's no reason to assume that AI won't just be a reiteration of that, and perhaps even more severely. I'm not one of these "Cyberdyne Systems is Skynet" types, but I'm more leery than hopeful about AI.
  • Gramps49 wrote: »
    I remember it was not too long ago Wikipedia as a source of information was discouraged. Now, it is often used, maybe not as a primary source, but as a way of finding where to look for a source.

    Yes, there are still many concerns about AI. As pointed out upthread, though, it can be a helpful tool in summarizing a lengthy amount of data. This is just a small example. I am known to write letters to the editor of the local papers. One paper allows for 300 words per letter, another paper allows for just 250 words per letter. I can pretty well stay under the 300, but it can be a bear to pare it down to 250. But I can enter the whole letter into AI and ask it to edit the letter to 250. Less than a second later, it filters out the filler words.

    Here is an edited ChatGPT version of the above paragraph:
    Yes, there are still many concerns about AI. However, as mentioned earlier, it can be a useful tool for summarizing large amounts of data. This is just one example. I often write letters to the editor of local newspapers. One paper allows up to 300 words per letter, while another limits it to just 250. I can usually stay under the 300-word limit, but trimming it down to 250 can be challenging. Thankfully, I can input the entire letter into an AI tool and ask it to edit the text to fit the 250-word limit. In less than a second, it efficiently removes unnecessary words.

    This can also be used in writing sermons. I try hard to keep my sermons down to 15 minutes. I know how many words I can speak in a minute. If I am finding my rough draft is over 15 minutes, I can ask ChatGPT to help me pare it down to x number of words. Boom, it's done.
    I would rather make the editing choices myself, especially in something like a sermon. And to be honest, I’d much rather hear a sermon written by someone who made those choices themself.

    BTW, you realize that the paragraph ChatGPT edited for you is only six words shorter than your original? And, as @chrisstiles says, for the sake of losing those six words, the edited version has replaced your distinctive voice (e.g., “it can be a bear to pare it down”) with text that has all of the personality of the instructions on a shampoo bottle.

  • Gwai, Epiphanies Host
    The traditional exam, where the answers are written out in an isolated space without access to the internet, isn't perfect (often these only assess ability to remember rather than necessarily understand).

    They're also terrible for students with anxiety issues. Ask me how I know!!!

    Seriously. Mind, so are the oral presentations that have been suggested here as a way to avoid cheating.
  • Gwai, Epiphanies Host
    edited January 13
    Speaking as a professional editor and improver of text: generative AI is currently decisively mediocre as a writer or editor. It can produce boring text that kind of answers the question. It will not be nearly as persuasive or interesting as a good writer. If one is a bad writer with no goal of being better, it's probably useful.* However, I would ask why one is bothering to write if one has no interest in improving.
    Also, because AI is bad with facts, most people who are not professional fact-checkers or otherwise experts on the topic in question cannot accurately use generative AI to write content. And if they do, it will still need an editor, because while gen AI is excellent at smooth writing, that's not all there is to good writing. Good writing is targeted to its audience. Good writing uses a specific tone chosen based on context, audience, timing, expectations, and goals, among other factors. Gen AI seems to aim for the same tone, always.

    In fact, I think that just as many of us learned to tune out banner ads, many of us will learn to tune out the bland style of AI writing. I think that if gen AI content does become used on a mass level, then people will (within a generation) learn to see and ignore it.

    My spouse, @Bullfrog , compares it to Budweiser. It's very good at producing mediocrity in fast quantities.

    *I am leaving aside whether it's ethical throughout this response. That is a different question and has multiple facets.
  • Gwai wrote: »
    My spouse, @Bullfrog , compares it to Budweiser. It's very good at producing mediocrity in fast quantities.
    A perfect way to put it!

  • Caissa, Shipmate
    I share Bullfrog's analysis of Budweiser, as any good Canuck would. For all of the reasons you state above, Gwai, professors are getting better and better at spotting essays that rely in whole or in part on generative AI. One of my jobs at the university is that of student advocate, which includes representing students charged with academic offenses at appeal hearings. The cases put forward by profs suggesting that students have used AI seem to be getting stronger. The threshold required for a student to be found to have committed an academic offense is the balance of probabilities. At that threshold, more students are being found to have committed an academic offense. In the earlier days, when cases relied almost exclusively on AI detectors, students were generally not found to have committed academic offenses. Most of the research to date on AI detectors says they have too many challenges to be used as evidence of AI usage.
  • Bullfrog, Shipmate
    Glad to see my high quality metaphor is being savored like a fine microbrew.

    Pardon me if I ramble a bit...

    I might also note that there are some places where - commercially - it might be very advantageous to pour out mediocrity onto a page and distribute it, kind of like how there's an industry for producing catchy, formulaic advertising jingles (another job that could be replaced by AI). If preaching is just the ordinary filling of ritual space, then it's not a bad thing to preach a mediocre sermon occasionally. It depends on the needs of the situation. Not every written work has to be given the effort of a professional novelist.

    Of course, if the purpose of the writing is to demonstrate the cognitive power or understanding of the author, then AI is clearly a dishonest abuse of the system, just as using a calculator would be cheating if you're being tested on your ability to mentally do math.

    Hm, something funny here about how literature is more personal than arithmetic. But if you treat written communication as something akin to a mathematical exercise, maybe AI makes more sense.

    This reminds me of another great metaphor, a seminary professor told us he didn't want "Minestrone" essays, Minestrone being a soup made of "whatever comes to hand in the pantry" in Italian tradition. If you're trying to say something personal and/or important, you need to give your work more care than that.

    And that's the rub with higher ed. A degree is the end, and the writing is just the means to the end. So if you're not careful, you denigrate the writing because it's only a means to getting a slip of paper that opens certain professional doors for you IRL.
  • Alan Cresswell, Admin, 8th Day Host
    I just thought it might be better to respond to this from the new Labour Government thread here
    Louise wrote: »
    Now it looks to me like they've gone full snake-oil on economic growth over AI. There's a current thread on AI itself. Some AI applications do specialist things well but the kind being pushed by the big corporate firms at the moment tend not to be of that sort and have many issues.
    The reason is that the applications where AI does well are specialist, niche areas which provide very little opportunity to make large profits. Many of those niches have been filled by systems developed on the back of research grants, and have thus often excluded the big corporate firms - a couple of researchers in a university IT department collaborating with a group in the medical school to develop a diagnostic tool will grab bits of open source code, but they're not going to spend a significant portion of their grant paying a big tech firm for something if they can avoid it.

    The "consumer AI" products (including stand-alone products and add-ons to other applications) aimed at the general public are accessible to vast numbers of people, and whether those are accessed through subscription payments or from sites which include advertising there's a large revenue stream attached to them (even if each individual doesn't pay very much - maybe no more than the bother of dismissing ads every so often).
  • pease, Tech Admin
    edited January 14
    I recently proofread (i.e. corrected the English) a learned report by some very intelligent people on a subject I had not even considered: the effects of AI on global inequality. AI is extremely expensive to develop and mostly in the hands of a small number of companies based in high income countries. Its expansion is not necessarily an unmitigated good for low and middle income countries.
    I think this is the only post here that addresses inequality.

    At the other end of the scale from the inequality arising from lack of access to AI at a national level, there is the inequality that affects individuals subject to AI-based decision-making, arising from biases embedded in AI systems themselves (typically through the use of biased training datasets, compounded by lack of diversity in AI development teams).

    For example: Gender bias in AI-based decision-making systems: a systematic literature review
    [Introduction] Artificial intelligence (AI)-based decision-making systems are now used in various industry sectors at an increasing rate and continue to penetrate all aspects of our daily lives.
    ...
    However, while AI-based decision-making systems may offer solutions to various problems faced in different disciplines, they may simultaneously create unintended harmful effects, including gender-biased outcomes affecting individuals or minorities of a certain race, gender, or colour...
  • North East Quine, Purgatory Host
    edited January 15
    Trudy wrote: »
    But if the student can use ChatGPT to write an essay good enough to "fool" the teacher, what's the value of that essay assignment?

    How does a student using an LLM to write an essay differ from a student who purchases an essay on one of those "we'll do your homework for you" websites,

    It doesn't differ. They are both cheating.

    When I found myself marking an essay which had been purchased, I reported it. What happened subsequently was above my pay grade, but I know it would not have gone well for the student, as it was clearly academic misconduct.

    My thought process whilst marking went like this:
    Wow! This is a great essay! So much extra reading, beyond the set reading list! That's odd, the student has found some great sources, but hasn't cited anything from the set reading list? In fact, there's no evidence in this essay that the student has read the set core text. But of course, they must have read the core text! This paragraph is great! It's well beyond the basic curriculum! In fact it's so far beyond the curriculum, I'm going to have to google stuff myself before I can mark it. *googles phrase* Why is the second hit on google an essay writing company? *penny drops*

    Now maybe I have marked dozens of purchased essays without noticing, but I'm guessing not. What sunk this essay was that it was a generic essay on the topic, rather than one which cited the specific sources provided to the student. From the outset, the essay had a different "feel" to the twenty-odd essays I had already marked. It was bad luck for the student that the phrase I googled took me straight to the company they had used, but that there was something "different" about the essay was obvious from the first couple of paragraphs.

  • Arethosemyfeet, Shipmate, Heaven Host
    edited January 15
    I'm currently in the process of trying to corral and regulate the use of AI in schools by both staff and pupils. Trying to navigate a path between a free-for-all and a blanket ban is not easy. Right now we're planning to block AI as an internet filtering category then specifically exempt tools where we have an agreed use case and guidance. My biggest concern is staff using the likes of ChatGPT to "research" something they're going to be teaching (particularly topic work in primary) and serving kids absolute mince as a result.
  • Heavenlyannie, Shipmate
    edited January 15
    Generic essays are a big clue, which is why my distance learning university expects undergraduates to use our module materials as the basis for essays, with any external materials supplementary to this.
    As mentioned, an essay written by AI is no different to one bought from an essay mill, and I’ve seen plenty of those; at least one of my students every year is caught by the university software. Usually the bought essays are submitted by previous students for ‘credits’ to get access to other essays - also an academic conduct offence. The usual clue here is that the essay doesn’t quite match the new question set, but obviously the software we use identifies most of the offences.
    So I don’t really consider managing the use of AI for essay writing much different from managing any other attempts at plagiarism. My university has always embraced technology - we are experts in online teaching - and our AI policy has been in place for at least 18 months. We expect students to reference their use of AI beyond simple grammatical improvements and, where appropriate, provide appendices explaining how they made any use of AI their own work. Individual modules have also issued guidance.
  • Gramps49, Shipmate
    Just traded my old Apple phone in for a Pixel 9 Pro. Guess what? It comes with an AI program. Ready or not.

    In a recent interview, AARP asked Bill Gates this question:
    Will artificial intelligence (AI) have an impact on improving the health of folks over 50?

    Oh, fantastic, and AI is improving rapidly. As you get older, you think about those things more. I’d encourage your readers, when they get an MRI or a medical bill, feed it into even today’s AIs. I know a lot about health terminology, but when I got an MRI, even I said, “Oh, that’s what that means.” That’s very typical for people my age. In the future, the AI will essentially participate in your session with the doctor, summarize those meetings, and you can go back and ask questions, and share with your relatives — “Here’s what he said about my neurological diagnostic, what do you think?” So [AI’s impact on] health is so phenomenal, including accelerating research on new cures.​

    Published interview here.

    If Bill Gates is beginning to use it for medical terms, most likely it can be used as a great assistant.

    We have an elderly woman in church who went totally deaf a few years ago. Her daughter loaded an AI translator on her phone so she can follow along with conversations and the sermon. She says she really likes it. Helps keep her in contact with her friends. My new Pixel AI, called Gemini, says it can do that even for me.
  • Alan Cresswell, Admin, 8th Day Host
    Do these automatic captioning systems really use AI? Isn't it simply voice recognition? Certainly the times I've seen them used they're awful, putting in strange words that sort of sound like what was said, but make no sense in the context - surely an AI empowered system would at least attempt to find a word that fits the context?

    But, even the most powerful AI empowered voice recognition would leave Scots stuck in the lift.
  • Gramps49, Shipmate
    Do these automatic captioning systems really use AI? Isn't it simply voice recognition? Certainly the times I've seen them used they're awful, putting in strange words that sort of sound like what was said, but make no sense in the context - surely an AI empowered system would at least attempt to find a word that fits the context?

    But, even the most powerful AI empowered voice recognition would leave Scots stuck in the lift.

    Guess what? Google Translate, Open L, and Semantic all claim their AI can translate Scottish into English and respect the nuances of the language and cultural differences. The train has already left the station.
  • Gramps49 wrote: »
    Published interview here.

    If Bill Gates is beginning to use it for medical terms, most likely it can be used as a great assistant.

    Microsoft is one of the larger investors in OpenAI.
  • Gramps49, Shipmate
    Gramps49 wrote: »
    Published interview here.

    If Bill Gates is beginning to use it for medical terms, most likely it can be used as a great assistant.

    Microsoft is one of the larger investors in OpenAI.

    So? It does not surprise me he would promote AI, but he does give a good example of how AI can be helpful to older people trying to understand what their doctor is saying.

  • Nick Tamen, Shipmate
    edited January 15
    Gramps49 wrote: »
    Gramps49 wrote: »
    Published interview here.

    If Bill Gates is beginning to use it for medical terms, most likely it can be used as a great assistant.

    Microsoft is one of the larger investors in OpenAI.

    So? It does not surprise me he would promote AI, but he does give a good example of how AI can be helpful to older people trying to understand what their doctor is saying.
    No, it gives a good example of how Bill Gates is promoting AI. It says nothing about whether there is any substance behind that promotion; it’s an infomercial example.

  • Bullfrog, Shipmate
    Nick Tamen wrote: »
    Gramps49 wrote: »
    Gramps49 wrote: »
    Published interview here.

    If Bill Gates is beginning to use it for medical terms, most likely it can be used as a great assistant.

    Microsoft is one of the larger investors in OpenAI.

    So? It does not surprise me he would promote AI, but he does give a good example of how AI can be helpful to older people trying to understand what their doctor is saying.
    No, it gives a good example of how Bill Gates is promoting AI. It says nothing about whether there is any substance behind that promotion; it’s an infomercial example.

    Yeah, I'm not sure I'd assume Bill Gates is competent outside of his own field, which is being so spectacularly rich that he's basically beyond accountability.

    He might be one of the better ones, but that category of human is objectively terrifying, even if they mean well. Pardon the video game reference, but Miquella became a villain in Elden Ring for a good reason.
  • Bullfrog wrote: »
    He might be one of the better ones, but that category of human is objectively terrifying, even if they mean well.

    I'm not even sure of that; I suspect it's just that Microsoft's predatory behaviour was reined in somewhat in the 90s, and since then there have been bigger bogeymen.

  • Louise, Epiphanies Host
    edited January 15
    Gramps49 wrote: »
    Do these automatic captioning systems really use AI? Isn't it simply voice recognition? Certainly the times I've seen them used they're awful, putting in strange words that sort of sound like what was said, but make no sense in the context - surely an AI empowered system would at least attempt to find a word that fits the context?

    But, even the most powerful AI empowered voice recognition would leave Scots stuck in the lift.

    Guess what? Google Translate, Open L, and Semantic all claim their AI can translate Scottish into English and respect the nuances of the language and cultural differences. The train has already left the station.

    Google Translate does not have Scots as a language - I just checked. It has Scots Gaelic, and it really struggles with some of the constructions in that - I often have to use my knowledge of the language to work out what it's getting at. It's far better than nothing for me as a learner, reading social media posts and news articles in Gaelic, but for anything of any importance you'd need a human expert in the language.

    Scots and Scots Gaelic are separate languages.

    Update - I just tried pasting Scots into Google translate. It doesn't recognise it and doesn't translate it.
  • Louise, Epiphanies Host
    Also bonus - it thinks Doric is Afrikaans!

    (Doric is Aberdeenshire and North East Scots)
  • BroJames, Purgatory Host
    edited January 15
    Ah michta kent it wouldna ken the Doric.

    (I might have known it wouldn’t know Doric. Spellcheck doesn’t like it either!)
  • Gramps49, Shipmate

    Ye ken I could work oot the Scots wi'oot the English translation
  • Hugal, Shipmate
    My Welsh-speaking friends tell me it is not great with Welsh.

  • KarlLB, Shipmate
    Hugal wrote: »
    My Welsh-speaking friends tell me it is not great with Welsh.

    It's quite good translating Welsh to English but shockingly bad the other way.
  • Darda, Shipmate
    BBC News - Apple suspends error-strewn AI generated news alerts
    https://www.bbc.com/news/articles/cq5ggew08eyo