@Martin54, I didn’t have those little a, b, c things in my post… this is awkward.
Why so? I can't think of another way to address each issue without the bind of quote unquote. I'll put the points in italics or something. Another encumbrance. But you're worth it!
No, I’m saying beauty, justice, and love are in their essence abstract philosophical concepts, full stop. You can’t get a cup of beauty or a quarter pound of justice or a kilogram of love. This is what “abstract” means.
Science has not made “inroads” into measuring beauty, justice, or love.
Me: “If those concepts have real meaning…” You: “They don’t.” Well, there we are then. We disagree. Moving on…
I care, but then I don’t believe love is merely a survival mechanism.
Kind to say, but since we’re at the same impasse we keep winding up at, I don’t see much point in continuing with this. 6:08 am here and I need to get some sleep.
Sleep well. They are complex human perceptions. Talking about them is another thing. Together that's full stop. The latter is a full stop at the end of no sentence. Feelings, perceptions, reactions aren't abstract.
Science measures all of those.
Those concepts have no more absolute real meaning than any other. Concepts have relative, subjective, dispositional meaning on a spectrum of consensus. Which is perfectly real.
No, I’m saying beauty, justice, and love are in their essence abstract philosophical concepts, full stop.
An essentialist philosopher might think that, a philosopher who wasn't an essentialist would be unlikely to do so. I suspect they would both be able to appreciate the beauty of a rose or a summer's day, and possibly even compare notes on them.
You can’t get a cup of beauty or a quarter pound of justice or a kilogram of love. This is what “abstract” means.
You can't get half a pound of shadow either - that doesn't make it an abstract philosophical concept.
Well spotted. The essentialism. Love the half a pound of shadow. I'll be sure to order some!
Yes. Thus, @Martin54 and I are consistently at our same impasse.
And you, hope you slept well. I've just discovered antihumanism 60 years late.
Oh dear. Er... what does this mean for you, exactly?
Shows me that there are lifetimes of material I'll never master. That, as I began to suspect a couple of years ago, philosophy is where it's at, along with other things.
Oh, I thought you meant you were becoming "anti-humanist" in some other specifically negative sense (nihilism or something like that), and was worried.
No way. But this I like. I'd come across the term only recently and only just wikied it.
Ah, semiotics, yes. I'm definitely someone who believes in the Transcendent Signified, rather than that we are all in an endless web of signifiers with no ultimate reference point.
Ah, Derrida's illusion of an ultimate referent, to provide closure for meaning, whereas every signified functions as a signifier in an endless play of signification.
I don’t know if Derrida believed in that ultimate referent, though. But I definitely do. Again, I’m not an existentialist, nor a (Derrida-style) deconstructionist.
(“Deconstruction” has another, different meaning currently, in which people are trying to figure out what they believe and why, rather than anything to do with “binary opposition,” “undecidability,” “freeplay,” “Différance,” etc.)
Yes. Thus, @Martin54 and I are consistently at our same impasse.
In what sense is it an impasse? You and Martin54 have different worldviews. There are at least as many different worldviews in evidence on these threads as there are people posting - it's one of the things that drives the debates.
Oh, I thought you meant you were becoming "anti-humanist" in some other specifically negative sense (nihilism or something like that), and was worried.
Do worry! : ) I'm certainly am ultimate nihilist. ChatGPT4: Nihilism is a belief that all values are baseless and that nothing can be known or communicated. But not in the real world, I do not go anywhere near this, It is often associated with extreme pessimism and a radical skepticism that condemns existence. A true nihilist would believe in nothing, have no loyalties, and no purpose other than, perhaps, an impulse to destroy.
It's stolen the first paragraphs verbatim from the Internet Encyclopedia of Philosophy, from a section written by Alan Pratt. The IEP claims to be a peer-reviewed publication.
But you don't know that until you've analysed the text and checked where it came from. It could have cited a random blog which may or may not have been accurate - how would you tell?
Conversely, you could just cite an accepted source.
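For what it's worth, the "how would you tell?" question has a partly mechanical answer for this kind of verbatim lifting: once you have a suspected source in hand, long runs of identical words are easy to find. A rough sketch (illustrative only; real plagiarism checkers compare against many sources at once and cope with paraphrase):

```python
# Crude verbatim-overlap check: find every run of n consecutive words
# that two texts share. Any sufficiently long shared run is a red flag.
def shared_ngrams(answer: str, source: str, n: int = 8) -> set:
    def ngrams(text: str) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(answer) & ngrams(source)

answer = ("Nihilism is a belief that all values are baseless and that "
          "nothing can be known or communicated.")
source = ("Nihilism is a belief that all values are baseless and that "
          "nothing can be known or communicated.")  # e.g. text fetched from the IEP page

print(len(shared_ngrams(answer, source)), "shared 8-word runs")
```

Eight consecutive identical words almost never coincide by chance, which is why even this naive check would catch the IEP case.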
I think the whole thread has begun on an unfortunate basis. For me ‘fact’ is about whether a statement or piece of information coheres with reality, ‘belief’ is about my response to a piece of information.
To say, “I believe…” some statement or piece of information is to assert that I think it is a fact. I would distinguish between facts and opinions. “Mozart is an eighteenth-century composer” is, IMO, simply a fact. “Mozart is the world’s greatest composer” is a matter of opinion.
There are some unarguable facts - the operation of the law of gravity for example - which you ignore or disbelieve at your peril. There are other things which some believe to be facts, but others do not.
That doesn’t mean that they are not facts, just that some don’t believe in them. (It staggers me that for some this includes the flat rather than global earth.) Equally it doesn’t mean that they are facts either.
To some extent, whether a person believes in them or not depends on what evidence they are willing to accept. Some facts are susceptible of contemporary experimental proof: they can be empirically verified. Others are not, notably distant historical events.
So for Caesar’s invasion of Britain in 55BC we are generally looking at the earliest surviving manuscripts of his own account being from the late 9th Century into the 12th Century AD. Then of course there’s the fact that Caesar may have had a bit of an axe to grind when writing them. For many (most?) people that, with other similarly dated manuscripts, some originally composed a century or more after the event, is enough evidence to ‘prove’ that the invasion took place at that time.
Of course both the veracity and the authenticity of the texts have been challenged, but most people including well-qualified historians accept the event and date of the invasion as a fact i.e. something they believe to be a fact. (For events that long ago we can’t reasonably expect to have better evidence than that.)
If someone said to me ‘Do you believe that Caesar invaded Britain in 55BC?’, my answer would be, ‘Yes, that is my belief. I believe it is a fact that he invaded Britain in 55BC.’
So, in short, a fact is something true to reality. Belief is when I think something is true to reality.
It's stolen the first paragraphs verbatim from the Internet Encyclopedia of Philosophy, from a section written by Alan Pratt. The IEP claims to be a peer-reviewed publication.
So, in other words, ChatGPT did what it was supposed to do. It gathered information from an accepted source and reported back what it found.
Hahaha. Ok yeah, that's what it did. Problem is that this is functionally identical to completely hallucinating an answer (which it is also prone to do).
Also, the article was a lot more detailed than the first paragraph. Surely it would make more sense to spend, let's be generous, 1 minute finding a better source.
There's a word for gathering uncredited information from accepted sources and reporting back what you find, which is plagiarism.
Buyer beware. Certainly.
Yes, I would prefer that ChatGPT knew what its sources were and reported those as well. I have talked with it about the matter. It is unable to do that.
Buyer beware.
Understanding what ChatGPT is and what it does is important. I'm not sure it's actually plagiarism if the tool and the user both state how the information was acquired. Which was done here.
This is not the same as a researcher presenting something generated by ChatGPT as original work. Which is plagiarism.
Copilot, in response to "Beliefs are not facts. Discuss.":
That's an interesting topic! Let's break it down.
**Beliefs** are personal convictions or opinions that individuals hold to be true. They are often influenced by cultural, religious, or personal experiences. Beliefs can vary widely from person to person and are not necessarily based on empirical evidence.
**Facts**, on the other hand, are objective realities that can be proven through evidence and observation. Facts are consistent and verifiable, regardless of personal opinions or beliefs.
### Key Differences:
1. **Subjectivity vs. Objectivity**:
- **Beliefs** are subjective and can differ greatly between individuals.
- **Facts** are objective and remain the same regardless of who observes them.
2. **Evidence**:
- **Beliefs** may not require evidence and can be based on faith or personal conviction.
- **Facts** require evidence and can be tested and verified.
3. **Changeability**:
- **Beliefs** can change over time as individuals are exposed to new experiences or information.
- **Facts** remain constant and do not change based on personal perspective.
### Why the Distinction Matters:
Understanding the difference between beliefs and facts is crucial for effective communication and decision-making. It helps in:
- **Critical Thinking**: Evaluating information based on evidence rather than personal bias.
- **Conflict Resolution**: Recognizing that disagreements may stem from differing beliefs rather than factual inaccuracies.
- **Scientific Inquiry**: Advancing knowledge through evidence-based research rather than unverified assumptions.
What are your thoughts on this distinction? Do you think it's always clear-cut, or are there grey areas?
Search came up with the same in essence:
Thoughts are not facts; they’re just mental events influenced by our emotions and past experiences, whilst facts stand on their own and can be proven. If we believe every thought is true, it can cause anxiety, lead to impulsive decisions, and affect how we view reality.
Thoughts Are Not Facts: Why Believing All Your Thoughts Leads T…
mindowl.org/thoughts-are-not-fact/
But you don't know that until you've analysed the text and checked where it came from. It could have cited a random blog which may or may not have been accurate - how would you tell ?
And if ChatGPT were a human, I'd have just failed it for plagiarism.
Quoting a definition from a (reasonably) authoritative source is a perfectly sensible thing to do, and better than just making stuff up, but you lose the authority of your source if you don't reference it.
Sometimes, this doesn't matter. If I tell you that the battle of Hastings was in AD1066, I'm not going to quote my source. My earliest source for this is probably the class teacher I had when I was six, and I don't know what her source was, but the central point here is that this fact is not in dispute. I don't need to cite an authority to lend weight to my claim for the battle's date, because there's no question about it.
By contrast, if I'm trying to argue the correctness of some particular position, argument by assertion isn't an argument at all. If I say that this or that established authority claims that my position is the consensus agreement amongst experts, that statement is worth something. If I tell you that I, personally, assert that some position is correct, then that's worthless unless you know that I, personally, have some sort of serious background with which to make that claim.
ChatGPT is a hyperlexic GIGO machine. It has no foundation on which to make claims of its own.
@Doublethink, ruling acknowledged, thank you. And can one not compare the AI output above with what's been said here? It passes the Turing test I'd say.
There is some woeful misunderstanding of generative AI/LLMs on this thread.
The key is not thinking that you are talking with a person but remembering that it is a machine programmed to give you the statistically most appropriate response to your query.
It's like talking to your fridge, only it replies and talks bollocks.
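To make the "statistically most appropriate response" point concrete, here is a toy sketch of the core loop: repeatedly sample a plausible next word given what came before. (The probability table is invented purely for illustration; a real LLM conditions on long contexts using billions of learned weights, but there is still no understanding anywhere in the loop.)

```python
import random

# Toy next-word table: word -> [(candidate, probability), ...]
# (Invented numbers purely for illustration.)
NEXT_WORD = {
    "beliefs": [("are", 0.9), ("can", 0.1)],
    "are":     [("not", 0.6), ("facts", 0.4)],
    "can":     [("change", 1.0)],
    "not":     [("facts", 1.0)],
}

def generate(start: str) -> str:
    """Repeatedly sample a likely next word until the table runs out."""
    words, current = [start], start
    while current in NEXT_WORD:
        candidates, weights = zip(*NEXT_WORD[current])
        current = random.choices(candidates, weights=weights)[0]
        words.append(current)
    return " ".join(words)

print(generate("beliefs"))  # e.g. "beliefs are not facts" -- fluent, but only conditional frequency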
The trouble is, it looks so authoritative, and Google sticks it at the top of the search results. Kinda like keeping the rat poison mixed in with the groceries.
For many years, Wikipedia has had a bad reputation. But I'd like to suggest that it is analogous to the process of science, at least to the extent that revisions are made, others make changes to those revisions and the whole thing degenerates into a fight about relatively minor details. It's still not great, particularly in the areas where there are few volunteers who are interested to edit. But it's a lot better than it was, and is at least as reliable as most basic textbooks in a very wide range of topics.
AI LLMs, in contrast, mix together whatever they can find as "feedstock" to answer questions. That might be Wikipedia or a big database like Reddit or xitter. They have no way to weigh what they are feeding on, because there is nothing there, no intelligence. They are as likely to repeat a rumour picked up on social media as material from any other source. If there are a lot of people talking utter twaddle about a topic, they are very likely to repeat that in an answer.
They can't be trusted on anything, particularly as it is impossible to look under the hood to see how they are working.
The idea that one could ask an AI LLM about changing the way that it works is a monumental misunderstanding of what it is.
My son refers to it as "spicy autocorrect," and he's not wrong. IMHO the reason people trust it is because they fundamentally misunderstand what computers are and how they work. There is no brain there, no judgement, no ability to tell true from false and right from wrong.
They can't discriminate unless they've been told exactly how to do so--say, "Pick out the five most popular websites, and take your data from there. Quote paragraphs that include 3 or more of the keywords from the query." No, I'm not a professional IT guy, but you can see the instructions have to be something along those lines, because of how the AI goes obviously wrong at times. It follows the letter, not the spirit, of its instructions, and so a request for "animals that live in Vietnam" might return a list including dragons, because nobody specified non-fictional animals, and the dragon does feature prominently on a number of popular websites about Vietnam--though as a part of the folklore, not the fauna. (Yes, I made up this example; I can't recall the exact details of the last time it did this to me, though it was only three or four days ago.)
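A toy version of that letter-not-spirit keyword rule (hypothetical paragraphs and threshold, not any real engine's actual logic) shows how the dragon sneaks in:

```python
QUERY = {"animals", "live", "vietnam"}

PARAGRAPHS = [
    "Water buffalo and civets are among the animals that live in Vietnam.",
    "In folklore the dragon is among the best-loved animals in Vietnam, "
    "wherever people live.",
]

def keyword_score(paragraph: str) -> int:
    """Count query keywords present in the paragraph -- letter, not spirit."""
    words = {w.strip(".,").lower() for w in paragraph.split()}
    return len(QUERY & words)

# "Quote paragraphs that include 3 or more of the keywords":
for p in PARAGRAPHS:
    if keyword_score(p) >= 3:
        print(p)  # the folklore dragon matches exactly as well as the real fauna
```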
They also can't extrapolate from their original instructions, as even a child of five can do. So you tell a child, "Don't pull the cat's tail" and reasonably expect them to understand you shouldn't do the same to a dog either, or a horse, a cheetah... The computer will need to have its instructions rephrased or added to in order to meet all these cases.
AI is not at all a waste of time when it's allowed to do the things it does superbly well--identifying cancer cells, for instance. It is good at picking up subtle differences in a mass of data and finding patterns. Those functions don't require judging your source material, as the users have already done that (by feeding in a particular pile of images, for instance). Though even then, things can still go wrong--there was one case where the AI figured out that photos before a certain date were more likely to include actual cancer, because nobody back then was bothering to take photos of noncancerous tissue. And so it promptly started using photo age as a decision-maker on whether the pictured cell had cancer, which led to Not Good results. So yeah, somebody has to do the judging of source material very, very carefully. But in the case of AI answering general knowledge questions based on scrapes from the internet, the only judging of source material seems to be based on popularity, with none of the safeties a human being would know to consider--and popularity is a bullshit way of deciding if something is true and worthwhile.
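That cancer-photo story is a textbook spurious correlation, and it is easy to reproduce in miniature: hand a naive learner training data in which the label happens to track the photo's date, and the date is exactly what it will latch onto. (Numbers invented for illustration; the real incident involved image models.)

```python
# (year_taken, has_cancer) -- the older photos happen to be the cancerous ones
training = [
    (1998, True), (1999, True), (2001, True), (2002, True),
    (2015, False), (2016, False), (2017, False), (2018, False),
]

def best_year_cutoff(data):
    """Pick the year threshold that best separates the labels."""
    return max(
        (y for y, _ in data),
        key=lambda t: sum((year <= t) == label for year, label in data),
    )

cutoff = best_year_cutoff(training)
print(f"learned rule: 'cancer' if photo taken in or before {cutoff}")
# Scores perfectly on this training set while saying nothing about the
# tissue -- exactly the failure described above.
```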
Re “In what sense is it an impasse?” Because we keep coming up against the same disagreement, and I don’t really see a way of bridging it right now. And I’d rather not go back and forth with “A is true,” “No, A is false,” “No, A is true,” fifty billion times again.
Hahaha. Ok yeah, that's what it did. Problem is that this is functionally identical to completely hallucinating an answer (which it is also prone to do).
Also, don't you think a source should be given?
I always think a source should be given -- particularly from humans, and particularly when they make claims in a discussion forum about "the vast majority of theologians/historians/anthropologists/entomologists/homeopaths/wizards." However, it's well established that ChatGPT doesn't do that. Just like many forum participants, who don't have the limitations of AI.
The tool didn’t state how the information was acquired.
Must resist the obvious joke..
The obvious joke is that @Martin54 stated that he used ChatGPT, and ChatGPT very clearly states that it cannot cite specific sources it used.
Although, I chatted with it tonight, and it is starting to do a better job of mentioning some of the sources it gleans information from.
Re “In what sense is it an impasse?” Because we keep coming up against the same disagreement, and I don’t really see a way of bridging it right now. And I’d rather not go back and forth with “A is true,” “No, A is false,” “No, A is true,” fifty billion times again.
Rather than either "I disagree" or "No/Yes/No/Yes", it would be good to know why you hold the position you do that causes you to disagree.
Touché! Like wiki eh?
Is there any unnatural reliable source of information?
I don’t know if Derrida believed in that ultimate referent, though. But I definitely do. Again, I’m not an existentialist, nor a (Derrida-style) deconstructionist.
Justice is hard to define in detail, and the varying law systems suggest that there's quite a variety of ideas about what it should look like.
He obviously didn't.
Appalling mangled sentence there! Sorry. I'm certainly ultimately nihilist.
https://iep.utm.edu/nihilism/
Which illustrates a different problem about generative AI!
I'm sure it does to all the... purists (Puritans?) here. Being as it is ChatGPT 4. As is Search. I just wanted to clarify what said what.
Are they wrong?
I will now do so. Please will thread participants observe it going forward.
Thanks,
Doublethink, Admin
Thank you for your concern and correction. Although, since you spelled pladgerizsem (sic) correctly, you needn't imply you didn't with "sic."
Although, I chatted with it tonight, and it is starting to do a better job of mentioning some of the sources it gleans information from.
It's learning.
But nattering about ChatGPT is an enormous distraction from the discussion, which veered off course when people latched on to @Martin54's dark post about nihilism here: https://forums.shipoffools.com/discussion/comment/681553/#Comment_681553
and continued right past @BroJames' post about beliefs and facts here:
https://forums.shipoffools.com/discussion/comment/681568/#Comment_681568 .