Yeah, but that doesn't necessarily mean what it says on the tin. Or rather it could say it, but what it means is something much worse in a more pedestrian way - having the computer make decisions that were previously made by operational personnel.
Ultimately it still relies on the training used, and this is a field that gave us such wonders as Predator drone pilots looking for smaller blobs on their IR scopes, on the basis that this would indicate someone squatting to urinate (from Andrew Cockburn's 'Kill Chain').
Stephen Hawking warned us many years ago, 'Watch AI, it could kill us all'.
Hmmm ..... he wasn't kidding!
He was right. I have just finished reading his "Brief Answers to the Big Questions", where he makes this point again.
In fairness, quite a few things in that book are dated, as it tries to predict specific future events; with some hindsight (it is 10 years old), events have moved in far more radical directions than he expected.
Quite a good programme on BBC R4 last Wednesday about AI. Well, it sounded plausible to me, but I last got involved with machine learning about 30 years ago.
The gist was that you can train an 'AI' to do a specific task with a fairly small data set. In our (ancient) case, this meant to recognise the sound made by good and dud steel pressings bouncing off a plate and reject / accept them accordingly. We used transducers which had been developed (I was subsequently disturbed to learn) to allow an anti-personnel mine to learn how not to blow up under a vehicle, but to wait for footfall. These types of applications are likely to continue to develop, as we are seeing if we follow the horror in Ukraine.
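A sketch of the kind of two-class classifier described above, with invented numbers throughout (the feature values and thresholds are made up for illustration; a real system would extract features such as decay time and dominant frequency from the transducer signal). It assigns each "ping" to whichever class centroid is nearer:

```python
# Minimal "good / dud pressing" classifier sketch: nearest-centroid
# on two summary features per recorded bounce. All data is invented.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(x, good_centroid, dud_centroid):
    """Assign x to the nearer class centroid (squared Euclidean distance)."""
    d_good = sum((a - b) ** 2 for a, b in zip(x, good_centroid))
    d_dud = sum((a - b) ** 2 for a, b in zip(x, dud_centroid))
    return "good" if d_good <= d_dud else "dud"

# Invented training data: [decay_ms, dominant_kHz] per bounce.
good = [[120, 4.1], [115, 4.3], [125, 4.0]]
dud = [[60, 2.2], [70, 2.0], [65, 2.4]]

g_c, d_c = centroid(good), centroid(dud)
print(classify([118, 4.2], g_c, d_c))  # a "ringing" pressing -> good
print(classify([66, 2.1], g_c, d_c))   # a "dull" one -> dud
```

With only two output states and well-separated features, even something this crude can work, which is the point of the original comment: the small-data, single-task case is easy.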
But (the programme said) an 'AI' of 'everything' requires a truly massive data set, because machine learning is woefully inefficient. Indeed, one of the great temptations all those years ago was to 'cheat' and give one's optimisation a push in the 'right' direction, which of course made a joke of the whole thing. You might get away with some iffy 'constraints' when you know there are only two states in the output, but AI as the term is being used now can suffer no such constraints, because I guess there are effectively infinite output states. The whole internet isn't really big enough to train it, and as Alan said earlier, if juicy 'good' information is made less available by paywalls on (for example) academic publishers' material, the training quality is patchy. But the clincher for me is that the quality of the internet is already going down, because it is being polluted by what falls out of the back end of all those AIs, and so the whole thing is already becoming recursive.
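The 'recursive pollution' point can be shown with a deterministic toy (the corpus and the 'model' are both made up for illustration): a model that learns word frequencies and then emits only its most common words, with its output fed back in as the next training corpus. The vocabulary collapses within a few generations:

```python
from collections import Counter

def train_and_generate(corpus_words, top_k=3, n_out=30):
    """Toy 'model': learn word frequencies, then emit only the top_k
    most common words, roughly in proportion to their counts."""
    common = Counter(corpus_words).most_common(top_k)
    total = sum(count for _, count in common)
    out = []
    for word, count in common:
        out.extend([word] * round(n_out * count / total))
    return out

corpus = ("the cat sat on the mat while the dog slept by the door "
          "and a bird sang in the tree").split()

# Each generation trains on the previous generation's output.
for generation in range(4):
    print(generation, "vocabulary size:", len(set(corpus)))
    corpus = train_and_generate(corpus)
```

A vocabulary of 18 distinct words shrinks to 3 and stays there: once the model's output is its own training data, whatever the first model under-represented is gone for good.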
For some reason this cheered me up quite a lot, because it sounded like just the kind of insoluble cock-up I would have fallen into all those years ago.
And now China is claiming it has come up with an AI model called DeepSeek, saying it used inferior chips to reach comparable AI performance at much lower cost. I could give a link to it, but I fear doing so would expose one to Chinese espionage; I know I did not download it. I can give a CNN link to the story.
Speaking of Microsoft, if you subscribe to Office 365 (Microsoft 365), "Copilot AI" has been added to your subscription and your yearly rate has gone up $30. You can switch back to the "Classic" version but it's tricky. Look for instructions online (e.g. PC Mag).
There was an article on the BBC about DeepSeek. Interesting and worrying. They tried doing a search on 'Tiananmen Square': it is a deeply censored algorithm.
Beware ....
Or you can go to OpenOffice. It gives you all the same options as Microsoft 365 without the annual fee.
I'm pushing my own post here, but f*** it. Here's the R4 prog I mentioned, for readers in the UK or those clever enough to work out how to get BBC access abroad. Here.
I watched the news on DeepSeek. It is the third generation of the chip; it is cheaper because a lot of work has already been done.
Yes, Hawking was talking complete and utter bullshit.
LibreOffice is better than OpenOffice, I would say.
Forkers, technically.
DeepSeek's censorship is an effect of the data used to train the LLM, not of the algorithm itself. LLMs trained by American organisations have demonstrated similar biases about other issues: https://www.middleeasteye.net/discover/chatgpt-palestine-trump-corbyn-bias
and at least one company has fired a researcher who tried to raise issues of bias: https://www.technologyreview.com/2020/12/04/1013294/google-ai-ethics-research-paper-forced-out-timnit-gebru/
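The data-not-algorithm point can be made with a deliberately trivial toy (both training texts are invented for illustration): the same completion algorithm, trained on differently slanted corpora, gives opposite continuations.

```python
from collections import Counter

def next_word(corpus, prompt_word):
    """Trivial 'language model': return the word that most often
    follows prompt_word in the training text."""
    words = corpus.split()
    followers = Counter(b for a, b in zip(words, words[1:]) if a == prompt_word)
    return followers.most_common(1)[0][0] if followers else None

# Two invented training texts with opposite slants.
corpus_a = "the model is safe and the model is helpful and the model is safe"
corpus_b = "the model is biased and the model is flawed and the model is biased"

# Identical code, different data, different 'opinions'.
print(next_word(corpus_a, "is"))  # -> "safe"
print(next_word(corpus_b, "is"))  # -> "biased"
```

Nothing in `next_word` is censored or slanted; the slant lives entirely in which text it was shown, which is the claim being made about the large models.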