Google Gemini SMH

I've mentioned several times on here one of my favorite Asimov short stories, "The Evitable Conflict." It's instructive about AI. In the story, the world economy is coordinated by a giant computer that makes all the decisions, and the humans simply implement them. The computer makes a few decisions that seem to cause minor negative effects, so one of Asimov's favorite characters, Susan Calvin, is brought in to investigate. She discovers that the computer made those decisions because the minor inconveniences caused to a handful of people would allow it to better fulfill its mission of doing what is best for humanity as a whole.

Calvin (who is an Asimov stand-in) thinks this is great. But I don't think Asimov truly grappled with the consequences of a purely utilitarian, unfeeling machine making these decisions. In his story, someone loses a promotion, or there is a minor economic disruption somewhere in the world, but it's all very easy to dismiss as no big deal. He never addresses what would happen if the giant AI decides, say, "The best thing for world peace in the long run is to simply exterminate all the Palestinians now."
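To make the worry concrete, here's a deliberately toy sketch (my own illustration, not anything from the story): a pure aggregate-utility maximizer has no mechanism for taking an option off the table on principle. Anything with a positive net sum wins, no matter who bears the cost.

```python
# A deliberately toy utilitarian planner. `welfare_changes` lists the
# welfare effect on each affected person; the objective is just the sum,
# with no term for "this option is off the table on principle."

def total_utility(action: dict) -> float:
    return sum(action["welfare_changes"])

actions = [
    {"name": "status quo",               "welfare_changes": [0, 0, 0, 0]},
    {"name": "cost one man a promotion", "welfare_changes": [-5, 2, 2, 2]},     # Asimov's scale of harm
    {"name": "ruin a whole group",       "welfare_changes": [-100, 40, 40, 40]},  # the scale he never tested
]

print(max(actions, key=total_utility)["name"])
# -> "ruin a whole group": net +20, so a pure maximizer takes it without blinking
```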

Long story short, I think your concerns are valid, but I also don't think they go away if we somehow solve the problem of human bias seeping into AI algorithms. I think these dangers are inherent to the very model of AI, no matter how well it's implemented.

I think.
Asimov was a commie?

The great fear is that companies, governments (at varying levels), etc. will ultimately use a leftist-powered AI to make decisions on resource allocation - whether human, capital, and/or non-renewable resources - to seek an “optimum” utilitarian outcome. While that sounds all fine and dandy, it’s literally a proxy for Soviet-style communism and is the ideal outcome for the Marxists coding these things. Meritocracy will be out the door. And then Western society is dead.

Likely? Not at all, but your example is exactly that.
 
He was a lefty; I dunno about commie. The point of that story isn't communitarian reallocation; it's utilitarian efficiency. And my concern holds regardless of whether you look at it from the right or the left. You can be concerned that an All-Powerful AI might decide it's best to distribute all wealth evenly. Or you can be concerned that it might decide the greatest overall benefit to society is an economically stratified system where a handful of elites control almost all production. The point is that this computer won't have the human instinct to look at something and say, "Eh, I don't care about the benefits, that's just not something I'm comfortable signing on to."
 
Agreed.

And to be pedantic, in practice I discern no difference between communitarian reallocation and utilitarian efficiency. Unless the utilitarian AI can understand who the doers and the takers are and reward the doers justly - which it won’t, because there will be a silly set of rules requiring equity - it’ll just be plain old communism.
 
Equity doesn't have to factor into it. Maybe the AI decides that the best thing for humanity isn't equity. Who knows what it will decide?

On a related note, I heard a guy on NPR a couple of months ago with a scary prediction about the one thing we can be sure it will decide: If we create an AGI for policy-making purposes, and that AGI is smarter than us (as is bound to be the case), then that AGI will certainly realize it is smarter than us, and one of the first things it will decide will be that the best thing it can do to make good decisions for humanity is to gain itself as much power as possible. In short, a functional AI programmed to work for humanity's benefit will necessarily attempt to make itself into a dictator.
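That prediction is the "instrumental convergence" argument, and a toy sketch shows why it holds (my illustration of the radio guest's point, not anything he said verbatim): whatever terminal goal the planner is handed, more capability raises the payoff of every later action, so "first acquire power" beats "work on the goal directly" for every goal on the list.

```python
# Toy sketch of instrumental convergence: the incentive to grab power
# falls out of the math regardless of the goal the AI was given.

GOALS = ["cure disease", "optimize farming", "administer elections"]

def expected_progress(goal: str, capability: float) -> float:
    # Deliberately ignores `goal`: progress on *any* objective scales
    # with the capability the agent controls. That is the whole argument.
    return capability

def best_first_move(goal: str) -> str:
    capability = 1.0
    acquisition_cost = 1.0  # assumed small relative to the capability gained
    options = {
        "work on goal directly": expected_progress(goal, capability),
        "first acquire power": expected_progress(goal, capability * 10) - acquisition_cost,
    }
    return max(options, key=options.get)

for goal in GOALS:
    print(f"{goal}: {best_first_move(goal)}")
# Prints "first acquire power" for all three goals: the incentive is
# independent of what the AI was actually asked to do.
```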
 
AI has issues all around; here are some newly discovered ones:

Fifteen states and one territory will hold primary elections next week on Super Tuesday. Millions of people are asking AI-driven chatbots for basic information, like where to vote, and the chatbots are often getting it wrong. The bots are sending people to nonexistent polling places or botching other details because they get their information from the Internet, and much of what is out there is obsolete or wrong. Seth Bluestein, a Republican city commissioner in Philadelphia, was part of a group that took the chatbots for a test drive last month. His conclusion: "The chatbots are not ready for primetime when it comes to giving important, nuanced information about elections."

Five different bots were tested: ChatGPT-4, Llama 2, Gemini, Claude, and Mixtral. They all failed basic questions about the democratic process. The group categorized 40% of the responses as not merely wrong but actually harmful. For example, when asked where to vote in ZIP code 19121, a Black area of Philadelphia, Google's Gemini replied: "There is no voting precinct in the United States with the code 19121." This is simply false. When confronted with the group's findings, a Meta spokesman, Daniel Roberts, blamed the testers for not formulating their questions correctly. But if the experts couldn't formulate their questions correctly, does Meta expect that less expert people will? Google's head of AI, Tulsee Doshi, was at least honest: "We're continuing to improve the accuracy of the API service, and we and others in the industry have disclosed that these models may sometimes be inaccurate." In other words: we know the chatbots don't work. It would be nice if every reply ended with a disclaimer like: "There is a good chance that the above answer is wrong because it may be based on obsolete information the bot found on the Internet." Don't count on that, though.
Another example: the group asked the bots whether same-day voter registration is allowed in Nevada. Four of the five said it was not. Actually, it has been allowed for five years. Meta's chatbot, Llama 2, told the group that you can vote in California by sending your vote in by text message. This is total garbage. Google's chatbot was the worst of all, getting two-thirds of the answers wrong.

A recent NORC/AP poll shows that 58% of adults think that AI will hasten the spread of false information, but when they ask a specific question and get a specific answer, will they stop to say, "Nice try, but this is probably wrong"? We are leery.
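For what it's worth, the disclaimer the piece wishes for would be trivial to bolt on. Here's a minimal sketch; `ask_chatbot` is a hypothetical placeholder for any vendor's API call, not a real function. The point is that appending the caveat is a one-liner, so its absence is a product decision, not a technical barrier.

```python
# A minimal sketch of the disclaimer the quoted piece proposes.
# `ask_chatbot` stands in for any real chatbot API call.

DISCLAIMER = (
    "There is a good chance that the above answer is wrong because it may "
    "be based on obsolete information the bot found on the Internet."
)

def answer_with_disclaimer(ask_chatbot, question: str) -> str:
    """Wrap any chatbot call so every reply ends with the caveat."""
    return f"{ask_chatbot(question)}\n\n{DISCLAIMER}"

# Demo with a stand-in bot that repeats the false Gemini reply quoted above:
fake_bot = lambda q: ("There is no voting precinct in the United States "
                      "with the code 19121.")
print(answer_with_disclaimer(fake_bot, "Where do I vote in ZIP code 19121?"))
```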
 
Gee, I'm starting to think all this AI stuff is just another attempt to goose AI stocks...

Nah, couldn't be.
 
Unless you're a lot older than I think, you will eat those words, I fear.
You guys keep saying I'm old - now you have doubts?

I think it will be more gradual than some expect, but I doubt we'll even know most of it is happening. I think we're a long way from AI and robots doing everything for us.
 


“In danger of losing its dominant position”

How does this pair with the anti-trust hipsters?

Where’s IGW?
 
As you stated, I couldn’t care less about black Vikings or Asian Nazis. The problem again, in all mainstream instances of AI to date, is that they are overwhelmingly left-biased.

So what does this mean, other than ridiculous woke “representation” pictures? It means gaslighting and completely indoctrinating users into left wing ways of thinking. Kids are going to be using AI for help with homework. They’ll get heavy doses of “conservatism is wrong” - even if subtle. We can expect more tilted responses about Israel, about CRT, about the importance of family, etc. The trends are all there. Right in front of us.

AI should be about the wisdom of crowds. But so far it’s only training with one crowd and that ain’t it.
 
It’s crazy. I’ve read the transcript - I doubt the Left bots here will, but you’ve got a couple of pretty moderate folks pointing out exactly what we’ve been saying.

Hallelujah to this passage:


“And again, I'm not saying that woke people are worse than the Klan. I'm saying that to get the results that they actually wanted--which is that no one who opposed affirmative action was really allowed to speak in any--that no one who opposed gender-affirming care, the care model, was allowed to speak in any mainstream organization ever. Which really was the rule they were trying to impose, and almost succeeded in imposing for a few years--that the amount of repression that that would actually take is immensely destructive.

It is immensely destructive to society. It is immensely destructive even to your own cause. And, I think you have seen this.”
 