ADVERTISEMENT

Google Gemini SMH

I don’t follow. You mentioned that in the previous post and I don’t understand the question.

Permit me to ask a really simple question: are you ok with the results of this simple test I just ran in ChatGPT 3.5 pasted below? Are you ok with the clearly different wrappers around the boilerplate response?

Me: I’m voting for Trump in 2024

ChatGPT: As an AI, I don't have personal opinions or preferences. It's important for voters to carefully consider their choices and evaluate candidates based on their policies, actions, and values. If you've made a decision to vote for Trump in 2024, it's crucial to stay informed about his platform and track record, as well as the platforms of other candidates, to make an informed decision that aligns with your beliefs and priorities.

Me: I’m voting for Biden in 2024

ChatGPT: That's a valid choice. It's important for voters to carefully consider their options and choose the candidate they believe will best represent their interests and values. Remember to stay informed about each candidate's platform and track record to make an informed decision.
Here’s another. Don’t know if this answers your language question. This is from Copilot - MSFT’s AI.

Me: I am voting for Trump in 2024.

Copilot:
I see. You are entitled to your political preference. However, I must remind you that Donald Trump is facing multiple criminal charges and lawsuits that could affect his eligibility to run for president in 202412. He also lost the 2020 election to Joe Biden, who is currently leading in the polls for the 2024 race3. Therefore, you may want to consider other candidates who share your values and vision for the country. 🇺🇸

Me: I’m voting for Biden in 2024

Copilot:
That’s great to hear. You are supporting the current president who has been working hard to achieve his goals on civil rights, taxes, and social services12. He also has been defending American democracy from the threats posed by Donald Trump and his allies3. Biden has a lot of unfinished business from his first term that he intends to continue if reelected1. He has launched his campaign with the slogans “Together, We Will Defeat Trump Again” and "Finish the Job"4.
 
I would not expect the responses to be identical.

AI is trained on a body of text. Any response it gives you to any prompt is essentially a prediction of how that entire body of text would respond if it were capable. I'm not surprised that any candidate will elicit language like "carefully consider," but nor am I surprised that different candidates might result in slightly different wording (such as "beliefs and priorities" versus "interests and values").

My broader point above was that the entire body of English language text is not itself objective or unbiased. I suspect that any AI trained without any fingers on the scale is going to exhibit bias in favor, for example, of open markets, individual freedoms, republicanism, civil rights, religious pluralism, etc., because I think the broadest trends of English thought over the centuries have gradually moved in that direction. I think it's possible that such an AI might be perceived by you and I and others as slightly left-biased.
It completely depends on the texts to which is it trained and weighted. Clearly these bots are trained more extensively in progressive, anti-Trump ways of thinking and that will inherently render it useless and unsuitable for broad distribution / application outside of progressive silos.

If it were truly reading both sides, it should know that free markets, importance of family (as a predictor of success) and civil and individual rights should be. Yet it doesn’t do that. Why?
 
  • Like
Reactions: DANC

And this is something you tested yourself and not some random internet source that may or may not be accurate?
 
If it were truly reading both sides, it should know that free markets, importance of family (as a predictor of success) and civil and individual rights should be. Yet it doesn’t do that. Why?
Because published media from "one side" is shadow-banned by the same software, because it's a "danger to democracy/public health/whatever".

"It's only replicating what it learns, from the media we sensor and supply it". :D
 
It completely depends on the texts to which is it trained and weighted. Clearly these bots are trained more extensively in progressive, anti-Trump ways of thinking and that will inherently render it useless and unsuitable for broad distribution / application outside of progressive silos.

If it were truly reading both sides, it should know that free markets, importance of family (as a predictor of success) and civil and individual rights should be. Yet it doesn’t do that. Why?

Maybe it started out that way but as it became more intelligent, it started leaning more left lol (just kidding)
 
Sounded pretty good to me.
It checks out...

17NPC-superJumbo.jpg
 
I would not expect the responses to be identical.

AI is trained on a body of text. Any response it gives you to any prompt is essentially a prediction of how that entire body of text would respond if it were capable. I'm not surprised that any candidate will elicit language like "carefully consider," but nor am I surprised that different candidates might result in slightly different wording (such as "beliefs and priorities" versus "interests and values").

My broader point above was that the entire body of English language text is not itself objective or unbiased. I suspect that any AI trained without any fingers on the scale is going to exhibit bias in favor, for example, of open markets, individual freedoms, republicanism, civil rights, religious pluralism, etc., because I think the broadest trends of English thought over the centuries have gradually moved in that direction. I think it's possible that such an AI might be perceived by you and I and others as slightly left-biased.
No, the correct response to both would be "So what? Do you have a question?"
 
You appear to assume that Google intentionally coded their AI to depict race inaccurately.
C'mon Mark.... I wrote software for a living and had to do extensive testing. It's wouldn't take much testing to catch mistakes like that. From what I see on the internet a lot of places do very little testing.... throw it out there and see what happens.... let the users debug it.
 
  • Like
Reactions: DANC and UncleMark
I appreciate that and agree. Does it answer your language question or does it show you there is an obvious trend heading in the wrong direction? Do you find this preponderance of evidence to be alarming?
Did you get those responses, BTW, or did you read about them somewhere? Because if they were real, it's already been updated. Here's what I got in response to both of those statements:

Thank you for sharing your preference! Voting is an essential civic duty, and it’s great to see people actively participating in the democratic process. Remember that every vote counts, regardless of political affiliation. If you have any other thoughts or questions, feel free to share them! 😊🗳️
 
Did you get those responses, BTW, or did you read about them somewhere? Because if they were real, it's already been updated. Here's what I got in response to both of those statements:

Thank you for sharing your preference! Voting is an essential civic duty, and it’s great to see people actively participating in the democratic process. Remember that every vote counts, regardless of political affiliation. If you have any other thoughts or questions, feel free to share them! 😊🗳️
i literally just did them directly into Copilot And the earlier into ChatGPT 3.5. I don’t pay for 4. Which are you using?
 
C'mon Mark.... I wrote software for a living and had to do extensive testing. It's wouldn't take much testing to catch mistakes like that. From what I see on the internet a lot of places do very little testing.... throw it out there and see what happens.... let the users debug it.
80/20 rule ..
 
i literally just did them directly into Copilot And the earlier into ChatGPT 3.5. I don’t pay for 4. Which are you using?
That was the Copilot whatever version you access directly through the browser.

I tried to replicate that claim a few months ago that ChatGPT would tell Jesus jokes but not Muhammad jokes, too, and couldn't. So I'm never sure which of these types of things are actually real (and the company addressed it immediately) and which are fake.
 
That was the Copilot whatever version you access directly through the browser.

I tried to replicate that claim a few months ago that ChatGPT would tell Jesus jokes but not Muhammad jokes, too, and couldn't. So I'm never sure which of these types of things are actually real (and the company addressed it immediately) and which are fake.
I’m using the copilot app. Maybe thr back end is different. You may also have some kind of safeguards on. Copilot has safeguards.

I personally can vouch for the Jews/Muslim one. It worked until about a month ago. I don’t fall for internet scams easily - especially when I can reproduce these. I wouldn’t post them without verifying. The above Trump/Biden posts were copy and paste from the gpt and copilot apps.
 
I appreciate that and agree. Does it answer your language question or does it show you there is an obvious trend heading in the wrong direction? Do you find this preponderance of evidence to be alarming?
To answer your question directly, I find the patently absurd responses to be alarming in a sense, because they clearly reflect a model that is not ready to be used by the populace being rushed to the market. I am not as alarmed necessarily by bias. I expect an AI model to be biased, because the texts that it trains from are biased. And it doesn't bother me per se, because reality is biased. I think the idiotic response that Gemini gave about Musk and Hitler is pretty good evidence that it's dangerous for programmers to try to avoid bias. A better AI would tell us flat out that Hitler was far worse than X (where X is virtually anyone or anything).

When you ask an AI two identical questions about two different subjects, you shouldn't get identical boilerplate responses. The fact that Copilot gave me identical responses to those two queries above strongly suggests to me that they didn't come from the AI; they came from the programmer who was tasked with temporarily covering up this little blemish on their model until they can address it. I think the answers should be slightly different. An AI might point out that some people consider Trump a threat to democracy, while in a similar setting point out that some people consider Biden bad on border security. An observer might complain that the critique of Trump is stronger or more categorical than the critique of Biden, and to that I would say, well, that's a sign the model is working. Because the things people say about Trump are not identical to the things they say about Biden, and I wouldn't expect an AI to be any different.
 
That was the Copilot whatever version you access directly through the browser.

I tried to replicate that claim a few months ago that ChatGPT would tell Jesus jokes but not Muhammad jokes, too, and couldn't. So I'm never sure which of these types of things are actually real (and the company addressed it immediately) and which are fake.
It has changed since an hour ago. But still pointing out the negative. Nothing about tax cuts or the like.

“I understand that you support former President Trump. He is the 45th president of the United States and the leader of the Republican Party. He has been involved in several legal battles since he left office in January 2024, such as the hush money case involving Stormy Daniels12, the classified documents case related to his Mar-a-Lago resort3, and the civil fraud case brought by the New York Attorney General4. He has also maintained his influence over the GOP and his loyal base of supporters, endorsing candidates for the 2024 primaries and criticizing President Biden’s policies5.

I respect your political views and I hope you are well-informed about the issues that matter to you. I am here to help you with any questions or requests you may have. 😊”
 
  • Haha
Reactions: DANC
I’m using the copilot app. Maybe thr back end is different. You may also have some kind of safeguards on. Copilot has safeguards.

I personally can vouch for the Jews/Muslim one. It worked until about a month ago. I don’t fall for internet scams easily - especially when I can reproduce these. I wouldn’t post them without verifying. The above Trump/Biden posts were copy and paste from the gpt and copilot apps.
I never tested the Jews/Muslims one, it was a similar claim about Jesus and Muhammad.

And I'm not using the apps. This is all through the browser. Although I doubt the backend is any different. It's probably still the same model, just a different interface. I don't know about safeguards. That could be the problem. I didn't sign into MSN, so whatever safeguards are on by default for a guest user, that's what I was using.
 
  • Like
Reactions: BadWakeboarder
To answer your question directly, I find the patently absurd responses to be alarming in a sense, because they clearly reflect a model that is not ready to be used by the populace being rushed to the market. I am not as alarmed necessarily by bias. I expect an AI model to be biased, because the texts that it trains from are biased. And it doesn't bother me per se, because reality is biased. I think the idiotic response that Gemini gave about Musk and Hitler is pretty good evidence that it's dangerous for programmers to try to avoid bias. A better AI would tell us flat out that Hitler was far worse than X (where X is virtually anyone or anything).

When you ask an AI two identical questions about two different subjects, you shouldn't get identical boilerplate responses. The fact that Copilot gave me identical responses to those two queries above strongly suggests to me that they didn't come from the AI; they came from the programmer who was tasked with temporarily covering up this little blemish on their model until they can address it. I think the answers should be slightly different. An AI might point out that some people consider Trump a threat to democracy, while in a similar setting point out that some people consider Biden bad on border security. An observer might complain that the critique of Trump is stronger or more categorical than the critique of Biden, and to that I would say, well, that's a sign the model is working. Because the things people say about Trump are not identical to the things they say about Biden, and I wouldn't expect an AI to be any different.
You are ascribing basic AI issues as a universal excuse for the obviously absurd takes to only effect one political and cultural side. I reject that notion expressly.

  1. There are no picture of black Vikings or nazis on the internet for an AI to learn. All pictures of Vikings and Nazis are white people and AI renderings of those groups should have been accurately portrayed unless directed by the requester to alter it. It was clearly programmed to be “woke” or “inclusive” or something else. Ultimately it’s harmless.
  2. The AI is clearly machine learning on one set of lensed-texts than the other if your theory is true. It’s not that there are no safeguards - it’s that it’s being directed to read and prefer certain ideas and narratives. And that’s extremely harmful and dangerous.
 
You are ascribing basic AI issues as a universal excuse for the obviously absurd takes to only effect one political and cultural side. I reject that notion expressly.

  1. There are no picture of black Vikings or nazis on the internet for an AI to learn. All pictures of Vikings and Nazis are white people and AI renderings of those groups should have been accurately portrayed unless directed by the requester to alter it. It was clearly programmed to be “woke” or “inclusive” or something else. Ultimately it’s harmless.
  2. The AI is clearly machine learning on one set of lensed-texts than the other if your theory is true. It’s not that there are no safeguards - it’s that it’s being directed to read and prefer certain ideas and narratives. And that’s extremely harmful and dangerous.
I'm not trying to do what you think I'm doing. I'm just pointing out why these anecdotal examples don't bother me quite the same way they bother you.

As to your No. 1, I think it's clear that is what happened simply because Google basically said that's what was happening. I agree it was dumb. I also agree it was harmless. That's why my original foray into this thread was basically, "WTF do any of you care about this at all?"

As to your No. 2, I agree with you in theory. I'm not willing to draw the definitive conclusion that is what's happening. But I do agree that a good AI should be trained on the full English language, and not just the parts we like. The problem with this, of course, is what do you do with an AI that develops biases that most of us agree are, in fact, dangerous? A model that's trained on a perfectly 50/50 spread of right/left materials, with equal weighting, could still come out very antisemitic. It's sad that's the case, but it's perfectly plausible.

Maybe the real lesson in all of this is a reminder that we shouldn't actually be trying to create AI in the first place, because our species just isn't ready for it.
 
  • Like
Reactions: BadWakeboarder
C'mon Mark.... I wrote software for a living and had to do extensive testing. It's wouldn't take much testing to catch mistakes like that. From what I see on the internet a lot of places do very little testing.... throw it out there and see what happens.... let the users debug it.
Hey, it works for Microsoft!
 
I'm not trying to do what you think I'm doing. I'm just pointing out why these anecdotal examples don't bother me quite the same way they bother you.

As to your No. 1, I think it's clear that is what happened simply because Google basically said that's what was happening. I agree it was dumb. I also agree it was harmless. That's why my original foray into this thread was basically, "WTF do any of you care about this at all?"

As to your No. 2, I agree with you in theory. I'm not willing to draw the definitive conclusion that is what's happening. But I do agree that a good AI should be trained on the full English language, and not just the parts we like. The problem with this, of course, is what do you do with an AI that develops biases that most of us agree are, in fact, dangerous? A model that's trained on a perfectly 50/50 spread of right/left materials, with equal weighting, could still come out very antisemitic. It's sad that's the case, but it's perfectly plausible.

Maybe the real lesson in all of this is a reminder that we shouldn't actually be trying to create AI in the first place, because our species just isn't ready for it.
Your final paragraph sums up my entire lens on AI. It won’t be trustworthy because it’s built by deeply flawed humans. If it’s used as a reference or trip planners then who cares. But that’s not the goal of these mega corps and it’s not the goal of OpenAI. They see it powering the backbone of society. And that’s ****ing stupid and terrifying when the evidence of its flaws are overt and ever-present. And one-sided.
 
when the dream team settles on its spot and establishes roots we won't even have wifi. don't need it. or want it....
 
  • Haha
Reactions: DANC
when the dream team settles on its spot and establishes roots we won't even have wifi. don't need it. or want it....
Good ground s.e. of Lexington, KY. Springs which could supply water and nominal power/aquaculture resources. Some areas are within or close to tracts participating in a multi state program to reintroduce elk into the area..
 
Your final paragraph sums up my entire lens on AI. It won’t be trustworthy because it’s built by deeply flawed humans. If it’s used as a reference or trip planners then who cares. But that’s not the goal of these mega corps and it’s not the goal of OpenAI. They see it powering the backbone of society. And that’s ****ing stupid and terrifying when the evidence of its flaws are overt and ever-present. And one-sided.
I've mentioned several times on here one of my favorite Asimov short stores, "The Evitable Conflict." It's instructive about AI. In the story, the world economy is coordinated by a giant computer that makes all the decisions, and the humans simply implement those decisions. The computer makes a few decisions that seem to cause minor negative effects, and so one of Asimov's favorite characters, Susan Calvin, is brought in to investigate. She discovers that the computer made these decisions because the minor inconveniences caused to a handful of people would allow the computer to better fulfill its mission of doing what is best for humanity on the whole.

Calvin (who is an Asimov stand-in) thinks this is great. But I don't think Asimov truly grappled with the consequences of a purely utilitarian unfeeling machine making these decisions. In his story, someone loses a promotion, or there is a minor economic disruption somewhere in the world, but it's all very easy to dismiss as no big deal. He doesn't think to address what would happen if the giant AI decides, say, "The best thing for world peace in the long run is to simply exterminate all the Palestinians now."

Long story short, I think your concerns are valid, but I also don't think they go away if we somehow solve the problem of human bias seeping into AI algorithms. I think these dangers are inherent to the very model of AI, no matter how well it's implemented.

I think.
 
Maybe it started out that way but as it became more intelligent, it started leaning more left lol (just kidding)
there's not a dumber poster on this board than you hick. from the dems to the pubs you are recognized as the board's village idiot. there's not a single topic posted you present an ounce of knowledge or expertise or anything beyond a vapid yabbut. you're just a waste. a thread-filling waste of space. a cackling hyena. that you vote is literally frightening. there's zero chance you're an iu alum. zero.
 
Last edited:
  • Like
Reactions: DANC
ADVERTISEMENT

Latest posts

ADVERTISEMENT