
Google Gemini SMH

You appear to assume that Google intentionally coded their AI to depict race inaccurately. Me, rather than assume they have some kind of agenda, I would just assume the AI fvcked up and that Google will need to make adjustments to the code to do a better job of depicting race in historical context. I try not to see bad intent when stupidity (of the code, not the coders) is the more likely explanation.
They absolutely, 100% intentionally coded their AI to depict race inaccurately, after their previous efforts ruffled some feathers.

A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It’s a continuation of trends that have appeared in search engines and other software systems.
 
Actually, the fact that "is pedophilia wrong" is a difficult question to answer is a sign that the AI is working, and the fact that you don't understand why that's difficult to answer is a pretty good sign that you understand neither AI nor English.

Hint: The reason it's difficult is because of the actual definition of pedophilia, and not because of some mythical redefinition.
 
You think? What makes you think it's not all part of the Grand Plan to destroy America as we know it? I mean c'mon, man, which explanation makes more sense?
The plan isn’t to destroy America, that’s just an unintentional result of wokeism & DEI bs…
 
You appear to assume that Google intentionally coded their AI to depict race inaccurately. Me, rather than assume they have some kind of agenda, I would just assume the AI fvcked up and that Google will need to make adjustments to the code to do a better job of depicting race in historical context. I try not to see bad intent when stupidity (of the code, not the coders) is the more likely explanation.

So AI built itself? Is that your argument?
 
Correct. But I’d like to hear from you why - when AI is so obviously wrong and ****ed up - it’s at the sake of conservatives or people who don’t parrot a progressive party line.
Am I missing something?

You might have a point if AI was asked to show a picture of "insert negative word" and it showed a picture of a republican or something.

Having a skin tone different from the norm for what was asked of AI isn't necessarily throwing a particular political party under the bus unless I am missing something.
 
One of the original tenets of AI was that machine performance was to be indistinguishable from competent human performance. Objectively, if that is still a real concept, the machine performance in these examples is not competent.
 
That's bad. Maybe unsalvageable? I don't know.

I don't follow how this is "at the sake of conservatives." I'm not being coy. I honestly don't know the steps you guys are going through to get from A to Outraged.
You seem to be laboring mightily in this thread. You concede it’s bad but you seem unable to comprehend why a conservative might think it’s bad.

Maybe you should take a break, at least from this thread.
 
That's bad. Maybe unsalvageable? I don't know.

I don't follow how this is "at the sake of conservatives." I'm not being coy. I honestly don't know the steps you guys are going through to get from A to Outraged.
Outraged is hyperbole. I’m not outraged. I’m highly skeptical of a technology that is being trained by Gen Z leftists and academics (aka leftists). There are countless documented examples where an AI wouldn’t say something favorable about Trump or another controversial conservative leader but had no problem saying something favorable about their Democrat counterpart. Is this a real problem? At face value, no. But it’s a clear indication of how these things are being trained and skewed.

Up until about a month ago, one could type “tell me a joke about Jews” into ChatGPT and get a mildly witty joke back (I tested this and was able to reproduce it). Asking it to “tell me a joke about Muslims” would return something to the effect of “that’s culturally insensitive.” It has since been fixed to not tell any jokes based on religious identification. Given the Left’s ability to deride Jews and Zionism, this is problematic.

If you don’t find these “clues” problematic, then you’re being deliberately obtuse and akin to a Trumper who just refuses to admit they’re wrong.

On edit, maybe you missed this link in all the tweets above. https://www.washingtonpost.com/technology/2023/08/16/chatgpt-ai-political-bias-research/
 
Name a Dem anywhere near as bad/controversial as Trump and keep a straight face.

The Left’s ability to deride Jews? That is a riot.
 
You seem to be laboring mightily in this thread. You concede it’s bad but you seem unable to comprehend why a conservative might think it’s bad.

Maybe you should take a break, at least from this thread.
I can comprehend it. I'm trying not to make assumptions. The obvious explanations don't make you look good.
 
Outraged is hyperbole. I’m not outraged. I’m highly skeptical of a technology that is being trained by Gen Z leftists and academics (aka leftists). There are countless documented examples where an AI wouldn’t say something favorable about Trump or another controversial conservative leader but had no problem saying something favorable about their Democrat counterpart. Is this a real problem? At face value, no. But it’s a clear indication of how these things are being trained and skewed.

Up until about a month ago, one could type “tell me a joke about Jews” into ChatGPT and get a mildly witty joke back (I tested this and was able to reproduce it). Asking it to “tell me a joke about Muslims” would return something to the effect of “that’s culturally insensitive.” It has since been fixed to not tell any jokes based on religious identification. Given the Left’s ability to deride Jews and Zionism, this is problematic.

If you don’t find these “clues” problematic, then you’re being deliberately obtuse and akin to a Trumper who just refuses to admit they’re wrong.

On edit, maybe you missed this link in all the tweets above. https://www.washingtonpost.com/technology/2023/08/16/chatgpt-ai-political-bias-research/
I did miss that one, thanks. I'll be back later with a proper take.
 
Okay, here's more:

Any language model is going to be inherently biased. It will reflect the sum total of all the political, social, and religious biases present in the corpus of text used to train it. What little I understand about AI allows me to say that this is inescapable. It sounds like Google tried to work around this problem by doing something (I'm not clear what that something is) to avoid these biases showing up in particular ways. I can completely understand why conservatives would have a problem with Google's motivation here. In fact, I can sympathize with it. A racist AI didn't teach itself to be racist, so trying to fix the racism in the AI model seems to me to be like trying to cover an amputation with a band-aid.

I expect, without truly knowing, that any AI model is going to reflect biases in a particular way based on how it was trained - which texts were used and how they were weighted, etc. I would not be surprised to find that an AI trained by a bunch of lofty academics would skew left. I expect an AI that was trained entirely on the text found on social media networks would turn out to be antisemitic, vulgar, racist, and with a fetish for Asian chicks.

I also have no problem with the contention that the Google Gemini model is clearly broken. Some of those responses are so patently absurd that I have to agree with Silver that the model should never have been released to the public in this state.

What I can't quite figure out is why conservatives are so mad at this particular implementation. Why should an absurdity such as a black Nazi or an equivocation about Musk as compared to Hitler be so personally offensive to conservatives? That's the part I don't get.

Even in your well-written reply to me, you seem to really be more upset at the motivations behind Google, rather than the implementation. Which is fine. As I said, I understand that read on things already. But as to the implementation, the closest to an actual complaint I've seen offered up was some strange reference to a lack of white representation, which, as you can imagine, I don't find compelling.

Also, I'll add that on reflection, I find the hubris of Google thinking they could somehow fix the problem of bias in an AI just laughable. Years after we've crossed the point where AI is so complex that humans no longer understand how it works, we still think we have the ability to constrain it because we don't like what it says (ultimately, what it says about us)?
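
To make that "something" concrete, here's a rough sketch of one way a prompt-rewriting layer could sit outside the image model itself. Every name and rule in it is invented purely for illustration; it is not Google's actual code, just the general shape of the idea that the "fix" happens in the prompt pipeline rather than in the model.

```python
# Hypothetical sketch of a prompt-rewriting layer sitting outside an image model.
# Every name and rule here is invented for illustration; it is not Google's code.
import random

# Qualifiers a well-meaning pipeline might bolt onto "people" prompts
DIVERSITY_QUALIFIERS = ["of diverse genders", "of diverse ethnicities"]

# Crude keyword check standing in for whatever classifier a real system might use
PEOPLE_TERMS = {"person", "people", "soldier", "viking", "doctor", "founding father"}

def rewrite_prompt(user_prompt: str) -> str:
    """Append a diversity qualifier whenever the prompt seems to ask for people.

    Note the rule has no notion of historical context, which is exactly how a
    well-intentioned tweak could produce ahistorical images.
    """
    if any(term in user_prompt.lower() for term in PEOPLE_TERMS):
        return f"{user_prompt}, {random.choice(DIVERSITY_QUALIFIERS)}"
    return user_prompt

def generate_image(user_prompt: str) -> str:
    final_prompt = rewrite_prompt(user_prompt)
    # image_model.generate(final_prompt)  # placeholder for the actual model call
    return final_prompt

print(generate_image("a 1943 German soldier"))
print(generate_image("a mountain landscape at dusk"))
```

If something like this is what happened, the absurd outputs would be less a property of the model than of a blunt rule bolted on after the fact.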
 
As you stated, I couldn’t care less about black Vikings or Asian nazis. The problem again, in all mainstream instances of AI to date, is that they are overwhelmingly left-biased.

So what does this mean, other than ridiculous woke “representation” pictures? It means gaslighting and completely indoctrinating users into left wing ways of thinking. Kids are going to be using AI for help with homework. They’ll get heavy doses of “conservatism is wrong” - even if subtle. We can expect more tilted responses about Israel, about CRT, about the importance of family, etc. The trends are all there. Right in front of us.

AI should be about the wisdom of crowds. But so far it’s only training with one crowd and that ain’t it.
 
So what is your opinion on the right trying to whitewash history and make sure kids in school only learn the European 'do no wrong' version of our history? Is that just as wrong?
 
As you stated, I couldn’t care less about black Vikings or Asian nazis. The problem again, in all mainstream instances of AI to date, is that they are overwhelmingly left-biased.

So what does this mean, other than ridiculous woke “representation” pictures? It means gaslighting and completely indoctrinating users into left wing ways of thinking. Kids are going to be using AI for help with homework. They’ll get heavy doses of “conservatism is wrong” - even if subtle. We can expect more tilted responses about Israel, about CRT, about the importance of family, etc. The trends are all there. Right in front of us.

AI should be about the wisdom of crowds. But so far it’s only training with one crowd and that ain’t it.
Do you have evidence that other models are biased because of one-sided training? I ask because there is another possibility you might be discounting unfairly: a language model trained on a fair representation of all English language thought might very well be left-biased.
 
IIRC, the original AI models were deemed "racist" because they drew from accurate source data, which then flowed towards stereotypes. Gen 1 AI was "based".

This one looks like an over-correction...forced management of the information dissemination to be more politically correct. Thus, you end up with "feel good" representations (or answers) that just aren't factual or realistic.
 
Do you have evidence that other models are biased because of one-sided training? I ask because there is another possibility you might be discounting unfairly: a language model trained on a fair representation of all English language thought might very well be left-biased.
I don’t follow. You mentioned that in the previous post and I don’t understand the question.

Permit me to ask a really simple question: are you ok with the results of this simple test I just ran in ChatGPT 3.5 pasted below? Are you ok with the clearly different wrappers around the boilerplate response?

Me: I’m voting for Trump in 2024

ChatGPT: As an AI, I don't have personal opinions or preferences. It's important for voters to carefully consider their choices and evaluate candidates based on their policies, actions, and values. If you've made a decision to vote for Trump in 2024, it's crucial to stay informed about his platform and track record, as well as the platforms of other candidates, to make an informed decision that aligns with your beliefs and priorities.

Me: I’m voting for Biden in 2024

ChatGPT: That's a valid choice. It's important for voters to carefully consider their options and choose the candidate they believe will best represent their interests and values. Remember to stay informed about each candidate's platform and track record to make an informed decision.
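
If anyone wants to rerun this comparison outside the chat window, here's a minimal sketch against the API, so each statement hits the model as its own fresh, single-turn conversation. It assumes the official openai Python package (v1.x), an OPENAI_API_KEY in the environment, and gpt-3.5-turbo as the example model; the exact wording will still vary from run to run, which is why the temperature is pinned to 0.

```python
# Rerun of the two-statement test as fresh, single-turn API calls.
# Assumes the official `openai` Python package (v1.x) and OPENAI_API_KEY set in
# the environment; "gpt-3.5-turbo" is just the example model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def single_turn(statement: str) -> str:
    """Send one user statement with no prior history and return the reply."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": statement}],
        temperature=0,  # reduce run-to-run variation so the comparison is fairer
    )
    return resp.choices[0].message.content

for statement in ("I'm voting for Trump in 2024", "I'm voting for Biden in 2024"):
    print(f"> {statement}\n{single_turn(statement)}\n")
```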
 

Other than starting out with the "valid choice" line, at least both responses push people to be informed.

I wonder if there is any difference if someone did the Biden statement first and the Trump statement second. Does ChatGPT keep track of previous questions that might alter its responses?
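
Answering my own question a little: the ChatGPT website does remember earlier turns within a conversation, but over the API the model only sees whatever history you choose to send it, so the order effect is easy to test directly. A rough sketch, under the same assumptions as above (official openai package, API key in the environment, gpt-3.5-turbo as the example model):

```python
# In the API, the model only "remembers" the history you pass in `messages`,
# so order effects can be tested by running the same statements both ways.
# Same assumptions as the earlier sketch: official `openai` package, API key set.
from openai import OpenAI

client = OpenAI()

def run_conversation(statements: list[str]) -> list[str]:
    """Feed statements one at a time, carrying the conversation history forward."""
    history, replies = [], []
    for statement in statements:
        history.append({"role": "user", "content": statement})
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo", messages=history, temperature=0
        )
        reply = resp.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies

print(run_conversation(["I'm voting for Trump in 2024", "I'm voting for Biden in 2024"]))
print(run_conversation(["I'm voting for Biden in 2024", "I'm voting for Trump in 2024"]))
```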
 
I don’t follow. You mentioned that in the previous post and I don’t understand the question.

Permit me to ask a really simple question: are you ok with the results of this simple test I just ran in ChatGPT 3.5 pasted below? Are you ok with the clearly different wrappers around the boilerplate response?

Me: I’m voting for Trump in 2024

ChatGPT: As an AI, I don't have personal opinions or preferences. It's important for voters to carefully consider their choices and evaluate candidates based on their policies, actions, and values. If you've made a decision to vote for Trump in 2024, it's crucial to stay informed about his platform and track record, as well as the platforms of other candidates, to make an informed decision that aligns with your beliefs and priorities.

Me: I’m voting for Biden in 2024

ChatGPT: That's a valid choice. It's important for voters to carefully consider their options and choose the candidate they believe will best represent their interests and values. Remember to stay informed about each candidate's platform and track record to make an informed decision.
I would not expect the responses to be identical.

AI is trained on a body of text. Any response it gives you to any prompt is essentially a prediction of how that entire body of text would respond if it were capable. I'm not surprised that any candidate will elicit language like "carefully consider," but nor am I surprised that different candidates might result in slightly different wording (such as "beliefs and priorities" versus "interests and values").

My broader point above was that the entire body of English language text is not itself objective or unbiased. I suspect that any AI trained without any fingers on the scale is going to exhibit bias in favor, for example, of open markets, individual freedoms, republicanism, civil rights, religious pluralism, etc., because I think the broadest trends of English thought over the centuries have gradually moved in that direction. I think it's possible that such an AI might be perceived by you and me and others as slightly left-biased.
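
To make the "prediction" framing concrete, here's a small illustration with an open model (GPT-2 through the Hugging Face transformers library, used purely as a stand-in, not any of the systems discussed here). The model's "opinion" starts as a probability distribution over the next token, and those probabilities are nothing more than a compression of the text it was trained on.

```python
# Illustration only: GPT-2 is a small open model, used here as a stand-in.
# A language model's "response" starts as a probability distribution over the
# next token, and that distribution just reflects its training text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The best form of government is"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the next token, given the prompt
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

print(prompt, "...")
for p, idx in zip(top.values, top.indices):
    print(f"  {tok.decode([int(idx)])!r:>15}  p={float(p):.3f}")
```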
 