Okay, here's more:
Any language model is going to be inherently biased. It will reflect the sum total of all the political, social, and religious biases present in the corpus of text used to train it. What little I understand about AI allows me to say that this is inescapable. It sounds like Google tried to work around this problem by doing something (I'm not clear what that something is) to keep these biases from showing up in particular ways. I can completely understand why conservatives would have a problem with Google's motivation here. In fact, I can sympathize with it. A racist AI didn't teach itself to be racist, so trying to fix the racism in the model seems to me like trying to cover an amputation with a band-aid.
I expect, without truly knowing, that any AI model will reflect biases in a particular way based on how it was trained: which texts were used, how they were weighted, and so on. I would not be surprised to find that an AI trained by a bunch of lofty academics would skew left. I expect an AI trained entirely on the text found on social media networks would turn out vulgar, racist, and antisemitic, with a fetish for Asian chicks.
I also have no problem with the contention that the Google Gemini model is clearly broken. Some of those responses are so patently absurd that I have to agree with Silver that the model should never have been released to the public in this state.
What I can't quite figure out is why conservatives are so mad at this particular implementation. Why should an absurdity such as a black Nazi, or an equivocation between Musk and Hitler, be so personally offensive to conservatives? That's the part I don't get.
Even in your well-written reply to me, you seem to be more upset at Google's motivations than at the implementation. Which is fine. As I said, I already understand that read on things. But as to the implementation, the closest thing to an actual complaint I've seen offered up was some strange reference to a lack of white representation, which, as you can imagine, I don't find compelling.
Also, I'll add that on reflection, I find the hubris of Google thinking they could somehow fix the problem of bias in an AI laughable. Years after we've crossed the point where AI is so complex that humans no longer understand how it works, do we still think we have the ability to constrain it because we don't like what it says (ultimately, what it says about us)?