Google’s AI isn’t too woke. It’s too rushed

Google’s rushed, faulty AI isn’t alone. Microsoft’s Bing chatbot wasn’t just inaccurate; it was unhinged, telling a New York Times columnist soon after its release that it was in love with him and wanted to destroy things. Google has said that responsible AI is a top priority and that it is “continuing to invest in the teams” that apply its AI principles to its products.


OpenAI, which kick-started Big Tech’s race for a foothold in generative AI, normalised the rationale for treating us all like guinea pigs with new AI tools. Its website describes an “iterative deployment” philosophy, under which it releases products like ChatGPT quickly to study their safety and impact, and to prepare us for more powerful AI in the future. Google’s Pichai now says much the same. By releasing half-baked AI tools, he’s giving us “time to adapt” before AI becomes super-powerful, according to comments he made in a 60 Minutes interview last year.

When asked what keeps him up at night, Pichai said, with no trace of irony, that it was knowing that AI could be “very harmful if deployed wrongly.” So what was his solution? Pichai didn’t mention investing more in the researchers who make AI safe, accurate and ethical, but pointed instead to greater regulation, a solution that lies outside his control.

“There have to be consequences for creating deepfake videos which cause harm to society,” he said, referring to AI videos that could spread misinformation. “Anybody who has worked with AI for a while, you know, you realise this is something so different and so deep that we would need societal regulations to think about how to adapt.”

This is a bit like the chef of a restaurant saying, “Making people sick with salmonella is bad, and we need more food inspectors to check our raw food,” while knowing full well there are no food inspectors to speak of, and won’t be for years. It gives them licence to continue dishing out tainted meat or fish. The same is true in AI.


With regulations in the distant future, Pichai knows the onus is on his company to build AI systems that are fair and safe. But now that he is caught up in the race to put generative AI into everything quickly, there’s little incentive to ensure that it is.

We know about Gemini’s diversity bug because of all the posts on X, but the AI model may have other problems we don’t know about — issues that may not trigger Elon Musk but are no less insidious. The female popes and black founding fathers are products of a deeper, years-long problem of putting growth and market dominance before safety.

Expect our role as guinea pigs to continue until that changes.

Parmy Olson is a Bloomberg Opinion columnist covering technology. A former reporter for the Wall Street Journal and Forbes, she is the author of We Are Anonymous.

Bloomberg
