Musk’s rebellious sense of humour may have been somewhat charming a few years ago, but mounting controversies and questionable behaviour, including the spread of misinformation and racist and sexist remarks, have become increasingly unfunny.
The Tesla and X boss has previously said he thinks the companies building AI, such as OpenAI, lean too heavily towards making “politically correct” systems.
While critics of the wily entrepreneur have long been advised not to write off Musk, the past 12 months tell an entirely different story.
Musk’s Twitter purchase will likely serve as a case study in future business school classes of how not to pull off an acquisition – and lead to valid questions about whether the executive is the right person to trust with the future of space exploration and artificial intelligence.
The flurry of activity comes at a pivotal time for AI technologies, which are quickly moving from the abstract and hypothetical to the very real.
ChatGPT has become a phenomenon used by university students, teachers, hackers, white-collar workers and everyone in between. The technology, which launched only 12 months ago, now has 100 million weekly active users.
From a technical standpoint, it’s difficult to size up xAI’s Grok given it’s not yet publicly available, with no pricing information or details about the training models it leverages.
Musk has said Grok will use live data from X, the platform formerly known as Twitter, and will be available as a perk for premium X users.
Meanwhile, OpenAI’s GPT-3.5 is free to use and has captured the public’s imagination, giving the company a significant head start.
The latest iteration, GPT-4, is available through a flat $US20 ($31) a month subscription and can write more naturally and fluently than previous models. OpenAI says GPT-4 is 40 per cent more accurate than its predecessor and uses techniques to reduce the disclosure of harmful information.
ChatGPT creator OpenAI this week announced plans to allow anyone to create their own chatbot without coding skills. You might design a chatbot that teaches your children calculus, for example, or one that helps attendees navigate a conference you’ve put together.
“We really believe that gradual iterative deployment is the best way to address the safety challenges of AI,” OpenAI chief Sam Altman said in a keynote at his AI lab’s first developer conference.
“We think it’s especially important to move carefully towards this future.”
“Moving carefully” is not a phrase in Elon Musk’s vocabulary. In fact, Musk says he wants Grok’s experiments to be conducted “in public”, akin to Tesla’s self-driving feature going live on our roads before being finalised.
“Grok” is a term coined by author Robert A. Heinlein in his 1961 science fiction classic Stranger in a Strange Land, and means to intuitively “get it”. It’s not clear from a multitude of perspectives that Musk “gets it”.
Thankfully, as with electric cars and social media, Musk is just one voice in an AI industry that will require co-operation across governments and the private sector, start-ups and tech giants alike, to safely reach its potential.
This month, Australia was among 27 countries to sign the Bletchley Declaration at an AI safety summit in the UK, affirming that AI should be designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy and responsible.
That governments are taking AI’s development seriously is an important step towards making sure we appropriately guard against worst-case scenarios presented by the technology and understand its risks.
While Musk’s entry into the AI sector will no doubt have immediate consequences for its development, thankfully we have a whole host of scientists and entrepreneurs – including some more “politically correct” ones – to trust with its future.