Why we should worry about AI

Less than a fortnight ago, OpenAI's co-founder and chief executive, Sam Altman, was ousted by the board of the non-profit entity that sits above the for-profit company, 49 per cent owned by Microsoft, that commercialises the group's AI products.

A revolt by the staff, 95 per cent of whom threatened to leave unless Altman was reinstated, and a Microsoft announcement that it would employ Altman and set up a new internal generative AI business led to Altman's return to OpenAI. It also led to the displacement of a board that had a charter to prioritise safety over profits, replaced by new directors who may or may not have the same priorities.

Amazon unveiled a new product this week. Credit: AP

Reuters and The Information have reported that the catalyst for Altman's original sacking was a letter from OpenAI researchers to the board warning of a breakthrough in AI the company had made with a model known as Q* (pronounced Q-Star).

Q* apparently has the ability to solve fairly basic mathematical problems, which requires a level of understanding of maths and abstract concepts, and an ability to reason logically and make deductions, that would be a step beyond what AI models have been capable of.

The reason this might have agitated OpenAI researchers is that it would be a step towards artificial general intelligence (AGI), or a level of intelligence approaching, or eventually surpassing, that of humans. The founder of Google’s DeepMind AI lab, Shane Legg, has said he believed there was a 50-50 chance that AGI would be achieved by the end of this decade.

There are plenty within the AI research community who have downplayed the implications of Q*, if it is as it has been described, but the outcome of the battle between OpenAI’s not-for-profit directors and Altman and Microsoft sends a clear signal.

If there’s a choice between safety and profit, profit will win.

That’s partly from necessity – developing the large language models that power generative AI devours cash – but also because, to attract executives and developers, the companies need to compete with other tech companies offering big salaries and huge equity upside.

OpenAI has been valued (for the purpose of creating an opportunity for staff in the for-profit entity to cash out some of their equity) at $US86 billion. When that sort of money is on the table, any qualms about risks to humanity are likely to be relegated to, at best, second-order issues.

The outcome at OpenAI – the removal of those who prioritised safety over profits – says quite strongly that, in their scramble to develop generative AI and keep up with their competitors, the big tech companies can’t be trusted to self-regulate. The legislators will need to do that.

Joe Biden issued an executive order late last month requiring companies to share the results of their safety testing and to develop tools and tests to ensure the models are safe. The Europeans have passed draft laws forcing transparency and restricting some of the perceived more risky applications.

The pace at which AI developments are moving, however, creates the risk that the legislators will either significantly lag the development of AI or, if they are too heavy-handed, limit what might otherwise be transformative innovation.

It’s not just the big US and, to a much lesser degree, UK and European tech companies that are ploughing vast amounts of capital into AI.

In the first half of this year China shaded the US for the number of AI start-ups receiving funds. China’s tech giants, Tencent Holdings, Baidu and Alibaba, are investing in start-ups while also pouring capital into their own large language models.

If there’s a choice between safety and profit, profit will win.

China's government is also a major funder of AI developers, with a particular interest in facial recognition and military applications. Given the authoritarian nature of the state and the sheer size of its population, it also has vast amounts of data that can be drawn on to develop models in a race for AI supremacy that is reliant on access to huge amounts of data.

The geopolitical implications of AI development add to the pressure on governments and regulators to ensure that legislation and regulation driven by safety concerns don’t inhibit their developers.

There will inevitably be tensions, and probably controversies, as regulators and the companies themselves wrestle with the conflicts inherent in trying to reconcile the economic and strategic imperatives of being at the leading edge of AI development while keeping humans safe.

Given the nature of the companies leading the race – and the outcome of the brief but brutal boardroom battle for control of OpenAI’s developments – it’s hard not to be pessimistic about the outcomes.
