EU sets a high benchmark to control the arms race

The French and Germans, who lead Europe’s development of generative AI, are unhappy that – with the continent’s development of the large language models that power AI already lagging badly behind the US and China, and probably, to a lesser extent, the UK – the new rules will undermine European innovation.

That’s at odds with the view expressed by EU Commissioner Thierry Breton, who described the agreement as historic and a “launch pad for EU start-ups and researchers to lead the global AI race.”

It is apparent that the EU legislation, the first attempt by a major Western economy to regulate AI, will be a benchmark for legislators elsewhere. Almost inevitably, US and UK legislators will opt for lighter-touch regulation, placing more emphasis on innovation than the Europeans.

The dilemma confronting legislators is the balance between the risks of generative AI and the potential rewards of innovation: the technology is capable of driving massive gains in efficiency and productivity and transforming economies and societies, but is also, some AI experts fear, capable of destroying humanity.

It was that confrontation between the risk-averse and those excited by the transformative (and wealth-generating) potential of AI that played a role in the recent bizarre corporate governance struggle within OpenAI, the company behind ChatGPT, whose launch a year ago ignited a generative AI arms race.

In that contest, those wanting to push ahead with the development of more sophisticated AI models and commercialise them prevailed over those advocating more cautious progress.

All the major tech companies are scrambling to extend the boundaries of what their AI models are capable of, with developments accelerating at a pace that legislators and regulators will struggle to match.

Whether it’s the commercial rewards for sector leadership, the pursuit of technology leadership or the need to protect against competitive disadvantage, companies will push the boundaries of what their legislators allow.

In the US and UK, and probably here as well, it appears the emphasis will be on transparency and the disclosure of how models are trained, the data fed to them and the risk protections companies have in place, rather than on the Europeans’ more heavy-handed approach.

In China, AI companies appear likely to need a licence before releasing generative AI models, but the emphasis there is on ensuring AI content embodies communist values and that it doesn’t undermine the party’s authority rather than trying to regulate the rate of its development.

AI has powerful commercial implications, but it also has, as the Europeans recognised, military applications. The notion of AI as an arms race isn’t purely metaphorical.

By moving first and probably hardest among the major Western economic regions, the Europeans will (assuming the legislation remains intact) affect not so much the development of AI, but the way its applications are deployed internationally.

The mega tech companies that operate internationally on global platforms, or at least platforms that are interconnected globally, will have to either tailor the development and use of their models to the EU’s AI regime, or treat Europe as a discrete jurisdiction with different functions and risk management processes to those deployed elsewhere.

Some aspects of the concerns that led the EU to its conclusions (after years of intense discussion) are universal.

The high-risk categories of AI are the same in the US, UK or Australia as they are in Europe. The use of AI in education or employment, its potential in facial recognition technologies and the prospect of AI being used to mislead or manipulate individuals or societies are shared concerns. There will probably be – and should be – some broad harmonisation of regulations.

As the OpenAI imbroglio showed, however, in economies like the US that are more market-driven than Europe’s, it will probably be profit incentives, and the need and ability to attract the vast amounts of capital required to build and develop large language models, that push policy towards lighter-touch regulation.

It will require hindsight to learn which approach – Europe’s prescriptive legislation or a more principles-based approach that relies more on transparency – is the more effective in protecting individuals and societies from the risks of AI while best promoting the desired forms of innovation.

If the doomsayers are right, of course, by then it might be too late already.
