“The matter was completed. The offending material was removed and they released version 4, replacing version 3.5,” Hood told this masthead.
“I did not launch court action. While the material was clearly defamatory it would have been arguable to what extent it was published [in terms of] how many people read the false information.
“And it is the second test that determines what damage was done and what compensation may apply.”
The fourth version of ChatGPT, which was released last year and powers Microsoft’s Bing chatbot, avoids the mistakes of its predecessor. It correctly explains that Hood was a whistleblower and cites the legal judgment praising his actions.
Hood said his media campaign had been successful, reaching some 680 million people worldwide.
“It stated the true facts and put the spotlight on how unreliable the ChatGPT facility can be and also drew attention to the inadequate regulation around artificial intelligence,” he said. “There was also the not insignificant factor of the cost of an individual launching court action against a large (and generally faceless) overseas corporation.”
ChatGPT maker OpenAI, which is based in San Francisco, did not respond to a request for comment. According to the latest available data, ChatGPT is one of the fastest-growing services ever and currently has around 180 million users globally. Its website generated 1.6 billion visits in December 2023.
Australia is considering laws to ensure stronger AI protections, with Industry and Science Minister Ed Husic telling reporters last month that the threat of rampant disinformation looms larger than any other danger from the nascent technology.
NewsGuard, an organisation that tracks misinformation, found that between May and December last year the number of websites hosting AI-created false articles grew by more than 1000 per cent.
“The biggest thing that concerns me around generative AI is just the huge explosion of synthetic data; the way generative AI can just create stuff you think is real and organically developed but it’s come out of a generative model,” Husic told reporters in Canberra.
“The big thing I’m concerned about is not that the robots take over but the disinformation does.”