AI fake journalist scandal at Sports Illustrated sparks debate about future of journalism


Regardless, the scandal poses a very real question: how will we know what’s fake and what’s not?

While AI tools like ChatGPT are already widely used for tasks such as planning trips and recommending movies, as well as for more sophisticated applications such as facial recognition, their use in newsrooms has sparked consternation about the potentially existential threat the technology poses to journalists.

Image of a fictional journalist generated by Australian AI platform Leonardo.ai. Credit: The Age

AI tools can produce copy quickly – though it is often factually inaccurate – and unlike humans, they don’t demand pay rises, require annual leave or get burnt out.

Earlier this year, US technology news website CNET was caught publishing dozens of AI-generated articles under the byline of “CNET Money Staff,” with stories including the day’s mortgage and refinance rates. Many of the stories contained basic factual errors, and the company later added the disclaimer that “this article was assisted by an AI engine.”

One US-based CNET journalist, speaking on the condition of anonymity, said the saga destroyed morale inside the newsroom.

The incident not only upset employees but also galvanised them to unionise, the journalist said.

BuzzFeed has also begun publishing AI-generated content, including AI-written quizzes and travel guides, though it is clearly marked as being the work of a computer.

Experts say there are plenty of positives in AI, including for journalists, but that newsrooms need to be transparent and upfront about how they use it.

MEAA media director Cassie Derrick.

“Of course we need to be worried,” RMIT associate professor of journalism Alexandra Wake says.

“AI brings advantages for journalists. For one, it’ll write formulaic stories better in many ways than so many. But it can’t dig for new information not already in a computer, it can’t see new information, and it can’t tell when someone is lying.

“Making up people is stupid in so many ways – not only does it erode trust in journalism, it breaks just about every kind of ethical code – not just journalistic ones.”

Cassie Derrick, the media director for the Australian journalists’ union, the Media, Entertainment and Arts Alliance (MEAA), agrees and says generative AI tools can be useful when used correctly.

Earlier this year, the MEAA wrote to News Corp Australia boss Michael Miller amid revelations the company was using AI to help produce thousands of hyper-local news stories on weather, traffic, fuel prices, court listings, deaths and funeral notices.


Those stories aren’t written under fake names – they typically carry the byline of the company’s data journalism editor Peter Judd. They are written by a machine, though News Corp insists a human then oversees the generated copy. News Corp CEO Robert Thomson has warned that AI will lead to a “tsunami” of job losses.

“Smaller newsrooms and higher consumption of news means journalists are under more pressure than ever to meet deadlines, and tools like generative AI can be very effective in helping with those workload pressures – but accountability is key,” Derrick says.

“For news to be trusted, transparency is essential. The public deserves to know who is providing their information, where sources have come from, and what scrutiny has been applied. When AI authors a whole article without genuine oversight from journalists who abide by the Journalist Code of Ethics, those essential elements of transparency are obscured.

“Media outlets need to work with journalists to build policies and practices that balance the benefits of AI as a tool without compromising on ethical, transparent journalism.”

Independent technology journalist Joan Westenberg says she sees undisclosed AI journalism as a threat to the foundational principles upon which journalism was built.


She says that in her years covering technology trends, she’s seen how digital tools can enhance storytelling, but that AI’s lack of human insight is a critical deficit.

“It undermines the trust built between journalists and readers, obscures necessary transparency, and opens up alarming possibilities for narrative manipulation,” she says.

“Trust is hard-earned in journalism… It’s built on personal accountability and ethical rigour. When articles are AI-generated without disclosure, it’s not just a breach of journalistic ethics, it’s a betrayal of reader trust.”


A study by digital magazine and newspaper subscription app Readly found many Australians are wary of an overreliance on AI in areas where human judgment plays an important role, including journalism and teaching. Just 10 per cent of Australians saw benefits in using AI in journalism, compared with 30 per cent who deemed it harmful.

New regulations may be required to make it easier to identify content created by generative AI.

China has mandated watermarks for AI-generated content and made it illegal to delete or alter them. Europe is also mulling similar legislation.

The Australian government earlier this year launched an inquiry into what regulations might be needed to ensure the safe development of AI, and whether new laws might be needed. The inquiry received more than 500 submissions, including from tech giants Google, OpenAI and Microsoft, but the government is yet to issue its response.

Google in August revealed SynthID, a piece of technology that embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye but detectable for identification.

Google’s SynthID embeds a digital watermark into the pixels of an image. Credit: Bloomberg
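For readers curious about the mechanics, the snippet below is a minimal Python sketch of the general idea behind invisible image watermarking: hiding a machine-readable signature in pixel values that the eye cannot perceive but software can check. It uses simple least-significant-bit embedding purely for illustration; SynthID’s actual technique is proprietary and far more robust, and the function names here are invented for the example.

    import numpy as np

    def embed_watermark(pixels: np.ndarray, signature: str) -> np.ndarray:
        # Turn the signature into a bit stream and tile it across the image.
        bits = np.unpackbits(np.frombuffer(signature.encode(), dtype=np.uint8))
        flat = pixels.flatten()
        pattern = np.resize(bits, flat.shape)
        # Overwrite each pixel's least significant bit with a signature bit;
        # a change of at most 1 in 255 is invisible to the human eye.
        return ((flat & 0xFE) | pattern).reshape(pixels.shape)

    def detect_watermark(pixels: np.ndarray, signature: str) -> bool:
        # Rebuild the expected bit pattern and compare it to the pixels' low bits.
        bits = np.unpackbits(np.frombuffer(signature.encode(), dtype=np.uint8))
        flat = pixels.flatten()
        pattern = np.resize(bits, flat.shape)
        return bool(np.all((flat & 1) == pattern))

    image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for a real photo
    marked = embed_watermark(image, "DEMO-MARK")
    print(detect_watermark(marked, "DEMO-MARK"))  # True: signature found
    print(detect_watermark(image, "DEMO-MARK"))   # False, overwhelmingly likely

A scheme this naive is also fragile: re-saving, compressing or editing the image scrambles the least significant bits and destroys the mark, which hints at why stripping or bypassing real watermarks remains a live problem, as Beam notes below.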

“The idea of watermarks for content identification is promising, but there’s a catch,” US-based lawyer Riley Beam, managing attorney at Douglas R Beam, told this masthead.

“Embedding personally identifiable information in metadata to establish ownership may overstep the boundaries of user privacy. This dual challenge of ensuring content authenticity while respecting privacy adds complexity to the issue.”

Riley Beam.

Current AI watermarks aren’t foolproof, Beam said, as researchers have discovered ways to easily remove or bypass them, raising questions about their effectiveness.

“Using AI watermarks, like Google’s SynthID for images, is a step towards identification, but in the grand scheme, relying solely on watermarks might not be enough to combat the rise of AI-generated misinformation. Combining these digital markers with manual content verification and fact-checking emerges as a more robust approach,” Beam said.

“News organisations must stay vigilant and adapt, employing a multi-faceted strategy to navigate the evolving landscape of AI technology, ensuring both credibility and the quality of information.”


