Its premise was simple: to be a place to connect with your friends and family online. It could be a time suck, sure, but a time suck without the toxicity and rampant misinformation that we have now.
But gradually Facebook changed. What began as a tool for connection became a hub of division and unforeseen consequences. The feed became increasingly flooded with advertising, fake news and abuse. When their parents joined, many users – in particular those aged 18 to 30 – left the platform altogether, flocking instead to cooler platforms like TikTok, or switching to group chats and texting.
While Facebook’s first decade was largely successful, its second has seen it deteriorate in the face of scandal after scandal. The company has fallen at every hurdle to accept responsibility for the content posted on its platform and, even two decades in, has not proven it can be trusted to act in the best interests of its users.
In 2012, the company conducted experiments on hundreds of thousands of users without their consent, filtering emotionally charged posts from their newsfeeds to test how it affected their moods and reactions. It took two years for those experiments to be made public.
Nearly 10 years later, in 2021, employee-turned-whistleblower Frances Haugen testified before US Congress that the company puts profits over safety. Haugen helped release the so-called “Facebook Papers” which detailed the platform’s fading popularity with teenagers and its inability to counter hate speech.
That same year, Facebook blocked the pages of Australian charities, health organisations and government services during a pandemic and raging bushfires, all to protest a law that would force it to compensate local publishers for news.
And, in 2022, the company paid a whopping $1.1 billion to finally settle legal action relating to the Cambridge Analytica scandal, in which hundreds of millions of Facebook users had their personal data released en masse to third parties without consent.
Facebook’s algorithms, which remain shrouded in secrecy, have often fed our worst tendencies, encouraging everything from home decor envy to political extremism and violence, as has been seen in the US and Myanmar.
Earlier this month, it became apparent just how unrecognisable Facebook is now from its 2004 self. In front of the US Senate Committee, Zuckerberg and other Silicon Valley executives were forced to face parents holding photographs of their dead children, who were victims of online sexual exploitation and cyberbullying.
Senators were united across the aisle about the damage done by the likes of Facebook to the health and wellbeing of children.
“They’re responsible for many of the dangers our children face online,” the judiciary committee’s chairman, Democrat Dick Durbin, said. “Their design choices, their failures to adequately invest in trust and safety, their constant pursuit of engagement and profit over basic safety have all put our kids and grandkids at risk.”
Republican Senator Josh Hawley – who couldn’t be more different politically from Durbin – also repeatedly criticised Zuckerberg.
“Your product is killing people,” he told the executive.
Alongside the user addiction and negative mental health outcomes, Facebook itself has an existential problem. Two years ago, Zuckerberg changed its parent company’s name to Meta, and splashed billions on building the “metaverse”, a three-dimensional virtual reality. Those efforts haven’t amounted to much – the buzz about artificial intelligence quickly superseded the metaverse – and the company is now having an identity crisis.
Investors aren’t fazed. Shares in Meta are sitting near their all-time high. They are clearly bullish that despite the myriad missteps, further growth is still on the horizon, whether that be in the metaverse or elsewhere. They’re trusting Zuckerberg, who maintains absolute control of the company through his share ownership, to figure it out.
But we can’t wait another 20 years for Facebook, or its CEO, to grow up.
Regulators, governments and users are more aware than ever of the pitfalls of social media. Many of those pitfalls will be amplified by the power of generative AI, which is already proliferating deepfake pornography and proving that Facebook and its ilk still cannot be trusted to police their own platforms. Zuckerberg and his peers such as Elon Musk have proven they’re incapable of doing that work themselves, or unwilling to.
Hopefully, the next generation of start-up founders will learn from the mistakes of their predecessors. The entrepreneurs building whatever ends up becoming the next Facebook will likely be more thoughtful – and more adult – than what came before. And the internet will be a better place for it.
David Swan is technology editor.
The Opinion newsletter is a weekly wrap of views that will challenge, champion and inform your own. Sign up here.