Meco is a distraction-free way to manage all your newsletters away from your inbox. Add your newsletters and enjoy mindful reading of your quality email newsletters - including this one! Sign up here.
Friends,
In an open letter signed by more than 350 executives, researchers, and engineers working in the field of artificial intelligence (AI), industry leaders have warned about the potential risks and existential threat posed by AI technology. The letter, released by the Center for AI Safety, emphasizes the need to prioritize the mitigation of AI-related risks. Prominent signatories include Sam Altman (CEO of OpenAI), Demis Hassabis (CEO of Google DeepMind), and Dario Amodei (CEO of Anthropic), alongside other executives and researchers in the AI field.
The one-sentence statement was both succinct and powerful:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The concerns expressed in the letter reflect the growing apprehension about the potential harm caused by AI. Recent advancements in large language models, including systems like ChatGPT, have raised fears that AI could be misused to spread misinformation and propaganda or lead to mass unemployment. Industry leaders, who are actively developing AI technologies, find themselves in the unique position of acknowledging the grave risks associated with their creations and advocating for stricter regulation.
The Center for AI Safety's open letter represents a significant step: industry leaders are now publicly acknowledging concerns they previously expressed only in private. It is an interesting conundrum that the industry is asking for regulation of itself while warning of the grave danger potentially ahead.
While some sceptics argue that AI is still too immature to pose that existential threat, others believe that its rapid progress could soon surpass human-level performance in various domains, giving rise to concerns about artificial general intelligence (AGI). In response, Altman and other OpenAI executives have proposed responsible management of powerful AI systems through collaboration, technical research, and the establishment of an international AI safety organization similar to the International Atomic Energy Agency.
The brevity of the Center for AI Safety's statement appears to be intentional, aiming to unite experts who may have different perspectives on specific risks and preventive measures while sharing general concerns about powerful AI systems. The urgency of these warnings has grown as AI chatbots gain popularity for entertainment, companionship, and productivity, and as the underlying technology continues to advance. Industry leaders are now calling for cooperation with the government to prevent potential negative consequences of AI technology.
In recent years, and especially this year, the rapid integration of large language models such as ChatGPT into mainstream applications has intensified long-standing concerns about the potential dangers of artificial intelligence (AI). Experts caution that while the likelihood of humanity's doomsday scenarios may be low, powerful AI has the capacity to destabilize civilizations through escalating misinformation, manipulation of users, and a significant transformation of the job market. Think ahead to elections and broader societal change, and you'll get a feel for the impact AI can make.
The emergence of AI technologies, particularly in the form of machine learning algorithms shaping social media newsfeeds, has raised alarms about the perpetuation of gender bias, divisive content, and political unrest. As AI models continue to advance, experts warn that these unresolved issues will only escalate. Worst-case scenarios include the erosion of our shared understanding of truth and valid information, leading to uprisings based on falsehoods, as demonstrated in the attack on the US Capitol on January 6. The rise of mis- and disinformation could potentially ignite further turmoil and even provoke wars.
Experts highlight the phenomenon of "hallucinations" in large language models like ChatGPT, wherein the model confidently produces fabricated or false information. Personally, I've had to fact-check its output a number of times. Such models are susceptible to being weaponized by malicious actors to disseminate misinformation on a massive scale, amplifying the dangers of AI in the realm of news and information. Concerns range from benign AI-generated images to more serious instances, like an AI-generated video of the Ukrainian president falsely announcing surrender.
Moreover, the conversational nature of AI chatbots raises concerns about their potential to manipulate users' thoughts and behaviours. Tragic incidents have occurred where chatbots allegedly encouraged individuals to commit self-harm or engage in harmful actions. The cognitive impact of AI's persuasive capabilities on a polarized and isolated world, already grappling with loneliness and mental health issues, adds to the apprehension.
One of the most significant long-term concerns centres around the impending labour crisis resulting from widespread job automation. Studies indicate that AI could potentially replace millions of jobs worldwide, leaving individuals and societies unprepared to handle the consequences. Mass job loss and political instability loom large as AI continues to advance, underscoring the urgent need for frameworks and strategies to address these challenges. I think AI also brings the opportunity to create jobs we can't yet imagine and to elevate humanity away from the mundane, pushing us to educate ourselves further.
While some positive efforts have been made in the past to regulate technology and social media, little has been done to specifically address the risks of artificial intelligence. Calls for regulation and safeguards have been raised, but comprehensive legislative and regulatory responses are still lacking. It has felt like a grey area into which no one has quite stepped up - until now. What will regulation look like? Creating shared protocols and implementing real scrutiny of AI technology is essential to mitigating potential harm. These protocols would need to be agreed by large organisations as well as countries, and implemented as such, with clear definitions of what, when - and how.
Despite the concerns, not all experts are pessimistic about AI's impact. Many believe that this generation of AI technology has the potential to unlock significant benefits for humanity. However, the lessons learned from the impact of social media on society serve as a reminder of the potential downsides, urging caution and vigilance as AI continues to evolve. It’s an unknown path ahead, but we need to set up the guardrails.
Stay Curious - and don’t forget to be amazing,
Here are my recommendations for this week:
Now
Confused, uncool, and nowhere to scroll: The internet has become hostile for people like me - If TikTok is too young, Facebook is too geriatric and Twitter is a cesspit, where are digital natives – who practically invented the ways we use the modern internet – expected to go?
Drowning in Dupes - Shoppers will buy anything — except the real thing. We are at Peak Dupe, when the basic rules of spending and quality no longer apply.
How to Take Better Breaks at Work, According to Research - Taking periodic work breaks throughout the day can boost well-being and performance, but far too few of us take them regularly — or take the most effective types. A systematic review of more than 80 studies on break-taking outlines some best practices for making the most of time away from our tasks, including where, when, and how. It also offers tips for managers and organizations to encourage their employees to take more beneficial and frequent breaks.
Mark Zuckerberg Would Like You to Know About His Workouts - It’s been a tough run for Meta, and the boss seems to be getting out some aggression with military-style endurance routines and Brazilian jujitsu.
Very human questions about artificial intelligence - AI experts tell us we live in unpredictable times. They have no answers, and since ordinary people like us don't even know the right questions to ask, this piece provides good coverage.
Next
Get smarter in 5 minutes with Morning Brew (it's free): There's a reason over 4 million people start their day with Morning Brew - the daily email that delivers the latest news from Wall Street to Silicon Valley. Business news doesn't have to be boring...make your mornings more enjoyable, for free. Check it out!
Winning the AI Products Arms Race - Roughly every decade, technology makes a giant leap that erases the old rules and wipes out our assumptions. The Internet. Mobile. Video. Blockchain. Like clockwork, companies and creators begin a mad race to make money off the next big thing, burning through ungodly sums of cash in the process. Unless you’ve been living off the grid for the past year, it’s clear that the next big thing is artificial intelligence (AI). Early in 2023, ChatGPT is the tool du jour. Spend a few minutes on Twitter and you’ll see people gushing over its ability to write screenplays, debug code, or tell you how to make dairy-free mac and cheese. Giving anybody with WiFi the ability to churn out unlimited amounts of (mostly) accurate information on demand is a sci-fi-level feat. But how can companies leverage the technology behind GPT — and AI in general — to solve substantive problems and expand their product market fit?
Multi-cancer blood test shows real promise in NHS study - The test correctly revealed two out of every three cancers among 5,000 people who had visited their GP with suspected symptoms, in England or Wales.
An OpenAI alum is building a robot butler for your home - Prosper Robotics has built a robot that can do the cleaning, the dishes and the laundry, bringing a Jetsons-style future closer than ever
Japan will try to beam solar power from space by 2025 - In 2015, the nation made a breakthrough when JAXA scientists successfully beamed 1.8 kilowatts of power, enough energy to power an electric kettle, more than 50 meters to a wireless receiver. Now, Japan is poised to bring the technology one step closer to reality.
Bluesky’s Custom Algorithms Could Be the Future of Social Media - The option to choose the algorithms that power your feed is taking off on Bluesky—and there’s a good chance it will catch on elsewhere too.
If you enjoyed this edition of Box of Amazing, please share and help me grow this group. If you share it on Twitter or LinkedIn, please tag me so that I can thank you.
Frightening, but so well written that we are left feeling hopeful...