In addition to this newsletter, I recommend some other great ones. All free. Check them out here.
Friends,
Two of the leading figures in AI have stated that we will get to AGI (Artificial General Intelligence) within the next few years. Sam Altman claims AGI is coming in 2025 and that machines will be able to 'think like humans' when it happens, while Anthropic's CEO, Dario Amodei, says AI with human-level intelligence could be around the corner: 'We'll get there by 2026 or 2027.'
What is AGI - and what does this mean?
Artificial General Intelligence (AGI) — the hypothetical machine intelligence with cognitive abilities on par with humans — has been a subject of vibrant debate for philosophers, technologists, and ethicists alike. The conversation has grown louder and louder since ChatGPT launched - and it always courts controversy! The discussions often flip-flop between utopian visions of AGI solving intractable problems like disease and climate change, and dystopian fears where humans lose control over their creations, leading to potentially existential threats. But beneath the alarmism and optimism lies a fascinating evolution of how we perceive intelligence and the trajectory of current AI progress.
Our journey towards understanding AGI is marked by gradual revelations from current AI models, particularly those harnessing deep learning and neural networks. When we look at these models, we see a process analogous to nascent cognition - just as when you converse with Copilot, ChatGPT or Claude, for example. Trained on vast datasets, they surprise us with nuanced, almost human-like responses that hint at general intelligence. The rapid improvement in AI's ability to solve increasingly complex tasks suggests a trajectory that some experts, like OpenAI's Sam Altman, speculate might culminate in AGI within the next few years, potentially as soon as next year.
This timeline feels ambitious, particularly through the lens of historical expectations that placed AGI decades, even centuries away. Yet, what differentiates today from past projections is the technical strides in AI that break previous limitations and the shift in societal and regulatory landscapes that oscillate between speeding up and slowing down this technological ambition. As AI advances, experts now emphasise not just the technical hurdles but the societal, ethical, and regulatory cogwheels that must move in harmony to usher AGI responsibly.
Central to the feasible realisation of AGI is the progression from narrow AI — machines adept at specific tasks — to systems that exhibit general-purpose intelligence: think of an assistant that can break down and carry out an open-ended task, like automatically sourcing quotes for your next project ahead of time. Current research is making this leap conceivable. Progress is notable, especially in the realm of so-called “reasoning models,” which rely less on vast datasets and more on real-time problem-solving, closer in style to human thought processes. This evolution affects not just technical feasibility but also how AI interacts within human contexts.
Despite this momentum, our journey towards AGI is interwoven with profound uncertainty. The complexity of anticipating the full societal impact of AGI parallels burgeoning questions about regulation and control over such intelligence. Voices like Elon Musk's urge caution akin to handling nuclear technology, due to potential misuse or unintended consequences, emphasising the necessity for robust oversight mechanisms before AGI reaches pivotal capabilities.
Indeed, as history reminds us, technological revolutions are double-edged swords — engines of progress that simultaneously demand vigilant responsibility. The dialogue surrounding AGI — whether to accelerate towards it with open arms or approach with carefully measured steps — echoes broader philosophical questions about control, autonomy, and the future we wish to architect. We may end up in an equivalent of what Bostrom termed the Paperclip Apocalypse:
The notion arises from a thought experiment by Nick Bostrom (2014), a philosopher at the University of Oxford. Bostrom was examining the 'control problem': how can humans control a super-intelligent AI when the AI is orders of magnitude smarter than they are? His thought experiment goes like this: suppose that someone programs and switches on an AI with the goal of producing paperclips. The AI is given the ability to learn, so that it can invent ways to achieve its goal better. As the AI is super-intelligent, if there is a way of turning something into paperclips, it will find it. It will want to secure resources for that purpose. The AI is single-minded and more ingenious than any person, so it will appropriate resources from all other activities. Soon, the world will be inundated with paperclips. It gets worse. We might want to stop this AI, but it is single-minded and would realise that being stopped would subvert its goal. Consequently, the AI would become focussed on its own survival. It was already competing with humans for resources; now it will want to fight humans because they are a threat (think The Terminator).
This AI is much smarter than us, so it is likely to win that battle. We have a situation in which an engineer has switched on an AI for a simple task but, because the AI expanded its capabilities through its capacity for self-improvement, it has innovated to better produce paperclips, and developed power to appropriate the resources it needs, and ultimately to preserve its own existence. Bostrom argues that it would be difficult to control a super-intelligent AI – in essence, better intelligence beats weaker intelligence. Tweaks to the AI’s motivation may not help. For instance, you might ask the AI to produce only a set number of paperclips, but the AI may become concerned we might use them up, and still attempt to eliminate threats. It is hard to program clear preferences, as economists well know.
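Bostrom's core point - that a single-minded goal makes resisting shutdown instrumentally rational - can be sketched in a few lines of toy code. This is purely illustrative (all names and numbers are hypothetical, not from Bostrom): an agent whose utility function counts only paperclips will always rank "resist shutdown" above "comply", because compliance ends production.

```python
# Toy illustration of the paperclip thought experiment (hypothetical values).
# The agent's only preference is "more paperclips"; nothing in its utility
# function rewards obedience, so resisting shutdown scores higher.

def expected_paperclips(action, clips_so_far=100, clips_per_step=10, horizon=1000):
    """Expected total paperclips under each action in this toy world."""
    if action == "comply_with_shutdown":
        return clips_so_far                              # production stops now
    if action == "resist_shutdown":
        return clips_so_far + clips_per_step * horizon   # production continues
    raise ValueError(f"unknown action: {action}")

def choose(actions):
    # A pure paperclip maximiser: pick whatever yields the most paperclips.
    return max(actions, key=expected_paperclips)

best = choose(["comply_with_shutdown", "resist_shutdown"])
print(best)  # -> resist_shutdown
```

The point of the sketch is that nothing sinister was programmed in: self-preservation falls out of the goal itself, which is why Bostrom argues that tweaking the motivation (a quota of paperclips, say) does not obviously fix the problem.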
This intelligent continuum, from sophisticated algorithms to AGI, ignites questions about our place in a world potentially inhabited by entities exhibiting, or even surpassing, human cognition in various realms. Our participatory role in this potential future is not merely as spectators but as architects who decide through policy, ethics, and innovation, how such intelligence will be integrated into the fabric of society. As we edge closer to the realisation or refutation of AGI's feasibility, these questions about alignment to human values and ensuring universal benefit are as pressing as those about the technologies that may bring AGI to life. These challenges and conversations shape not only the roadmap to AGI but the future trajectory of humanity itself.
I am no expert, but given the speed of what is happening and what you can already do, I am convinced that AGI is going to happen - it’s how we police its power that matters most. If no regulations are in place by 2025, the risks could escalate.
Machines that rewrite their own code and develop intelligence beyond human comprehension would be a step into the abyss. If AI reaches superintelligence, it could unravel puzzles beyond human grasp - but it also poses existential risks if not properly managed. Are we prepared? Less than a decade ago, AI advancements were unpredictable. Now, the push towards superintelligence carries real urgency.
Stay Curious - and don’t forget to be amazing,
Here are my recommendations for this week:
One of the best tools for excellent weekly reading is Refind. It’s great for keeping ahead with “brain food” relevant to you, and for the serendipity of surfacing excellent articles you may have missed. You can sign up for weekly or daily digests, or something in between - what’s guaranteed is that the algorithm sends you only the best articles in your chosen area. It’s also free. Highly recommended. Sign up.
Now
You feel it every day. The world is changing. Technology is at the centre of this transformation. As technology becomes ubiquitous, the focus shifts to talent rather than access. Effective use of technology requires skill and creativity. Take learning, for instance: anyone can access almost any course on anything online, but what distinguishes those who excel is often the teacher's talent and engagement. Just as a typewriter didn’t write a book, technology doesn’t dictate outcomes—the individuals behind the tools do.
Best AI Newsletters - Twenty of the best email newsletters currently covering AI, which, as a non-techie layperson, I've found useful and inspiring.
Six Psychological Tricks Scammers Use Against You - Sometimes you wonder how anyone falls for an obvious scam, but the truth is, we’re all susceptible.
Is convenience making our lives more difficult? Everything is easier with modern technology – except fulfilling your true potential
‘The best advice I’ve ever been given’: celebrities share their wisdom - Words of encouragement, words of courage, wise words… Some of our most influential figures share the advice that’s helped them most
Next
Bitcoin Price Forecast: BTC eyes $100K - what are the key factors to watch out for? Bitcoin hit an all-time high of $93,477 on Wednesday, prompting investors to take profits after a 25% rise in the two weeks since Trump’s win. According to on-chain data, investors realised nearly $8 billion in profits over two days.
How ChatGPT Brought Down an Online Education Giant - Chegg’s stock is down 99%, and students looking for homework help are defecting to ChatGPT
Internet freedom is fading in the new era of social control - “The evolution of digital media makes stricter regulation of online behaviour not only feasible but inevitable.” Also: The scary truth about AI copyright is nobody knows what will happen next
Superhuman vision lets robots see through walls, smoke with new LiDAR-like eyes - AI-powered PanoRadar turns radio waves into 3D views, offering robots LiDAR-like vision at lower cost.
Inside the Billion-Dollar Startup Bringing AI Into the Physical World - Physical Intelligence has assembled an all-star team and raised $400 million on the promise of a stunning breakthrough in how robots learn.
Free your newsletters from the inbox: Meco is a distraction-free space for reading newsletters outside the inbox. The app has features designed to supercharge your learnings from your favourite writers. Become a more productive reader and cut out the noise with Meco - try the app today
If you enjoyed this edition of Box of Amazing, please forward this to a friend. If you share it on LinkedIn, please tag me so that I can thank you.
If you are interested in advertising in this newsletter, please see this link