In addition to this newsletter, I recommend some other great ones. All free. Check them out here.
Friends,
Time flies when you’re having fun, no?
It’s been eight years since I decided to put “fingers to keyboard” to share thoughts on what I had read, package them up and send them out in a weekly email. I called it Box of Amazing. In the main, I’ve kept Box of Amazing up for the two presidential terms since. Two terms ago, I was excited about the future.
You can see why I was excited. A year or so into launch, I penned this article on things to watch and why. There are references to AI, but neither you nor I could have known how it would come to dominate our lives. In my recommended reading for this week, the majority of the articles involve AI in one way or another - and that’s the story of the era we are in. And believe me when I say we are living in it - The Intelligence Age.
I hope you have found some clarity in what I have written and shared. If you have, please recommend Box of Amazing to a friend. I started this newsletter to hold myself accountable for learning something new; I continue it because I believe it’s essential that we, as humans, stay skilled and knowledgeable enough to meet the future. It is evolving faster than at any time in human history.
All you need to do is forward this email with an ask to sign up - or send your people to boxofamazing.com. As you know, this is free - and read by close to 20K people. I’d be most appreciative.
Despite the supposed arms race around AI, I’m still bullish about how it might unlock a better future. Trump’s Stargate project, which brings $500B of investment with NVIDIA, Oracle, ARM, OpenAI and Microsoft as tech partners, tells me that the spending frenzy is only just getting started. But it was at the tail end of 2022 that ChatGPT captured the world’s attention and our digital future seemed to arrive overnight. It was almost as if the themes I had been watching were about to come to fruition. “Large language models,” or LLMs, could produce amazing essays, poems and code snippets in seconds. I remember vividly that fears emerged just as quickly: that students would use AI to cheat on homework, or that chatbots could manipulate our private data. Yet beneath these concerns lies a deeper question about the essential promise of artificial intelligence. Could AI, far from dooming us, actually unlock a better future - one that is more inclusive, humane and meaningful than anything we have experienced so far?
First, we should acknowledge the genuine achievement of modern AI. Systems like OpenAI’s GPT-4, Anthropic’s Claude and Google’s Gemini [and maybe even DeepSeek!] are built on immense data sets and refined with methods such as “reinforcement learning from human feedback” - essentially repeated fine-tuning to minimise harmful or misleading answers. These models aren’t perfect; early prototypes often regurgitated toxic, biased, or entirely fabricated text. But developers introduced “guardrails” - bans on disallowed content and mechanisms for safe completion - which made it possible for millions of people to use AI chatbots without daily scandal or constant hallucinations. That alone is a milestone. Those same systems, integrated with specialised tools, can be repurposed for tasks as diverse as coding, research assistance, or language translation.
Of course, credible criticisms persist. Large AI models are trained on vast troves of online data, frequently appropriating the work of artists and writers and raising thorny questions about consent and copyright. The environmental impact of running these models is huge, from water-cooled server farms to round-the-clock electricity usage - and it grows with each new iteration. And history warns that whenever technology accelerates, profit motives can concentrate wealth in a few hands. As technology critics often point out, AI capitalism tends to replicate and amplify existing inequalities.
Yet we should not overlook the constructive potential of AI. One field where these tools might do the greatest good is education, a sector sorely in need of innovation. Sal Khan, the founder of Khan Academy, has been experimenting with an AI tutor called “Khanmigo”: a personalised, round-the-clock teaching assistant that can guide a student through complicated tasks - maths tests, essay writing, historical readings - without handing out answers for free. Early trials suggest teachers and learners alike grow more engaged and independent: advanced students can push ahead with challenging material while novices receive patient, step-by-step instruction. This frees teachers to focus on nuance, creativity, and human connection. If educational AI is carefully regulated to ensure data privacy and avoid the pitfalls of simple “drill-and-kill” tutoring, it could revolutionise learning on a global scale. Khan Academy was one of the first, but as I saw this week at the BETT conference in London, there are hundreds, if not thousands, more trying to personalise education.
AI’s capacity to parse huge datasets can also be harnessed for historical study or scientific research. Rather than spending years sifting through archives, scholars can deploy these language models to identify patterns, cross-reference documents, and even generate preliminary translations of ancient texts. Early testers report that, while not flawless, advanced AI can save precious time and widen the scope of academic inquiry. Imagine historians reassembling the biographies of thousands of forgotten individuals in days rather than decades, or scientists combing through genomic data to spot disease markers with unprecedented speed. Productivity gains on this scale might help us tackle everything from climate modelling to medical breakthroughs sooner rather than later.
Beyond text generation, AI might also become a personal “attention guardian,” filtering our information streams in a more humane way. Today’s social media algorithms, fine-tuned for maximum “engagement,” often lead us down toxic rabbit holes. Now, the emergence of language models that adapt to our explicit instructions (not just our clicks) suggests a different path. Instead of a feed that manipulates our biases, we could instruct an AI to prioritise well-sourced articles or highlight posts that challenge our worldview constructively. That shift is not guaranteed - big tech companies remain addicted to data harvesting - but it opens the door to a digital public sphere that genuinely respects users’ time and intelligence. Critics warn that centralising power in a new generation of AI gatekeepers is equally fraught. Yet the promise of a user-controlled, privacy-preserving “digital butler” stands in hopeful contrast to today’s data-hoarding behemoths.
Some fear that generative AI is simply the next wave of exploitation: industrial-scale plagiarism, deepened surveillance, mass “hallucinations” of facts, and huge carbon footprints. Others warn of more dire ends - “rogue” intelligence that can’t be aligned with human values, a cat-and-mouse dynamic in cybersecurity that leads to near-permanent digital vulnerability. These risks must not be minimised or caricatured. But if we overreact by categorically banning generative systems or smothering open research, we might stunt beneficial uses as well. The challenge is to shape AI so that it works transparently, respects creativity, and upholds democratic principles. For instance, reinforcement learning from AI feedback can refine moral reasoning and reduce bias, yet we must keep pressing for clearer accountability and audits.
History shows that every technological leap - from steam engines to electricity to the Internet - comes with new inequities and unexpected hazards. The difference now is that machine learning’s scale and speed can amplify those problems dramatically. Intelligent stewardship means building guardrails from day one. Companies that develop large models can be required to produce environmental audits, pay “data dividends” to creative workers whose content trains AI, and adhere to mandatory transparency around how models are fine-tuned. Publicly funded AI, such as educational platforms and local “digital assistants,” can help democratise access and ensure no single corporate giant monopolises the technology. Meanwhile, philosophers and ethicists should weigh in on how best to preserve the “value of the real” - the human authenticity that an endlessly replicable synthetic experience might dilute.
The question of how AI might unlock a better future boils down to whether we can shape generative systems toward genuine social benefit rather than inflated profit or dystopian dominance. Yes, AI can replicate historical injustices, spread misinformation, or threaten digital security. It can also free teachers from tedious tasks, broaden the scope of academic research, and provide mental health support or companionship on lonely days. Far from a mere trick, AI may become one of humanity’s most potent instruments for equality and invention, provided we align it with the best of our ideals, not the worst of our failings.
Stay Curious - and don’t forget to be amazing,
Here are my recommendations for this week:
One of the best tools to provide excellent reading and articles for your week is Refind. It’s a great tool for keeping ahead with “brain food” relevant to you and providing serendipity through excellent articles that you may have missed. You can dip in and sign up for weekly, daily or something in between - what’s guaranteed is that the algorithm sends you only the best articles in your chosen area. It’s also free. Highly recommended. Sign up.
Now
When A.I. Passes This Test, Look Out - The creators of a new test called “Humanity’s Last Exam” argue we may soon lose the ability to create tests hard enough for A.I. models. Launched after advanced systems easily aced traditional benchmarks, and developed by Dan Hendrycks of the Center for AI Safety with Scale AI, the exam compiles 3,000 tough questions sourced from experts across diverse fields, from rocket engineering to analytic philosophy. The aim is to discover whether AI can surpass graduate-level tasks once deemed too specialised for machines. In trials, leading models performed dismally, yet developers predict rapid improvement. They hope these demanding questions, including obscure physics puzzles, will spotlight AI’s true progress and highlight looming risks.
What Happened to Hanging Out on the Street? Urban pedestrians are walking faster and doing less socializing, according to a new study. Is technology to blame, or public space itself?
Trump Is Already Making America Weaker and More Vulnerable - President Trump, in his second term, claims to put American security first yet swiftly undermines it. On day one, he withdrew the United States from the WHO, weakening global pandemic defences. He then pardoned Jan. 6 insurrectionists, emboldening extremists. Simultaneously, Trump revived TikTok - a data goldmine for Beijing’s surveillance machine. Despite a 2024 law demanding ByteDance’s divestment, Trump extended the deadline, championing a ‘deal’ that could preserve Chinese influence. His sudden policy reversal raises suspicions of cosy ties to billionaire Jeff Yass, a ByteDance investor, and of questionable motives. In practice, Trump’s actions strengthen China’s hand while eroding America’s safety.
TikTok: weapon of mass distraction - China knows it is digital opium. Is TikTok a mere social media app, or a civilisational “pleasure weapon” accelerating Western decline? That’s the provocative charge in a recent essay tracing China’s ideological roots back to thinkers such as Wang Huning - who believes decadent liberalism will devour itself - and Nick Land, the “accelerationist” who sees technology as an unstoppable force hurtling society toward chaos. TikTok’s addictive algorithm, the essay argues, shortens attention spans, traps Western youth in passive consumption and degrades the intellect. Critics counter that TikTok alone is not the true villain; free market forces produce similar “scroll-holes” on other platforms. Others question whether banning the app would solve anything, since underlying addictions and market incentives remain intact. Ultimately, this debate underscores a broader anxiety: if technology is “mastering us,” perhaps our best defence lies in reclaiming conscious control over how we spend our fleeting attention.
How Disinformation Deforms Democracy - We need a public bridge across our social media silos: Democracy depends on shared truths and a robust public sphere, and Nathan Gardels warns that unbridled misinformation is undermining these foundations. Biden’s admonition about a growing tech-industrial oligarchy underscores a crisis: concentrated influence and endless narrative wars subvert credible discourse. Fuelled by content moderation battles, “digital federalism” is fracturing social media into smaller, self-governed enclaves, splintering consensus even further. No single authority or community ‘referee’ can reliably regulate this ecosystem. The answer, instead, is to build neutral online platforms that bridge political silos, forging spaces where diverse voices meet on equal terms. That cross-pollination is vital if democracy is to thrive.
Next
Tech leaders respond to the rapid rise of DeepSeek - Matching OpenAI’s top model at a fraction of the cost, DeepSeek highlights the growing prowess of open-source AI over proprietary systems. While some Western tech leaders, like Marc Andreessen and Yann LeCun, celebrate the breakthrough as a testament to collaborative innovation, others, like Mark Zuckerberg, double down on high-capital proprietary strategies, promising Meta’s Llama 4 will surpass all. The rise of DeepSeek underscores a critical debate: will AI’s future belong to cost-efficient, open models or centralised, resource-heavy giants like Meta? Also see this older article.
OpenAI launches Operator, an AI agent that performs tasks autonomously: OpenAI’s new AI agent, Operator, is a groundbreaking tool designed to carry out tasks autonomously. Available as a research preview for premium ChatGPT users, Operator uses a built-in web browser to perform actions like booking travel, shopping, and making reservations. Powered by a Computer-Using Agent (CUA) model, it navigates websites much like a human, eliminating the need for developer APIs. Despite its potential, Operator has limitations, requiring user oversight for sensitive tasks and struggling with complex interfaces. OpenAI sees it as a step toward AI-powered efficiency, but questions remain about its broader implications for privacy, security, and digital autonomy. Also see: We Tried OpenAI’s New Agent—Here’s What We Found
'Stargate' Squares Some AI Circles - Yes, $500B sounds like something out of a Dr Evil manifesto. OpenAI, Microsoft, SoftBank, Oracle, Trump, MGX, ARM, and NVIDIA all get double-dip wins in the announcement.
Trump signs executive order promoting crypto, paving way for digital asset stockpile - President Trump signed an executive order promoting cryptocurrency innovation in the U.S., marking a stark shift from his earlier criticism. The order outlines plans to develop a national digital asset stockpile, protect blockchain developers and miners, and support dollar-backed stablecoins globally. Trump’s crypto-friendly stance, bolstered by industry contributions to his 2024 campaign, aims to establish the U.S. as a leader in digital asset innovation. Key appointments, including pro-crypto figures at the SEC and Treasury, signal a regulatory pivot toward fostering growth. Industry leaders declared the “war on crypto” over, setting the stage for a new era of U.S. crypto dominance.
DeepMind’s Hassabis Sees AI-Designed Drug Trials This Year - Isomorphic Labs, a Google DeepMind spinoff, plans to initiate clinical trials for AI-designed drugs this year, according to CEO Demis Hassabis. Speaking at the World Economic Forum, Hassabis highlighted the potential of AI to dramatically shorten drug discovery timelines from decades to weeks. Leveraging breakthroughs like AlphaFold, which predicts protein structures, Isomorphic Labs is collaborating with pharmaceutical giants Eli Lilly and Novartis. While initial AI-driven drug data is mixed, partnerships between tech and pharma are growing. Hassabis also noted that achieving artificial general intelligence remains years away, requiring significant breakthroughs and further scaling of current technologies.
Free your newsletters from the inbox: Meco is a distraction-free space for reading newsletters outside the inbox. The app has features designed to supercharge your learning from your favourite writers. Become a more productive reader and cut out the noise with Meco - try the app today
If you enjoyed this edition of Box of Amazing, please forward this to a friend. If you share it on LinkedIn, please tag me so that I can thank you.
If you are interested in advertising in this newsletter, please see this link