In addition to this newsletter, I recommend some other great ones. All free. Check them out here.
Friends,
This essay introduces an idea I’ve been developing over the past year. It's the foundation for what I believe is the most important shift in human capability that we will face in our lifetime: the rise of SuperSkills. These are the uniquely human abilities we must develop, not despite AI, but because of it. This idea started as a quiet question I kept asking myself while leading teams, advising founders, and watching AI become embedded in everything we do: what are the skills that still make us valuable? That question became a conviction and, eventually, a framework: SuperSkills.
Knowledge Is No Longer Power
Francis Bacon famously declared that ‘knowledge is power.’ But that idea no longer fits the world we live in. It’s been upended. In the age of AI, acquiring knowledge is easy: a teenager with a smartphone can access more information in seconds than Renaissance scholars gathered in lifetimes. The edge no longer comes from possessing knowledge. It comes from turning knowledge into action, at the right time, with the right judgement and wisdom.
Medical physicians once relied on memorised knowledge. Today, AI systems can instantly recall every published study and protocol. The value of the physician lies in integrating AI findings with clinical judgement, patient context, and ethical guidelines beyond algorithms' comprehension. In oncology wards, AI might find the tumour, but it can’t hold your gaze when you’re scared.
Only a doctor can look you in the eye and say, ‘We’ll figure this out’ and truly mean it. This is the part machines still can’t touch: ethics, empathy, and accountability. While AI may inform decisions, the ethical weight of those decisions, especially those concerning human welfare, must remain with humans. We alone possess the capability for moral reasoning and empathy.
So what drives power now?
The key to power in the age of AI lies not in the mere possession of knowledge but in the ability to apply it effectively through judgement, creativity, and ethical considerations. Satya Nadella consistently reiterates that technology isn't eliminating knowledge work but fundamentally reshaping it. Our most human capacity becomes our ability to harness AI and technology-accessible knowledge in service of uniquely human attributes: judgement, creativity, and what Reid Hoffman calls "superagency."
This superagency, the human capacity amplified through technological collaboration, represents the higher-order thinking that transforms raw information into meaningful action. It's the difference between having the data and deriving insight. Harvard's Karim Lakhani frames this shift clearly: computational technology has dramatically lowered the cost of cognition, just as the internet slashed information transmission costs by 99% in previous decades. A marketing strategist who once spent hours reviewing campaigns now receives AI-generated analytics instantly; her competitive edge comes from deciding what to test, where to invest, and how to adjust strategy based on the additional data.
Like master gardeners who know which seeds to plant in which soil, future knowledge workers won't be valued for the information they possess but for their judgement about which analysis to cultivate, which connections to nourish, and which possibilities to prune. The tools may have changed from shears to software, but the discernment that separates flourishing from failure remains uniquely human.
Yet serious challenges accompany this transformation. Some herald technology as our intellectual salvation, but techno-optimism ignores real limitations. Berkeley researchers document how algorithmic effectiveness collapses when confronted with ambiguity or variability, precisely the messy conditions that define our most consequential problems. Machines generate flawless code for well-defined tasks, but falter when asked to determine which problems actually merit solving. Silicon Valley's promise of frictionless cognition conceals a dangerous truth: the most valuable thinking has always emerged from productive friction. What we need instead is intentional engagement with computational power while also acknowledging that human judgement and machine capabilities should evolve together, each enhancing the other's strengths.
Others argue that the real concern isn't machine capability, but ownership. In "Algorithms of Oppression," Safiya Noble argues that we must question who controls the algorithms when knowledge becomes algorithmic. The concentration of computing power in corporate hands threatens to create new knowledge monopolies, where access to enhanced cognition depends on corporate gatekeepers. This power asymmetry suggests that perhaps Bacon's maxim isn't obsolete after all, merely transformed. Knowledge remains power, but now it lies with those who control knowledge generation, not just those who possess it.
There’s a darker truth behind the buzz: not everyone controls the knowledge they consume. This power imbalance calls for a synthesis that acknowledges both the society-changing potential of these technologies and the necessity of human wisdom in their deployment. The real test is whether we can shape these tools without being shaped by them.
When students use AI to write essays or professionals generate reports without scrutiny, they do not merely produce weaker work; they contribute to intellectual atrophy, the gradual deterioration of cognitive abilities. Educational researchers are documenting a troubling trend emerging in classrooms: students produce machine-generated analyses with perfect structure but struggle to defend them when questioned. Their work becomes what we might term "factual confetti without conceptual glue," colourful fragments of information lacking the deeper understanding that connects them into cohesive meaning. As someone who's had to evolve my own learning approaches throughout my career, I recognise this struggle.
We face a paradox: as our tools for thought grow more sophisticated, our capacity for thinking may grow more shallow. This raises serious ethical questions about our relationship with cognitive technologies: Do we have a moral obligation to maintain certain thinking skills regardless of whether machines can replicate them? Is there an ethical imperative to preserve human authorship and intellectual struggle as values themselves? Many educators already recognise this challenge and are redesigning learning environments to nurture deeper human capabilities that machines cannot replicate.
The philosopher Hannah Arendt distinguished between labour (what we do to survive), work (what we create that outlasts us), and action (how we express our unique humanity). Computational technology threatens to collapse these distinctions by automating not just labour but the intellectual work that has historically defined professions. The ethical challenge becomes preserving spaces for truly human action amidst increasingly capable machines. While developed decades before modern computing, her framework offers a lens through which we better understand what remains distinctly human in an algorithmic age.
The consequence is clear: humans with technological augmentation will replace those without it. However, the most invaluable individuals will be those who understand how to leverage machines while maintaining their uniquely human abilities. Critics concerned about technology taking over human jobs often miss an important point: machines don't displace humans entirely; rather, they take over certain tasks performed by humans. Economists refer to this as "task-based technological change." MIT economist David Autor argues that we should shift our focus from prioritising knowledge acquisition to nurturing intelligence, creativity, and critical thinking. The factory worker displaced by automation in the 1980s wasn't replaced by a robot. He was replaced by another worker who knew how to operate one.
Rightfully, those concerned with social justice fear that this transition could worsen existing inequalities. Technology historian Melvin Kranzberg's first law states: "Technology is neither good nor bad; nor is it neutral." The democratisation of knowledge through computational tools offers unprecedented opportunities for intellectual levelling, but only if we ensure universal access and education. Without deliberate intervention, the power gap between those with SuperSkills and those without could become the defining social divide of our era.
These capabilities must become public goods accessible to all, regardless of socioeconomic background, rather than privileges of the few. This is not merely a question of fairness but of societal resilience. A world where only elites possess the skills to navigate the knowledge economy is not only unjust; it squanders the diverse perspectives needed to address our most complex challenges.
Power emerges from the relationship between technological ability and human insight in today’s world. The most successful individuals and organisations will excel at this collaboration. An architecture firm that once took weeks to research building standards now employs computation to quickly compile regulations, enabling architects to concentrate on innovative design solutions beyond the reach of algorithms. While machines collect data with precision, the human architect adds the creative touch that turns buildings into works of art in steel and glass.
Knowledge, now universally accessible, has been democratised. However, wisdom, understood as the ability to discern which questions matter, which problems deserve solving, and how to apply knowledge with judgement, remains distinctly human. Philosopher Nicholas Maxwell clarifies this distinction: "Knowledge tells us how the world is; wisdom tells us how it ought to be." In our haste to develop machines that know everything, we've overlooked something vital: the power was never in the knowing. It has always been in the understanding. Perhaps our salvation lies not in competing with machines at tasks they will inevitably master, but in rediscovering the uniquely human ability to create meaning, purpose, and ethical direction in a world inundated with information yet deprived of wisdom.
This technological era demands that we develop what I call "SuperSkills," meta-capabilities that leverage technological tools while preserving uniquely human strengths. These include Big Picture Thinking (seeing patterns and connections across fragmented information domains), Curiosity (asking questions machines wouldn't formulate), and an Augmented Mindset (viewing technology as an extension of human capability rather than as a replacement). I’ve seen this firsthand when building out strategy. You can summon strategic thinking from multiple LLMs but you can’t see the bigger picture without the uniquely human context.
What distinguishes these SuperSkills? It's not merely their utility in a technological age. It's their fundamentally integrative nature. They combine analytical prowess with ethical discernment, technical understanding with humanistic values. They go beyond the misleading dichotomy between "hard" technical skills and "soft" human ones. Instead, they introduce a fresh synthesis that harnesses the strengths of both traditions.
The defining social divide of our era won't be between human and machine, but between those who have developed SuperSkills and those who haven't. As historian Yuval Noah Harari observes, "The crucial factor in the 21st century is no longer information, but the human capacity to make sense of it." As machines handle routine cognition, these SuperSkills become the true currency of power in the knowledge economy.
This framework is an evolving understanding of how we adapt to computational intelligence. The concept will necessarily expand as our relationship with technology evolves. Power no longer comes from knowing. It comes from understanding, and having the courage to act on it.
For me, this isn’t just theoretical. It’s about how we preserve what makes us human while embracing what makes technology powerful. This is the foundation of a longer project. I’m building out SuperSkills as a practical, ethical, and educational framework. If you’d like early access, sign up here. If this essay resonated with you, share it widely. We are on the cusp of a fundamental change in what it means to be human.
Stay Curious - and don’t forget to be amazing,
Rahim, Creator of SuperSkills
Here are my recommendations for this week:
One of the best tools to provide excellent reading and articles for your week is Refind. It’s a great tool for keeping ahead with “brain food” relevant to you and providing serendipity for some excellent articles that you may have missed. You can dip in and sign up for weekly, daily or something in between - what’s guaranteed is that the algorithm sends you only the best articles in your chosen area. It’s also free. Highly recommended. Sign up.
Now
The 2025 Top-100 Gen AI Use Case Report - In 2025, people are using generative AI not just for technical productivity but increasingly for emotional support and self-improvement. The top three use cases (therapy and companionship, organising one’s life, and finding purpose) signal a dramatic shift from task execution to deeper, human-centred goals. GenAI has moved from a tool for efficiency to a companion in existential navigation. Its highest-impact use cases now address psychological and emotional needs, suggesting AI is becoming embedded in how people seek meaning, support, and personal transformation. Also: 100 use cases (PDF)
The most disgusting British foods ever - As a Brit, I haven’t tried or even heard of most of these - Fish curry ice cream? No thank you!
America Is Learning the Wrong Lesson From Elon Musk’s Success - Elon Musk’s rise has wrongly taught many that fear and aggression are effective leadership tools. In reality, decades of research show that harsh treatment damages performance, suppresses collaboration, and erodes long-term trust. His success stems not from cruelty but from vision, and it persists despite his behaviour, not because of it. People often confuse correlation with causation when admiring powerful leaders. Musk’s idiosyncrasy credits, earned through innovation, have allowed poor treatment of others to be overlooked, but sustainable leadership relies on respect, not dominance.
Have they been here? When we look for extraterrestrials, we often peer into the depths of space. But alien life might be closer than you think - Alien technology might already exist within our own solar system on the Moon, Mars, or in orbit. But cultural stigma and scientific hesitation have prevented serious searches. Despite having the tools and data, researchers focus outward, ignoring what could be close to home. Scientific progress isn’t limited by technology, but by taboo. Preparedness demands intellectual courage, not just for what lies light years away, but what may be sitting in our celestial backyard. Uncomfortable.
11 Foods That Probably Aren’t as High in Fiber as You Think - Spoiler: A surprising number of common veggies fall short.
Next
Google is talking to dolphins using Pixel phones and AI - Google’s new AI model, DolphinGemma, paired with Pixel phones, is helping researchers study and potentially decode dolphin communication in real time. By analysing decades of acoustic data and predicting sound sequences, the system aims to build a two-way interface—where dolphins might understand and respond to human-constructed signals. What’s striking is how off-the-shelf consumer devices like the Pixel are now capable of running deep learning models in the field, removing the need for expensive custom hardware. This project isn’t just about interspecies dialogue—it shows how AI, embedded in everyday tools, can power frontier science and make complex research more accessible and scalable.
What’s Wrong With Apple? - Apple’s recent struggles, from the weak sales of Vision Pro to delayed and underperforming AI features, highlight deeper organisational issues, including unclear leadership, talent loss, and internal conflict. These shortcomings have coincided with significant financial pressure from geopolitical shocks such as U.S. tariffs. Once revered for its innovation, Apple is now grappling with the cultural inertia and decision-making bottlenecks typical of ageing tech giants. The company’s reputation for polish and secrecy may be hindering rather than helping in a market that now rewards speed, openness, and technical ambition, qualities that Apple seems to be losing its grip on.
Daily Pill May Work as Well as Ozempic for Weight Loss and Blood Sugar - Eli Lilly’s new daily pill, orforglipron, matches the effectiveness of Ozempic and Mounjaro for blood sugar and weight loss, offering a non-injectable alternative with global potential. If priced accessibly, it could revolutionise obesity and diabetes treatment, removing barriers tied to injections and cost, particularly in underserved populations. What this reveals is a critical shift: the future of healthcare innovation may not just depend on scientific breakthroughs, but on how equitably and efficiently those breakthroughs are delivered. A pill that’s chemically ingenious is only transformative if it reaches beyond the wealthy few.
Future chips will be hotter than ever - As chips grow denser and more powerful, they’re generating unprecedented levels of heat that traditional cooling methods—air and even liquid—can’t keep up with. Advanced architectures like nanosheets and CFETs are pushing thermal limits further, while promising backside technologies designed to improve efficiency may unintentionally exacerbate hot spots. This escalating thermal crisis is no longer a side issue—it demands that chip design, power delivery, packaging, and heat management evolve in sync. Future chips won’t just require better transistors or smarter software but a wholesale rethinking of how systems are built, cooled, and optimised from the ground up.
The business of the AI labs - AI labs sit at the centre of a cutthroat ecosystem where no player holds lasting technical dominance, and value is fiercely contested across the stack, from chips to clouds to customers. The core challenge is commoditisation: breakthroughs are short-lived, easily replicated, and vulnerable to model extraction. Labs spend billions chasing fleeting advantages, while hyperscalers and end-users seek to drive down the cost of models to boost their own margins. Still, viable paths exist. Labs can embed themselves deeper into workflows, build sticky platforms or marketplaces, gather proprietary training data, or create agentic AIs that become indispensable digital colleagues. Success will hinge not on raw model power alone, but on moats built through integration, infrastructure, and control of context.
Free your newsletters from the inbox: Meco is a distraction-free space for reading newsletters outside the inbox. The app has features designed to supercharge your learnings from your favourite writers. Become a more productive reader and cut out the noise with Meco - try the app today
If you enjoyed this edition of Box of Amazing, please forward this to a friend. If you quote me, or use anything from this newsletter, please link back to it.
If you are interested in advertising in this newsletter, please see this link