Rules Before Tools
Your Ultimate AI Strategy
Now accepting keynotes for Q3/Q4 2025
The SuperSkills Era: Thriving in the Age of AI - I’m delivering 60-minute executive sessions for senior teams based on the research and frameworks from my upcoming book. Available across the UK, Europe, and globally in person or virtually. To book for your board, corporate team, or keynote event, fill in this form.
In addition to this newsletter, I recommend some other great ones. All free. Check them out here.
Friends,
People ask me for an AI strategy all the time, sometimes at the oddest of moments. They want a quick answer. Sorry to tell you this, but there isn’t one. To do it properly, you have to get under the skin of a business: its workflows, data, incentives, and culture. Without that, all you’ve got is AI for show.
So what I tell them is this: read this first - and I send them what you are receiving today. It’s the closest thing I can give you to a starting point without sitting inside your organisation. Consider it free consultancy for 2025. It’s longer than a LinkedIn post, but it will help consolidate your thinking. Bookmark it, send it to your AI team leads, take your time with it. However you skew - AI-first, AI-driven, or AI-supported - you’ll find something here you can use to survive what’s already happening.
This is the reality of 2025. If you’re working, you’re already in one of those categories, whether you admit it or not. Most of my current feed and email is noise, and it follows the same pattern. Last week it was GPT-5. Before that, Veo 3. It’s always the same. New model. New product. New demo. Crowds rush in like magpies chasing the shiniest thing. Screenshots. Hype videos. Hot takes. Likes, comments, reposts, restacks. Then nothing changes in their business. Dabbling is fine if you have time, but most do not. Watching every new tool feels like standing in a stadium with 50,000 people shouting while someone tells a joke. You miss the punchline. So choose one or two voices you trust and ignore the rest, or you will drown in the hype.
The part most people don’t want to hear? The boring work is the valuable work. Data that doesn’t rot. Workflows that match the speed of the tools. Guardrails that stop you from blowing yourself up in production. Skills that compound so your team and colleagues get better as the tech does. That’s the difference between AI hype theatre and results. AI will improve your processes first. Then it will change your people’s jobs. If you get it right, it will open doors you didn’t know were there. But that only happens if you think strategically.
Bill Gates calls AI “the most important advance in technology since the graphical user interface.” Kai-Fu Lee says the winners will be “those who pair human strengths with AI speed.” Scott Galloway says, “AI will not take your job. Someone using AI will.” I don’t love that line. AI unbundles jobs task by task until the role is smaller or gone. The point still stands, though. AI is an opportunity if you choose to use it.
In my SuperSkills research, I’ve spoken with hundreds of people: CEOs, operators, technologists, teachers, shop owners. I’ve seen who’s getting sharper, who’s being replaced, and who’s still waiting for AI to arrive in their city. If you want to be in the first group, you need principles that outlast the tools.
On my recent holiday to Thailand, I met Sugar. She assures me that is her real name. She runs a beach bar, a restaurant, a Thai beachfront spa, an ice cream stand, and a sundries shop. Two years ago her restaurant was failing. Today she dominates a busy stretch of Chawengmon beach. Her secret was many overnight sessions playing around with ChatGPT, where she learned to track every competitor’s prices and menus within a twenty-minute walk. She tailored offers for niche audiences (gluten-free, halal, and so on). She struck deals with other businesses. On the surface, it looks like the same simple operation. Behind the scenes, she is running AI agents to manage suppliers, pay bills, and spot new opportunities. She is 25. It is not about looking like a tech company. It is about using tools to make faster, better decisions than anyone else.
That’s why I say: don’t be a magpie. Be more like Sugar.
The strategy stack that compounds
To keep things super simple, think about AI in layers:
Redesign process and transformation – rethink how work moves so quality rises and waste falls.
Rewire the organisation – set ownership, incentives, and routines so people and AI work in sync.
Rework the nature of jobs – break jobs into tasks and move people toward judgment, service, and creativity.
Find new growth – use AI to open markets, improve unit economics, and launch products you couldn’t before.
Get these layers right, and the tools you choose will slot in instead of falling out.
Ten rules that outlast the tools
There are probably other technical areas worth considering, but strategically, these ten rules are the core of how you should think (quick summary if that’s all you want).
1) Start with tasks
Map work to the smallest useful units and kill zombie work - repetitive, low-value tasks that burn time without creating useful output. Duolingo’s Birdbrain personalises every exercise and now drafts new content faster than before because they rebuilt the workflow around it. Siemens fixed factory defects at the step level, not by chasing full automation. Kobo360 in Nigeria cut freight coordination hours with AI-driven load matching. This is one of my 7 SuperSkills in action: Curiosity applied to real work.
What to actually do: list 20 high-volume tasks, mark low-value ones, remove 3 with narrow automations.
2) Redesign the system
Pouring AI into yesterday’s process just scales yesterday’s problems. Shopify’s Magic feature can write product copy in seconds, but the real gain came when they overhauled catalogue and approvals. Maersk’s routing AI kept working through Red Sea disruption and port congestion because they designed for resilience, not perfection. Etisalat in Abu Dhabi rebuilt service triage with AI and cut resolution times by 35 percent.
What to actually do: sketch the current process, circle delays, rebuild 1 flow with fewer handoffs and smarter checkpoints.
3) Put a human in command
Every winning AI programme has one accountable owner. DBS Bank put AI at the executive table, so credit, fraud, and service changes could land fast. Canva put AI inside product teams so launches solve real user problems.
What to actually do: name the owner, publish a one-page approval path, set clear risk appetite. This is the SuperSkill Big Picture Thinking in action.
4) Make it legible
People act on what they understand. Microsoft gives teams error analysis and explanations. Ant Group shows merchants why transactions were flagged, which cuts disputes and builds trust. mBank in Poland uses real-time fraud blocks without locking out genuine customers.
What to actually do: attach plain-language explanations where money or people are affected, set confidence thresholds for human review, keep a short purpose-and-limits note.
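If it helps to see what a confidence threshold for human review can look like, here is a minimal sketch in Python. Every name and the 0.85 cut-off are assumptions for illustration, not anything from the companies above.

```python
# A minimal sketch of "set confidence thresholds for human review".
# Decision, REVIEW_THRESHOLD, and route_decision are illustrative names.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # below this, a person signs off

@dataclass
class Decision:
    outcome: str        # e.g. "flag_transaction"
    confidence: float   # the model's own score, 0.0 to 1.0
    reason: str         # plain-language explanation shown to the person affected

def route_decision(d: Decision) -> str:
    """Send low-confidence calls to a human queue; always keep the reason attached."""
    if d.confidence < REVIEW_THRESHOLD:
        return f"HUMAN REVIEW: {d.outcome} ({d.reason})"
    return f"AUTO: {d.outcome} ({d.reason})"

print(route_decision(Decision("flag_transaction", 0.72, "unusual merchant and amount")))
```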
5) Measure impact and harm
“Feels fair” is not a metric. LinkedIn improved representation in search without hurting business outcomes by building fairness into ranking. Spotify measures how recommendations balance accuracy with discovery because it shapes who gets heard. Safaricom’s M-Pesa uses AI to flag SIM-swap scams before they hit customers, cutting incidents.
What to actually do: pilot with 5–10% of users or one region, track one business metric and one safety metric, and know when to roll back.
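A rough sketch of what that pilot gate can look like, with placeholder thresholds (the 10% share, the complaint rate, and the function names are all assumptions):

```python
# A minimal sketch of a pilot gate: one business metric, one safety metric,
# and an explicit rollback rule. All thresholds are illustrative.
PILOT_SHARE = 0.10           # 10% of users see the AI path
MIN_CONVERSION_LIFT = 0.0    # business metric must not fall
MAX_COMPLAINT_RATE = 0.02    # safety metric: complaints per interaction

def in_pilot(user_id: int) -> bool:
    """Deterministic assignment so a user stays in the same arm."""
    return user_id % 100 < PILOT_SHARE * 100

def should_roll_back(conversion_lift: float, complaint_rate: float) -> bool:
    """Know the rollback rule before you launch, not after."""
    return conversion_lift < MIN_CONVERSION_LIFT or complaint_rate > MAX_COMPLAINT_RATE

print(in_pilot(user_id=1042))                                        # False: control arm
print(should_roll_back(conversion_lift=0.04, complaint_rate=0.031))  # True: safety metric breached
```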
6) Strengthen your data backbone
Consent scarcity is real. The most valuable data is both up-to-date and gathered with explicit permission. Ping An protects financial and health data with firm access rules. Estonia’s national ID works because people trust it and it’s useful every day. Aadhaar in India handles hundreds of millions of authentications a month because the controls are clear.
What to actually do: assign data owners, document what is collected and why, delete what you don’t need.
7) Build guardrails in by default
Guardrails aren’t brakes, they’re speed. Apple processes most AI work on-device and uses Private Cloud Compute with zero retention when it can’t. Salesforce masks sensitive data fields and routes them through safe paths.
What to actually do: keep an allow-list of actions, log behaviour, red-team monthly for failure modes like prompt injection.
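As a minimal sketch of the allow-list plus logging idea (the action names and logger setup are assumptions, not anyone’s real configuration):

```python
# A minimal sketch of an action allow-list with logging. Action names are illustrative.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_guardrail")

ALLOWED_ACTIONS = {"draft_reply", "summarise_ticket", "lookup_order"}

def run_action(action: str, payload: dict) -> str:
    """Refuse anything outside the allow-list and log every attempt either way."""
    if action not in ALLOWED_ACTIONS:
        log.warning("blocked action=%s payload=%s", action, payload)
        return "blocked"
    log.info("allowed action=%s", action)
    return f"ran {action}"

print(run_action("issue_refund", {"order": "A123"}))  # blocked: refunds are not on the list
```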
8) Treat capacity as a strategy
Processing power is the new supply chain. Anthropic locked in compute capacity with AWS to guarantee uptime. TCS in India balances workloads across hybrid compute so they never stall. Jensen Huang says accelerated computing is cost-effective, but only if you lock in capacity before you need it. Listen to Jensen!
What to actually do: document where each system runs, how fast it must respond, and move simple jobs to lighter options.
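One way to picture “move simple jobs to lighter options” is a small routing table. The model names and latency budgets below are placeholders, not recommendations:

```python
# A minimal sketch of documenting where each job runs and routing simple
# work to lighter options. Model names and budgets are placeholders.
ROUTES = {
    "classify_ticket": {"model": "small-on-device-model", "max_latency_ms": 200},
    "draft_contract":  {"model": "large-hosted-model",    "max_latency_ms": 5000},
}

DEFAULT_ROUTE = {"model": "small-on-device-model", "max_latency_ms": 200}

def pick_route(job: str) -> dict:
    """Unknown jobs fall back to the lighter, cheaper option by default."""
    return ROUTES.get(job, DEFAULT_ROUTE)

print(pick_route("classify_ticket"))
print(pick_route("summarise_meeting"))  # not listed, so it gets the light default
```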
9) Protect dignity by design
Privacy is a growth driver. Bumble deletes verification images after checks. Estonia’s ID adoption sits near 98% because trust was designed in.
What to actually do: publish how data is used, set expiry by default, and remove anything that feels like a trick. This is Principled Innovation. Another of my SuperSkills.
10) Educate for leverage
Capability debt is the gap between the skills people have and the ones they now need. It kills more AI projects than tech failures. PwC invested big in upskilling. M-KOPA in Kenya trains teams to use handset AI to extend microloans and solar power. This is the Augmented Mindset, the most topical of my SuperSkills. It helps you and your team prepare for the future.
What to actually do: run monthly challenges on real tasks, teach managers to read simple dashboards, share before-and-after examples.
The AI governance and risk charter
If you want your teams to move fast without fear, give them rules they can remember and use. This isn’t about creating a binder no one reads. It’s about making sure people know what’s allowed, what isn’t, and who decides.
Principles. Keep it human-centric, transparent, fair, and accountable. One page. No jargon.
Risk tiers. Low, medium, high. The higher the risk, the more human review and sign-off you require. High risk means anything touching money, rights, safety, or identity.
Oversight. Have an AI council with people from product, data, legal, risk, security, and operations. Empower them to approve, pause, or stop.
Lessons from failure. Bias in early hiring systems. Deepfake scams that have cost tens of millions. Prompt-injection hacks. Models that drift over time. Laws like the EU AI Act changing mid-project. Write down the five failures you fear most, who owns each, and what will stop it from happening.
Evidence. For every live system, keep a one-page card: purpose, limits, owner, last review date, next review date, and how impact and harm are measured. A short sketch of such a card, including its risk tier, follows below.
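Here is that sketch of a one-page evidence card kept as plain data, so anyone can read and review it. Every field value is an example, not a template you must copy.

```python
# A minimal sketch of a one-page evidence card with its risk tier.
# All field values are examples.
from dataclasses import dataclass

@dataclass
class SystemCard:
    name: str
    purpose: str
    limits: str
    owner: str
    risk_tier: str      # "low", "medium" or "high"; high touches money, rights, safety, or identity
    last_review: str
    next_review: str
    impact_metric: str
    harm_metric: str

card = SystemCard(
    name="claims-triage-assistant",
    purpose="Suggest a priority for incoming insurance claims",
    limits="Suggestions only; it never approves or declines a claim",
    owner="Head of Claims Operations",
    risk_tier="high",
    last_review="2025-07-01",
    next_review="2025-10-01",
    impact_metric="Median time to first decision",
    harm_metric="Overturned priorities per 1,000 claims",
)
print(f"{card.name}: {card.risk_tier} risk, next review {card.next_review}")
```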
Clear guardrails speed you up. Unclear guardrails slow you down.
Managing the human shift
AI changes your workflows and it changes how people feel at work. If you ignore that, you’ll kill adoption.
People feel excited, anxious, tired, and proud, sometimes in the same week. Many haven’t noticed AI in their city or country yet. They will when back-office work shrinks, answers arrive faster, and certain jobs disappear. You have to make this a people change, not a tool rollout.
Psychological safety. Make it safe to report bias or failures. Thank people who do it. Fix the cause in public.
AI champions. Pick credible operators inside each function. Give them time to run clinics, share patterns, and surface risks before they hurt you.
Reduce AI fatigue. Kill pointless alerts and drafts. Collapse 10 low-value notifications into one decision that matters.
Clear stories. Show exactly where time is given back: the claims team that now leaves on time, the teacher with better resources for the pupil who was slipping, the picker who now solves exceptions while robots handle the grind.
One of my SuperSkills, Empathetic Communication, is exactly this: connecting the change to what people actually care about.
A few extra moves that separate the winners
Put decision rights under a microscope. For the top 10 decisions AI will touch, name the decision owner, data owner, escalation owner, and rollback owner. Shorten approval chains where you can.
Pay down evaluation debt weekly. Keep a small, live test set. Compare human-only, AI-assisted, and AI-only outputs, and route work to the right mix. A simple sketch of this weekly check follows this list.
Treat trust like cash. Klarna’s AI assistant cut repeat inquiries by 25% and average handling time from 11 minutes to under 2. That’s trust you can measure.
Run a shadow-AI amnesty. Let staff declare the tools and prompts they already use. Standardise the safe ones, block the risky ones, and share the best.
These are the habits that make your AI systems last longer than the launch party buzz.
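Here is the promised sketch of the weekly evaluation habit: score human-only, AI-assisted, and AI-only outputs against the same small test set. The test cases and exact-match scoring are stand-ins; real scoring should fit your task.

```python
# A minimal sketch of comparing human-only, AI-assisted, and AI-only outputs
# against a small, live test set. Cases and scoring are stand-ins.
TEST_SET = [
    {"input": "Refund request, order 9 days late",      "expected": "approve_refund"},
    {"input": "Refund request, item used for 6 months", "expected": "decline_refund"},
]

def score(outputs: dict) -> dict:
    """Share of test cases each mode got right."""
    results = {}
    for mode, answers in outputs.items():
        correct = sum(a == case["expected"] for a, case in zip(answers, TEST_SET))
        results[mode] = correct / len(TEST_SET)
    return results

print(score({
    "human_only":  ["approve_refund", "decline_refund"],
    "ai_assisted": ["approve_refund", "decline_refund"],
    "ai_only":     ["approve_refund", "approve_refund"],
}))  # e.g. {'human_only': 1.0, 'ai_assisted': 1.0, 'ai_only': 0.5}
```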
Capacity, cost, and cadence
Processing power is now a strategy. Decide what must be instant, what can be batched, what runs on-device, and what needs dedicated clusters. Plan this before you need it.
Price the full journey, not just the pilot. The integration tax (the hidden cost of going from a cool demo to something you can trust in production) is real. You’ll pay for data preparation, consent, safety checks, audits, human checkpoints, monitoring, bandwidth, and compute. The cost of doing nothing is real too: lost market share, slower cycle times, and lower quality.
How to fund, run, and measure:
Fund. Keep a central pot for data, safety, and measurement. Seed small bets in teams and roll the winners into the core.
Run. Quarterly, pick three flows, rebuild one end-to-end, land two narrow wins, strengthen the data backbone. Monthly: review the scorecard and top risks. Weekly: ship one small improvement.
Measure. Track cycle time, quality, customer satisfaction, cost per decision, fairness, privacy incidents, delivery speed, and compute spend per use. Keep it on one page and look at it often.
Where to start this quarter
Pick three “boring” tasks and stand up weekly evaluations with a clear owner.
Write the incident playbook and run a drill.
Launch a shadow-AI amnesty.
Create a consent register for one dataset.
Tie every time saving to 100% reinvestment in skills for the team that earned it.
Add decision-quality reviews to your monthly rhythm.
Publish a one-page registry of live use cases so the whole company knows what’s real.
The next wave without the buzzwords
Agents will act, not just answer. Keep their scope tight, log every step, and have a stop button. Multimodal systems will read text, tables, images, and video, which changes how you design training, support, and safety reviews. More will run on-device for speed and privacy. Smaller models will quietly do 80% of the jobs for a fraction of the cost.
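A minimal sketch of “tight scope, log every step, and have a stop button” for an agent. The agent’s work is faked here; the point is the wrapper around it.

```python
# A minimal sketch of an agent wrapper: a step limit, a step-by-step log,
# and a stop flag a human can flip. The agent's work is faked.
MAX_STEPS = 5
stop_requested = False  # flipped by a person or a monitoring alert

def fake_agent_step(step: int) -> str:
    return "done" if step == 3 else f"working ({step})"

def run_agent() -> list:
    trail = []
    for step in range(1, MAX_STEPS + 1):
        if stop_requested:
            trail.append("stopped by human")
            break
        result = fake_agent_step(step)
        trail.append(f"step {step}: {result}")  # log every step
        if result == "done":
            break
    return trail

print(run_agent())  # ['step 1: working (1)', 'step 2: working (2)', 'step 3: done']
```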
Yuval Harari warns that AI can “hack the operating system of human civilisation” by producing persuasive stories at scale. That’s why provenance, identity, and dignity aren’t nice-to-haves. They’re survival strategies.
Sugar, not Magpie.
I’ve met hundreds of people in this space. Some feel stretched, some feel left behind, others feel newly powerful. If you’ve already started down the AI path, pass this to your lead and make it the standard. If you’re thinking about AI, use this as your checklist to start.
If you’ve read this far, you’re looking at the next three years, not the next three weeks. Good.
Don’t be a magpie. Be more like Sugar. Make AI part of your operating system. Rethink what you do, how you do it, and how it will affect your team and customers. Do a clean sweep, decide what you need, and get someone accountable to deliver it.
If you’re not doing this now, you’ll be playing catch-up just to stay in the game.
The next model won’t save you. A better system will. Control the change before it controls you.
Stay Curious - and don’t forget to be amazing,
Here are my recommendations for this week:
One of the best tools to provide excellent reading and articles for your week is Refind. It’s a great tool for keeping ahead with “brain food” relevant to you and providing serendipity for some excellent articles that you may have missed. You can dip in and sign up for weekly, daily or something in between - what’s guaranteed is that the algorithm sends you only the best articles in your chosen area. It’s also free. Highly recommended. Sign up.
Now
I'm Worried It Might Get Really Bad - Immediate concerns about the next few months and years. Protect yourself from the cloud.
Why Does AI Feel So Different? - “A lot is changing with AI. It’s been confusing, and slightly overwhelming, for me to get a grip on what’s changing, and what it all means for me personally, for my job, and for humanity. What’s going to happen in the next 5 years? Will my skills be relevant? How do I truly add value with AI getting smarter in every way? How does it change life for me and my family?
It doesn’t fit into any of my existing mental models. I use it in every aspect of life already, and the inability to understand it in a similar way to anything in the past makes me feel unsettled.”
The go-between: how Qatar became the global capital of diplomacy - The tiny, astonishingly wealthy country has become a major player on the world stage, trying to solve some of the most intractable conflicts. What’s driving this project?
Why A.I. Should Make Parents Rethink Posting Photos of Their Children Online - Artificial intelligence apps generating fake nudes, amid other privacy concerns, make “sharenting” far riskier than it was just a few years ago.
Kids Who Get Smartphones Before This Age Are Doomed. I’ll save you the read. 13. Want to know why? Read the article or read The Anxious Generation.
Next
The ‘godfather of AI’ reveals the only way humanity can survive superintelligent AI - Instead of forcing AI to submit to humans, Hinton presented an intriguing solution: building “maternal instincts” into AI models, so “they really care about people” even once the technology becomes more powerful and smarter than humans.
Companies Are Pouring Billions Into A.I. It Has Yet to Pay Off. - Corporate spending on artificial intelligence is surging as executives bank on major efficiency gains. So far, they report little effect on the bottom line.
Scientists Are Finally Making Progress Against Alzheimer’s - New drugs show promise, and research finds value in vaccines, antivirals, exercise and probiotics. Related: AI designs antibiotics for gonorrhoea and MRSA superbugs
No AGI in Sight: What This Means for LLMs - This essay dissects the widening gap between AI hype and reality, arguing that large language models have hit a plateau - the “S-curve” - despite industry claims of imminent superintelligence. It contrasts bold predictions and massive investments with underwhelming flagship releases, framing today’s AI era as less about building godlike intelligence and more about integrating imperfect tools into real-world products. The piece suggests that the true future of AI lies not in transcendence, but in the messy, necessary work of making these systems actually useful.
Apple Plots Expansion Into AI Robots, Home Security and Smart Displays - Apple Inc. is planning a comeback in artificial intelligence with new devices, including robots, a smart speaker with a display, and home-security cameras, according to people with knowledge of the matter. A tabletop robot, targeted for 2027, is the centerpiece of the AI strategy, and will feature a lifelike version of Siri and the ability to engage with users throughout the day. The devices are part of an effort to restore Apple's innovation reputation and challenge companies like Samsung Electronics Co. and Meta Platforms Inc. in new categories, with Chief Executive Officer Tim Cook saying that the product pipeline is "amazing" and that some of it will be seen soon.
Free your newsletters from the inbox: Meco is a distraction-free space for reading newsletters outside the inbox. The app has features designed to supercharge your learnings from your favourite writers. Become a more productive reader and cut out the noise with Meco - try the app today
If you enjoyed this edition of Box of Amazing, please forward this to a friend. If you quote me, or use anything from this newsletter, please link back to it.
If you are interested in advertising in this newsletter, please see this link


