In addition to this newsletter, I recommend some other great ones. All free. Check them out here.
Friends,
A few years ago we were enamoured with what AI had helped produce when we saw “DeepFake Tom Cruise”. But in a short period of time we have seen this technology come together in different guises. We are living in an age where technology can create alarming imitations of reality, and the implications are profound. Synthetic reality, combined with deepfake technology, represents a significant threat not just to personal identity, but to social trust and democratic integrity. These tools can create hyper-realistic digital duplicates of ourselves and others, opening up a panorama of potential mischief that requires our urgent attention.
I’m increasingly worried about what this might mean. I covered deepfakery earlier this year, but it seems like we are walking straight into a world where we just won’t know what is real or fake.
Imagine waking up one day to find that your face, your voice, and your likeness have been used to produce an AI-generated video where you say things you never said or betray personal values you don't hold. This isn't merely a hypothetical scenario — it's happening. Deepfakes, which allow users to create realistic videos of others without their permission, can manipulate perceptions at unprecedented levels. A deepfake could depict a political leader making a controversial statement just days before an election, or perhaps worse, incite violence or unrest. The ethical ambiguities that surround these imitations are staggering, and they could alter the course of democracy as we know it.
We're already sceptical about the news. Now, imagine a world where every video and audio clip is questioned. Can we trust our eyes and ears when a technology can convincingly fabricate actions and statements? The breakdown of this trust is akin to a slow poison in civil society. When people begin to doubt legitimate news and media, the fabric of reality begins to erode. War crimes captured on camera might be labelled as fakes, and fabricated narratives could undermine legitimate protests.
In this context, consider the real-world example of a distorted video of Nancy Pelosi that went viral five years ago, manipulated to make her appear intoxicated. Such easily made falsehoods have serious repercussions, and while we can sometimes debunk them, the initial outrage and misinformation spread like wildfire. If this continues, we may find ourselves living in a world where every piece of media is suspect, making genuine conversations impossible, especially when it comes to politics and war.
There's also the personal dimension. The use of deepfakes for revenge porn and personal defamation is a grim reality for many, especially for women who are frequently targeted. This digital assault can devastate careers, reputations, and mental health.
Even more troubling is the rise of synthetic reality, where AI avatars can interact and act on our behalf. One day, you might find that a digital version of you attended a meeting you never agreed to, or delivered a message that misrepresented your views. The advent of hyper-realistic avatars poses questions about agency and authenticity; if your digital twin makes decisions without your consent, what remains of your personal identity?
We must also grapple with the societal implications. Synthetic reality can be weaponised not only against individuals but against institutions. A corporate sabotage scenario could involve a deepfake of a CEO declaring bankruptcy, swinging the stock market like a pendulum. This isn’t just a digital prank; it’s a dangerous new form of warfare that blurs the line between reality and fiction in the commercial arena.
So, how do we navigate this landscape? First, awareness is crucial. Individuals and institutions must educate themselves about these technologies and their implications. Promoting digital literacy will empower users to critically evaluate the media they consume and share. Additionally, technologists must develop robust tools to authenticate media, ensuring that users can verify the legitimacy of the content they encounter.
Regulation will also play a vital role. Just as there has been a push for laws against hate speech, we’ll need a new framework to combat deepfake technology and its misuse. International cooperation will be essential, as these issues transcend national borders and require a unified approach to safeguard information integrity.
As we embrace these powerful technologies, we must also take heed of their potential to distort reality. The promise of synthetic reality offers an exciting frontier, but without checks and balances, it may lead to a dystopia rife with impersonation, mistrust, and chaos.
In a world where communication is becoming more important, Zoom is letting us generate avatars to join meetings on our behalf. HeyGen and Synthesia are just two companies that allow you to create your own avatar. This demo from HeyGen gives you a real-life sense of what it might be like to converse with an avatar.
But in a synthetic world, what will it be like for your own avatar to converse with someone else’s? Maybe that will be dangerous. Or maybe we will get more time?
Stay Curious - and don’t forget to be amazing,
Here are my recommendations for this week:
One of the best tools to provide excellent reading and articles for your week is Refind. It’s a great tool for keeping ahead with “brain food” relevant to you and providing serendipity for some excellent articles that you may have missed. You can dip in and sign up for weekly, daily or something in between; what’s guaranteed is that the algorithm sends you only the best articles in your chosen area. It’s also free. Highly recommended. Sign up.
Now
The quiet art of attention - In a world of TikTok and IG scrolling, this essay emphasises the practice of mindfulness as a path to inner freedom. It suggests that by observing the mind’s tendencies without judgment, we can reclaim control over our attention, simplifying our thoughts and actions. This approach encourages focusing on the essentials of life, taking small steps towards clarity and peace. The essay underscores the value of consistent, patient attention in cultivating a more meaningful, intentional life, where moments are lived fully and thoughtfully, rather than rushing through them. (Recommended) Related: TikTok is reportedly aware of its bad effects on teen users
Laziness Death Spirals - A humorous exploration of how procrastination spirals out of control, turning a lazy YouTube binge into missed deadlines, bad sleep, and general life chaos. Tactics include acknowledging the spiral, analysing triggers, and slowly reclaiming willpower—because, of course, heroically fixing your life in one go is far too ambitious! Somewhat related: Avoiding sadness can backfire, here’s how to turn towards it
Here’s the deal: AI giants get to grab all your data unless you say they can’t. Fancy that? No, neither do I - Data is vital to AI systems, so firms want the right to take it and ministers may let them. We must wake up to the danger
Why Microsoft Excel won’t die - The business world’s favourite software program enters its 40th year - Excel, the ultimate hero of office boredom, turns 40. While it’s haunted many with the dreaded #VALUE! error, financial analysts and consultants love it. Even with AI swooping in, Excel’s still thriving—now with an AI sidekick to pivot your data woes. Can’t keep up!
Have We Reached Peak Human Life Span? After decades of rising life expectancy, the increases appear to be slowing. A new study calls into question how long even the healthiest of populations can live.
Next
Machines of Loving Grace - How AI Could Transform the World for the Better - Dario Amodei, CEO of Anthropic, explores the radical potential upsides of powerful AI. While acknowledging the serious risks AI poses, he emphasises the transformative benefits AI could offer in areas like health, neuroscience, economic development, and governance. He believes that addressing AI's risks is crucial because they are the only barriers to a positive future, where AI could drastically accelerate human progress. Amodei also highlights the importance of balancing optimism with a realistic focus on mitigating AI's dangers. Must read!
Two people ‘communicate in dreams’: Inception movie-styled sci-fi turned into reality - Participants were sleeping in their homes while their brain waves and other polysomnographic data were tracked remotely by a specially developed apparatus. ASTONISHING!
OpenAI Unveils Secret Meta Prompt—And It’s Very Different From Anthropic's Approach - Under the hood, OpenAI’s structured approach to prompt generation sets it apart from Anthropic’s human-like, chatbot approach.
Adobe’s AI video model is here, and it’s already inside Premiere Pro - New tools allow users to generate videos from images and prompts and extend existing clips in Premiere Pro. Only a fool would discount Adobe in this AI race
The War on Passwords Is One Step Closer to Being Over - “Passkeys,” the secure authentication mechanism built to replace passwords, are getting more portable and easier for organizations to implement
Free your newsletters from the inbox: Meco is a distraction-free space for reading newsletters outside the inbox. The app has features designed to supercharge your learnings from your favourite writers. Become a more productive reader and cut out the noise with Meco - try the app today
If you enjoyed this edition of Box of Amazing, please forward this to a friend. If you share it on LinkedIn, please tag me so that I can thank you.
If you are interested in advertising in this newsletter, please see this link