23 Comments
Harold Toups

As someone who attempted (unsuccessfully) to teach high school science in an inner-city setting, I see the unbundling of a secondary education teacher's job through AI as radically welcome: it offers the chance to give teachers back the hours of busywork that overload their lives.

Ajay Kelkar

Lovely article, Rahim. What resonated deeply was your framing of barriers: not just adoption or upskilling, but the quieter, internal ones. For me, the real barrier isn't fear of AI; it's grief.

Grief that surfaces not from obsolescence, but from realizing that I am starting to confide more easily in a machine than in people. There is a strange safety in typing something into ChatGPT that I cannot yet say out loud.

I explored that in this piece, The AI Companion You Didn't Ask For: https://open.substack.com/pub/customeriq/p/the-ai-companion-you-didnt-ask-for?r=4b7ij&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

It’s less about tools, more about the emotional dislocation AI brings—the subtle grief of being shaped by something you didn’t ask to trust.

Rahim Hirji

I think we feel exactly the same way! I just skimmed your piece and will read it properly later.

Thanks for your honesty. You’ve named something that’s almost taboo in the workplace. Grief! Not from being replaced, but from slowly realising how much of your inner life is now shared with something that doesn’t judge, doesn’t react, and doesn’t care. And that somehow feels easier.

What you describe is exactly the undercurrent I’m exploring in SuperSkills, not just what we’re losing in our workflows, but what we’re surrendering, silently, in how we relate to ourselves. That quiet trust in tools over people. That erosion of shared emotional bandwidth. The comfort of not having to be witnessed.

Thank you for adding depth to the conversation. I posted this today, which I think runs in tandem with what you are saying. I think you will like it.

https://www.linkedin.com/posts/rahimhirji_ai-wont-take-your-job-itll-take-your-activity-7333390818789011456-S0pD

Trivium Schoolhouse

I'm seeing more and more local social groups sprout up around me: people meeting intentionally to create that totally human connection you cannot replace with AI. I think they will become more and more prevalent as we all find ourselves using machines for basic discussions and battle against that. I'm in the process of generating some materials for guided discussions through Socratic dialogue; I think people will WANT these intentional human connections, but some will want a guide to help them connect more intentionally.

Rahim Hirji

I was just talking to a coach who said exactly this! People aren't just losing tasks, they're craving connection. The more we automate the basic stuff, the more we value what only humans can create together: nuance, trust, presence.

I love the idea of guided Socratic dialogue. That’s a SuperSkill in action. Would love to see what you’re creating.

IMAGINE IF AI AND COVID HAD HAPPENED AT THE SAME TIME!?

Trivium Schoolhouse

"The more we automate the basic stuff, the more we value what only humans can create together: nuance, trust, presence." - SO TRUE.

I just sent a separate reply but I'd love to connect on the ideas I'm outlining. My larger goal is reshaping K-12 education to fit the world we're stepping into, but Socratic dialogue will be a major component and the focus, initially, will be on materials that facilitate that dialogue. I'm still framing this out but would love to share with you some time.

MattieRoss

Excellent article and very timely, given my current career situation. I am grateful to you for the paradigm shift I experienced while reading it!

Rahim Hirji

Great, glad this helps!

Bob Wyman

While deep research may have done in a half hour what used to take you three days to do, you now have two days and 7.5 hours during which you can do better... Don't be satisfied with what was once the most you could do.

Rahim Hirji

Exactly. The point isn't that AI replaces the work. It's that it resets the ceiling. What used to be the output is now just the very minimum baseline. The opportunity now is to use that reclaimed time to go deeper, think wider, and focus on the work only you can do.

I've been able to do more in other fields than I ever thought possible, e.g. understanding my own health.

Sarrah Marić ✨

This really got me!! I work in careers advising, and lately it feels like I'm helping people make sense of jobs that are shapeshifting under them. Not totally disappearing… but getting weirder and harder to explain at dinner. This captured that strange in-between so well. Thank you!

Rahim Hirji

If you liked this, you'll like my most recent piece, which brings some of it together and links to related pieces on the quicksand effect of work right now. Enjoy, and glad this helped! You'll also like my book, coming out next year; sign up for my newsletter and you'll be notified. All best!

https://open.substack.com/pub/boxofamazing/p/the-cult-of-productivity-is-breaking?utm_source=share&utm_medium=android&r=hjpd9

Concerned Conservative

The "Perfectionist" example you cited isn't really representative of the main reason why the legal community is leery of AI. By chance, do you have a citation?

In any case, the risk isn't missing a contract clause. The risk is inadvertently committing the Deadly Sin of legal practice: falsifying case citations and thereby fabricating foundational law from whole cloth, ruining briefs and potentially ruining the case. Here's a running tally of AI-authored briefs doing exactly that: https://www.damiencharlotin.com/hallucinations/. This is something human authors virtually never do. The risk isn't "occasional failure but still better than humans", the risk is "new and severe failure mode to which humans are virtually never prone."

You could argue that an integrative workflow would mean the AI spits out briefs for humans to proofread, but this would miss the deeper point: the precedent is the bedrock reference material for the case, and if the AI falsifies that then you can no longer be confident in its value attributions and decisions regarding what to highlight and what to omit (including predicting how your opponent will frame the same body of precedent!). The question is whether the increase in throughput compensates for the occasional catastrophic hallucination, and in the case of law, where there really aren't economies of scale, I doubt it.

Rahim Hirji

Thanks for this. You're right that the legal domain presents uniquely high-stakes risks, especially around precedent integrity and hallucination. I completely agree that these are not just “mistakes”. I think we might call them failure modes that can’t be tolerated without human oversight.

My intent wasn't to oversimplify, but to illustrate how even fields with high rigour are being forced to reckon with integration pressure. You're right to call out that the risks aren't just about accuracy but about trust in foundational logic, which is a much, much deeper issue, as you have implied.

Appreciate the resource. Will dig in.

Matt

This is definitely possible, but I think you're only really engaging with current capabilities. When the models can do all the thinking better than us, including the high-level, long-range strategic and creative thinking, it very well might not be just unpacking bundles of tasks and knowledge that are currently jobs and reformulating them. We might just be horses in the end.

Rahim Hirji

If models get better than us, then it'll be more than unbundling! That's a different reckoning and redefinition.

Matt

My only point is it's possible we get unbundled out to pasture 😆

Moshe Klein

This article was about 98% written by ChatGPT. Oh well.

Ned
May 29 (edited)

This piece, ironically (or not?), has ChatGPT o3's tone and sentence-structure fingerprints all over it, which does a good job of implicitly making the point that writers who can express themselves in an authentically human way will still have a lot of capital in this brave new world.

Trivium Schoolhouse

I love this message and how you distill the challenges and next steps so clearly, Rahim. I agree with your assessment that AI won't 'take your job' but will instead synthesize the role into what is uniquely human about it and remove much of the 'task' work. I do believe that will reshape some industries, but the message, to distill the job (and the department and organization) into its task and human components, seems very on point to me.

I'm mostly concerned for the next few generations growing up in this world, as tech and AI emerge to reshape what will be most valued in the workforce. It will NOT be the skills they're learning in school today: the core curriculum and the teacher-led instruction model are just not compatible with a world with AI. We need children to focus on what makes them human: their creativity, critical thinking, positive virtues, and learning how to work alongside AI. I'd love to connect some time to discuss any thoughts you may have on K-12 education in this AI-emergent world; I think this post of mine best conveys this particular concern: https://open.substack.com/pub/lovelifelogos/p/what-is-education-for?r=5lmiru&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false

Elle J

I enjoyed this article. It made some great points and left me feeling a bit more optimistic about things (though I'm not sure where my organization is at).

That said, I’m glad I’m at the tail end of my career.

Paul Meccano

As bright as you obviously are, you neglected to make the choice: to not step into the space that is eroding your heart. Instead, you seem to be backing it… yes, that thing that is currently eroding your heart. You seem to be saying that you want to feed the very thing that, currently, is eroding your heart so that it doesn't in future…!!!

Do I make my point?

Rahim Hirji

Thanks for reading. Change is rarely clean or comfortable. My goal is to name the shift so people can navigate it with agency, not just fear.
