Is this something big?
An investor's essay on how AI will disrupt nearly every job has gone mega-viral. Is he right? Should we be afraid? Or is there hope?
My last post, “We are not alone,” proved quite popular. In it I wrote about the development of ClaudeCode and OpenClaw as startling examples of AI acceleration. While I don’t intend to write only about AI, this is a topic that refuses to leave me at peace, so I’m writing about it again today.
I’ll be honest, I am tempted to get into the Epstein affair, which I find so profoundly disturbing. The more I read, the more discouraged I am by the depth of the evil exhibited, the number of people involved, and the utter lack of accountability. US authorities have arrested no one except Epstein and Maxwell. It’s shocking. It’s despicable. A reckoning is coming. But that’s not today’s topic.
Today I’m returning to the rapid ramp-up of artificial intelligence, particularly what it means for the future of work. This might be not only the most important theme of the year, but the most important story of the next several hundred years. I feel sheepish when I look at that sentence. Talk about risk of overhype! Even so, I think the significance might not be overstated. Let’s dig in.
It’s coming and it’s coming quickly.
Something big is happening, says AI investor Matt Shumer. He says it is already here, but most of us do not yet feel it. He likens our moment to February 2020, when people started hearing rumblings of a virus spreading in China. Then, within weeks, the world had shut down. Artificial intelligence, he says, is similarly going to change the world, but much more permanently. Most people will be surprised when it hits and we experience the equivalent of lockdowns and runs on toilet paper.
Shumer’s X.com post, “Something Big is Happening,” has been viewed more than 80 million times already. He doesn’t say anything wildly different from what you might read daily on Twitter, but it has struck a chord, so I thought it a good jumping-off point for thinking about how AI might very soon disrupt so many jobs.
Shumer’s core argument is that AI in 2026 has crossed a threshold and moved from "helpful tool" to something that can genuinely replace skilled workers, starting with software engineers but rapidly expanding to all knowledge work.
The whole essay is worth reading, but I’ll share some of the key ideas as a summary. Then I’ll share some of the more interesting responses I’ve read this week. And I’ll also share the work of a pastor friend of mine who is working through a theology of artificial intelligence that is neither ignorant nor without hope.
1. A machine with judgment and taste?
Shumer is most amazed by the way the newest AI models not only operate largely independently but display a capacity that feels, for the first time, like “judgment” or “taste.”
Let me give you an example so you can understand what this actually looks like in practice. I’ll tell the AI: “I want to build this app. Here’s what it should do, here’s roughly what it should look like. Figure out the user flow, the design, all of it.” And it does. It writes tens of thousands of lines of code. Then, and this is the part that would have been unthinkable a year ago, it opens the app itself. It clicks through the buttons. It tests the features. It uses the app the way a person would. If it doesn’t like how something looks or feels, it goes back and changes it, on its own. It iterates, like a developer would, fixing and refining until it’s satisfied. Only once it has decided the app meets its own standards does it come back to me and say: “It’s ready for you to test.” And when I test it, it’s usually perfect.
These newest models from OpenAI and Anthropic are not just a little more capable; they are far more powerful. The recursive improvement of AI technology feels like it is growing along an exponential curve rather than a linear one. Humans tend to think linearly and have trouble imagining exponential growth - and that defect in our ability to project could lead us to vastly underestimate both the pace and the scale of the change we are about to experience.
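The linear-versus-exponential point is easy to feel with a toy calculation. The growth rates below are invented purely for illustration; they are not measurements of AI progress:

```python
# Toy illustration (invented numbers, not a measurement of AI progress):
# two processes start at the same capability level. One adds a fixed
# amount each month (linear); the other compounds (exponential).
linear, exponential = 1.0, 1.0
for month in range(24):
    linear += 0.5        # fixed gain: +0.5 units every month
    exponential *= 1.5   # compounding gain: +50% of the current level

print(f"After 2 years, linear: {linear:.1f}")             # 13.0
print(f"After 2 years, exponential: {exponential:,.1f}")  # roughly 16,834
```

For the first few months the two curves look almost identical, which is exactly why linear intuition tends to underestimate compounding processes until the gap is suddenly enormous.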
2. “But I tried AI and it wasn’t that good”
The author makes a good point that many people who aren’t in tech, or who are only using the free versions of Claude or ChatGPT, won’t recognize the power and potential of the latest models. They are still operating as if AI were a glorified search engine, one that frequently hallucinates. If that is you, you likely need to update your mental paradigm. Shumer writes:
In 2022, AI couldn’t do basic arithmetic reliably. It would confidently tell you that 7 × 8 = 54.
By 2023, it could pass the bar exam.
By 2024, it could write working software and explain graduate-level science.
By late 2025, some of the best engineers in the world said they had handed over most of their coding work to AI.
On February 5th, 2026, new models arrived that made everything before them feel like a different era.
If you haven’t tried AI in the last few months, what exists today would be unrecognizable to you.
Moreover, the AI models are now so advanced that they are writing new versions of themselves. Dario Amodei of Anthropic says that much of the code for the new version of Claude is being written by the previous version of Claude. OpenAI’s new GPT-5.3 Codex model was released with this note: “GPT-5.3-Codex is our first model that was instrumental in creating itself. The Codex team used early versions to debug its own training, manage its own deployment, and diagnose test results and evaluations.”
In other words, the software is getting better at writing the code that creates code-writing software, and it will continue doing so as long as it has enough compute to scale.
What does that mean for human software developers? Where does that place them? One imagines that software engineers in 2026 might describe their work the way music producer Rick Rubin has famously summarized his: claiming little technical ability, but great confidence in his own taste.
It doesn’t end with software development. The implication is that if you are in knowledge work, AI is coming for your job. Legal work will be upended by AI reading contracts, summarizing case law, and drafting briefs. Financial work will be transformed by AI building financial models, analyzing data, and writing investment memos. AI writing quality has advanced to the point where most people can’t tell the difference between human and computer outputs in marketing copy, journalism, and technical writing. AI is approaching or exceeding human levels in multiple areas of medicine - reading scans, analyzing test results, suggesting diagnoses.
…nothing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn’t “someday.” It’s already started.
3. What should you do? Lean into it
In the last part of the essay, Shumer suggests that the only wise response is to become an early adopter of AI and lean into learning how to use it to advance your own capabilities. He argues, “This might be the most important year of your career. Work accordingly.” I think he means that literally, not figuratively. The technology is developing quickly enough that 2026 or 2027 could be a hinge point in history. Is that true? I don’t know, but it seems plausible. So how do you “work accordingly”? Shumer says:
The person who walks into a meeting and says "I used AI to do this analysis in an hour instead of three days" is going to be the most valuable person in the room. Not eventually. Right now. Learn these tools. Get proficient. Demonstrate what's possible. If you're early enough, this is how you move up: by being the person who understands what's coming and can show others how to navigate it. That window won't stay open long. Once everyone figures it out, the advantage disappears.
Maybe AI can help you learn how to respond to the growth of AI. I read a post from @godofprompt sharing advice in the form of a super in-depth prompt you can give to an AI to have it coach you on how to respond to the threats AI poses to your particular career. It addresses the one question that matters: “What should I do about this on Monday morning?” If nothing else, it’s worth reading to see how structured and in-depth a power prompt can look if you want to get the most out of using AI.
What do we make of all this?
This piece went viral because it’s well written, from a believable person, and it’s scary. Is it true? I would put it this way: there is a lot of truth to it. But it’s not the whole story.
I came across a short response to the essay from Isaac Saul, who runs an independent, non-partisan politics newsletter. He explains that computer code is really a structured language full of defined patterns, so software engineers tend to assume everything is a pattern; if AI can take their job, they overestimate how well it can do everything else:
The truth is there is a lot more disorder, unpredictability, and humanness in so much of our lives — and our work — that I don’t think AI applications will always (or even often?) be able to account for.
Matt, for instance, lists journalism as a job in trouble thanks to AI (not that our industry needs more trouble). And it's true that AI can read documents fast and do incredible research and even write clean copy and edit -- it will probably eliminate or reduce the need for some jobs!
But you know what it can't do? It can't work a source over for years on end. It can't / doesn't / won't bear witness to live events. It reminds me of the famous Good Will Hunting scene, where Robin Williams is chastising Matt Damon about being such a smart ass but not being able to describe what the Sistine Chapel smells like. Damon is the AI.
The Doorman Fallacy
The writer Sahil Bloom made a similar point in his post, “Something Bad is Happening (But It’s Not What You Think).” He warns of the risks of replacing people with AI by sharing a concept from Rory Sutherland called “The Doorman Fallacy”:
In his 2019 book, Alchemy, legendary British advertising executive Rory Sutherland offered an interesting lens on the perils of automation:
Imagine you’re the owner of a five-star hotel and you hire a consulting firm to come in and propose opportunities for efficiency improvements.
The consultant observes the operations of the hotel and suggests that the role of the doorman can be automated. He currently costs $40,000 per year. You can install an automatic door-opening mechanism and save that money annually.
You accept the proposal, fire the doorman, and install the automatic door. The consultant walks away pleased with their great work (and the nice payday they received for it).
However, Sutherland goes on to explain (podcast interview link), the consultant didn’t understand the full scope of what the doorman was doing, beyond the technical work of opening the door:
“Two years later, the hotel’s a catastrophe...because the doorman was doing multiple things, many of which were human and kind of tacit. Security would be one...Hailing taxis, dealing with luggage, recognizing regular guests, providing status to the hotel—there are loads and loads of value creation components to that doorman which aren’t captured in the open-the-door definition.”
Bloom argues that in anything we do, there is a “surface value” and a “real value” and when we mistake the surface value for the real value we make poor decisions. It can happen at work but it can happen anywhere. Bloom continues,
So, for example, imagine you’re thinking of hiring a meal prep company to replace the need to cook dinner for your family:
Costs of Cooking: Well, it takes me one hour to cook dinner for my family in the evenings. I could get that hour back. Plus it would save time cleaning up afterwards since it would just be reheating the meals and a few dishes.
Value of Cooking: My family eats dinner and has calories to survive.
Based on this, you decide to hire the meal prep company. The optimized alternative will get you back that time, deliver the same value, and it’s reasonably priced relative to groceries.
But after a few months, something doesn’t feel right. You’re less connected to your family. Your kids grab their pre-prepared meals and eat in front of the TV. You and your partner grab yours and eat in front of your respective computers, firing off emails the entire time.
As it turned out, the Real Value of cooking dinner for your family went far beyond the Surface Value of the food and calories it provided. The Real Value had very little to do with food. It was about connection. About a ritual. About slowing down. About doing something together.
So when Anthropic CEO Dario Amodei predicts that 50% of entry-level white-collar jobs will be wiped out in the next 1-5 years, I think we should pay attention. However, we should also pause and consider whether that prediction is based on eliminating only the surface value of those jobs without accounting for their real value.
The seen and the unseen
One more response, from an essay called “AI isn’t coming for your future. Fear is.” The writer, Connor Boyack, responding directly to Shumer’s essay, says this:
…The fear you’re feeling right now, that sinking sense that the rug is being pulled out from under you, is one of the oldest and most consistently wrong reactions in human history. It has a name. It has a pattern. And it has a track record of being spectacularly, almost comically, incorrect.
Not once or twice… like, every single time.
He goes on to explain that when you read an article about how AI will replace jobs, your brain imagines current workers losing their current jobs and sitting idle. “It does not (it cannot) simultaneously imagine the new roles, new industries, and new forms of value that will be created. Because those things don’t exist yet. They’re unseen.”
Boyack describes this as a failure of imagination due to our brains being wired to respond to visible threats rather than to anticipate invisible opportunities.
He goes on to share several examples, ranging from the Luddites to ATMs and bank tellers, that tell a common story: a new technology provokes fear that people will be replaced when, time and again, the technology creates new opportunities and new jobs. Here’s the example he starts with:
In 1589, an English clergyman named William Lee invented something remarkable: a mechanical knitting frame that could produce stockings far faster than any human hand.
He brought it to Queen Elizabeth I, hoping for a royal patent. She refused.
In a story recorded centuries later, Elizabeth I is said to have told William Lee, “Thou aimest high, Master Lee… It would assuredly bring, to [hand knitters], ruin by depriving them of employment, thus making them beggars.”
Read that again.
A monarch (reportedly) looked at a machine that would make clothing faster, cheaper, and more accessible to her subjects, and her instinct was to block it. Not because it didn’t work but because it worked too well. Because she could see the hand knitters who would lose their immediate livelihood, and she could not see everything that would come after.
Lee was forced to flee to France, where he died in poverty.
But the technology didn’t die with him. His brother brought the knitting frames back to England. The machines spread. And over the next two centuries, that very same textile industry—the one Elizabeth tried to protect by freezing it in place—became the engine of the Industrial Revolution.
England didn’t just keep its textile jobs. It became the wealthiest nation on the planet because of mechanized textile production. The industry grew orders of magnitude beyond what hand knitters could ever have produced. New jobs, new trades, and new entire economic classes emerged that Elizabeth couldn’t have fathomed.
She saw the hand knitters. She could not see the factory workers, the merchants, the engineers, the shippers, the global trade networks, the rising middle class… and the entire modern economy that the knitting machine helped set in motion.
“The seen” destroyed her judgment. “The unseen” would have saved it.
Connor sees this story repeat itself time and time again throughout history - always with the new technology creating fear of job loss, only for new jobs to be created and the economy to expand. He calls it the “fixed-pie delusion”: the instinctual belief that the economy is a zero-sum game in which “there is a fixed amount of work to be done, and if a machine does more of it, humans must do less.”
But, he says, the economy is not a fixed pie. “It’s a garden. And technology is rain.”
I like that argument, and the examples hearten me. However, the question becomes, “Is this time different?” It’s one thing for a mechanical knitting frame to threaten hand-knitters, who could go find other handwork. Never before have we had a technology that could so broadly disrupt so many industries at once, from law to medicine to finance.
Interestingly, Boyack suggests the same response as Shumer: people should lean into learning this new way of working with AI. Shumer advocates for it out of fear: if you don’t do it now, you’ll be permanently left behind. Boyack suggests leaning into AI as a way to capitalize on the future:
That question—what is this making possible?—is the most valuable question you can ask right now. About AI, about your career, about your life.
The knitting machine didn’t ruin England. It made it the wealthiest nation on earth.
The power loom didn’t destroy the textile industry. It expanded it beyond anyone’s imagination.
The computer didn’t end employment. It created the modern economy.
AI won’t shrink your future if you refuse to let fear shrink your vision.
We Are Not Alone, But We Are Not Adrift
I’ll close with this surprising essay I received from Scott Crosby, who used to be one of my pastors in New York. It is called “We Are Not Alone, But We Are Not Adrift.” Crosby is doing something I wish more people of faith were doing: grappling with the reality of AI at a serious theological level.
His work explores “cross-ontological flourishing,” a big phrase which he describes as tackling this question:
Can different kinds of created things—humans, artifacts, computational systems—participate together in God’s purposes according to their distinct natures, without collapsing those distinctions?
Crosby’s essay describes his research, which includes how giving an LLM a Creation-Fall-Redemption framework results in more stable semantic relationships and less hedging and oscillation, yielding more grounded outputs.
Crosby offers a way of framing the language we use for AI as “analogical” and I found it helpful:
When we talk about different things participating in God’s order, we’re using what’s called analogical language. Christians have always learned to understand new things by analogy—by asking how they fit into the patterns God has already revealed in creation and Scripture.
Think of it simply:
- A dog can be “faithful” and a friend can be “faithful”—there’s real similarity, but not in identical ways
- A map “represents” terrain and a photograph “represents” a scene—truly similar function, different methods
- Angels “know” and humans “know” and animals “know”—each according to their created nature
This is different from two other ways of thinking:
- Treating things as identical: “AI thinks” meaning exactly what “humans think” means → leads to consciousness panic
- Treating things as completely unrelated: “AI processes” having nothing to do with “humans understand” → leads to pure mechanism, no respect
The third way—analogical participation—says: AI maps meaning in a manner analogous to how humans understand meaning—truly similar in that both involve patterns, relationships, and coherence, yet essentially different because humans possess interiority, intentionality, and personhood while computational maps do not.
Crosby closes with some pastoral advice, including reasons for hope within a Christian worldview that I found wise and comforting. If you are a believer (or if you aren’t but you are looking for words of hope somewhere), I’d encourage you to read.
We live in a remarkable time. This is a moment filled with opportunity and risk. My prayer is that we would not be ignorant of the changes that will likely affect so many of us but also that we would not be slaves to fear.
For me that means taking time to reflect rather than just reacting. When I read a single essay like “Something Big is Happening,” my emotional reaction is anxiety. What helps move me beyond the anxiety is reflection.
A large part of why The Weekend Reader has been useful to me is that it forces me to reflect. I do that better when I read multiple authors who have differing views.
My reflections are improved when I compare them to scripture and theology. I confess I need guides there so I’m grateful for Scott and for others like Andy Crouch who are attempting to reflect theologically on our AI age.
My reflections are probably most improved when they are tested in conversation with thoughtful family and friends. This whole topic is a lot (an impossible amount?) to process in isolation. Thinking these things over with friends sharpens my thinking and tends to calm my nerves - not because they just say “everything will be okay” but because I feel we can face hard things better when we face them together.
We are not alone and we are not adrift.
Read widely. Read wisely. Live hopefully.
Max
If this resonated with you, share it with someone in your life who is thinking about these things (or who ought to be).
If you don’t want to miss future editions of The Weekend Reader, subscribe for free:
The views expressed here are my own and do not represent Saturn Five or any other organization I’m involved with.

I am in the legal "services" industry. I am probably well ahead of most of my colleagues in adapting to AI technology. And, you are right, AI has expanded my abilities and efficiency. The examination of contracts, legal briefs, memoranda, and depositions is much more efficient than sitting on a couch with highlighter and pen scribbling margin notes on hard-copy documents. Will it change my work-habits? Yes and no. I was in the practice of law when computers came into being. Until that time, we utilized IBM Selectric typewriters, hard copies, and 8 1/2 x 11 sheets of paper to practice law. The firm where I worked was one of the first to adopt computers in the practice of law. We assumed we would now be able to work three days a week and rest on the other four days. Our imagination about the future was deficient. Case files that were once at most an inch thick grew to banker-box size litigation files. Our work-load increased "exponentially" and now we work long after 5 PM and most weekends. Case data has exploded for each case. Our imagination concerning the future of the law was far, far from complete. An AI robot does not yet know the smell of a courtroom, let alone the Sistine Chapel.
In my daily work as a judicial mediator, I continue to be confounded by the "disorder, unpredictability, and humanness" of the practice of law. The Law is not circumscribed by words on paper or rational legalistic arguments. If you really want to get "real" about the practice of law, consider the words of Charles Bowen, aka Lord Bowen, an English Court of Appeal judge who once said, "The state of a man's digestion is a matter of more importance than the Constitution of the United States." The implication of Lord Bowen's observation is that judicial decisions rest as much upon intuitive hunches and the personality, background, temperament, and experience of the judge as they do upon any conclusions arising from logic and rationality. As a judge with ten years' experience on the bench, I often warn lawyers that jurors will make decisions based more upon an argument they had at home before coming to court than upon strict observance of the jury instructions given at the end of a case.
In other words, the Law is actually about people, not words on paper or arguments based upon reason. And those who continue to understand people and the complexity of relationships among individual people with their cultural background, family values, beliefs (spiritual or otherwise) will continue to be the life-blood of the law as well as many other occupations deemed at risk as a result of AI. Imagination and creativity arise from the interactions of human beings, each bringing their own unique framework of truth, morality, values, and beliefs to the table. I predict the soul of the lawyer, the soul of a writer, the soul of a communicator, and the soul of any creator will out-guess, out-run, and out-smart any machine with no heart, no faith, and no dreams.
Thank you for the balance!