Are you talking to a robot?

.. or is that your robot they’re talking to?
The superhuman artificial abilities of software engineers
Agentic AI is a class of artificial intelligence that focuses on autonomous systems that can make decisions and perform tasks without human intervention. The independent systems automatically respond to conditions, to produce process results.
— Wikipedia
Ironically, the first profession we software engineers decided to replace with AI was our own.
Apparently it wasn’t quite a decision at first: while we figured GPTs could talk, we didn’t realize OpenAI’s GPT-3 could code until it showed us. Only then did we decide to train it specifically for coding and make it into a product.
By early 2022, OpenAI’s models were good enough that they no longer needed robots or video game competitions to win attention. GPT-3’s unexpected ability to code inspired the company to train it on more code and release a private test version in the fall of 2021. They called it Codex, and it was intended to help software engineers write code. That fall Microsoft also incorporated a preview version of the technology, which it called GitHub Copilot [..]
— Keach Hagey, The Optimist (p. 262)
Prompts and copilots are now slightly more mainstream. People know of ChatGPT and play around with the free version, and fewer but enough people use the confusingly named Microsoft Copilot at work (not to be confused with GitHub Copilot.. somehow).
But prompts and copilots aren’t Agentic AI: the human in the loop is still a core part of them. You’re asking questions, replying to answers, typing while you wait for autocomplete, and verifying how your edits went at every step.
But software engineers have a superpower the rest of us don’t have yet: Agents.
Here’s a free-form quote from one of my friends talking about his current software engineering work:
At any one point I’ll have somewhere between 2 and 4 agents doing work. 4 is probably my limit.. I have to babysit them. Sometimes they stop, and I just have to go there and say “yep, you’re doing well, keep going!” and they say “well, this seems hard” and I say “yes, I know it’s hard, but keep going!”
^ Does that look anything like prompting or copiloting to you?
It’s not. Some software engineers are playing a completely different game from the one you and I are playing.
How many prompts did you use last month with ChatGPT? 50? 100? 500?
Well, the top user on this Claude Code Leaderboard has spent almost $10,000 on AI in the past 30 days, burning through 8.7B tokens—we’re talking millions of prompts in 30 days. At 50 prompts a day averaging 1,000 tokens per prompt, it would take you 477 years of prompting to use as much AI as he did.
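If you want to sanity-check that claim, the back-of-the-envelope math is short:

```python
tokens_used = 8.7e9        # the top user's 30-day token count
tokens_per_prompt = 1_000  # the rough average assumed above
prompts_per_day = 50       # a heavy "normal" ChatGPT habit

days = tokens_used / (tokens_per_prompt * prompts_per_day)
print(days / 365)  # ~476.7, i.e. roughly 477 years
```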

We mere humans don’t yet have access to these magic powers: Agentic AI at this level of sophistication is currently reserved for software engineers who have mastered Zed, Warp, Cursor, Claude Code, and similar tools to help them write code.
But don’t fool yourself: this restriction won’t last forever.
Agentic AI is here to stay, and it will take non-software-engineering tasks by storm when it arrives for the rest of us.
But what could that look like? Why would we do that? And what are the problems we’ll run into?
Let’s start by talking about.. talking.
It’s weird that we started with code..
While software engineers vary in their opinions about the usefulness of AI for coding, they all seem to agree on one thing: AI often makes some really dumb programming decisions.
The problem with programming is that the bar for correctness is much higher than the bar for our day-to-day language. I can say “yosup” and you kind of understand what I mean, while if AI adds a space in the wrong place, your Python app might break at runtime.
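To make that concrete, here’s a toy example where a single misplaced indent silently changes what Python code does:

```python
def total(items):
    t = 0
    for x in items:
        t += x
        return t  # indented one level too deep: exits after the first item

print(total([1, 2, 3]))  # prints 1, not 6. No crash, no error, just wrong.
```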
But there’s a critical difference between talking and programming: while we’re more and more OK with having AI code for us, we’re still not OK with having AI speak for us.
Because, of course, AI can make some really dumb decisions, programming or otherwise. And we don’t want to be blamed for those. And unlike with code, there are typically far fewer controls over what we say.
This fear of AI talking for us, I predict, will change. AI will get a bit better at not making dumb decisions, but I suspect that for the most part, we’ll just accept that AI messes up sometimes, and that this is OK given all the work it’s doing for us. The price of admission.
Expedience will beat perfection, because that’s how the incentives are aligned. This is a systems game: like email versus handwritten letters, if you’re not in the game, you’re simply out of it.
You can already see it today: how much of the content on the internet is generated by robots? .. and is the fact that the robotic content isn’t any good stopping us from using and seeing even more robots? Thought so.
A number of professions, particularly in tech, involve talking: product management, project management, people management, people operations, marketing, sales, customer success. Talking, talking, talking.
This talking happens mostly over 4 digital channels today, and 3 of them are text: email, Slack/Teams, and texts. The only non-text channel is Zoom calls.
This myriad of async text mediums, I’m afraid, is a perfect breeding ground for talking agents.
The hyperactive hive mind .. on steroids
In A World Without Email, Cal Newport argues that a paradigm he calls the “hyperactive hive mind” describes much of how email and Slack work at companies today:
The Hyperactive Hive Mind: A workflow centered around ongoing conversation fueled by unstructured and unscheduled messages delivered through digital communication tools like email and instant messenger services.
— Cal Newport, A World Without Email: Reimagining Work in an Age of Communication Overload (p. xvii)
While Newport’s thesis is that these messages make us more distracted and less productive, killing our ability to do deep work, he also acknowledges that you can’t just unplug from the hive mind on your own—that’s where all the work is happening, after all.
There are many issues with professional digital communication today, and I’ve written about them before. One very important one is that digital communication has no backpressure mechanism: the people sending you messages have no way to tell whether you can receive them or not.
(In fact, as a fun aside, did you know Slack administrators have stats on who’s sending the most messages, but no stats on how many messages anybody is receiving? The numbers would be mind-boggling, I suspect.)
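If “backpressure” is an unfamiliar term, it’s the systems word for a receiver being able to slow down or refuse a sender. Here’s a minimal, purely illustrative sketch of what our inboxes are missing:

```python
from queue import Queue, Full

inbox = Queue(maxsize=10)  # an inbox with backpressure: it can say "no"

def send(msg: str) -> bool:
    try:
        inbox.put_nowait(msg)  # delivered only if the receiver has capacity
        return True
    except Full:
        return False  # the sender learns you're overloaded and must back off

# Email and Slack behave like Queue() with no maxsize: delivery always
# "succeeds", so senders never find out the receiver is drowning.
```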
Right now, we’re automating production but not consumption. Automatic reminders, auto-responders, password resets, mailing lists: many of the emails we receive are sent to us by machines to be read by humans.
There’s no reason why emails and messages have to be read by humans in a world of widely deployed LLMs, and there are many incentives for them not to be—the most important being that everyone is already overwhelmed by messages, and message creation will keep scaling through more and more automation.
In a world where the number of messages you receive grows by an order of magnitude (say 10x-20x), the incentives to increase focus, avoid distractions, and be more efficient are just too strong. Humans trying to keep up without the help of AI will simply be too slow.
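What would automated consumption even look like? Probably something like this sketch, where `llm.classify` and `llm.draft_reply` are hypothetical stand-ins for whatever model API you’d actually call:

```python
# Hypothetical message-triage agent: every name here is illustrative.
def triage(messages, llm):
    urgent, digest = [], []
    for msg in messages:
        label = llm.classify(
            msg.body,
            labels=["needs_me_today", "agent_can_answer", "fyi_digest"],
        )
        if label == "needs_me_today":
            urgent.append(msg)
        elif label == "agent_can_answer":
            msg.reply(llm.draft_reply(msg.body))  # the part that should scare you
        else:
            digest.append(msg)
    return urgent, digest  # the human reads 5 messages instead of 200
```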
And once consumption of messages becomes something that AI does for us, all hell will break loose.
From “Claude made me do it” to “Claude did it”
With software engineers, it’s already not uncommon for something to be caught in code review, only to be justified with “well, AI did it this way *shrugs*”. That’s where the joke “Claude made me do it” comes from.
With Agents, AI will not make us do anything.. it will do things itself, on our behalf. We’re already lowering our bar for what passes for “us” with copy and paste, and removing the controls will only make the problem worse.
While today we only let AI speak for us occasionally, in documentation or carefully worded emails, it will become day-to-day.
The Project Manager often asks the engineers how long X is gonna take, so he just has his agent ask for him. The Software Engineer is used to replying with however long she really thinks, times 2, so she has her agent do that for her. The Manager’s agent reads everything and summarizes it for the Director. I mean, the Director’s agent.
The poor employee who doesn’t have an agent, though, is caught in a quagmire: just keeping up with Slack now requires 60 hours a week without AI’s help, and everyone thinks he’s slow, the bottleneck of the hyperactive hive mind. So he’s unproductive either because there’s not enough time to talk, or because there’s no time left to work.
And then there are all the impersonation problems. How does constructive feedback work in this world? Is “you don’t ask enough questions” fair feedback for a person whose agent does the asking for estimates? What happens when your agent hallucinates?
If we already deflect blame when we copy and paste crap or have AI write junk code in our name, how much more will we deflect when it’s doing crazy things all on its own?
And how can we tell when we’re talking to somebody versus their agent?
Sign here, human.
GPG is a tool from 1999 that allows you to sign and encrypt messages. Nobody uses it, really, because it’s too convoluted even for tech-savvy computer users.
But you know how, when you go to Chase’s or Amazon’s website, as long as it says amazon.com in the address bar, you know it’s not an impostor and you can give it your credit card? That’s public key cryptography giving you digital security guarantees (at least until quantum computers come around).
Creating content is easy today, and there’s already an issue with impersonation—be it on social media or news websites.
To me, the problem is that we’re used to a world where digital identity counterfeiting through content was hard, and so none of our content is signed, just like the internet was prior to the advent of https or how money was prior to the ubiquity of high-quality printers.
In a world where content can be created infinitely and almost for free, ensuring you are you becomes much more important—particularly if a lot of the content mistaken for yours is actually coming from your robots.
Now I don’t have a great answer for this yet, but I foresee a resurgence (or ..surgence?) of public key cryptography for signing content, to separate the genuine article from impersonation. Be it from your robots or others’, the default will be that most content, most of the time, will actually be coming from robots and not people.
One way it could look is this:
To me, the problem is that we’re used to a world where digital identity counterfeiting through content was hard, and so none of our content is signed, just like the internet was prior to the advent of https or how money was prior to the ubiquity of high quality printers.
---
Signed by: https://duion.life/content/files/2025/08/key.pub
iHUEARYIAB0WIQRSlP3LN+PV6w+/jwk828/15GS2CwUCaJVT6wAKCRA828/15GS2C6mMAP9iDBqtF8hdPgPnhi4SYD/CR/vtPdXgtbGYaN9iPPcvDwD/WA+AGxYM2ur+bZVc6cy21NiLRtJwevvlDa5F/pnVow4=
The above statement was definitely made by someone controlling https://duion.life — no counterfeiting is possible. While you can’t prove that the person behind duion.life is actually Dui, you know that whoever wrote the text above has access to the private key matching the public key linked above, which is hosted behind TLS on that domain.
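For what it’s worth, the machinery underneath is small. Here’s a minimal sketch of sign-and-verify using an Ed25519 key and Python’s cryptography package; real GPG layers key management, trust, and ASCII armor on top of this:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # stays with me, never with my agents
public_key = private_key.public_key()       # this is what you'd host on your domain

statement = b"The above statement is definitely made by..."
signature = private_key.sign(statement)

try:
    public_key.verify(signature, statement)  # raises if even one byte changed
    print("signed by whoever holds the private key")
except InvalidSignature:
    print("counterfeit")
```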
And before LLMs and agents, verifying signed content like that was a pain.. but with AI, GPG is easy again:

There you go. GPG is user-friendly again, a quarter of a century later.
In this new world of agents and impersonation, at least I, personally, will want to be careful about which content I sign ..
.. and I’ll also want to make sure that my agents can’t access my private keys and go around signing stuff in my name.