A Question for the AGI Builders
Consciousness in the commit log
We are living through a strange, exhilarating moment. I have spent the better part of twenty years watching infrastructure evolve (from mainframes and terminal emulators to serverless functions and global edge networks), and I have never seen a technological shift ripple through the human organization quite like this one. We used to worry about uptime and latency (the hard, measurable metrics). Now, we are all worrying about meaning.
Look at the pace of development. Every VC deck today features some mention of generative models or the path to AGI. We are stacking transformer layers higher and wider than anyone could have dreamed even five years ago, building systems that exhibit emergent reasoning. The cost of training something like GPT-4 alone is staggering (reportedly well over $100 million), and the results are rewriting not just code but the very nature of human work, from marketing copy to low-level programming. The technical achievement is undeniable, a testament to engineering prowess.
But this is Disruptive Empathy, and my whole focus has always been the soft stuff, the wetware, the human side of the algorithm. We talk endlessly about “model alignment,” meaning making sure the AI’s goals match ours (preventing Skynet scenarios, basically). Yet in all the white papers and startup retreats, I find myself circling back to the deeper, messier, truly disruptive questions of human alignment.
In the rush to create synthetic intelligence, we seem to be bypassing the introspection required to manage the people building it. If we are seriously talking about creating entities that can debate philosophers or write their own documentation (which many of these models can do, albeit imperfectly), we must surely stop and ask what we are sacrificing, or ignoring, in ourselves.
Back when the Commodore VIC-20 first shipped (the $299 “Friendly Computer” that brought computing to the masses), the challenge was defining what a personal computer was. The answer was simple enough: a box of circuits. Today, the challenge is defining what a person is, and what value our emotional and mental presence adds when a token-based prediction engine can outperform us on the bar exam.
This isn’t just about job displacement (a topic for another essay). This is about the psychological state of a team that is building something that fundamentally questions its own value proposition.
How do you, the engineering leader (or the product manager, or the founder staring down a Series A), ensure the well-being and sense of purpose of your highly skilled teams when they are actively trying to automate away the tasks that used to define their careers? How do you maintain an empathetic culture when your core technology is constantly blurring the line between tool and colleague?
If the future of work involves us simply correcting the hallucinations of sophisticated silicon, then the soft skill of discernment (the ability to know when a thing is wrong, not just how to make it right) becomes the most valuable commodity in the world. And that skill is built on consciousness, empathy, and intellectual self-awareness. These are things that cannot be learned from a data set.
So, I’ll ask the question directly to those of you running sprints in Nashville, in Austin, or anywhere else the next trillion-parameter model is being spun up:
How do your teams deal with consciousness? (Their own, I mean, not the machine’s.)
What psychological guardrails do you have in place for the engineers who know they are building their own replacement, or at least fundamentally reshaping their future role? Are we treating the people building AGI with the same meticulous care we are applying to aligning the AGI itself?
I’d genuinely love to hear your experiences, because the code only runs as well as the culture supporting it.
(Special thanks to
for inspiring this post. Don’t sleep on the good work he is doing in the US, Ghana, and elsewhere.)

