Context Not Included
The accidental clarity of early text chat
If you’re of a certain vintage (the vintage that spent considerable time staring at CRT monitors, that is), you probably remember that specific, jarring sound of a computer connecting to the internet via a phone line. That digital screech was the gateway to another world: a world built almost entirely out of words.
Before the era of sleek smartphone interfaces, high-definition video calls, and a practically infinite library of perfectly curated reaction GIFs, there was the pure, unadulterated text-based internet. BBS message boards, AOL chat rooms, ICQ (remember the “uh-oh!” sound?), and AIM (AOL Instant Messenger) all had one glaring commonality: they put everyone in the same communication environment.
Just words. No facial expressions. No vocal tone. No body language. No implied subtext handed to you for free.
For most neurotypical people, this was a massive adjustment. It stripped away all the nonverbal cues they implicitly relied on to convey and interpret tone and intent. But for an AuDHD (Autistic and ADHD) brain like mine, it was simultaneously the closest thing to a native environment the internet had ever offered… and also, somehow, still confusing in entirely new ways.
Here’s why that history (and the “Context Not Included” nature of those early tools) actually matters for how we design and use communication tools today.
The AuDHD Native Habitat: A Level Playing Field
For many neurodivergent individuals, the problem with face-to-face communication isn’t understanding what’s being said, but navigating the constant, unspoken social signaling. There’s a perpetual, high-processing effort to decode eye contact (Too little? Too much?), read split-second micro-expressions of facial tension, or interpret a slightly too-long pause that apparently means “I’m upset” instead of “I’m thinking.”
Early text chat accidentally leveled that playing field. Suddenly, everyone had to state their business directly. They had to choose their words carefully because they didn’t have a pleasant smile or a relaxed posture to do the heavy lifting of showing they were friendly, or joking, or stressed.
The communication was forced to be literal. For an AuDHD brain, this was clarifying. The uncertainty of social cues was largely eliminated. We were all temporarily standardizing our communication protocol: ASCII text only. It was like suddenly everyone agreed to turn off the confusing social “feature” and just use the base OS.
The Accidental Precision of T9 and Forced Compression
Remember trying to “text” (a brand-new verb back then!) on a numeric phone keypad? T9 predictive text wasn’t just a technological marvel; it was an accidental lesson in precision and compression. Every word you wanted to type took effort and forethought. You learned to condense your ideas not just because of character limits, but because typing was work.
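To make that concrete, here’s a minimal Python sketch of the ambiguity T9 had to resolve. (The five-word dictionary and the helper names to_digits and candidates are hypothetical, invented just for illustration; a real T9 engine disambiguated against a full, frequency-ranked dictionary.) Each letter maps to one keypad digit, so one digit sequence can hide several words:

# Illustrative sketch only: a toy T9-style lookup, not a real T9 engine.

KEYPAD = {
    'a': '2', 'b': '2', 'c': '2',
    'd': '3', 'e': '3', 'f': '3',
    'g': '4', 'h': '4', 'i': '4',
    'j': '5', 'k': '5', 'l': '5',
    'm': '6', 'n': '6', 'o': '6',
    'p': '7', 'q': '7', 'r': '7', 's': '7',
    't': '8', 'u': '8', 'v': '8',
    'w': '9', 'x': '9', 'y': '9', 'z': '9',
}

def to_digits(word):
    # The digit sequence you'd press for this word in T9 mode.
    return ''.join(KEYPAD[ch] for ch in word.lower())

WORDS = ["home", "gone", "good", "hood", "hone"]  # hypothetical toy dictionary

def candidates(digits):
    # Every dictionary word hiding behind the same keypresses.
    return [w for w in WORDS if to_digits(w) == digits]

print(to_digits("home"))   # prints: 4663
print(candidates("4663"))  # prints: ['home', 'gone', 'good', 'hood', 'hone']

Five different words, one sequence of keys. You couldn’t type on autopilot; you had to stop and confirm you meant “home” and not “good,” which is exactly the forethought described above.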
This forced brevity was often a bonus for AuDHD communication styles (where directness is prized), but it also revealed something profound about context. It showed how often we assume context rather than stating it. In a limited environment like an SMS message or an AIM away message, you didn’t have room for hedging or nuanced qualifiers. You had to say exactly what you meant.
The result? Text that was often sparse, literal, and... easily misinterpreted by anyone looking for that assumed context.
New Confusions and the “Away Message” Subtext
But don’t get me wrong: as clarifying as it could be, this new environment still managed to be baffling. Because, of course, the people using the tools were still people (mostly neurotypical people) accustomed to complex social signaling. They just adapted.
The result was things like the “lol” problem. Did “lol” actually mean “laughing out loud”? Or was it just a filler word to show you weren’t angry? Was “LOL” different from “lol”? We had to develop new protocols on the fly to replace the old nonverbal ones.
And then there were the away messages. Oh, the away messages! A feature originally meant to just let people know you weren’t at your computer became a high-art form of emotional subtext. You were apparently supposed to decode the exact emotional state of your friend based on a cryptic lyric, an inside joke, or the precise combination of parentheses and underscores in their message. It was a whole new layer of implied context to worry about, just when we thought we had simplified things.
Why This History Still Matters: The Context Layer in the Human OS
Revisiting the era of AIM and ICQ isn’t just a fun dose of nostalgia. It’s a vital reminder of how context actually works in human communication. Our old, text-based tools revealed a fundamental truth that’s as relevant today as it was in 1999: we tend to ignore how much meaning we load onto everything around our words, instead of packing it into the words themselves.
Nonverbal communication does a massive amount of hidden processing for us. When that layer is removed, as it often is in modern work (Slack, emails, even this blog!), we face the same problems we did with AIM, only now on a much larger, global scale.
Today, we try to solve the “context not included” problem with emojis, GIFs, and video calls. But perhaps the real lesson from those archaic tools is that we should focus less on finding a digital replacement for tone, and more on improving the base communication protocol itself.
We need to become better, more conscious context-providers. If we want disruptive empathy (the kind that truly understands user needs or builds resilient teams), we have to start by accepting that context isn’t a given. It’s not optional. It must be built, explicitly and deliberately, into every interaction. After all, the human operating system, just like any other system, is only as good as the context you give it to work with.