Centuries from Now, We’ll Still Say “Literally”
If the future keeps listening to us, will it ever learn to speak for itself?
Shakespeare wrote in English, technically. But for many contemporary readers, his work feels like a cubist portrait: the elements are all there (eyes, hands, the curve of a cheek) yet rearranged into something strange. The words themselves are familiar, but the way they're ordered, the rhythms, the references, the density of allusion can feel impenetrable. Teachers explain that it's Early Modern English, and they're right. But to the average high school student, it might as well be Elvish.
This isn't surprising. Language evolves. We know that. What's surprising is how rapidly it once changed, and how that pace may now be slowing.
I started thinking about this after listening to a sci-fi story. The setup: a colony on Mars, the last outpost of humanity, a millennium after the Earth has gone quiet. Time travelers from this distant future return to the present, disguised as everyday Americans. What jarred me, oddly enough, wasn't the technology or the time travel. It was their voices. These characters, born a thousand years from now, spoke with perfect Midwestern accents. Like they'd just stepped out of a Big Ten tailgate.
It felt implausible. And then, the more I thought about it, the more believable it became.
We are living in a radically different linguistic moment. Never before has a civilization documented so much of its speech. Every video call, podcast, livestream, film, tutorial, meme—it all becomes a kind of linguistic fossil record. We’re not just transcribing what we say. We’re recording it, replaying it, reinforcing it, training our machines to mimic it.
That creates a kind of feedback loop. Language, once dynamic, begins to self-reference. And that raises an interesting question: could the very abundance of linguistic artifacts begin to slow language evolution?
The linguist John McWhorter notes that much of the language change of the past was driven by geographic separation and low literacy. People spoke the way they heard others speak, and when communities were isolated, whether by class, distance, or colonization, their language diversified. But today, we are all hearing each other all the time. In a 2016 interview, McWhorter suggested that "broadcast media may be doing more to standardize accents than any educational policy ever could."
There is data to back this up. A 2019 study in the journal Language Variation and Change tracked accent convergence in urban areas of the UK and found significant dialect leveling, particularly among younger speakers heavily exposed to national media. And in the U.S., the sociolinguist William Labov, in his decades-long studies of American English, observed that the once-distinct Northern Cities Vowel Shift has begun to recede among Gen Z speakers; researchers point to national media as a major driver.
In short: we're hearing the same voices, over and over again. And that matters.
Historically, language shifted through discontinuities—generational turnover, migration, isolation, even catastrophe. But in a world saturated with persistent recordings of everyday speech, those breaks may soften. A child in 2150 may grow up not just reading about the past, but watching it, listening to it, even conversing with simulations trained on its cadence.
Already we see a flattening. Media has become the dominant accent coach. YouTube, Netflix, and TikTok accelerate the convergence toward a kind of globalized English—American-inflected, region-neutral, algorithm-approved. Call centers train toward it. AI assistants are optimized for it. And our own voices, as we interact with these tools, begin to bend back toward their patterns.
That doesn’t mean language won’t change. Of course it will. New slang will emerge. Code-switching will persist. And cultural shifts will always reorient how we express identity through language. But the core—grammar, accent, prosody—may begin to ossify.
Linguists sometimes speak of linguistic drift—the gradual, often imperceptible evolution of language over time. What we may be entering is a phase of linguistic inertia.
And if that’s true, it’s worth asking: what are we preserving? What values, assumptions, biases, and blind spots are baked into the way we speak now, that we might be passing on not only through culture, but through code?
AI language models already carry some of these imprints. The data used to train voice assistants like Siri or Alexa is drawn from curated, largely Western sources, reinforcing specific speech norms. As sociolinguist Carmen Fought put it in a 2020 panel discussion: "When we teach a machine to talk, we’re making decisions about what kind of speaker it should be. That’s a cultural decision, not a neutral one."
Shakespeare had no idea we’d be streaming his plays on demand while multitasking. But perhaps the more interesting inversion is this: maybe someone centuries from now will stream us. Our arguments about traffic. Our jokes about Wordle. Our tutorials on folding fitted sheets.
And maybe they'll understand us. Maybe our recorded voices will shape their speech as strongly as their own parents' do. Maybe they'll hear the voice of the past, and not just comprehend it, but feel it.
Or maybe, they’ll notice a strangeness. A distance. A rhythm that doesn’t quite match their own. And in that moment of difference, they’ll be reminded: even in an era of linguistic memory, language still carries the fingerprints of its time.