From Networked Selves to Prompted Realities
In 2008, I wrote an academic essay exploring how social networking sites were reshaping our understanding of social interaction, identity, and technology. At the time, Facebook had only recently opened to the general public, Twitter was still in its infancy, and Instagram didn’t yet exist. It was a moment of transition, from a web of information to a web of people. I was fascinated by the way digital tools were not just facilitating communication, but rewriting the rules of visibility, intimacy, and performance.
Looking back almost two decades later, I’m struck not only by how many of those dynamics persist, but by how they’ve deepened. If in 2008 the network was a mirror of society, today it’s a prism, bending, refracting, and even generating the realities we inhabit. The arrival of generative AI marks a profound shift: we’re no longer just networking; we’re negotiating our identities with machines that simulate context, intention, and voice.
The premise of that early work was that technological artifacts are never neutral. They don’t just support social life, they script it. Today, the script is written collaboratively, line by line, with a machine.
Networks as Social Artifacts
In the original essay, I distinguished between “social networks” as sociological constructs, webs of interpersonal relations, and “social network sites” as technological systems that expose, shape, and sometimes distort those relations. Platforms like Facebook introduced structure: profile, status, friend list, and wall. But with that structure came constraint.
The curated self was born here, forced into a singular presentation to audiences that once were distinct: friends, family, colleagues, and strangers. I described this as a form of “identity compression,” where complex selves were flattened into digestible, platform-friendly personas. There was no room for ambiguity. The software rewarded clarity, continuity, and legibility.
But identity isn’t legible. Not fully. And forcing it to become so, through algorithmic profiles and public feeds, wasn’t just a technical choice. It was a cultural one.
From Status Updates to Prompts: AI as the New Interlocutor
Fast forward to 2025, and the curation hasn’t disappeared. It’s metastasized. We don’t just write into platforms anymore, we write with them. The prompt is our new mode of address. The AI is our new writing partner.
Tools like ChatGPT, Midjourney, and Sora don’t simply reflect. They respond. And in that response, they participate in the formation of meaning. When I use GPT to help draft a difficult email or brainstorm a piece like this, I’m not outsourcing cognition. I’m splitting it, and with that split comes dissonance: a growing distance between what I mean, what I say, and what I remember saying. There’s a strange intimacy in seeing a machine echo my voice, sometimes better than I can.
This collaboration forces a new identity tension: not between public and private, but between authored and generated. If an AI writes your dating bio, your apology, your love letter, did you actually say it? Or did you just approve it?
We used to ask if the internet was making us more connected. Now the question is: are these tools making us more ourselves, or less?
Situated Action, Revisited
In 2008, I leaned on Lucy Suchman’s theory of situated action: the idea that human behavior is not the result of abstract plans, but emerges in response to environments. I likened it to a key seeking the right lock, and vice versa.
But today, the environment is synthetic. It adapts. It predicts. It autocompletes.
A prompt is no longer just a question. It’s a situation. A model’s reply is no longer just a suggestion; it’s an actor in that situation. These systems don’t just respond; they co-construct. And that collapses the old boundaries between tool and collaborator.
What does it mean to be a self when that self is shaped in part by an agent that has no self of its own?
The Crisis of Identity (Again)
When I first wrote about identity collapse in 2008, the concern was about context collapse: professors friended by students, employees surveilled by bosses, children exposed to parents’ digital lives. Now the collapse is ontological.
We’re seeing the erosion not just of who sees us, but of who authors us. Generative tools anticipate, simulate, and regenerate our expressions before we’ve fully formed them. They fill in our blanks. And if we aren’t careful, they replace our hesitations — those fertile moments of ambiguity where meaning is born — with confident, contextless text.
A colleague recently admitted: “I don’t even know if I wrote that proposal or if GPT did. I just prompted and edited.” That should give us pause. Not because it’s lazy, but because it signals something deeper: a blurring between self-expression and synthetic suggestion.
We used to fear being watched. Now we should ask: are we still writing ourselves, or just curating the machine’s version of us?
What We Still Haven’t Learned
If I underestimated anything back then, it was how quickly a revolutionary technology becomes banal. Social media reshaped public discourse, rewrote norms of trust and authenticity, and yet it slipped into our lives like water through a crack.
With generative AI, we have a narrow window to pause and ask better questions. Not “What can it do?” (its pervasive adoption seems inevitable), but “What is it going to do to us?”
When every idea begins with a prompt, what happens to originality? When apologies are written by bots, what happens to accountability? And when speed and fluency become our highest UX values, what happens to depth?
We need to stop evaluating AI purely by output quality. We need to ask what kind of inputs — social, emotional, educational — we’re now compromising on.
Who’s Looking Back?
Looking back, I’m struck by how many of the early concerns hold up. The stage has changed — social networks have faded into the backdrop — but the drama is the same: how we adapt, subtly and profoundly, to the tools we build.
In 2008, we were just learning to live in public.
In 2025, we are learning to live with synthetic cognition, tools that seem to think, but don’t; tools that amplify us, but also erode our edges; tools that could liberate us, but might just as easily automate away the nuance, friction, and ambiguity that make human experience meaningful.
The question isn’t whether we’ll adapt. We always do.
The real question is whether we’ll remain awake to what’s being negotiated, surrendered, or reshaped in the process.
Because behind every artifact — be it a Facebook profile or a generative prompt — there’s a mirror. And it’s worth asking whether what’s looking back still remembers who we were before we asked.