Body Image in the Age of AI
- Jacqueline Vickery
- 5 days ago
Why Awareness Alone Won't Protect Teens

As a confident yet incredibly self-conscious teenager in the 90s, I spent far too much time struggling with what I saw when I looked at myself in a mirror. I didn't have the language to understand what the magazines and television and ads around me were doing to my sense of my own body.
The adults in my life offered well-intentioned messages of self-empowerment ("you're beautiful! be yourself!"), but they never named the systems causing the harm, and so left me alone with an embodied experience of a beauty culture that I could feel but could not yet describe.
By my 20s, I had - thankfully! graciously! beautifully! - been introduced to feminism, and I could name beauty standards as a tool of patriarchal capitalism designed to profit from my feelings of inadequacy. This helped my self-esteem,
but unfortunately understanding the system intellectually didn’t protect me from experiencing the system emotionally.
I carried the frustration of knowing better and yet still being caught up in the pervasive system that left me feeling inadequate at times. It took years past adolescence before the knowing and the feeling started to line up, and longer still before I arrived at the ability to look at myself outside of that harmful gaze.
Knowing Isn't Enough
Phew, ok, so if that was my experience 25 years ago with photoshopped magazine covers, skinny blonde celebs selling skincare in between cheesy sitcoms, and grainy digital images that we passed around on burned CDs, I can only imagine what adding never-ending social media feeds and AI-generated synthetic images into the mix is doing to young people today.
Maybe you're hopeful that kids who grew up seeing their faces with puppy dog ears superimposed onto sci-fi green screen backgrounds already know not to take these images at face value. Maybe we can assume they're fine...?
Research suggests otherwise. Studies on social comparison and filtered imagery reveal that knowing an image has been edited does not protect the viewer from comparing themselves to it. Emerging research on AI-generated imagery suggests the same dynamics are likely at play:
knowing isn’t enough, because social comparison happens in the encounter itself, not in the analysis of the encounter afterward.
Naming these dynamics is important, but naming them isn’t as protective as we might hope when it comes to self-esteem, body image, and social comparison. If awareness alone doesn't protect from harmful consequences, then we need to look at what young people are growing up with.
The Ecosystem Teens Are Growing Up In
To do that, we must situate synthetic content inside the broader cultural and media ecosystems that young people are developing in today.
The Sephora kid phenomenon, in which ten and eleven-year-olds are using retinol and anti-aging products marketed to them through TikTok hauls, was already in full swing before AI became widely available. Clean girl, that girl, coquette, and the rapid-cycle aesthetic identities that require a fresh product stack each season were already shaping how tweens assembled a self. The Get Ready With Me format was already converting the morning routine into performance.
Beauty filters on Snapchat and Instagram had already spent years playfully shifting what young people see when they look at their own faces (to the point that plastic surgeons have described patients arriving with filtered selfies as reference photos for what they want to look like!).
And while body image conversations have historically centered girls and femme-presenting teens, the incel-adjacent looksmaxxing language had already migrated into mainstream TikTok.
The trend gives boys and masc-presenting teens a formal vocabulary for ranking their own faces and bodies against standards that are just as narrow, and just as tied to patriarchal capitalism, as the ones marketed to girls.
AI Intensifies the Pressure
AI generation and synthetic content are the newest layer added on top of all of this, and they bring capacities that earlier technologies didn't offer. The ratio of human-made to synthetic content in teens' feeds is shifting rapidly, and the tools to produce synthetic versions of themselves are just a tap away.
Unrealistic beauty standards are now produced at effectively zero marginal cost, at unlimited scale, by systems optimized for realism, vibes, and engagement.
For decades the beauty ideal was something teens watched from a distance, attached to someone else's face on a magazine or a screen, and more recently to the filtered and curated versions of friends scrolling past in their feeds. AI changes that by letting them generate the ideal on their own face and hold the result in their hand a few seconds later. And then of course, share it with their friends and followers.
And we know that the standards encoded in AI image generation are not neutral. These systems are trained on datasets that reflect and amplify existing biases toward thinness, whiteness, youthfulness, able-bodiedness, and narrowly defined gender presentation, which means the "ideal" the technology produces is intensified, infinitely reproducible, and pushed toward sameness. Every generated face nudges a little closer to the same narrow template, and that template is reinforced every time someone uses the tool.
For some teens these tools can offer something useful, such as a creative space to visualize and try on aspects of identity that the world hasn't yet made room for. A transgender teen picturing a future self, for instance, might find the experience affirming and clarifying.
But the same tools that help a teen imagine themselves more fully are also trained on datasets that flatten and erase their unique identities. The young people already most pressured by that "ideal" standard are the same ones the intensified version hits hardest. As I've said many times before, young people who are most vulnerable offline are also the most vulnerable online.
Identity Under Construction
Adolescence is when identity is being actively constructed, and that construction happens in large part through their media environments and cultures. This was true when I was a teen in the 90s and it's still true today.
But unlike adults, teens aren't measuring the media environment against an already-constructed identity. They are building their identity using culture as part of the raw material, which means whatever the dominant culture is offering them (its standards, its hierarchies, its ideas about who is worth looking at) gets folded into their developing sense of self before they have the tools to push back on it.
An adult encountering a manipulated image has decades of lived experience to push against it. Teens are still building that frame of reference, so the ideal gets folded into who they are rather than measured against who they already know themselves to be. And the media environment they are building inside of is unlike the one any previous generation encountered.
Earlier generations had unrealistic standards too, but we didn't have the technology to instantly produce ourselves inside those standards, or the algorithmic systems delivering a customized version of the standard back to us every time we opened an app.
It took me two decades of lived experience and a feminist community to get myself out of the beauty ideal trap. How is a fourteen-year-old in today's synthetic media environment supposed to do it on their own?
Common Responses to the Problem
One common response has been to focus on the tools themselves: ban AI, restrict the apps, kick kids off the platforms. But AI is not the thing that created this problem, it's accelerating something that was already happening. This means our response has to address the broader ecosystem, not just the newest layer.
That said, we absolutely need greater regulation of AI across the board, especially on platforms where young people are present. We need stronger guardrails built into how these technologies are designed, deployed, and marketed to minors.
A second response has been a renewed push for critical media literacy and now AI literacy: teach young people to recognize manipulated images, to understand how algorithms shape what they see, and to notice the systems profiting from their insecurities.
These are valuable and necessary skills, and ones I teach in my own programs. A young person with the tools to name what is happening is better positioned to navigate the environment than one without them. Both responses have their place, but neither is sufficient on its own,
because awareness works at the level of thinking and the harm operates at the level of feeling.
What Else Young People Need
Young people do not yet have the years of lived experience, or the relationships and sense of self, that eventually allow the knowing and the feeling to line up. And they are not going to build those things through better critical analysis of their feeds alone. They are going to build them through every other aspect of their lives,
which means the work for the rest of us is making sure screens are not the most prominent source of their identity development and sensemaking.
Young people need adults who know them and offer a corrective lens to a distorted identity script, friends and peers who see them in full dimension, spaces to develop a voice outside of performative metrics, and room for creativity and expression that does not require an audience.
They need reflections that are not the black mirror of their phone screens, and a proverbial lens that isn't the front-facing camera connected to filters that distort what they see when they look.
They need adults who can help them name what they are encountering without shaming them for being affected by it, and a willingness to hold space for the gap between what they understand and what they feel.
The question for those of us who care about young people is not only whether we can teach them to recognize what is happening online (many of them already can), but what else we are offering them while they live inside it. For me, part of the answer is always feminism, which didn't save me on its own, but it gave me the framework that made everything else possible.