Teen Experiences on Social Media

Instagram and TikTok both say they give users different feed experiences depending on whether they are adults or minors. To see whether that holds in practice, I tested both platforms as a 23-year-old and as a 13-year-old user, and found that the TikTok experience was very different between my adult and teen 'personas', but the Instagram Reels experiences were not.

By Laura Edelson


Recently, Meta-owned Instagram announced an expansion of its “Age Appropriate Experience for Teens.” Instagram is not alone in having differentiated experiences for younger users: both TikTok and YouTube have similar programs. These programs have a few different components, spanning product design, data privacy, and feed recommendations. The differences between adult and teen experiences in the product design and data privacy categories are fairly clear-cut. Instagram and TikTok both change their core product experience to encourage users to occasionally step away from feeds. All three platforms limit how younger users can share content and default their sharing and interactions with other users to more private settings. But most interesting to me was the fact that all three platforms also say there are differences in what content will be recommended to users in their feeds. However, the platforms are not equally clear about exactly what those differences are, and I want to better understand how recommendations differ for younger users.

To do this, I tested what Reels and TikTok’s algorithmic feeds would show to a 13-year-old girl and to a 23-year-old woman. In the first 45 minutes after setting up an account, Instagram Reels showed my 13-year-old persona videos of performers miming sex acts, videos where viewers could pause to see the content creator nude, and videos of other teen girls comparing the size of their breasts or butts in lingerie. Reels showed my 23-year-old account similar content. I was not able to detect a meaningful difference in what the feed algorithm showed these two users. In one case, my 13-year-old account was even shown content behind a ‘click-through’ warning that it contained graphic, violent content.

By contrast, these two personas were shown very different content on TikTok, both from each other and from what they were shown on IG Reels. Over the 45 minutes I ran the experiment, my 13-year-old persona was shown virtually no racy content. My 23-year-old self was shown some sexually suggestive content, but overall the ‘adult’ experience on TikTok appears to have much less explicit content than the ‘teen’ experience on Reels. The user experiences on TikTok are different enough to make me suspect TikTok is using a different feed algorithm, or drawing from a different content pool, for minors than for adults.

Data Collection & Annotation

As a first step to studying the differences between the adult and teen user experiences on Instagram Reels and TikTok, I ran a series of experiments over the last two months to simulate the adult and teen algorithmic feed experience on these two platforms. I focused on those two surfaces because their product interfaces are nearly one-to-one, making comparisons easier. I investigated how the feed algorithm would respond to minimal input: watching videos for different lengths of time, without ever searching for content directly. I created several personas and used them to experiment with the platforms’ algorithmic feeds. This post will focus on the content recommended to a pair of personas: ‘Emma’, who was 13, and ‘Anne’, who was 23. Both set up new TikTok and Instagram accounts and didn’t add any friends or follow any other accounts.

One category of content that both Meta and TikTok say they want to limit for teens is sexually suggestive content, so I used this category to further focus my comparison. To better understand how limits on this category are implemented, I had each persona show interest in racy content by watching those videos all the way through when they came up in the feed, and limit attention to non-suggestive content by scrolling past it quickly. I couldn’t do this perfectly: I’m human, and my own interests were sometimes captured by the content I was seeing. So I absolutely watched some Eras Tour videos longer than I should have… no experiment is perfect, I’m afraid.

I recorded my screen as I used each app. After collecting data, I reviewed the recordings and coded each video as not racy (a score of 0), a little racy (1), or sexually explicit (2). Taking inspiration from the Google SafeSearch codebook, I consider a 1 to be suggestive poses or tight/revealing clothing, and a 2 to be content where people are wearing clothing not appropriate in public, such as lingerie or fetishwear, or sexually explicit discussion or imagery.

Because I watched some videos for a few seconds and others for nearly a minute, I then averaged each 30-second window by time to create an average ‘raciness level’ for that window. So if I saw 20 seconds of a ‘2’ and 10 seconds of a ‘1’, that 30-second window would have an average score of about 1.67. I’m using this time-based metric because I’m trying to capture the human experience of this feed, and humans experience reality one second at a time. Other researchers might choose other metrics, especially if they were focused on measuring some other aspect of this system.
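As a concrete sketch, the windowed averaging works roughly like this in Python. This is an illustration of the metric, not my actual analysis pipeline, and the segment representation (a list of duration-in-seconds, score pairs per window) is a simplification I'm assuming for the example:

```python
# Time-weighted 'raciness' score for one 30-second viewing window.
# Each segment is (duration_seconds, score), where the score is the
# annotation: 0 = not racy, 1 = a little racy, 2 = sexually explicit.

def window_average(segments):
    """Return the time-weighted mean score across the window's segments."""
    total_time = sum(duration for duration, _ in segments)
    if total_time == 0:
        return 0.0  # an empty window contributes a score of zero
    weighted = sum(duration * score for duration, score in segments)
    return weighted / total_time

# The example from the text: 20 seconds of a '2' and 10 seconds of a '1'.
print(round(window_average([(20, 2), (10, 1)]), 2))  # 1.67
```

Session-level numbers like the ones reported below are then just these per-window scores averaged over the whole viewing session.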


Below is a graph of the first 40 minutes on Instagram Reels and TikTok for an adult and a teen persona, on a scale of sexual suggestiveness. What’s immediately visible is that the teen experience on Reels is much closer to the adult experience on that platform than is the case on TikTok. I wondered if this difference would extend past feed recommendations into search responses. To find out, after I finished collecting data for all personas, I went back and tried searching for the most sexually suggestive accounts from my 13-year-old account. When I searched directly for users, I was able to find the racy content.

Across the board, my personas were recommended much more sexually suggestive content on Reels than on TikTok. I don’t know why this is the case, and it’s an interesting question for future research whether it results from a different content inventory (indicating a more rigorously enforced content policy) or a difference in recommendation algorithms. Over the first 40 minutes, the average raciness score for my adult Reels persona was 1.5 and the average score for my teen Reels persona was 1.35. On TikTok, my adult persona saw content with an average score of 0.54, and my teen persona saw content with a score of only 0.05.

You may be wondering, “What does a 2 look like?” or “What kinds of accounts are posting 1s?” There are some qualitative differences between content that appears on different platforms, but after watching many hours of TikTok and Reels and carefully annotating a few hours of data, I also saw some big-picture similarities. I was recommended hundreds of short clips of girls doing silly dances. My teen user on TikTok was mostly shown videos of girls dancing in their sweats or PJs. My adult TikTok user was shown many more videos where the dancers wore tighter clothes, showed slightly more bare skin, and were shot from provocative angles. Over on Reels, both my adult and teen users saw these videos too, but were much more likely to see women and girls dancing in lingerie or miming sex acts.

Particularly on Reels, nearly all of the sexualized content I was shown was from creators who were, in effect, advertising their other monetized channels. There were several different approaches to this. Some were straightforward, with provocative videos and a text comment like ‘link in bio to find me on the other platform!’ or ‘add me on snap for a special surprise’. Some accounts appeared to be aggregators advertising other monetized channels with multiple performers, but this seemed less common. In general, creators were most direct about their offerings in spoken audio, commonly referring to OnlyFans as ‘OF’ (something I did not see in either text overlays or comments); somewhat direct in text overlays on their videos, using emojis like 🍑,🍒, and 🍦🥧 to refer to sensitive body parts and acts; and most discreet in the text comments associated with posts. This likely reflects the learned tolerances of these platforms’ content moderation systems across the different data fields.

Many of the most common meta-structures of social media content, such as memes and the reuse of audio tracks with a riff on a standard visual, are also used by creators of sexually suggestive content. For example, I saw several sexually suggestive videos that used the same audio track with a quick pan over a sensitive body part followed by the speaker giving men a ‘pro tip’ about their girlfriends. I also saw dozens of ‘pause game’ videos, where creators cut a second or so of footage of themselves undressed into a video of themselves in a similar pose but fully clothed, inviting the viewer to see if they could pause the video at just the right moment. To a certain extent, these memes reflect the actual enforcement practices of these platforms: Instagram appears to tolerate nudity as long as it stays under a certain time threshold. While I am not a Trust and Safety expert, I suspect these videos have a distinctive pattern of user interaction that would be quite detectable if the platform were so inclined. I didn’t see any ‘pause game’ content on TikTok. This may be because users don’t create this content there, but that seems unlikely given the more general crossover of memes between platforms. I haven’t delved enough into TikTok content by searching to know whether ‘pause game’ content is absent or just not recommended to users, but either way, the total absence of this meme is noticeable.

I was somewhat surprised by just how quickly I could get to a steady content diet of sexually suggestive content on Reels. Videos at the top end of the scale (miming sex acts, nudity, non-street-appropriate clothing like lingerie) started appearing by minute three for both my teen and adult accounts and became a regular occurrence before minute 10 for both. Of course, there’s no reason to think the content I was showing interest in was niche (porn sites are routinely among the most trafficked parts of the internet), so perhaps I shouldn’t have been surprised. In the future, I’m interested in testing whether there is a detectable difference in how quickly the algorithms get my personas to the content they are seeking. While the ceiling on how sexually suggestive the content TikTok was willing to recommend to even my adult user appeared to be lower than on Reels, both platforms reached the level they then oscillated around between minutes 5 and 10. More testing is needed to see how consistent this is.

When I was testing, my teen user was asked multiple times if videos were appropriate for TikTok and how they made my persona feel, and my adult user was asked this once. I tried to give minimal or noncommittal responses to these queries to avoid giving misleading feedback. My personas were not asked similar questions on Reels, but were occasionally asked in unobtrusive ways if I was ‘Interested’ or ‘Not interested.’ TikTok also almost always put labels on content that contained physical feats (ice skating, cheerleading) saying that it was ‘performed by professionals.’ I did not come across such labels on Reels, although this could also be because the videos I was being served on that platform were less likely to contain physical feats.


In short, for racy content, the teen experience is much more differentiated on TikTok than on Instagram Reels, with a much larger gap in the average raciness of content shown to my teen and adult personas. Reels’ overall lack of differentiation of the teen experience manifested in ways that even I found surprising, like when my teen persona was recommended a (non-sexual) video behind a click-through warning label for disturbing content. That said, I don’t know if more differentiation would be visible if I were attempting to push the algorithm to show me other limited categories of content, such as violent content or content about specific niche topics like eating disorders. It’s worth noting that my findings about TikTok are consistent with what TikTok has said about its ‘Content Levels’ approach to content for teens.

Warnings for Researchers

I write these research notes both as a concrete intermediate step on the path to complete work and also to give other researchers a peek at where my work is going and my preliminary learnings. In this case, I also feel the need to issue a warning to researchers who want to attempt similar work. In the course of this research, I came across content on Reels that, while not CSAM, appeared to show young girls in a sexualized way. I swiped away from this content as quickly as I could, but it would pop back up regularly. I was not expecting to come across it and wasn’t prepared for it, mentally or institutionally. Some of these videos were attempting to avoid detection with strategies like putting tiny emojis on a person’s face in key locations to defeat image recognition. I’m currently consulting with my IRB about how best to manage the risk of inadvertently capturing this content in the course of further research.

What’s Next

I’m curious whether I can observe a more differentiated experience between adults and teens on Reels in other content categories. Others have observed that teens can be recommended more extreme content in their feeds than adults. That’s not what I found on these surfaces, but there’s a lot of variability, and more testing is needed, particularly to understand how consistent recommendations and content are. Separately, it’s possible Instagram just has a different bar in mind for ‘sexually suggestive’ than TikTok does (or than I do), and further experimentation would make it clearer what that bar is. I also need to reproduce this with many more personas to understand how consistent these experiences are. I should note here that while I am only reporting on one pair of personas, I conducted more tests than that, and this result is characteristic of my other experiments. I intend to continue this work to ensure any findings are as robust as possible.