Intervention I: Hacking the Algorithm
This intervention is linked to my research on how algorithms shape migrant integration. AI-driven recommendations tend to reinforce user preferences, potentially restricting exposure to local culture, news, and integration resources, which may ultimately slow the adaptation process.
The initial idea of this intervention was to conduct a controlled experiment: create new Instagram and YouTube accounts on a separate phone ("Patient 0") and analyze the types of content displayed while accessing the platforms from different locations and network connections.
Simultaneously, we used our personal phones to explore whether their proximity to the new device influenced the content shown. We also aimed to identify other factors affecting content recommendations.
Create a "Patient 0": a completely new device with no prior data or connections to our existing digital footprints.
Expose it to the physical world: Let Patient 0 exist in the same spaces as us and observe if (and how) it becomes "infected" by the surrounding digital ecosystem.
Draw conclusions: The moment a video appeared simultaneously across devices, it was our cue to yell, run, and confront the algorithm. After all, what’s an experiment without a little chaos?
The intervention was both playful and unsettling. What started as curiosity quickly turned into a deeper reflection on how algorithms shape our digital realities, and how interconnected, or even compromised, our online experiences might be.
On the first day, I started using the Instagram account while connected to the IAAC Wi-Fi. Interestingly, most of the content displayed seemed to originate from India and Pakistan. This made me consider the possibility that the shared network connection might have influenced the content shown on the phone.
A significant conversation with Saúl led us to Adrián Pascual, a student of his who had already ventured down a similar rabbit hole. Through his project WTF Do You Like?, developed at Elisava, he explored a world free from algorithmic influence, where users could watch content based on their actual interests rather than what platforms decided for them. His approach involved distributing YouTube login credentials built around specific filter bubbles, allowing people to engage with content that resonated with their true preferences rather than those shaped by recommendation systems.
This idea had a profound impact on our discussions, reinforcing our perspective as students critically aware of misinformation bubbles on social media. Like Pascual, we seek to understand how algorithms manipulate our online experiences, limiting exposure to diverse viewpoints.
As soon as we passed the phone to Ramón, he began the experiment by scrolling to observe the types of content that appeared. However, he was quickly banned from Instagram, so we decided to dispute the ban and explain the project, since the situation itself could provide valuable insights.
Following the intervention, Mohit and Lucretia continued the experiment by using YouTube Shorts to investigate whether the same content "infection" was occurring between the two phones. After a few days, Lucretia noticed that some content displayed on Phone 0 (the testing phone with no prior data) also appeared on her personal device.
And just when they thought it was all a coincidence, they were hit with something undeniable: two eerily similar videos (different Big Bang Theory episodes, but still, what are the odds!?) popped up simultaneously. That moment felt like a near-definitive sign that content isn’t just being recommended independently; it’s flowing, overlapping, and even interfering across devices.
Through repeated testing and constant “interventions” between devices, we started noticing content contamination: not necessarily identical videos appearing across devices, nor always from the same person’s account, but a clear influence spreading from one phone to another. The breakthrough moment came when Patient 0 managed to “infect” the rest of us with a video that showed up on multiple phones at once.
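In a more systematic run, this kind of cross-device contamination could be quantified rather than eyeballed: log the video IDs each phone's feed surfaces during a session, then compare the sets. A minimal sketch of that idea follows; the video IDs are made-up placeholders, not data from our experiment.

```python
# Hypothetical sketch: measure cross-device "contamination" by comparing
# the sets of video IDs each phone surfaced during a viewing session.
# All IDs below are invented placeholders, not observed data.

def overlap_score(feed_a: set[str], feed_b: set[str]) -> float:
    """Jaccard similarity: videos seen on both devices / videos seen on either."""
    if not feed_a and not feed_b:
        return 0.0
    return len(feed_a & feed_b) / len(feed_a | feed_b)

# One session's logged IDs per device (placeholders).
patient_zero = {"vid_01", "vid_02", "vid_03", "vid_04"}
personal_phone = {"vid_03", "vid_04", "vid_05", "vid_06"}

score = overlap_score(patient_zero, personal_phone)
print(f"feed overlap: {score:.2f}")  # 2 shared out of 6 total -> 0.33
```

Tracking this score over several days would turn "we felt the feeds converging" into a number that can rise or fall as the devices spend time near each other.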
So the theory held up: it is possible to influence what people see on their devices, and not just through targeted ads. The way algorithms interact, clash, and bleed into each other is messy, unpredictable, and, quite frankly, a little disturbing.
Instagram and YouTube amplify content that gets high engagement, often leading to self-segregation in migrant communities.
Instead of exposing users to diverse perspectives, the AI clusters people into similar groups, reinforcing ethnic or cultural divides.
This can lead to less interaction with locals, language barriers, and difficulty adapting.
How do Instagram & YouTube algorithms affect migrants' integration?
Do these platforms reinforce isolation or encourage cultural blending?
How can algorithmic "hacking" be used to empower migrants instead of trapping them in echo chambers?