Slaves to the Algorithm: Reclaiming Culture From Automated Curation
Twenty years ago, cultural curators were people who staked their reputations on what they championed. Today, the curator is an algorithm, and it’s reshaping culture at every level — what gets made, what gets seen, how we consume it and, reflexively, what gets created by AI itself.
Incentives rule everything around me. Content creators describe their decisions as serving what the algorithm wants. Even if the algorithm makes poor recommendations, creators keep chasing the next platform update because invisibility is worse. Audiences, meanwhile, scroll endlessly through feeds designed for engagement, not satisfaction.
At the same time, creative decision-making is being outsourced to tools such as ChatGPT. As humans, we naturally anthropomorphize, treating non-human things as if they have preferences and judgment. But I can tell you from having helped train large language models myself: They do not have taste, at least not yet. I’d even argue that models are at an inherent disadvantage when it comes to taste, because tasteful content is timely, and models are trained on historical data.
The more we treat these systems as authorities rather than tools for creation, the more we flatten the parts of culture that require human creativity and judgment.
What Gets Flattened and How
History shows us that innovation has always come disproportionately from outliers. But to a pattern-matching system, statistically unusual work looks like noise or a mistake, so it gets filtered out or deprioritized.
These algorithms are trained to identify what has performed in the past and surface more of it. Their rankings say nothing about quality, which is often subjective. Consequently, the outlier work — the risky bet, the innovation that takes years to find an audience — becomes economically irrational.
Why spend months on something the algorithm won’t recognize when you could spend hours on something it will? I suspect a slow-burn show like The Wire would have been killed by today’s engagement metrics before getting a chance to become what many consider one of the greatest TV series ever made.
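To make the dynamic concrete, here is a toy ranking sketch, purely illustrative and not any platform’s actual system; the item fields and scores are hypothetical. It shows how a ranker that scores only on past engagement will push unusual work to the bottom no matter how good it is.

```python
# A toy engagement ranker (hypothetical, not any platform's real system):
# items are scored only by how similar content performed in the past.
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    past_engagement: float  # historical click/watch rate for similar content
    novelty: float          # 0.0 = formulaic, 1.0 = highly unusual


def rank(items: list[Item]) -> list[Item]:
    # Quality and novelty never enter the score, so statistically
    # unusual work sinks regardless of its long-term value.
    return sorted(items, key=lambda item: item.past_engagement, reverse=True)


feed = rank([
    Item("Clip in a proven, formulaic format", past_engagement=0.9, novelty=0.1),
    Item("Risky, slow-burn experiment", past_engagement=0.2, novelty=0.9),
])
print([item.title for item in feed])  # the risky bet lands at the bottom
```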
In the current vicious cycle, users defer to the systems, our behavior becomes the new training data and the platforms optimize for the patterns we reinforce. All of this rapidly narrows the field of possibility, and we are all the poorer for it.
U.K. columnist James Marriott cites research that found popular music across genres has grown shorter, simpler and more repetitive; breakthrough inventions are becoming rarer despite record spending on scientific research; and literature is becoming less complex.
Homogenization through incentive design is real, so what do we do about it? We all have a responsibility, starting on the tech side.
What the Technology Can (and Can’t) Do
I work with the reinforcement learning techniques used to train LLMs, so I understand which technical improvements are possible and where they hit their limits.
Chain-of-thought prompting is one such approach. Instead of just getting an output, you can require the model to explain step by step how it reached a conclusion. That at least makes the reasoning auditable, although the model may still fabricate a plausible-sounding rationale.
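As a minimal sketch of what that looks like in practice, assuming the OpenAI Python client: the model name, prompt wording and example question below are illustrative, not a prescription.

```python
# A minimal chain-of-thought prompting sketch, assuming the OpenAI Python client.
# The model name, instructions and question are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Should we greenlight a slow-burn drama with weak first-episode metrics?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; substitute whatever you use
    messages=[
        {
            "role": "system",
            "content": (
                "Before giving a recommendation, reason through the problem "
                "step by step and show each step, then state your conclusion."
            ),
        },
        {"role": "user", "content": question},
    ],
)

# The reply contains the model's stated reasoning followed by its answer.
# A human can audit that chain, but it may still be a plausible-sounding
# rationale rather than the true cause of the output.
print(response.choices[0].message.content)
```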
The alignment research community is also working to ensure AI systems don’t produce harmful outputs or drift from intended goals. That is important for safety, but it still can’t instill taste.
Part of the solution is for tech companies to educate AI users so they understand that the output is not gospel. But they cannot do it alone.
How People Should Respond
In business, decisions have long been made based on what “the data shows.” But with AI surfacing ever more sophisticated analytics, leaders can now unconsciously optimize for what the algorithm rewards without ever articulating it.
This matters for culture because the people funding, greenlighting and distributing cultural work — studio executives, label heads, publishers and platform managers — are making decisions using this invisible logic. Instead, leaders need to understand what these metrics actually measure, evaluate them against their own judgment and make the calls themselves.
We’ve spent years talking about augmentation, not replacement, in our workforces, but lately we’ve witnessed workers opting to replace themselves. Research shows that even short-term ChatGPT use reduces brain engagement, and knowledge workers are using these tools regularly. Leaders need to actively promote psychological independence — the capacity to use these tools without outsourcing judgment.
Even then, organizational change isn’t enough. Culture is built between people, and I advocate for “third spaces,” not just physical venues but cognitive zones where algorithmic input is deliberately excluded. Reading books, now in alarming decline among younger generations, is essential practice for maintaining independent thought, while social interaction preserves empathy. Both are foundational to human judgment.
Bring Back Friction
The friction between what performs immediately and what might prove impactful over the long term is what has historically created the space for culture to evolve. The algorithm will keep on working, but humans remain the curators of beauty, value and meaning. Individually and collectively, we need to stop being passive consumers and preserve that creative friction before we optimize it away altogether.