The convergence problem
Open any major streaming service, in any country, and the recommendations look strikingly uniform. The same genres surface. The same content is promoted. The same engagement mechanics drive discovery. Different people, from different cultures with distinct histories, face algorithms optimised toward identical metrics.
This is a structural outcome, not a conspiracy. When AI systems train on identical data, optimise for the same objectives, and deploy at scale, convergent outputs follow. Models learn what works on average, and optimising for the average, by definition, produces average results.
How AI narrows choice
Recommendation algorithms identify patterns in aggregate behaviour. If most people who watch film A also watch film B, the system recommends B to anyone who watches A. Useful, but self-reinforcing: B receives more views because it is recommended more often, and it is recommended more often because its view count grows. The loop tightens.
Systems converge toward a narrow set of options that satisfy average taste. Niche, local, and unusual content surfaces less often because it fails to match the dominant patterns. The tail shrinks; the head expands. Consumption diversity declines even as catalogues grow.
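The loop described above can be sketched as a small simulation. Everything here is an illustrative assumption (catalogue size, round count, and the popularity-weighted recommendation rule), not a model of any real service:

```python
import random

random.seed(0)

# Hypothetical catalogue: 100 items, each starting with one view.
views = [1] * 100

# Each round, the system recommends an item with probability proportional
# to its current view count, and the user watches it. Popularity compounds.
for _ in range(10_000):
    item = random.choices(range(100), weights=views)[0]
    views[item] += 1

views.sort(reverse=True)
head_share = sum(views[:10]) / sum(views)
# Under a uniform catalogue, the top 10 of 100 items would hold 10% of views;
# the feedback loop concentrates far more than that in the head.
print(f"Top 10 items capture {head_share:.0%} of all views")
```

The exact concentration varies with the random seed, but the head reliably captures well above its uniform 10% share: the rich-get-richer dynamic, not item quality, drives the outcome.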
This dynamic extends beyond entertainment into news, shopping, music, and search results. Every system that optimises for engagement faces the same convergence pressure. The popular becomes more popular. The different becomes invisible.
The cultural cost
Human cultures are sustained by their differences: language, music, food, storytelling, and values developed over centuries in specific places with particular histories. They are inherently local and particular. When AI pushes billions of people toward identical content and consumption patterns, the substrate of cultural diversity erodes.
A teenager in Lagos and one in Lisbon encounter increasingly similar cultural inputs. Musical tastes converge. Fashion references align. Aspirations synchronise. AI did not intend this; it simply optimised engagement across the largest audiences, and at scale, engagement favours the universal over the particular.
The individual cost
AI personalisation often narrows individual taste rather than expanding it. Listen to jazz? Algorithms deliver more jazz. Suggestions rarely reach West African highlife or Brazilian bossa nova, despite the genuine musical connections. Systems optimise for existing preferences, not potential discoveries.
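This narrowing can be seen in a toy exact-match recommender. The catalogue, titles, and genre tags below are invented for illustration; real systems use richer similarity measures, but the exploitation-only logic is the same:

```python
# Hypothetical catalogue: each title tagged with a single genre.
catalogue = {
    "Kind of Blue": "jazz",
    "A Love Supreme": "jazz",
    "Getz/Gilberto": "bossa nova",   # musically adjacent to jazz
    "Gentleman": "highlife",         # another genuine connection
}

history = ["Kind of Blue"]

def recommend(history):
    """Pure exploitation: suggest only items whose genre already appears
    in the listening history."""
    seen_genres = {catalogue[title] for title in history}
    return [title for title, genre in catalogue.items()
            if genre in seen_genres and title not in history]

print(recommend(history))  # ['A Love Supreme']: the adjacent genres never surface
```

Because the filter only matches genres already in the history, bossa nova and highlife can never enter it, no matter how many rounds run: existing preference is all the system can see.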
Filter-bubble effects migrate from information into identity. People become narrower versions of themselves, reinforced by algorithms that reward consistency and penalise exploration. The serendipity that produces growth is algorithmically disfavoured because it is unpredictable.
The alternative: understanding individuals, not averaging them
The risk of homogenisation is not inherent to AI itself; it is inherent to a specific kind of AI: systems that model populations and optimise for averages. An alternative exists: systems that model individuals and optimise for understanding.
Intent processes behavioural signals on-device. Its models do not compare an individual against a population; they read that individual's behaviour directly. Detected patterns remain specific to the person, the context, the moment. No averaging. No aggregation. The intelligence stays individual.
Individual behaviour varies far more than aggregate models suggest. Someone who reads poetry, follows Formula 1, and researches gardening equipment does not fit a neat segment. Traditional systems pick the dominant signal and suppress the remainder. On-device intelligence perceives the full pattern and responds to each dimension independently.
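The contrast can be illustrated with a sketch. The event log, interest labels, and frequency weighting below are invented for this example; the point is what each approach retains:

```python
from collections import Counter

# Hypothetical on-device event log for one person.
events = ["poetry", "formula1", "poetry", "gardening", "formula1", "poetry"]

def dominant_segment(events):
    """Segment-style reduction: keep only the strongest signal,
    discard everything else."""
    return Counter(events).most_common(1)[0][0]

def full_pattern(events):
    """Individual-style reading: retain every interest with its weight."""
    total = len(events)
    return {interest: count / total
            for interest, count in Counter(events).items()}

print(dominant_segment(events))  # 'poetry': the other interests vanish
print(full_pattern(events))      # all three dimensions, each with its weight
```

The segment view answers "which bucket does this person belong to"; the individual view answers "what is this person actually doing", and can respond to the Formula 1 and gardening dimensions rather than suppressing them.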
Preserving human complexity
No one is average. That is not a slogan; it is a statistical fact. A population average describes no actual member of the population. Systems designed for the average serve no one well: they serve most people poorly and some not at all.
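A small numeric example makes the point. The session lengths below are fabricated: five people who snack on short clips and five who settle in for films:

```python
# Hypothetical session lengths in minutes: two real habits, no one in between.
sessions = [5, 6, 5, 7, 6, 110, 120, 115, 118, 112]

mean = sum(sessions) / len(sessions)
print(f"mean session: {mean:.1f} min")
print(any(abs(s - mean) < 20 for s in sessions))  # False: no one is near the mean
```

The mean of roughly an hour describes neither group. A system tuned to that average (an hour-long format, for instance) would fit none of the ten people it was averaged from.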
Behavioural intelligence preserves human complexity by refusing to reduce people to segments. On-device models see persons, not cohorts. They respond to what a person is doing now, not to patterns from similar people. This fundamentally restructures the relationship between AI and the individual.
The responsibility of AI builders
The risk of homogenisation is accelerating. Every recommendation system, content algorithm, and personalisation engine that optimises for aggregate patterns contributes to it. Regulators cannot solve a design problem through rules alone.
AI builders face a choice: flatten human diversity into manageable segments, or build systems that understand and preserve it. Architecture determines the outcome. On-device intelligence that reads individual behaviour without aggregation is one answer. Others may exist. The question needs asking before the answer becomes impossible to change.