ALGORITHMIC BUBBLES AND LIVED EXPERIENCE
This is Badland issue #05
Text: Viktor Kewenig
Images: Harald Schaack
Today, digital and physical realities form a unitary space in which each modality functions as a bearer of information, and neither holds a privileged position over the other. Any piece of information is available immediately; any truth can be synthesised at our fingertips.
The corollary of an over-supply of information is, naturally, a feeling of acute overwhelm, which places the burden on the act of interpretation. New ideas, however, are difficult to form, and contradicting evidence is hard to digest.
Unable to understand the hyper-connected conglomerate of different conceptual realities that technological advances have brought upon us, many are not freed by access to the Internet. Instead, they are incarcerated within a dangerous, self-regurgitating bubble of information. The easy digestion of pre-selected misinformation takes the place of any real quest for truth, and pre-existing psychological tendencies and conceptions often deepen. This is especially dangerous in the ongoing situation in which social interaction (an important source of necessary conflict) is held to a minimum and the practice of self-isolation becomes the societal standard. A friend of mine, who had a pre-existing condition and rarely saw people because he was naturally scared of infection, once gave me his phone to check something on Instagram after a long period of self-isolation. I quickly realised that his explore page was full of questionable healing practices, 'Zeitgeist'-esque conspiracy theories and society-critical influencers who portrayed themselves as the last bastion in a fight against the elite: an amalgamation of information I was personally sceptical about, and of course I told him so.
Conflicting opinions are important; being contradicted is essential if we want to understand the array of lived realities that make up truth in our society. For this reason, the phenomenon of isolated information bubbles on technical devices is a real threat to humanitarian efforts. Unfortunately, the algorithms that shape our individual digital spaces facilitate exactly this. The problem is that these algorithms do not understand conceptual information against the background of lived experience, but rather by means of a statistical mapping of combinatorial frequencies.
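What "statistical mapping of combinatorial frequencies" means can be made concrete with a toy sketch. The corpus and words below are invented for illustration; this is not the model behind any particular platform, only a minimal version of the distributional approach, in which a word is represented purely by the words it co-occurs with:

```python
from collections import Counter
from math import sqrt

# A toy corpus: here, "meaning" is nothing but patterns of co-occurrence.
corpus = [
    "vaccines cause harm say influencers",
    "influencers say elites cause harm",
    "doctors say vaccines prevent harm",
]

def cooccurrence_vectors(sentences, window=2):
    """Represent each word by counts of its neighbours within a window."""
    vectors = {}
    for sentence in sentences:
        words = sentence.split()
        for i, w in enumerate(words):
            ctx = words[max(0, i - window):i] + words[i + 1:i + 1 + window]
            vectors.setdefault(w, Counter()).update(ctx)
    return vectors

def cosine(a, b):
    """Similarity of two count vectors; 1.0 means identical contexts."""
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = cooccurrence_vectors(corpus)
# Two words count as "similar" whenever their contexts overlap,
# with no reference to any lived experience behind the words.
print(cosine(vecs["influencers"], vecs["doctors"]))
```

The sketch shows the essay's point in miniature: the model never asks what "harm" or "influencers" mean in anyone's life; it only counts which tokens tend to appear together.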
Given the pressure for high-speed profit margins in today's interconnected world economy, it makes sense to use models that perform well and quickly rather than to fund long-term research on models reflecting human experience. Hence, computational models based on this distributional hypothesis (the idea that meaning can be read off the contexts in which a word occurs) underlie Amazon's Alexa, Apple's Siri, Instagram and many more. As a consequence, however, the machine-learning-powered devices with which we carry any information ever known to humankind in our pockets, a few clicks away, present new information merely as the output of past input. New, and potentially disruptive, information is likely to be compressed or filtered out in favour of pre-existing patterns.
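That feedback loop, in which past engagement reweights future recommendations until variety collapses, can be simulated in a few lines. The category names and weighting scheme below are made up for illustration; no real platform's ranking algorithm is this crude, but the dynamic is the same:

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

# Hypothetical content categories in a feed (illustrative names only).
CATEGORIES = ["healing", "conspiracy", "science", "art", "news"]

def recommend(history, feed_size=5):
    """Weight each category by how often the user engaged with it before."""
    weights = [1 + 5 * history.count(c) for c in CATEGORIES]
    return random.choices(CATEGORIES, weights=weights, k=feed_size)

history = ["healing"]          # one early click...
for _ in range(20):            # ...is amplified round after round
    feed = recommend(history)
    history.append(feed[0])    # the user engages with the top item

# The feed drifts toward whatever the user clicked first:
# output becomes little more than a function of past input.
print(set(history))
```

Nothing in the loop models why the user clicked; the system simply reinforces the observed pattern, which is exactly how a single early interest can harden into a bubble.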
The promises of a technologically advanced society are immense: finally, humanity can be free from being a cogwheel in the machine of evolution. Finally, each and every individual can pursue their inherent creativity (as advocated by Chomsky). However, under the present conditions, this utopia seems further away than ever before. Instead, parallels between today’s developments and the industrial revolution suggest themselves.
The technological advances of the 18th and 19th centuries did not liberate humankind but instead degraded the people of that time to literal parts of the machine, working inhumane hours and catering to the machines' needs around the clock. In the same way, our existence in digital space is being reduced to generating data and feeding the algorithms. A shift towards appreciating the chasm between lived human realities and the rules that make up digital space is needed if we want the digital and the analogue to form a harmonious union.
Viktor Kewenig studies at the intersection of Computational Neuroscience, Philosophy and Psychology. Philosophers like Wittgenstein have long understood the primacy of lived realities over logical reductions. The main problem to which he dedicates his research is that this realisation has arrived neither in the neuroscientific study of the brain nor in Artificial Intelligence; each field has worked in isolation. Viktor believes that an interdisciplinary effort is called for to solve the problems with current models of human intelligence, which pose a threat to the experienced quality of human lives.