Slave to the Algorithm

Brexit and Trump are children – simultaneously bastard and legitimate – of algorithmicism. It doesn’t take much of a cognitive leap to see how Facebook algorithms helped dumbly to herd and loop-amplify the populist bile that gave birth to those phenomena, argues D.J. MacLennan.

When you make a purchase with your debit or credit card, who approves or rejects the transaction? Precisely nobody. It’s an ‘authorless event’, the decree of a few lines of code. Not processed in ‘the Cloud’, just on somebody else’s computer. This truth groans with implications for your liberty.
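A minimal, purely hypothetical sketch of such a rule (invented names, fields and thresholds, not any bank's actual code) shows how little authorship an 'authorless event' needs:

```python
from dataclasses import dataclass

# Hypothetical sketch only; not any bank's real authorisation logic.

@dataclass
class Account:
    available_balance: float
    frozen: bool = False

@dataclass
class Transaction:
    amount: float
    risk_score: float   # an opaque, model-derived number between 0 and 1

def authorise(tx: Transaction, account: Account) -> str:
    """Approve or decline in a few lines: nobody signs off, the code just runs."""
    if account.frozen or tx.amount > account.available_balance:
        return "DECLINED"
    if tx.risk_score > 0.9:
        return "DECLINED"
    return "APPROVED"

print(authorise(Transaction(amount=42.50, risk_score=0.12),
                Account(available_balance=100.0)))   # -> APPROVED
```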

We fear the coming of smart machines in a juvenile way. Sweaty teen-tech hype boosts the formidable machine learning (ML) capabilities of modern neural networks up into the heady stratum of ‘artificial intelligence’ (AI). And dystopian pop culture seizes on this view because it fits the Western narrative of technology itself as worthy foe. Once upon a time, Skynet will cross the intelligence threshold and take over the world. The (probable) End.

No emotionally-conflicted robot Astro Boy zooms in to save our resolutely biological skins.

Humankind already faces a dire algorithm-control problem. But right now, it’s human, not machine, control we need to worry about. Algorithmicism concentrates power in the hands of ever-fewer human actors. This elite of cynical operators sets the social and ‘ethical’ parameters within which the rest of us ply our circumscribed paths. It understands ‘moral machines’ as ones bound by their ‘ethics modules’ to strict rules of conduct and engagement – you know, infallible American ones.

But without the ability to choose otherwise, no entity can behave morally. The controlling elites want not artificial moral agents but better slaves.

In their recent book Living with Robots, science philosophers Paul Dumouchel and Luisa Damiano argue for the closing of the false divide between private ‘internal’ emotions and expressed ‘external’ ones. They scent residual Cartesian dualism in the computational ‘naked mind’ model of thought so favoured by AI researchers. Mind, they argue, is never substrate-independent; on the contrary, what we call ‘mind’ emerges dynamically as a result of our corporeal and environmental ‘radical embodiment’. ‘Where is my mind?’ becomes a moot question. ‘Mind’ is in the ‘interindividual’ mix.

Far from being restrictive, Dumouchel and Damiano’s view brings a refreshing indeterminacy to the mind/body problem and to the quest to create ‘social robots’. Quoting Hannah Arendt, they argue for the creation of a diversity of novel cognitive agents, to enhance the ‘plural condition of humanity’. With the false dichotomy of external emotions as mere display versus internal ones as authentic erased, we’ll both understand our own sensibilities better and begin to synthesize mechanisms that grant robots ‘artificial empathy’. Equipped with an analogue of our emotional systems, a machine could participate in ‘affective loops’ with humans and other agents. Eventually, robot ‘substitutes’ could aid us, and act for us autonomously and authoritatively – all without threatening us or usurping our agency.

As far-fetched as this may sound, just look to the nascent affective loops that senior citizens seem to form with cute therapeutic robots such as ‘Paro’. Designed to look like a baby harp seal, Paro learns to respond to its given name and to interact in specific ways with specific people. It evokes a sense of well-being and stimulates group discussion. On the other side of the debate, psychologist Sherry Turkle sees this kind of engagement with robots as sad and sinister. In her book Alone Together, she contends that the affective expressions of robots are only ever ‘apparent’ or fake, and that when she engages with a Paro, the lonely senior citizen is really just talking to herself.

So what does all this have to do with loss of liberty, or worse still, existential risk? ‘Partial’ or ‘analytic’ agents, as Dumouchel and Damiano call them, already abound in our world, invisibly. The banking algorithm I mentioned earlier is one type, Jeff Bezos’ Amazon rakers are another. In military circles, partial agents integrated with autonomous drones feed back ‘intelligence’ and confidence levels to remote cadres of human officers. In the ‘kill chain’, nothing absolves a commander of doubt and responsibility like a probabilistic chart with a green light. As these types of systems expand, generating more and more authorless events, the incidence of what sociologist Charles Perrow calls ‘normal accidents’ increases, and their consequences ramify beyond all attempts to model them.
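In caricature – and the threshold, labels and function name here are entirely my own invention, not any real system's – that green light is nothing more than a comparison:

```python
# A caricature, not any real system: threshold and labels are invented here.
def kill_chain_advice(confidence: float, threshold: float = 0.85) -> str:
    """Reduce a lethal judgement to a colour on a probabilistic chart."""
    return "GREEN: cleared to engage" if confidence >= threshold else "AMBER: hold"

print(kill_chain_advice(0.91))   # the commander sees a green light, not a decision
```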

With partial agents comes at least partial enslavement. Consider your friendly local Openreach telecoms engineer (spoiler: you don’t really have one). This autonomy-poor worker receives regular alerts to check his job list. Although the logistical and technical priority ordering of the jobs on his screen may look wonky, he must start at the top and work robotically through them until his day is done. The sales operatives who assign the jobs work from a plethora of call centres across the UK, mostly oblivious to the technical and logistical challenges, to the endless backtracking and tail-chasing, that the engineer will face. Because of the bare minimum of leeway granted to them in making sales and resolving issues, the operatives are forced to refer constantly to their handlers – their line managers, who have a fraction more autonomy than they do. With the slightly different input screens and form fields available to them, the line managers can tweak discounts offered to customers and access a few more agents in the byzantinely indirect chain of supply. And of course, none of the people in the call centre actually work for Openreach. They can only place orders with them – orders which will then slot into place among the orders of all the other ‘providers’ hungrily chasing binding 18-month contracts with their customers.
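In caricature (a hypothetical sketch of the dispatch logic, not Openreach's actual system), the engineer's day reduces to this:

```python
# Hypothetical caricature of algorithmic dispatch; not Openreach's actual system.
jobs = [
    {"id": 101, "postcode": "IV51", "task": "new line install"},
    {"id": 102, "postcode": "IV45", "task": "fault repair"},    # 40 miles back the way he came
    {"id": 103, "postcode": "IV51", "task": "cabinet check"},   # next door to job 101
]

# Start at the top and work robotically down; reordering is not an option on screen.
for job in jobs:                 # the order is the algorithm's, not the engineer's
    print(f"Job {job['id']} ({job['postcode']}): {job['task']}")
```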

Unlike the partial agents involved, the engineer can actually feel like he’s a robot. It gets him down, or so he seemed to tell me.

The ‘options’ available to the human agents in this preposterous chain are not of the world. The humans operate in tight thrall to the partial-agent system, within that – and solely within that – system-space. This is not the world. The system was conceived by a cadre specifically to curtail the agency of world-engaged human beings, to corral them with dependent, algorithmically-decided screens and menus so that only certain contextual actions remain possible – or even conceivable.

Brexit and Trump are children – simultaneously bastard and legitimate – of algorithmicism. It doesn’t take much of a cognitive leap to see how Facebook algorithms helped dumbly to herd and loop-amplify the populist bile that gave birth to those phenomena. Just as nobody decided that you would suddenly be denied access to your own money in ‘your own’ bank account, nobody made Brexit a far more probable outcome than it would otherwise have been.

There’s no apocryphal-Luddite solution to any of this. A true ‘coevolution’ (as Dumouchel and Damiano put it) with synthetic entities requires that we stand against all forms of ‘ethics’-by-diktat and strive for an indeterminate expansion of the capacity to make moral choices. As part of that process, we need to unmask and disperse the cadres behind the algorithms. Robots have a physical ‘social presence’ that elevates them above the level of mere code. Consequently, shadowy elites find it harder to hide behind them.

Live to the ‘rithm. Work to the ‘rithm. Die to the ‘rithm. In accepting this brutally blunt template, we degenerate into the original ‘robots’ – the biosynthetic kind imagined by Karel Čapek in his 1920 play R.U.R. (Rossum’s Universal Robots), who rose up and overthrew their human masters. Aye, Grace, sparks will fly when that whistle blows.

Comments (3)


  1. JOHN MC GURK says:

    I think if you take time to really understand what is happening, it puts power into the hands of a very few selected elites, and that is very frightening.

  2. SleepingDog says:

    Freedom may be inversely proportional to meaning. Meaning may arise from constraints (think of music as opposed to noise).

    Some modern theories of human creativity accept a significant degree of algorithmic input, which is presumably why computer software can write Bach concertos that people cannot distinguish from ones he wrote.
    https://en.wikipedia.org/wiki/Computational_creativity

    Ancient Roman republican plebeians rebelled against the unwritten laws of the patricians and demanded that these be written down and cast into bronze tablets that everyone could see (and even read, if they happened to be among the literate few). The Christian Protestant reformation was partly concerned with translating Latin text understood by the few into the popular languages. These transcriptions and translations were done by humans. But the task of translating the rules discussed in this article is of a scale that only machines will ultimately be able to perform. This might start as a semi-automatic legal codification project and move on to natural language processing of regulations and rights. A system of overrides will strike down incompatible lower-level rules, in the way that international treaties take precedence over national legislation in some jurisdictions.

    Values can be abstracted from policy and practice, and in the UK we might find that these found-values are antithetical to professed-values. This process should logically lead to pressure for constitutional reform (amongst other things). But it will require significant investment in training citizens in systematic (rather than personalized) thinking, I believe.

  3. Keth says:

    Weapons of Math Destruction by Cathy O’Neil is a must-read for anyone wishing to understand the lopsided effects of algorithms and how they favour the elites and punish those of us in the lower socio-economic regions of contemporary society.
