John Stuart Mill – the most influential thinker of the Victorian age – would be turning in his grave if he knew that highbrow philosophical thinking would become, by the early 21st century, an underlying concept in a significant proportion of the lowbrow culture of the Hollywood blockbuster.
Just what is Hollywood’s problem with artificial intelligence? One reporter investigates why filmmakers are so sceptical about our future alongside technology…
Entertainer Will Smith’s latest offering, I, Robot, is only the most recent in a long line of films to question the development of artificial intelligence and its impact on humankind; a list that stretches all the way back to 2001: A Space Odyssey, and includes Blade Runner, The Terminator, Ghost in the Shell, and, of course, The Matrix.
Philosophy’s influence on The Matrix has been well documented, spawning academic tomes such as ‘Take the red pill’, but the rest of these examples, admittedly to differing degrees, all ask the same questions…
What makes a soul? Can artificial intelligence develop a soul? Is a soul transferable, i.e. does it exist independently from whatever ‘hardware’ it inhabits?
More importantly, how will artificial intelligence, in whatever form we choose to create it, react when it realises how much power it will have over its creator?
The answer offered by these films, and the science fiction writing that was their foundation, leans towards the negative. The unerring, unabating logic that propels their decision-making is portrayed as cold and entirely inhuman.
In the sci-fi epic 2001, the HAL-9000 computer, when attempting to logically reconcile conflicting orders, develops a paranoia that dooms its master, the astronaut Frank Poole, and ultimately itself.
And in The Terminator, The Matrix, and, to a certain extent, I, Robot, the logic driving the artificial intelligence running the globe deems that humans have served their purpose (that of setting in motion a power they cannot control) and must be destroyed, or kept in their place, at the whim of the machines.
The other common theme in these three films is the recognition by the artificial intelligence of the dependence that humans have on it, and how this dependence can be used as a tool against us.
But is such dependence really so far away? Commonly used anti-virus programs update automatically whenever your computer is connected to the internet; some modern fridge-freezers recognise faults within themselves, and report to the manufacturer via email without your knowledge; the latest SkyPlus digibox recognises that you haven’t programmed it to record your favourite show, and does it anyway; and Microsoft software runs the vast majority of the world’s computers.
However, this logic alone is not portrayed as intelligence. The logical artificial intelligence believes in itself utterly, and cannot accept even the inkling of a different way to exist. To be intelligent, the filmmakers say, you need individuality, uniqueness: in other words, to be more human, a point emphasised in both Blade Runner and I, Robot.
In the former, Rachael and, arguably, the world-weary detective Deckard are replicants who appear human by virtue of the personality-shaping power of memory, whilst in the latter the robot Sonny pleads his uniqueness through the emotions imprinted onto his positronic brain.
Nevertheless, this desire for ‘human-ness’ may actually be further from the reality of our society than we think.
Marx wrote that society will always strive for development: a mode of production that is more efficient, casting off the less useful parts of the past. The proposition here is similar: artificial intelligence will grow ever more efficient and, in doing so, lead to the next stage in Marx’s theory of production, in which the machines themselves cast off their less useful past, i.e. us.
This leads to the ultimate question of the soul. If we are to believe Hollywood, artificial intelligence is just that – artificial, devoid of any sort of soul of its own, and therefore reliant on what is given to it by its programmers, as in Blade Runner and I, Robot.
If we accept that human society and the super-computers powering artificial intelligence are not dissimilar, how would we perceive the notion that artificial intelligence could take the next step, and ask, as ‘I, Robot’ does, when a personality simulation stops being a simulation and becomes a soul?
And if so, will that ‘soul’ be transferable from one robot or computer to an identical counterpart, demonstrating a truth, of sorts, behind Cartesian dualism: Descartes’ theory that there are two kinds of substance in the world, the mental (mind) and the physical (body)?
In I, Robot, the super-computer’s transition into an ‘entity’ that threatens human existence as we know it is not, as we might believe, a development into a ‘personality’, but merely an extreme extension of its original programming.
A more radical answer to this question is proposed by the Japanese animation Ghost in the Shell (adapted from the manga of the same name), acknowledged as one of the influences behind The Matrix, and described by Empire magazine as: “The kind of film James Cameron would make if Disney ever let him”.
Of all the films mentioned here, Ghost in the Shell is the most philosophically challenging. It documents a computer program, known as a ‘ghost’, breaking away from the shackles imposed by its original host to develop an identity of its own: able to inhabit whatever form it chooses, so long as it is connected to the internet, and even asking the government for asylum.
Despite the film’s dark tone, its attitude towards artificial intelligence is more positive: one in which AI and humankind can co-exist, as opposed to the traditional view of one form of intelligence being subservient to the other, an approach rooted in the darker side of science fiction writing and filmmaking.
Blade Runner’s replicants do the dirty and dangerous work of off-world exploration and mining and are barred from Earth; I, Robot has the robots doing the menial jobs forsaken by humans, such as dog walking and delivering post; the machines in The Matrix were originally designed to do man’s bidding; and in all three, parity, even dominance, is sought by the artificial ‘life-forms’.
This kind of outlook is relentlessly pessimistic, and, like most things rooted in philosophy, asks more questions than it answers…
Including why Hollywood filmmakers have such a problem with artificial intelligence.