Two years ago, I submitted a research proposal to three university English departments. Two years ago, no one wanted to see the idea grow. Do I blame them? No. Do I think it was the most cutting-edge research ever? No. But damn, I have never been able to give up on being OBSESSED with AI technologies, measuring humanity against robots, and Blade Runner ideology and theory.
So, here’s a little snippet from that old grad school application, as a marker of where I was at the time, and where I could have been now if I had been accepted.
What do you think? Time to reapply?
Research Proposal: 2017
“We all know the popular trope in Science Fiction of “killer” robots— take James Cameron’s The Terminator and Isaac Asimov’s I, Robot, for example. Each considers a future where humans are in danger of being rendered extinct by a “race” of artificially intelligent robots intent on destruction. While these narratives have until now been imaginary, it might come as a surprise to learn of an already existing effort by the United Nations to “place an outright ban on the development and utilization of automated weapons, also known as ‘killer robots’” (Futurism 2017). So quickly, it seems, humans are being faced with the realities of a technology they’ve already had decades to imagine.
My interest, then, lies in investigating the complex evolutionary trajectory of our philosophical ruminations in the SF “genre” as they relate to our current and “real” relationship with Artificial Intelligence (AI) technology. In other words, I want to explore how SF (in print and on screen) has affected or anticipated the way in which humans interact with an increasingly automated existence. Have these warnings and fables made us more accepting and ready for change? Or have these stories made us more fearful, wary of the unfamiliar newness of a technological age we can (even now) barely keep up with?
With big questions like these rising to the fore, I feel it is important to create a method of tracking this evolutionary trajectory as we begin to seriously construct a purposeful, real-life relationship with technology, and particularly with Artificial Intelligence. As a point of entry and focus, I will use three temporally separate (but ideologically similar) SF works: Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968), Ridley Scott’s Blade Runner (1982), and Denis Villeneuve’s Blade Runner 2049 (2017). Specifically, by analyzing these works alongside one another – using their common ethics as a control and their release dates as variables – I will track the differences in each narrative (with relation to human/AI interactions) and thus produce a meaningful evolutionary “map” of our moral, ethical, and philosophical considerations as they’ve advanced over the past 50 years within the genre. Hopefully, this will provide a significant grounding point for further theory and practice in this area.”
Header Image Copyright Jessica Barratt // Intellectual Property of Jessica Barratt, 2019