This is the first of two posts about cyborgs: what they are, and how they manifest as the hybrid objects and forms of knowledge that characterise much of place-making today. Is cyborg place-making a reality in-the-making? Or is it all just science-fiction? This post maps the presence of cyborgs in contemporary societies, and begins to discuss how cyborg technologies already shape place-making processes.
Welcome to the cyborg
Cyborgs have multiple origins. The Wikipedia entry reviews many of these, from nineteenth-century literature and science fiction to prosthetic medical applications. In popular culture, famous examples include Darth Vader, Robocop, and Ghost in the Shell, among many others. Cyborgs have well-known cousins, namely “androids” – robots made to look very human-like, even “fleshy” (i.e. something a bit more advanced than your typical Android smartphone). Androids have been popularised as intelligent “droids” in the Star Wars trilogy, and as ruthless killing machines in the Terminator film series starring Arnold Schwarzenegger. Iconic androids are also the main characters in Isaac Asimov’s robot short stories, which explore ambiguous notions of robotic self-awareness, emotion, and self-determination. Androids are not quite cyborgs, though. There is also some contention as to whether cyborgs are “bionic”. The adjective bionic can be defined as “having or denoting an artificial, typically electromechanical, body part or parts”. From a scientific perspective, bionics can also be defined as “the use of a system or design found in nature as a model for designing machines and other artificial systems”. In other words, bionics seems to relate more to biomimicry than to cyborg engineering per se.
Concerning the medical realm, MIT bionics designer Hugh Herr’s TED talk on “NeuroEmbodied Design” demonstrates how bionic prosthetic devices could gradually extend human potential, thereby turning humans into cyborgs. Herr suffered a major climbing accident that led to the amputation of both legs, which has made him a prime user of prosthetic devices. Many other examples of prosthetic devices illustrate that bionic eyes, arms and spines are no longer science fiction. More broadly, extensive research and technological innovation are opening up a world of opportunities in the form of “new converging technologies” emerging from the integration of nanotechnology, biotechnology, information technology, and cognitive science.
Yet technological implants and limb extensions and replacements are not necessary to turn ordinary mortals into cyborgs. As Amber Case argued in 2010, we are all cyborgs already, thanks to the presence of pervasive, interconnected technologies in our daily lives. Not only does technology extend our physical selves; it also directly extends and mediates our mental selves. Think of toddlers playing with iPhones long before they master the power of words, or of the multiple digital selves and identities that we entertain online. Google Home is yet another prime example of a device that directly extends our cognitive functions. In our digitally addicted societies, could you imagine looking for any piece of information without first “googling it”? Recursively, one could imagine one of Isaac Asimov’s robots asking: “Google, what is the meaning of life for a robot?” Not that living cyborg, hyper-digital lives necessarily makes us much happier than in the not-so-distant analogue past, when human contact was at least more necessary to everyday life than it is today. Personal addiction to digital worlds, such as social media and the fetishised digital identities they mediate, impoverishes the lives of many. First-person shooter games may also influence the behaviour of mentally “at-risk” players, although it seems highly unlikely that such games turn adolescents into trigger-happy killers. Digital hyperconnectivity does not necessarily make Robocops out of all of us. But it does make us more cyborg-like.
Technology may sometimes seem to offer almost infinite opportunities for improving human life. This is an important stance of the post-human perspective, which holds that technology bears huge promise to enhance human capacities, and even human existence, in turn making it possible to steer human development and evolution in unprecedented, radical ways. There is a clear risk of overdoing human evolution through technological fixes, however. Eugenics, for example, aims at selecting the most desirable human traits through such means as breeding and genetic engineering. The most ghastly, and yet best-known, example of eugenics was that carried out under the Nazi regime in its quest for the Aryan race. Yet could you imagine what eugenics might offer if super-powered by today’s or tomorrow’s level of technological advancement? In contrast to the 1930s, much of the technology that now governs our societies is deeply networked and interoperable: digital technologies are everywhere and communicate with each other almost seamlessly. Add “thought control” to this technological ubiquity and we can head straight into the brutal world of George Orwell’s 1984, as warned by Daniel Power, an expert on decision-support systems for business. Advanced human engineering, motivated by ruthless power regimes, could also lead to the bio-engineered replicants of the dystopian science-fiction film Blade Runner and its recent sequel, set in a context of ecological collapse. The next frontier in “life-enhancing” technological innovation could include microchip implants that enhance (or control) cognitive processes. A brave new world of sustained, unbridled technological innovation might not be so life-enhancing after all…
More broadly, technology has always been associated with social control, power, and concentrations of capital. The advent of the industrial factory is a major case in point, and partly the theme of Pink Floyd’s song “Welcome to the Machine”. The large-scale deployment of machinery in society went hand in hand with the creation and reproduction of particular sets of cultural identities, which in turn reinforced and reproduced existing distributions of power and class in society. For example, Paul Willis’ Learning to Labour (1977) is a landmark ethnographic account of working-class lads in 1970s Birmingham who embrace working-class identities as a way to express cultural distinction, thereby indirectly contributing to the reproduction of differentiated socio-economic classes in English society. The history of humanity shows that technology has always been instrumental in transforming society, for better or worse, as Yuval Noah Harari argues repeatedly in Sapiens. In all, the notion of the cyborg – that which seamlessly fuses human and technological components – can be observed at different scales and approached from multiple angles.
Cyborgs in the city
Place-making is increasingly a cyborg practice. With their mathematical, symmetrical shapes and sheer linearity, new builds often look to me as if they had been downloaded straight from a parallel, digital world. Between CAAD, GIS, BIM, 3D design and visualisation software, and emerging smart sensor infrastructures, the potential for integrated, interoperable technology is often lauded as enabling the smart city of tomorrow. Augmented Reality and Virtual Reality can be deployed alongside these technologies to explore new urban environments in situ or fully digitally. Complex design software and decision-support platforms make it possible to combine, merge and convert a wide range of data formats across architecture, property management, urban planning, urban design and construction. Check out, for example, the digital opportunities for fully interactive Environmental Impact Statements. Surely such interactive digital impact assessment reports would be more legible and engaging than the boring EIA reports that have been “trumped” by recent, significant budget cuts to the US Environmental Protection Agency?
Increasingly, complex technology provides very advanced tools for urban place-making. At the same time, the smart technological city can only be enabled by a digitally literate citizenry. In the process, learning to master advanced digital tools could almost become self-serving; the machine almost becomes an end in itself rather than a means to an end. Humans are increasingly encouraged to adapt to and speak the language of the machine (e.g. to learn how to code), as Yuval Noah Harari points out in Sapiens. This is evidenced by Obama’s 2013 call for young Americans to learn computer science and coding, the spread of maker-spaces and various urban and digital labs, the proliferation of hackathon events across the world, and the emergence of machine learning as a professional career. When it comes to spatial analyses, a core dimension of effective spatial planning, machines simply cannot be left to their own devices. Interconnected technologies can provide additional support for evidence-based decision-making. However, they sometimes require some serious “ground-truthing”: checking on the ground whether the digital reality actually matches the physical, on-site reality. For example, over-reliance on satellite imagery can give an erroneous picture of actual urbanisation trends, as has been the case in Ho Chi Minh City, with quite severe consequences for poorer communities. Hence the need to combine traditional human observation with remote sensing and other methods of geospatial data collection and visualisation. Even with complex, fancy algorithms, the machine alone can only do so much to improve human lives.
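To make the idea of “ground-truthing” a little more concrete, here is a minimal sketch in Python. The survey points, labels, and agreement threshold are all invented for illustration; a real workflow would sample actual satellite-derived land-cover rasters at surveyed coordinates. The logic is simply: pair each remotely sensed label with what a surveyor observed on site, then measure how often the digital picture matches the physical one.

```python
# Minimal ground-truthing sketch (hypothetical data): compare land-cover
# labels derived from satellite imagery against field observations made
# at the same survey points, and flag where the two diverge.

survey_points = [
    {"id": 1, "satellite": "built-up",   "ground": "built-up"},
    {"id": 2, "satellite": "vegetation", "ground": "informal settlement"},
    {"id": 3, "satellite": "built-up",   "ground": "built-up"},
    {"id": 4, "satellite": "water",      "ground": "water"},
    {"id": 5, "satellite": "vegetation", "ground": "built-up"},
]

def agreement_rate(points):
    """Share of survey points where the satellite label matches the ground observation."""
    matches = sum(1 for p in points if p["satellite"] == p["ground"])
    return matches / len(points)

def disagreements(points):
    """Points where the digital reality diverges from the on-site reality."""
    return [p for p in points if p["satellite"] != p["ground"]]

print(f"Satellite vs ground agreement: {agreement_rate(survey_points):.0%}")
for p in disagreements(survey_points):
    print(f"Point {p['id']}: satellite says {p['satellite']!r}, "
          f"ground shows {p['ground']!r}")
```

Note how, in this toy example, informal settlements misread as “vegetation” are exactly the kind of error that hurts poorer communities: the places most likely to be misclassified from above are often the places least visible to the machine.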
In closing, we may say that, as true cyborgs, we must trust our common sense, even our gut feelings, and abide by the moral values dearest to us, so that we can make the smartest use of the technology available.