One of the main reasons I love my part-time job so much is that it gives me the opportunity to discuss ideas and topics I rarely have time for otherwise. As I work with students, helping them develop their ideas and hammer out the details of their essays, I often find myself straying from the topic at hand and venturing into new territory. This morning, with nothing more than a notion of what he wanted to talk about, a student got me thinking and taking notes on the real and pressing question of what we mean when we talk about robots.
On the surface, I believe “robot” is usually a catch-all, an umbrella term for any kind of machine capable of operating largely on its own without direct human input. It covers robotic arms purpose-built to perform high-precision tasks quickly and repeatedly (e.g. machine tools that weld automobile components together on an assembly line). It can also describe any of the household tools that make our lives just a little less annoying and dull (e.g. autonomous vacuum cleaners). And it can describe androids, those talking, walking, thinking machines that look like us and seemingly act like us. Is it fair, though, to lump the Roomba in with ASIMO, or IBM’s Watson in with a device putting Chryslers together? To seriously understand our future with these devices and mechanisms, we first have to define our terms. I, for one, suggest that a true robot must be capable of some kind of movement, whether preprogrammed or independent. Without that capacity to move, a robot is simply a “thinking box”, excellent at working through complex calculations or finding planets among the stars, but unable to physically rise up against its human masters.
The fear in many people’s minds, however, is the marriage of the freely moving robot with the artificial intelligence of the “thinking boxes”. Combine the two, the narrative goes, and machine learning meets an endless capacity for action, producing sleepless soldiers bent upon our destruction. This self-aware, planning, conspiring variety of robot, I feel, is many decades away and would require some sort of power source beyond a power cable or battery pack. The reason is simply that we have only just reached the point where automobiles can navigate major highways with relative safety. They are, however, anywhere from 5 to 40 years away from being able to operate fully independently and autonomously. Because of the random nature of life, these vehicles, while certainly able to detect obstacles and respond as needed, are so far unable to react quickly to sudden changes in their environment. Given the massive amount of computing power needed to make even the most basic course correction, fully autonomous vehicles are still fairly far off in the future, to say nothing of the mechanical warriors envisaged in so many films and science-fiction tales.
A second concern many people express is that robots are driven by pure logic, mathematical exactness and precise, efficient processes. We fear the robots because they lack the moral compass and code of ethics we believe distinguish us from the rest of creation. What we fail to appreciate is that our moral compasses and codes of ethics are themselves little more than electrical sparks, shaped and shared over the generations. Robots are driven by the same electrical interactions as humans, so why could robots not be engineered with the seeds of morality and ethics? On the other hand, morality and ethics, whether people like to believe it or not, are learned and flexible, and perhaps not the best thing to program into a machine that can do nearly everything faster and better than we can.
It is, however, this lack of conscience that frightens many people. The argument goes something like this: a device that “thinks” only in machine terms cannot possibly appreciate the mystery, beauty and preciousness of life. This absence of morality might be a problem if we were trying to create mechanical diplomats or android priests, but as tools, robots would not require that level of development, the presence of a “soul”, if you will. Furthermore, all one needs to do is look around and see how far human morality has gotten us. The failures of humankind to use its own “soul” only reinforce the appeal of a mechanism not subject to the same fuzzy reasoning and skewed logic we so often display.
What then is the function of the robot? Is it merely to serve as a tool, performing the tasks we see ourselves as too advanced to address? Is it to remove the human element from dangerous situations and environments so that we can safely exist above the fray? Is it to expand the range of jobs and positions in which humans can be made redundant, and so bring about a truly non-stop world of consumption and production? Are the robots to be friends, enemies or colleagues?