A story in the May/June 2016 issue of MIT Technology Review sums up the profound questions that we should be asking about new AI technology. The call-out for the article says: “Can China reboot its manufacturing industry—and the global economy—by replacing millions of workers with machines?”
Stop for a minute and think carefully about that question. Embedded in it are difficult questions about our beliefs and values, our worldview. Is our ultimate goal material wealth? China can reboot its economy by using more machines, but should it? Is it moral to embark on a path of “. . . replacing millions of workers with machines”?
Thankfully, many thought leaders today are addressing the human and ethical questions that surround these technological trends. But such discussions tend to be drowned out by more pragmatic concerns. The pressure for businesses to use AI and robots, or else die, is intense.
One representative example is worth examining. Cambridge Industries Group (CIG), which operates factories in China, hopes to replace two-thirds of its 3,000 workers with machines by mid-2017. A primary reason for this change, says CEO Gerald Wong, is that robotics, artificial intelligence, and automation are already making manufacturing in Germany, Japan, and the United States cheaper than in China. This means that manufacturing jobs are not likely to come back to the US; they will simply vanish.
“It’s very clear in China: people will either go into automation or they will go out of the manufacturing business,” Wong says. “We’re going to use standard robots at first. But then we’re going to use more advanced ones. More and more, we need to get into more advanced robotics. That can help make a dark factory.”
A dark factory, in case you’re wondering, is the widely used term for a factory without people.
Among the thought leaders taking the human and social impact of these technological changes seriously is David Rotman, editor of MIT Technology Review. In the March/April 2017 issue of the magazine, Rotman concludes an excellent article by saying, “if AI is going to achieve its full economic potential, we’ll need to pay as much attention to the social and employment challenges as we do to the technical ones.”
Rotman’s essay addresses questions about what happens to human beings when they can no longer work. He cites Joel Mokyr, an economic historian at Northwestern University, who acknowledges that the removal of work from people would be personally and socially devastating. “There is no question that in the modern capitalist system your occupation is your identity,” Mokyr says. He knows there will be “pain and humiliation” when robots take jobs from people.
Rotman also addresses the tendency for AI and robotics to erode the middle class, increasing the gap between the wealthy and the poor. Government can help ease this pain, but downward pressure on the middle class will continue regardless of who is in the Oval Office.
Missing from this vital discussion about technology, work, and economics is a serious theological perspective. Underlying all the science, innovation, and compassion for workers are deeper questions about human nature and God’s designs for economies. A theological compass, if given an opportunity, could help guide technological development toward meaningful destinations.
How we develop technology and use it is a choice; it doesn’t just happen accidentally. AI and robotics are expressions of a worldview and a set of deeply held values. A materialistic worldview will produce a materialistic world with technology serving materialistic ends. Is that what we really want?
Author Andrew Schmookler has pressed that very point: “Our economy, with its focus on the material and the mechanical, embodies an approach to human life with the spirit drained out of it.”