Could Smart Machines Become Too Smart?

As artificial intelligence technologies advance at an accelerating pace, are there dangers lurking, as some prominent technologists and futurists have recently warned?

Technology convergence…

A variety of technical developments in recent years are converging to make the machines in our lives much smarter than we could ever have imagined. We are now seeing an ever-accelerating advancement of intelligence in “machines”: from the small, powerful CPUs driving our smartphones to a myriad of smart, connected devices across the Internet of Things, advanced robotics, natural language processing (NLP), cognitive learning, smart toys, self-driving / autonomous cars, and supercomputers like IBM Watson, with advanced algorithms capable of quickly completing massive computational tasks like human genome sequencing.

In its recent list of the top 10 strategic tech trends for 2015, Gartner predicts:

Deep analytics applied to an understanding of context provides the preconditions for a world of smart machines. This combines with advanced algorithms that allow systems to understand their environment, learn for themselves, and act autonomously. Prototype autonomous vehicles, advanced robots, virtual personal assistants and smart advisors already exist and will evolve rapidly, ushering in a new age of machine helpers. Gartner believes that the smart machine era will be the most disruptive in the history of IT.

At the intersection of artificial intelligence (AI), cognitive computing, big data, and predictive analytics, the influence of “smart machines” on our lives is rapidly becoming the new normal. As these technologies accelerate, do AI and smart machines portend some level of danger in the future?

Some caution flags…

In the past few months, several of the most influential technologists and futurists on the planet have been sounding alarms about the potential dangers of the most advanced artificial intelligence (AI) development, citing various scenarios that could go horribly wrong for humanity in the future. Following are some examples…

Elon Musk – Late last year, Musk told an MIT symposium: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful with artificial intelligence. I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we’re summoning the demon.” He added that HAL 9000 would be like a puppy dog by comparison, and that governments need to start regulating the development of AI sooner rather than later. More recently, Musk put his money on the line, announcing that he’s donating $10 million to the Future of Life Institute for a research program that will focus on keeping AI “beneficial” to humanity.

Bill Gates – During a Reddit “Ask Me Anything” discussion, Gates said that we should be worried about AI becoming too powerful: “I am in the camp that is concerned about super intelligence… First the machines will do a lot of jobs for us and not be super intelligent, which should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

Stephen Hawking – Hawking recently said in a BBC interview that “the development of full artificial intelligence could spell the end of the human race.” While some forms of AI have been useful, he worries that we won’t be able to keep up with super-intelligent versions that can potentially “outwit” humans.

Ray Kurzweil – Kurzweil commented at an Exponential Finance conference in New York: “My timeline is computers will be at human levels, such as you can have a human relationship with them, 15 years from now. When I say about human levels, I’m talking about emotional intelligence. The ability to tell a joke, to be funny, to be romantic, to be loving, to be sexy, that is the cutting edge of human intelligence, that is not a sideshow.”

Arthur C. Clarke – While not so recent, in 2001 Clarke predicted, among other things (many right, some wrong), that “by 2020, Artificial Intelligence reaches human level. From now on there are two intelligent species on Earth.”

The near-term reality…

The concerns and predictions cited above relate to the areas where the deepest, most advanced AI research and development is taking place. To be clear, these voices are not sounding alarms about the vast array of pragmatic uses of increasing computing intelligence, which are benefiting humankind in so many ways. In the advanced AI realm, some of the top scientists and researchers in the field have recently signed an Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence, which outlines their research principles:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The open letter references a document (which can be viewed here) that outlines in detail the agreed-upon current, near-term, and longer-term research priorities. Signatories to the letter include AI researchers and experts at MIT, Harvard, Stanford, the University of Cambridge, Oxford University, Google/DeepMind, the Association for the Advancement of Artificial Intelligence, IBM Research, MIRI, and many others.

The reality of smart machine “defiance” displayed by HAL 9000 (“I’m sorry, Dave. I’m afraid I can’t do that.”) in the Stanley Kubrick film of Arthur C. Clarke’s 2001: A Space Odyssey is unlikely to be upon us anytime soon. Most research scientists are not worried about any near-term dangers of rapidly advancing, uncontrolled AI research, but clearly others feel that Musk’s, Gates’, Hawking’s, and others’ recent pronouncements are worth noting.

Wrapping up…

At Citrix, we are all about “work better, live better,” and we pursue the application of technology to deliver on this mission. We’re interested in many aspects of the rapid advances in smart machines and believe AI has a critical role to play in helping our customers. For example, we apply predictive analytics and natural language processing to identify hot customer support cases and accelerate their resolution, as the sketch below illustrates. At the Citrix Startup Accelerator, several companies are applying various elements of AI in their innovative solutions: for example, doing object recognition in video, using machine learning to reduce customer churn, and, in the case of Coseer, automating routine and repetitive enterprise decisions with AI/NLP-based technology.
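For readers curious what “identifying hot support cases” with NLP can look like in principle, here is a minimal sketch: a simple text classifier (TF-IDF features plus logistic regression, built with scikit-learn) trained on a handful of made-up tickets and used to score a new case for urgency. The ticket texts, labels, and model choice are illustrative assumptions only, not a description of Citrix’s actual system.

```python
# Hypothetical sketch: flagging "hot" support cases with a text classifier.
# Assumes scikit-learn; the data and threshold below are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = "hot" (urgent, escalation risk), 0 = routine.
tickets = [
    "Production outage, all users unable to log in",
    "Severity 1: data loss after upgrade, need immediate callback",
    "How do I change my notification settings?",
    "Feature request: dark mode for the admin console",
]
labels = [1, 1, 0, 0]

# TF-IDF turns raw ticket text into term-weight vectors; the classifier
# then learns which terms correlate with urgency.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tickets, labels)

# Score an incoming case; probabilities above a chosen threshold could be
# routed to senior engineers for accelerated resolution.
new_case = ["Entire site down after latest patch, all customers affected"]
hot_probability = model.predict_proba(new_case)[0][1]
print(f"P(hot) = {hot_probability:.2f}")  # e.g., escalate if P(hot) > 0.7
```

A real deployment would of course train on thousands of historical tickets and validate the urgency threshold against actual escalation outcomes; the point is simply that hot-case detection reduces to a standard supervised text-classification problem.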

It goes almost without saying that Citrix will do its part to ensure that all aspects of smart machines, and what many call augmented intelligence, are used for good. Clearly we don’t know what’s ahead for highly advanced artificial intelligence, so perhaps it’s best for researchers to proceed with some degree of caution and a defined purpose. While I’m not losing sleep (yet), I take seriously the prospect that rapid AI advancements could prove detrimental to humankind in the distant future… What do you think?