
Last November, Amazon revealed its intelligent home assistant, the Echo. The black, cylinder-shaped device is always on and ready for voice commands. It can play music, read audiobooks and connect to Alexa, Amazon’s cloud-based information service. In a human-like voice, Alexa can answer any number of questions about the weather, news, sports scores, traffic and your schedule.
Echo has an array of seven microphones, and it can hear (and also learn) your voice, speech pattern and vocabulary even from across the room. With additional plugins, Echo can control automated home devices such as lights, thermostats, kitchen appliances and security systems with just the sound of your voice. This is certainly a major leap from “Clap on, clap off” (watch “The Clapper” video from the mid-1980s here: https://www.youtube.com/watch?v=Ny8-G8EoWOw).
As many critics have pointed out, the Echo is Amazon’s response to Siri, Apple’s voice-activated intelligent personal assistant and knowledge navigator. Siri was launched as an integrated feature of the iPhone 4S in October 2011 and came to the iPad in 2012. Siri is also now part of the Apple Watch, a wearable device that adds haptics (tactile feedback) and voice recognition, along with a digital crown control knob, to the human-computer interface (HCI).
If you have tried any of these technologies, you know that they are far from perfect. As New York Times reviewer Farhad Manjoo explained, “If Alexa were a human assistant, you’d fire her, if not have her committed.” Using any of the modern artificial intelligence (AI) systems can often be an exercise in futility. However, it is important to recognize how far computer interaction has come since mainframe consoles and command-line interfaces were replaced by the graphical, point-and-click interaction of the desktop.
What is artificial intelligence?

Artificial intelligence is the simulation of the functions of the human brain (such as visual perception, speech recognition, decision-making and translation between languages) by man-made machines, especially computers. The groundwork for the field was laid by the noted computer scientist Alan Turing shortly after World War II, and the term was coined in 1956 by John McCarthy, a cognitive and computer scientist and Stanford University professor. McCarthy developed LISP, one of the first programming languages, in the late 1950s and is recognized as an early proponent of the idea that computing services should be provided as a utility.
McCarthy worked with Marvin Minsky at MIT in the late 1950s and early 1960s, and together they founded the laboratory that grew into what is now known as the MIT Computer Science and Artificial Intelligence Laboratory. Minsky, a leading AI theorist and cognitive scientist, put forward a range of ideas and theories to explain how language, memory, learning and consciousness work.
The core of Minsky’s theory, what he called the society of mind, is that human intelligence is a vast complex of very simple processes that can be individually replicated by computers. In his 1986 book The Society of Mind, Minsky wrote, “What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.”
The theory, science and technology of artificial intelligence have advanced rapidly alongside the development of the microprocessor and the personal computer. These advances have also been aided by a growing understanding of how the human brain functions. In recent decades, the field of neuroscience has vastly expanded our knowledge of the brain’s structures, especially the neocortex and its role in turning sensory perceptions into thought and reasoning.
Ray Kurzweil has been a leading theoretician of AI since the 1980s and has pioneered devices for text-to-speech, speech recognition and optical character recognition, as well as music synthesizers (the Kurzweil K250). He sees AI as a necessary outcome of computer technology, and he has argued in The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999), The Singularity Is Near (2005) and How to Create a Mind (2012) that machine intelligence is a natural extension of the biological capacities of the human mind.
Kurzweil, who corresponded with Marvin Minsky while still a New York City high school student, has postulated that artificial intelligence can solve many of society’s problems. Based on the exponential growth of computing power, processor speed and memory capacity, Kurzweil believes that humanity is rapidly approaching a “singularity” in which machine intelligence will be vastly more powerful than all human intelligence combined. He predicts that machines will match human intelligence by 2029, with the singularity itself arriving around 2045: a moment when developments in computer technology, genetics, nanotechnology, robotics and artificial intelligence will transform the minds and bodies of humans in ways that cannot currently be comprehended.
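The arithmetic behind that timeline is worth making concrete (a back-of-envelope illustration only; the doubling period here is an assumption in the spirit of Moore’s law, not a figure taken from Kurzweil’s books). If computing price-performance doubles roughly every two years, then twenty years brings ten doublings, a factor of 2^10, or about 1,000; forty years brings twenty doublings, a factor of 2^20, or about 1,000,000. It is this relentless compounding, rather than any single breakthrough, that drives such predictions.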
Some fear that the ideas of Kurzweil and his fellow adherents of transhumanism represent an existential threat to society and mankind. These critics, among them the physicist Stephen Hawking and the electric-car and private-spaceflight pioneer Elon Musk, argue that artificial intelligence could become the biggest “blowback” in history, of the kind depicted in Kubrick’s film 2001: A Space Odyssey.
While much of this discussion remains speculative, anyone who watched the IBM supercomputer Watson defeat two very successful Jeopardy! champions (Ken Jennings and Brad Rutter) in 2011 knows that AI has already come a long way. Unlike the human contestants, Watson had at its disposal 200 million pages of structured and unstructured content, including the full text of Wikipedia, stored in four terabytes of memory.
Media and interface obsolescence
Today, the advantages of artificial intelligence are available to great numbers of people in the form of personal assistants like Echo and Siri. Even with their limitations, these tools allow instant access to information almost anywhere and anytime with a series of simple voice commands. When combined with mobile, wearable and cloud computing, AI is making all previous forms of information access and retrieval—analog and digital alike—obsolete.
There was a time, not so long ago, when gathering important information required a trip, pen and paper in hand, to the library or to the family encyclopedia in the den, living room or study. Can you recall the last time you picked up a printed dictionary? The last complete print edition of the Oxford English Dictionary, all 20 volumes of it, appeared in 1989. Anyone born after 1993 is likely never to have used a printed encyclopedia (the last print edition of the Encyclopedia Britannica appeared in 2010). Further still, GPS technology has driven most printed maps into bottom drawers and library archives.

But that is not all. The technological convergence embodied in artificial intelligence is turning even more recent information and communication media into relics of the past. Optical discs have all but disappeared from computers and the TV-viewing experience as cloud storage and time-shifted streaming video have become dominant. Social media (especially photo apps) and instant messaging have likewise made email a legacy form of communication for an entire generation of young people.
Meanwhile, the touch/gesture interface is rapidly replacing the mouse, and with improvements in speech-to-text technology, it is not hard to envision the disappearance of the QWERTY keyboard (a relic of the mechanical limitations of the 19th-century typewriter). Even the desktop computer display is due to be replaced by cameras and projectors that can turn any surface into an interactive workspace.
In his epilogue to How to Create a Mind, Ray Kurzweil writes, “I already consider the devices I use and the cloud computing resources to which they are virtually connected as extensions of myself, and feel less than complete if I am cut off from these brain extenders.” While some degree of skepticism is justified toward Kurzweil’s transhumanist theories as a form of technological utopianism, there is no question that artificial intelligence is a reality and that it will be with us—increasingly integrated into us and as an extension of us—for now and evermore.