AI and the future of information

Amazon Echo intelligent home assistant

Last November, Amazon revealed its intelligent home assistant called Echo. The black cylinder-shaped device is always on and ready for your voice commands. It can play music, read audiobooks and connect to Alexa, Amazon’s cloud-based information service. Alexa can answer any number of questions about the weather, news, sports scores, traffic reports and your schedule in a human-like voice.

Echo has an array of seven microphones and it can hear—and also learn—your voice, speech pattern and vocabulary even from across the room. With additional plugins, Echo can control your automated home devices like lights, thermostat, kitchen appliances, security system and more with just the sound of your voice. This is certainly a major leap from “Clap on, Clap off” (watch “The Clapper” video from the mid-1980s here: https://www.youtube.com/watch?v=Ny8-G8EoWOw).

As many critics have pointed out, the Echo is Amazon’s response to Siri, Apple’s voice-activated intelligent personal assistant and knowledge navigator. Siri launched as an integrated feature of the iPhone 4S in October 2011 and came to the iPad in 2012. Siri is also now part of the Apple Watch, a wearable device that adds haptics—tactile feedback—and voice recognition, along with a digital crown control knob, to the human-computer interface (HCI).

If you have tried to use any of these technologies, you know that they are far from perfect. As the New York Times reviewer Farhad Manjoo explained, “If Alexa were a human assistant, you’d fire her, if not have her committed.” Oftentimes, using any of the modern artificial intelligence (AI) systems can be an exercise in futility. However, it is important to recognize that computer interaction has come a long way since mainframe consoles and command-line interfaces were replaced by the graphical, point-and-click interaction of the desktop.

What is artificial intelligence?

The pioneers of artificial intelligence theory: Alan Turing, John McCarthy, Marvin Minsky and Ray Kurzweil

Artificial intelligence is the simulation of the functions of the human brain—such as visual perception, speech recognition, decision-making, and translation between languages—by man-made machines, especially computers. The field was started by the noted computer scientist Alan Turing shortly after WWII, and the term was coined in 1956 by John McCarthy, a cognitive and computer scientist and Stanford University professor. McCarthy developed LISP, one of the first programming languages, in the late 1950s and is recognized as an early proponent of the idea that computer services should be provided as a utility.

McCarthy worked with Marvin Minsky at MIT in the late 1950s and early 1960s and together they founded what has become known as the MIT Computer Science and Artificial Intelligence Laboratory. Minsky, a leading AI theorist and cognitive scientist, put forward a range of ideas and theories to explain how language, memory, learning and consciousness work.

The core of Minsky’s theory—what he called the society of mind—is that human intelligence is a vast complex of very simple processes that can be individually replicated by computers. In his 1986 book The Society of Mind, Minsky wrote, “What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.”

The theory, science and technology of artificial intelligence have been advancing rapidly with the development of microprocessors and the personal computer. These advances have also been aided by a growing understanding of the functions of the human brain. In recent decades, the field of neuroscience has vastly expanded our knowledge of the parts of the brain, especially the neocortex and its role in the transition from sensory perception to thought and reasoning.

Ray Kurzweil has been a leading theoretician of AI since the 1980s and has pioneered the development of devices for text-to-speech, speech recognition, optical character recognition and music synthesis (the Kurzweil K250). He sees the development of AI as a necessary outcome of computer technology and has argued in a series of books—The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999), The Singularity is Near (2005) and How to Create a Mind (2012)—that it is a natural extension of the biological capacities of the human mind.

Kurzweil, who corresponded with Marvin Minsky as a New York City high school student, has postulated that artificial intelligence can solve many of society’s problems. Kurzweil believes—based on the exponential growth of computing power, processor speed and memory capacity—that humanity is rapidly approaching a “singularity” in which machine intelligence will be infinitely more powerful than all human intelligence combined. He predicts that this transformation will occur in 2029, a moment when developments in computer technology, genetics, nanotechnology, robotics and artificial intelligence will transform the minds and bodies of humans in ways that cannot currently be comprehended.

Some fear that the ideas of Kurzweil and his fellow adherents of transhumanism represent an existential threat to society and mankind. These opponents—among them the physicist Stephen Hawking and the pioneer of electric cars and private spaceflight Elon Musk—argue that artificial intelligence could become the biggest “blow back” in history, as depicted in Kubrick’s film 2001: A Space Odyssey.

While much of this discussion remains speculative, anyone who watched in 2011 as the IBM supercomputer Watson defeated two very successful Jeopardy! champions (Ken Jennings and Brad Rutter) knows that AI has already advanced a long way. Unlike the human contestants, Watson had 200 million pages of structured and unstructured content, including the full text of Wikipedia, stored in four terabytes of memory.

Media and interface obsolescence

Today, the advantages of artificial intelligence are available to great numbers of people in the form of personal assistants like Echo and Siri. Even with their limitations, these tools allow instant access to information almost anywhere and anytime with a series of simple voice commands. When combined with mobile, wearable and cloud computing, AI is making all previous forms of information access and retrieval—analog and digital alike—obsolete.

There was a time not that long ago when gathering important information required a trip—with pen and paper in hand—to the library or to the family encyclopedia in the den, living room or study. Can you think of the last time you picked up a printed dictionary? The last complete edition of the Oxford English Dictionary—all 20 volumes—was printed in 1989. Anyone born after 1993 is likely never to have opened a printed encyclopedia (the last edition of the Encyclopedia Britannica was printed in 2010). Further still, GPS technologies have driven most printed maps into bottom drawers and the library archives.

Instant messaging vs email communications
Among teenagers, instant messaging has overtaken email as the primary form of electronic communications

But that is not all.  The technology convergence embodied in artificial intelligence is making even more recent information and communication media forms relics of the past. Optical discs have all but disappeared from computers and the TV viewing experience as cloud storage and time-shifted streaming video have become dominant. Social media (especially photo apps) and instant messaging have also made email a legacy form of communication for an entire generation of young people.

Meanwhile, the advance of the touch/gesture interface is rapidly replacing the mouse, and, with improvements in speech-to-text technology, it is not hard to envision the disappearance of the QWERTY keyboard (a relic of the mechanical limitations of the 19th-century typewriter). Even the desktop computer display is due for replacement by cameras and projectors that can make any surface an interactive workspace.

In his epilogue to How to Create a Mind, Ray Kurzweil writes, “I already consider the devices I use and the cloud computing resources to which they are virtually connected as extensions of myself, and feel less than complete if I am cut off from these brain extenders.” While some degree of skepticism is justified toward Kurzweil’s transhumanist theories as a form of technological utopianism, there is no question that artificial intelligence is a reality and that it will be with us—increasingly integrated into us and as an extension of us—for now and evermore.

What is CRM and why do you need it?

CRM Logos
CRM solutions (clockwise from top left) Salesforce.com, Microsoft Outlook Business Contact Manager, ACT! and SugarCRM.

I have used CRM software tools for more than ten years. Some of these were single user apps, some were client/server-based and included workgroup collaboration. Others were integrated with corporate-wide ERP systems and linked all departments together. Among the well-known solutions I have used are ACT!, Salesforce.com, SugarCRM and Microsoft Outlook Business Contact Manager.

Each of these has its strengths and weaknesses. Many functions and features are common to them all such as contact management, sales pipeline management, sales forecasting, etc. Each also has unique and distinguishing capabilities. Among the most important technical features of a CRM for me have been:

  • browser access
  • mobile app access
  • staff and management user levels
  • customizable dashboards
  • email client/server synchronization
  • APIs for ERP integration (a minimal sketch follows this list)
  • automated email and text notifications for both staff and customers
  • custom and automatic report generation

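To make the ERP-integration item above concrete, here is a minimal sketch of pushing a contact record exported from an ERP into a CRM over a generic REST endpoint. The URL, field names and token are hypothetical placeholders rather than the API of any specific product; most modern CRM platforms expose a broadly similar JSON-over-HTTPS interface.

    import requests

    # Hypothetical CRM REST endpoint and token (placeholders, not a real product's API).
    CRM_BASE_URL = "https://crm.example.com/api/v1"
    API_TOKEN = "YOUR_API_TOKEN"

    def push_contact(contact: dict) -> str:
        """Create a contact record in the CRM and return the new record's ID."""
        response = requests.post(
            f"{CRM_BASE_URL}/contacts",
            json=contact,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["id"]

    # Example record as it might be exported from the ERP system.
    new_contact = {
        "first_name": "Jane",
        "last_name": "Doe",
        "email": "jane.doe@example.com",
        "account": "Acme Manufacturing",
    }

    if __name__ == "__main__":
        print("Created contact", push_contact(new_contact))

In practice the same pattern runs in the other direction as well, with the ERP pulling or subscribing to CRM updates so that orders, invoices and support tickets stay attached to the same customer record.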
The purpose of this article is to review the evolution and importance of customer relationship management as a business discipline and then explain some key lessons I have learned in my experience with CRM tools over the past decade.

Although it did not always have an acronym or business theory behind it, CRM has been practiced since the dawn of commerce. In short, customer relationship management refers to the methods a business uses when interacting with customers. Although CRM is often associated with marketing, new business development and sales functions, it actually encompasses the end-to-end experience that customers have with an organization.

Therefore, customer relationship management is an important part of every business; how you manage your client relationships—from initial contact to account acquisition and development through delivery of products and services … and beyond—is vital to your future. It stands to reason that companies that are very good at customer relationship management are often among the most successful businesses.

As computers entered business use—especially the PC in the 1980s and the World Wide Web in the 1990s—the phrase customer relationship management and its acronym CRM began to acquire a specific meaning. By the late 1990s, entire schools of business thought had developed around strategies for collecting and handling information and data about customer relations. CRM-specific technology platforms that place the customer at the center of business activity grew up around these theories.

In the first decade of the new century, the warehousing of customer information as well as the availability of demographic data about the population as a whole made it possible for CRM tools to be used for integrated and targeted marketing campaigns for new customer acquisition. Later, the growth of Big Data and cloud computing services moved CRM data out of the IT closet and made it available with software as a service (SaaS) solutions that are very flexible and can be deployed at any time and anywhere.

Most recently, social media has added another layer of information to CRM whereby companies can monitor or “listen” to dialogue between their organization and customers in real time.

CRM software industry growth
Source: Gartner Research

Business software industry experts report that investment in CRM tools has been exploding and shows little sign of slowing down. According to an enterprise software market forecast by Gartner Research in 2013, total spending on CRM systems would surpass ERP spending in 2016 and reach a total of $36 billion by 2017.

Cloud adoption by business functions
Source: Really Simple Systems

The Gartner Research study also showed that by 2014 cloud-based CRM systems would represent 87% of the market, up from 12% in 2008. Meanwhile, in their Cloud Attitudes Survey, Really Simple Systems showed that cloud-based adoption by CRM users is more than double that of all other business functions including accounting, payroll, HR and manufacturing.

Mobile CRM adoption
Source: Gartner Research

Along with the growth of Cloud-based CRM solutions—and also driving it—is mobile technology. According to Gartner Research, mobile CRM adoption experienced the following in 2014:

  • 500% growth in the number of mobile CRM apps, from 200 to 1,200, in mobile app stores
  • 30% increase in the use of tablets by salespeople
  • 35% of businesses moving toward mobile CRM apps

While these trends show that expectations are very high that increased CRM resources and investment will produce improved business results, there are countervailing trends showing that the path forward is far from a straight line. A survey by DiscoverOrg showed that nearly one quarter of all businesses do not have any CRM system. Additionally, one industry study shows that many organizations face setbacks during implementation and some (25-60%) fail to meet ROI targets.

Finally, other research shows that companies that have invested in CRM tools do not take advantage of some 80% of their potential benefits, especially integration and extension throughout the entire organization. All of the above statistics correspond with my own experience. While decision makers and business leaders have expectations that a CRM solution will significantly impact their bottom line, the challenges of implementation can be daunting and bog down the effort quickly.

Therefore, it is critical to have a CRM implementation plan:

  • Develop an integrated CRM strategy that places the customer at the center of all company departments and functions.
  • Map your IT infrastructure and identify all centers of customer data.
  • Evaluate, select and test a technology solution that is appropriate for your organization.
  • Utilize IT resources to build an architecture that will bring all or most of your customer data together within one system.
  • Identify champions in each department and build support and buy-in for the CRM throughout the company.
  • Work on your data quality and make sure that the information that is going into the system at startup does not compromise the project.
  • Provide training and share success stories to encourage everyone to use the system throughout the day.

In our intensely competitive environment, it is clear that CRM tools can enable an organization to effectively respond to multiple, simultaneous and complex customer needs. Every department—marketing, sales, customer service, production, shipping and accounting—has a critical role to play in building the customer database and using the CRM.

The following conclusions are derived from my experience:

  1. Few companies have implemented CRM technologies and even when CRM tools are available, few people embrace and use them.
  2. Those with effective CRM implementations are significantly outperforming the competition on the service and communication side of their business.
  3. The best and most successful companies connect their CRM infrastructure with business strategy and make its use part of their corporate culture.

2013: A big year for Big Data

The year 2013 will be important for a couple of reasons. Believe it or not, 2013 marks the twentieth anniversary of the World Wide Web. It is true that Tim Berners-Lee developed the essential technologies of the web at the CERN laboratory in Switzerland in 1989-90. However, it was the first graphical browser, Mosaic—developed by a team at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign and released in April 1993—that made the web enormously popular.


Marc Andreessen, developer of Mosaic, the first graphical web browser, in 1993.

Without Mosaic, the brainchild of NCSA team member Marc Andreessen, the explosive growth of the web in the 1990s could not have happened. Mosaic brought the web outside the walls of academia and transformed it into something that anyone could use. In June 1993 there were only 130 web sites; two years later there were 230,000 sites. In 2007 there were 121 million web sites; it is estimated that there are now 620 million web sites. Now that qualifies as exponential growth.
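
As a back-of-the-envelope check on that exponential-growth claim, a few lines of Python (my own illustration, using only the counts quoted above) convert those milestones into compound annual growth rates:

    # Rough compound annual growth rate (CAGR) between the web-site counts above.
    milestones = [
        (1993, 130),          # June 1993
        (1995, 230_000),      # two years later
        (2007, 121_000_000),
        (2013, 620_000_000),  # current estimate
    ]

    for (y0, n0), (y1, n1) in zip(milestones, milestones[1:]):
        cagr = (n1 / n0) ** (1 / (y1 - y0)) - 1
        print(f"{y0}-{y1}: roughly {cagr:.0%} growth per year")

Even in the slowest stretch, the number of sites grew by roughly a third every year.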

This brings me to the second reason why this year is important: worldwide digital information will likely surpass 4 zettabytes of data in 2013. This is up from 1.2 zettabytes in 2010. Most of us are familiar with terabytes; a zettabyte is 1 billion terabytes. In between these two are petabytes (1 thousand terabytes) and exabytes (1 million terabytes). 2013 is going to be a big year for Big Data.
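
To keep the units straight, here is a small sketch (my own illustration, using the same decimal prefixes as above) that expresses everything in terabytes:

    # Decimal (SI) storage units, expressed relative to one terabyte.
    TB = 1
    PB = 1_000 * TB          # petabyte  = one thousand terabytes
    EB = 1_000_000 * TB      # exabyte   = one million terabytes
    ZB = 1_000_000_000 * TB  # zettabyte = one billion terabytes

    # The figures quoted above: about 1.2 ZB worldwide in 2010, roughly 4 ZB in 2013.
    print(f"4 ZB = {4 * ZB:,} TB, about {4 / 1.2:.1f}x the 2010 total")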

Companies that grew up in the age of the World Wide Web are experts at Big Data. As of 2009, Google was processing 24 petabytes of data each day to provide contextual responses to web search requests. Wal-Mart records one million consumer transactions per hour and imports them into a database that contains 2.5 petabytes. Facebook stores, accesses and analyzes 30+ petabytes of user-generated data.


The expansion of worldwide Big Data and the metric terms to describe it (yottabytes or 1,000 zettabytes are coming next—beyond that is TBD) has become the subject of much discussion and debate. Big Data is most often discussed in terms of the four V’s: volume, velocity, variety and value.

Volume

The accumulation of Big Data volume is being driven by a number of important technologies. Smartphones, tablets and social media networks such as Facebook, YouTube and Twitter are important Big Data sources. There is another less visible, but nonetheless important, source of Big Data: the “Internet of Things.” This is the collection of sensors, digital cameras and other data-gathering systems (such as RFID tags) attached to a multitude of objects and devices all over the world. These systems are generating enormous amounts of data 24/7/365.

Velocity

The speed of Big Data generation is related to the expansion and increased performance of data networks both wired and wireless. It is also the result of improved capturing technologies. For example, one minute of high definition video generates between 100 and 200 MB of data. This is something that anyone with a smartphone can do and is doing all the time.
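
That figure is easy to sanity-check. Assuming a recording bitrate typical of 1080p smartphone video, somewhere around 13 to 27 megabits per second (an illustrative assumption, since actual bitrates vary by device and codec), one minute works out to roughly 100-200 MB:

    # Back-of-the-envelope: megabytes per minute of HD video at an assumed bitrate.
    def mb_per_minute(bitrate_mbps: float) -> float:
        return bitrate_mbps * 60 / 8  # megabits/second * seconds / (bits per byte)

    for bitrate in (13, 20, 27):
        print(f"{bitrate} Mbit/s -> about {mb_per_minute(bitrate):.0f} MB per minute")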

Variety

The Big Data conversation is more about the quality of the information than about its size and speed. Our world is full of information that lies outside structured datasets. Much of it cannot be captured, stored, managed or analyzed with traditional software tools. This poses many problems for IT professionals and business decision makers: what is the value of information that is largely “exhaust data”?

Value

There are good internal as well as external business reasons for sharing Big Data. Internally, if exhaust data is missed in the analytical process, executives are making decisions based upon intuition rather than evidence. Big Data can also be used externally as a resource for customers that otherwise would be unable to gain real-time access to detailed information about the products and services they are buying. It is the richness and complexity of Big Data that makes it so valuable and useful for both the executive process and customer relationships.

Every organization today is gathering Big Data in the course of its daily activities. In most cases, the bulk of the information is collected in a central EMS or ERP system that connects the different units and functional departments of the organization. But more likely than not, these systems are insufficient and cannot support all data gathering activities within the organization. There are probably systems that have been created ad-hoc to serve various specialized needs and solve problems that the centralized system cannot address. The challenge of Big Data is to capture all ancillary data that is getting “dropped to the floor” and make it useful by integrating it with the primary sources.
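
As a minimal sketch of what integrating that ancillary data with the primary sources can look like (the file names and columns here are hypothetical), an ad hoc departmental export is joined back to the central system’s customer records on a shared key:

    import pandas as pd

    # Hypothetical exports: one from the central ERP/CRM system, one from an
    # ad hoc departmental spreadsheet that never made it into the main system.
    customers = pd.read_csv("erp_customers.csv")        # customer_id, name, region
    site_visits = pd.read_csv("field_visit_notes.csv")  # customer_id, visit_date, notes

    # Join the ancillary data to the primary customer records, keeping every
    # customer even when no visit notes exist for them.
    combined = customers.merge(site_visits, on="customer_id", how="left")

    # The merged table can feed reporting or be loaded back into the central system.
    combined.to_csv("customers_with_visits.csv", index=False)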

Making Big Data available offers organizations the ability to establish a degree of transparency internally and externally that was previously impossible. Sharing enables organization members and customers to respond quickly to rapidly changing conditions and circumstances. Some might argue that sharing Big Data is bad policy because it allows too much of a view “behind the curtain.” But the challenge for managers is to securely collect, store, organize, analyze and share Big Data in a manner that makes it valuable to those who have access and can make use of it.

I remember—upon downloading the Mosaic browser in 1993 over a dial-up connection on my desktop computer—how thrilling it was to browse the web freely for the first time. It seemed like Mosaic was the ultimate information-gathering tool. I also remember how excited I was to get my first 80 MB hard disk drive for data storage. The capacity seemed nearly limitless. As we look back and appreciate the achievements of twenty years ago, we now know that those were really the beginnings of something enormous that we could not have fully predicted at the time.

With the benefit of those experiences—and many more over the past two decades of the transition from analog to online and electronic media—it is important to comprehend as best one can the meaning of Big Data in 2013 and where it is going. Those organizations that recognize the implications and respond decisively to the challenges of the explosive growth of structured and unstructured data will be the ones to establish a competitive advantage in their markets.