TGO & print media in the digital age

The latest book by Joseph W. Webb, Ph.D. and Richard M. Romano

In their latest book “This Point Forward: The New Start the Marketplace Demands,” Dr. Joseph W. Webb and Richard Romano offer the following blunt words for printing company representatives: “There is nothing worse than a bald or gray-haired guy standing in front of a bunch of young executives talking about how exciting print is. You’re not a wise elder statesman. You risk being perceived as an old relic who has no clue.”

Webb and Romano are inveighing against print media romanticism, i.e., nostalgic talk about the love of print: how it smells and feels, that it doesn’t require batteries or tech support, that it doesn’t crash or steal your identity, and so on. They write, “an increasing number of today’s communications and advertising managers do not expect to use print. Why should they? It doesn’t serve their purpose. … Today’s marketing communications managers are highly skilled digital media experts, who are both creative and innovative, and who are fluent in the statistical nature of digital media analytics.”

These are fundamental truths; cultural changes are shaking up the media business and the printing industry is not the only one facing problems. Digital streaming and on-demand have forever disrupted traditional radio and TV broadcast advertising. Any media business that tries to remain some kind of analog island amid the digital ocean is going to be swept under by the next technology or economic tidal wave.

For printing companies, this means morphing from a print-centric to a digital-centric strategy. Understood as one of many choices that media buyers use to achieve their objectives, print can play a valuable and even critical role. For example, targeted and personalized direct mail can be central to a campaign as long as it is integrated with a web, email and social media presence where the results are measurable. In short, the future of print depends on its integration into data-driven analytics; print needs to be tracked, measured and cost-justified, or budgets for it will dry up and dollars will be spent on other, more effective media forms.

Udi Arieli presenting the Theory of Global Optimization

While Webb and Romano give an exhaustive review of the strategic reboot that printers require to be successful through 2020, they spend little time on the operational aspects of this transformation. Fortunately, there is someone in the printing industry who has developed a groundbreaking approach to production that can make print a competitive and attractive option for marketers and advertisers for decades to come. That person is Udi Arieli of EFI® and his approach is the Theory of Global Optimization (TGO).

What is the Theory of Global Optimization?

The Theory of Global Optimization is an approach to operational management that responds to all external and internal challenges facing the printing industry today. Its goals are to:

  • improve performance
  • increase throughput
  • accomplish more with diminishing resources
  • increase profitability

It accomplishes these things not through automated and digital equipment alone (although these are critical assets of the printing company of the future), but as a proactive management philosophy. TGO educates the entire organization against the reactive and narrow thinking that predominated in an era when companies could achieve success with limited or no business theory at all.

The two basic concepts of the Theory of Global Optimization are:

  1. Adopt the Global View
    Printing—as well as other custom or “pull” manufacturing businesses—is a chain of interdependent links. As the complexity of the production process increases and the company grows in size, the need for a global view of the business intensifies. A wider perspective beyond an individual project, client, cost center or machine must guide decision-making on a day-to-day and hour-by-hour basis. The profitability of the business is the result of the sum total of the performance of all jobs, customers, departments and equipment within the company; this is the global view.
  2. Optimize the System
    All areas of the establishment must be synchronized and optimized. The weakest links in the chain—the few constraints within the company that have the most impact on throughput, on-time delivery and costs—must be identified and managed. It is not possible for any individual, no matter how talented, to comprehend the complex interaction of these variables within the operation. Advanced computerized data collection and scheduling software are required to integrate and automate this critical decision-making process (a minimal sketch of the constraint idea follows this list).
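
To make the constraint idea concrete, here is a minimal sketch in Python of how a chain’s throughput is set by its weakest link. The stage names and capacities are hypothetical, for illustration only; this is not EFI software.

    # Minimal sketch: the throughput of a production chain is capped by
    # its slowest stage (the constraint). Stage names and capacities
    # are hypothetical.
    stages = {
        "prepress": 120,   # jobs per day each stage can process
        "press": 80,
        "cutting": 150,
        "binding": 60,
        "shipping": 200,
    }

    # The global view: system throughput is not the sum or the average
    # of the stages, but the minimum -- the weakest link.
    constraint = min(stages, key=stages.get)
    print(f"Constraint: {constraint} ({stages[constraint]} jobs/day)")

Improving any non-constraint stage (a faster press, say) leaves system throughput unchanged; only elevating the constraint raises output for the business as a whole, which is why a global rather than departmental view is required.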

Evolution of manufacturing theory

For years, Udi Arieli has pointed to the relationship of his theory to previous generations of scientific management theory. That the Theory of Global Optimization incorporates the accomplishments of manufacturing theory going back to the beginning of the industrial revolution—and is also the modern-day continuation of those achievements in the digital age—is illustrated by the following historical review:

  • 1801: Eli Whitney / Interchangeable Parts
    Eli Whitney

    Whitney is known for two related contributions to industrialization: the mechanization of farming (invention of the cotton gin in 1793) and, although he did not originate it, the promulgation of interchangeable parts. Whitney’s name is associated with the concept of interchangeable parts—the production of identical components made to specifications such that one part can freely and easily replace another—because he demonstrated the principle by assembling ten guns from a pile of mixed parts before Congress in 1801. In the late 1800s this method became known as the “American system of manufacturing” as it increasingly utilized machine tools and semi-skilled labor to produce parts to specified tolerances instead of relying on the manual labor of skilled craftsmen.

  • 1911: Frederick Taylor / Scientific Management
    Frederick W. Taylor

    At the turn of the twentieth century, Frederick Taylor extended the ideas and methods of the American system of manufacturing by studying labor productivity and bringing advanced planning into the production process. What is now known as “Taylorism” introduced scientific management and process management onto the production floor. Concepts such as “workflow” and “automation” emerged later from Taylor’s breakthrough stopwatch time-and-motion studies and his analysis of the functions and stages of the manufacturing process.

  • 1913: Henry Ford / Assembly Line
    Henry Ford in 1914

    Both interchangeable parts and scientific management methods were employed by Henry Ford in the startup of assembly line production of the Model T on December 1, 1913, in Highland Park, Michigan. Also known as progressive assembly, the breakthrough of the assembly line was described by one of Ford’s top executives and engineers, Charles E. Sorensen, as “the practice of moving the work from one worker to another until it became a complete unit, then arranging the flow of these units at the right time and the right place to a moving final assembly line from which came a finished product.” Although this method had been pioneered by Ransom Olds in 1901, Henry Ford is credited with perfecting and sponsoring it by agreeing to the installation of a motorized conveyor belt that enabled a Model T to be assembled in 93 minutes.

  • 1950: W. Edwards Deming / Process Control
    W. Edwards Deming in 1953

    Following World War II, W. Edwards Deming further advanced scientific management theory by demonstrating that organizational cooperation and learning can improve manufactured product quality and reduce costs; that effective process control requires data gathering and measurement; that every process has a range and causes of variation in quality; and that production workers should participate in continuous improvement initiatives. Deming gained worldwide renown for his pioneering work with the leaders of Japanese industry during what became known as the “Japanese post-war economic miracle” of 1950–1960. At first, Deming’s ideas were largely ignored in the US for a host of cultural reasons. But by the 1980s, he was working directly with Ford Motor Company on a top-to-bottom quality manufacturing initiative, and he would go on to become one of the most sought-after experts on business management.

  • 1984: Eliyahu Goldratt / Theory of Constraints (TOC)
    Eliyahu Goldratt

    In 1984, Eliyahu Goldratt wrote a novel called “The Goal,” which tells the story of Alex Rogo, the manager of a production plant owned by UniCo Manufacturing. Rogo’s dilemma is that the plant is always running behind schedule, and his job is on the line with upper management if he proves incapable of fixing the problems. The book was a clever vehicle for Goldratt to explain his Theory of Constraints. TOC involves the successful management of constraints in the manufacturing process, i.e., focusing on those links in the chain—equipment, people and/or policies—that are preventing the organization from achieving its goal. Much of Goldratt’s TOC approach is derived from Deming’s notion that organizational cooperation and learning are keys to achieving agreed-upon objectives, and that measurement of indicators is required to gauge the impact of continuous improvement decisions.

  • 1984: Udi Arieli / Theory of Global Optimization (TGO)

    Udi Arieli

    In the 1970s, as the third-generation owner/operator of his family printing company in Israel, Udi Arieli realized that printing companies needed two things: a more advanced business theory and smart software tools to manage the complex challenges they faced. In 1984, Arieli founded a company dedicated to developing intelligent production management solutions for the printing industry. While working on software, he established the elements of the Theory of Global Optimization. Arieli saw that the modern printing establishment (like many manufacturing businesses) had multiple interdependent processes—some serial and some parallel—that made manual- or analog-based decision making nearly impossible. Extending Goldratt’s theories, Arieli recognized that managing constraints in this dynamic environment required that production processes be replicated in a computerized scheduling model such that they could be globally synchronized and optimized. TGO is also derived from Deming’s teachings in that it educates and changes the thought process and culture of the entire business organization.

The future of print production management systems

Today TGO is more than a theory; it has become the foundational science on which EFI builds its management solutions. PrintFlow Dynamic Scheduling, for example, was the first software developed by Udi Arieli and his team based on the Theory of Global Optimization. PrintFlow acts as an operational umbrella for the business, gathering information about jobs, delivery commitments, production plans, resource availability and raw materials—generating run lists based on the best plan, not for an individual job, but for the business as a whole.

PrintFlow uses sequencing and optimization algorithms to maximize throughput while, at the same time, offering “what-if” and “weak-link” analytics to address real-world situations in real time. PrintFlow is smart software that works with thousands of pieces of information to deliver a globally optimized plan that evolves with every new job and every new situation a printing business encounters.
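
To illustrate what global sequencing buys over job-by-job thinking, here is a minimal Python sketch. It is not EFI’s actual PrintFlow algorithm; the job names, press times and due dates are hypothetical, and the simple earliest-due-date rule stands in for far more sophisticated optimization.

    # Illustrative sketch only: contrast a first-come run list with a
    # globally sequenced one (earliest due date first) on one press.
    jobs = [  # (name, hours on press, due in hours from now)
        ("brochure", 4, 20),
        ("catalog", 8, 10),
        ("mailer", 2, 6),
        ("poster", 3, 30),
    ]

    def late_jobs(sequence):
        """Count jobs that finish after their due time."""
        clock, late = 0, 0
        for name, hours, due in sequence:
            clock += hours
            if clock > due:
                late += 1
        return late

    print("first-come order:", late_jobs(jobs), "late")  # 2 late
    print("due-date order:", late_jobs(sorted(jobs, key=lambda j: j[2])), "late")  # 0 late

The same four jobs, run in a different order, go from two missed deliveries to none; multiply that across hundreds of jobs, presses and finishing lines and the need for computerized global optimization becomes obvious.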

TGO has evolved to become the foundation of EFI’s Automated Intelligent Workflow, bringing the printing industry to new levels of efficiency and savings. Recognizing the value of the Theory of Global Optimization, EFI continues to invest significant resources into its product suite—Digital StoreFront®, its MIS/ERP solutions, PrintFlow, Fiery®, VUTEk® and Jetrion®—so that they operate according to TGO principles.

It is not accidental that Udi Arieli developed the Theory of Global Optimization as a solution to the problems of the printing industry, one of the largest and most complex custom manufacturing sectors of the economy. The great value of the Theory of Global Optimization is that it provides a framework for printing company executives to make their way out of the analog world of landline phone call status updates and into the digital world of client dashboard apps, automated text communications and email tracking information. By utilizing TGO, the printing firm of today can begin the practical transition to becoming the integrated media supplier of tomorrow.

By employing sophisticated digital operations management tools, print media suppliers can interact with the young advertising and marketing clients that Webb and Romano write about in a manner that fits their lifestyle and habits, i.e., more like their digital media suppliers. If print is going to survive in the digital age, it has to become easier to order, easier to produce, easier to track and easier to cost-justify. Now that is the new start that the marketplace demands.

Is your head in The Cloud or in the sand?

The Cloud is everywhere all the time; it knows who you are, where you are and it is casting its shadow upon you right now. Driven by shifts in technology and culture, The Cloud is part of our personal and professional lives whether we like it or not. If you have a Facebook account, your Timeline is in The Cloud; if you have a Flickr account, your photos are in The Cloud; if you have a Netflix account, the movies you watch are stored in The Cloud; if you have a Dropbox account, your documents are in The Cloud.

The Cloud, or cloud computing, takes many forms. One can think of it as computing delivered as a utility rather than through a piece of electronic hardware, a device or a program that you own. Cloud computing refers to shared computing resources, such as data storage systems or applications, delivered over the Internet.

Popular providers of cloud computing products and services: (clockwise from top left) Apple iCloud, Amazon Cloud Drive, Adobe Creative Cloud, Microsoft SkyDrive, Oracle Cloud Computing and IBM Cloud

In contrast to the personal computing model—where every system has unique copies of software and data and a large local storage volume—cloud computing distributes and replicates these assets on servers across the globe. Historically speaking, The Cloud is a return—in the age of the Internet, apps and social media—to the time-sharing terminal computing model of the 1960s. It maintains computer processes and data functions centrally and enables users to access them from anywhere and at any time.

The phrase “The Cloud” was originally used in the early 1990s as a metaphor to describe the Internet. Beginning in 2000, the technologies of cloud computing began to expand exponentially and since then have become ubiquitous. Solutions like Apple’s .Mac (2002), MobileMe (2008) and finally iCloud (2011) have built public familiarity with cloud computing models. Certainly the ability to access, edit and update your personal digital assets—documents, photos, music, video—from multiple devices is a key feature of The Cloud experience.

The development and proliferation of cloud file sharing (CFS) systems such as Dropbox, Google Drive and Microsoft SkyDrive—offering multiple gigabytes of file storage for free—have also driven mass adoption. Some industry analysts report that there are more than 500 million CFS users today.

Besides benefits for the consumer, cloud-based solutions are being offered by enterprise computing providers such as IBM and Oracle with the promise of significant financial savings associated with shared and distributed resources. In fact, The Cloud has become such an important subject today that every supplier of computer systems—as well as online retailers like Amazon—is hoping to cash in on the opportunity by offering cloud solutions to businesses and consumers.

For those of us in the printing and graphic arts industries, a prototypical example of cloud computing is Adobe’s Creative Cloud. Adopters of Adobe CC are becoming accustomed to monthly software subscription fees, as opposed to a one-time purchase of a serialized copy, as well as to shared storage of their creative graphics content on Adobe’s servers.

Digital Convergence

The concepts of digital convergence were developed and expanded by Ithiel de Sola Pool, Nicholas Negroponte and John Hagel III

In a more general sense, The Cloud is part of the process of digital convergence, i.e. the coming together of all media and communications technologies into a unified whole. The concept of technology convergence was pioneered at MIT by the social scientist Ithiel de Sola Pool in 1983. In his breakthrough book Technologies of Freedom, De Sola Pool postulated that digital electronics would cause the modes of communication—telephone, newspapers, radio, and text—to combine into one “grand system.”

Nicholas Negroponte, founder of the MIT Media Lab, substantially developed the theory of digital convergence in the 1980s and 1990s. Long before the emergence of the World Wide Web, Negroponte was foretelling that digital technologies were causing the “Broadcast and Motion Picture Industry,” the “Computer Industry” and the “Print and Publishing Industry” to overlap with each other and become one. As early as 1978, Negroponte was predicting that this process would reach maturity by the year 2000.

At the center of digital convergence—and the growth and expansion of The Cloud—is the acceleration of electronic technology innovation. John Hagel III of The Center for the Edge at Deloitte has identified the following technological and cultural components that are responsible for this accelerated development.

Infrastructure and Users

The cost/performance trends of core digital technologies are closely associated with Moore’s Law, i.e., the observation that the number of transistors on an affordable integrated circuit doubles approximately every two years. By extension, this law of exponential innovation can also be applied to other digital technologies such as storage devices and Internet bandwidth. In simple terms, what this means is that the quantity of information that can be processed, transmitted and stored per dollar spent is accelerating over time. The development of digital convergence and of cloud computing is entirely dependent upon these electronic technology shifts. The following graphs illustrate this:

The cost of computing power has decreased significantly, from $222 per million transistors in 1992 to $0.06 per million transistors in 2012. The decreasing cost-performance curve enables the computational power at the core of the digital infrastructure.
Similarly, the cost of data storage has decreased considerably, from $569 per gigabyte of storage in 1992 to $0.03 per gigabyte in 2012. The decreasing cost-performance of digital storage enables the creation of more and richer digital information.
The cost of Internet bandwidth has also steadily decreased, from $1,245 per 1000 megabits per second (Mbps) in 1999 to $23 per 1000 Mbps in 2012. The declining cost-performance of bandwidth enables faster collection and transfer of data, facilitating richer connections and interactions.
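
The data points in these captions can be turned into an implied “halving time” for cost, which shows how closely each trend tracks Moore’s Law’s roughly two-year doubling period. A minimal Python sketch of the arithmetic:

    # Implied halving time of cost per unit of capability, computed
    # from the figures cited in the captions above.
    import math

    def halving_time(cost_start, cost_end, years):
        """Years for cost to halve, assuming a constant exponential trend."""
        return years * math.log(2) / math.log(cost_start / cost_end)

    print(f"computing: {halving_time(222, 0.06, 20):.1f} years")   # 1992-2012, ~1.7
    print(f"storage:   {halving_time(569, 0.03, 20):.1f} years")   # 1992-2012, ~1.4
    print(f"bandwidth: {halving_time(1245, 23, 13):.1f} years")    # 1999-2012, ~2.3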

Culture: Installed Base

Tracking closely with the acceleration of computer technology innovation—and also driving it—is the rate of adoption of these technologies by people. Without the social and practical implementation of innovation, digital convergence and The Cloud could not have moved from the laboratory and theoretical possibility into modern reality. Both the number of Internet users and the number of wireless subscriptions are core to the transformations in human activity that are fueling the shift from the era of the personal computer to that of mobile, social media and cloud computing.

Additionally, the use of the Internet continues to increase. From 1990 to 2012, the percent of the US population accessing the Internet at least once a month grew from near 0 percent to 71 percent. Widespread use of the Internet enables more widespread sharing of information and resources.
More and more people are connected via mobile devices. From 1985 to 2012, the number of active wireless subscriptions relative to the US population grew from 0 to 100 percent (reflecting the fact that the same household can have multiple wireless subscriptions). Wireless connectivity is further facilitated by smartphones. Smart devices made up 55 percent of total wireless subscriptions in 2012, compared to only 1 percent in 2001.

Innovation Comparison

The full implications of these changes are hard to comprehend. Some experts point out that previous generations of disruptive technology—electricity, telephone, internal combustion engine, etc.—have, after an initial period of accelerated innovation, been followed by periods of stability and calm. In our time, the cost/performance improvement of digital technologies—and the trajectory of Moore’s Law—shows no sign of slowing down in the foreseeable future.

While it is increasingly difficult to keep up with the demands of this change, we are compelled to do so. The fact that we have been in The Cloud for some time now means that our conceptions and plans must reflect this reality. We cannot attempt to hide from The Cloud in our personal and professional affairs any more than we could have hidden from the personal computer or the smartphone. The key is to embrace The Cloud and find within it new opportunities to become more effective and successful in our daily lives and in our business offerings to customers.

Genesis of the GUI

Thirty-five years ago Xerox made an important TV commercial. An office employee arrives at work and sits down at his desk while a voice-over says, “You come into your office, grab a cup of coffee and a Xerox machine presents your morning mail on a screen. … Push a button and the words and images you see on the screen, appear on paper. … Push another button and the information is sent electronically to similar units around the corner or around the world.”

Frame from the Xerox TV commercial in 1979

The speaker goes on, “This is an experimental office system; it’s in use now at the Xerox research center in Palo Alto, California.” Although it was not named, the computer system being shown was called the Xerox Alto and the TV commercial was the first time anyone outside of a few scientists had seen a personal computer. You can watch the TV ad here: http://www.youtube.com/watch?v=M0zgj2p7Ww4

The Alto is today considered among the most important breakthroughs in PC history. This is not only because it was the first computer to integrate the mouse, email, desktop printing and Ethernet networking into one computer; above all, it is because the Alto was the first computer to incorporate the desktop metaphor of “point and click” applications, documents and folders known as the graphical user interface (GUI).

Xerox Alto Office System

The real significance of the GUI achievement was that the Xerox engineers at the Palo Alto Research Center (PARC) made it possible for the computer to be brought out of the science lab and into the office and the home. With the Alto—the hardware was conceptualized by Butler Lampson and designed by Chuck Thacker at PARC in 1972—computing no longer required arcane command line entries or text-based programming skills.

The general public could use the Alto because it was based on easy-to-understand manipulation of graphical icons, windows and other objects on the display. This advance was no accident. Led by Alan Kay, inventor of object-oriented programming and expert in human-computer interaction (HCI), the Alto team set out from the beginning to make a computer that was “as easy to use as a pencil and piece of paper.”

Building on the foundational computer work of Ivan Sutherland (Sketchpad) and Douglas Engelbart (the oN-Line System), the educational theories of Marvin Minsky and Seymour Papert (Logo) and the media philosophy of Marshall McLuhan, Kay’s team designed an HCI that could be easily learned by children. In fact, much of the PARC team’s research was based on observing students as young as six years old interacting with the Alto as both users and programmers.

An example of an Alto graphical user interface: the Smalltalk desktop

The invention of the GUI required two important technical innovations at PARC:

  1. Bitmap computer display: The Alto monitor was oriented vertically instead of horizontally and, with a resolution of 606 by 808 pixels, measured 8 x 10 inches. It displayed dark pixels on a light gray background, thereby emulating a sheet of letter-size white paper. It used a bit-mapped raster scan as its display method, as opposed to the “character generators” of previous monitors, which could only render alphanumeric characters in one size and style, often as green letters on a black background. With each dot on its display corresponding to one bit of memory, the Alto monitor technology was very advanced for its time (a quick arithmetic check follows below). It was capable of multiple fonts and could even render black-and-white motion video.
  2. Software that supported graphics: Alan Kay’s team developed the Smalltalk programming language as the first object-oriented software environment. They built the first GUI with windows that could be moved around and resized and icons that represented different types of objects in the system. Programmers and designers on Kay’s team—especially Dan Ingalls and David C. Smith—developed bitmap graphics software that enabled computer users to click on icons, dialog boxes and drop-down menus on the desktop. These functions represented the means of interaction with documents, applications, printers and folders, giving the user immediate feedback on their actions.
Alan Kay, Dan Ingalls and David C. Smith worked on the software programming and graphical user interface elements of the Xerox Alto
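
The memory cost of that bit-mapped design is easy to work out from the display numbers above; here is a quick back-of-the-envelope sketch in Python:

    # "Each dot corresponds to one bit of memory": the Alto's
    # 606 x 808 one-bit-per-pixel display.
    width, height = 606, 808            # pixels, per the text above
    bits = width * height               # one bit per pixel
    print(f"{bits:,} bits = {bits / 8 / 1024:.0f} KB of frame buffer")
    # 489,648 bits, roughly 60 KB devoted to the screen alone --
    # one reason bit-mapped displays were so advanced for the 1970s.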

The Alto remained an experimental system until the end of the 1970s, with 2,000 units made and used at PARC and by a wider group of research scientists across the country. It is an irony of computer and business history that the commercial product inspired by the Alto—the Xerox 8010 Information System, or Star workstation—was launched in 1981 and did not achieve market success, due in part to its $75,000 starting price ($195,000 today). As a personal computer, the Xerox Star was rapidly eclipsed by the IBM-PC, the very successful MS-DOS-based personal computer launched in 1981 without a GUI at a price of $1,595.

It is well known that Steve Jobs and a group of Apple Computer employees made a fortuitous visit to Xerox PARC in December 1979 and received an inside look at the Alto and its GUI. Upon seeing the Alto’s user interface, Jobs has been quoted as saying, “It was like a veil being lifted from my eyes. I could see the future of what computing was destined to be.”

Much of what Jobs and his team learned at PARC—in exchange for the purchase of 100,000 Apple shares by Xerox—was incorporated into the unsuccessful Apple Lisa computer (1982) and later the popular Macintosh (1984). The Apple engineers also implemented features that further advanced the GUI in ways that the PARC researchers had not thought of or were unable to accomplish. Apple Computer was so successful at implementing a GUI-based personal computer that many of the Xerox engineers left PARC and joined Steve Jobs, including Alan Kay and several of his team members.

In response to both the popularity and ease-of-use superiority of the GUI, Microsoft launched Windows in 1985 for the IBM-PC and PC clone markets. The early Windows interface was plagued with performance issues due in part to the fact that it was running as a second layer of programming on top of MS-DOS. With Windows 95, Microsoft developed perhaps the most successful GUI-based personal computer software up to that point.

First desktops: Xerox Star (1981), Apple Macintosh (1984) and Microsoft Windows (1985)

By 1988, the GUI had become such an important aspect of personal computing that Apple filed a lawsuit against Microsoft for copyright infringement. In the end, the federal courts ruled against Apple in 1994, saying that “patent-like protection for the idea of the graphical user interface, or the idea of the desktop metaphor” was not available. Much of Apple’s case revolved around defending as its property something called the “look and feel” of the Mac desktop. While rejecting most of Apple’s arguments, the court did grant ownership of the trashcan icon, whereupon Microsoft began using a recycle bin instead.

When looking back today, it is remarkable how the basic desktop and user experience design that was developed at Xerox PARC in the 1970s has remained the same over the past four decades. Color and shading have been added to make the icons more photographic and the folders and windows more dimensional. However, the essential user elements, visual indicators, scroll bars, etc. have not changed much.

With the advent of mobile (smartphone and tablet) computing, the GUI began to undergo more significant development. With the original iOS on the iPhone and iPod touch, Apple relied heavily upon so-called skeuomorphic GUI design, i.e., icons and images that emulate physical objects in the real world, such as a textured bookcase displaying eBooks in the iBooks app.

Comparison of iOS 1 to iOS 7 user interface

Competitors—such as those with Android-based smartphones and tablets—have largely copied Apple’s mobile GUI approach. Beginning with iOS 7, however, Apple has moved aggressively away from skeuomorphic elements in favor of flatter, less pictorial icons and frames.

Multi-touch and gesture-based technology—along with voice user interface (VUI)—represent practical evolutionary steps in the progress of human-computer interaction. Swipe, pinch and rotate have become just as common for mobile users today as double-click, drag-and-drop and copy-and-paste were for the desktop generation. The same can be said of the haptic experience—tactile feedback such as vibration or rumbling on a controller—of VR and gaming systems that millions of young people are familiar with all over the world.

It is safe to say that it was the pioneering work of the research group at Xerox PARC that made computing something that everyone can do all the time. They were people with big ideas and big goals. In a 1977 article for Scientific American Alan Kay wrote, “How can communication with computers be enriched to meet the diverse needs of individuals? If the computer is to be truly ‘personal,’ adult and child users must be able to get it to perform useful activities without resorting to the services of an expert. Simple tasks must be simple, and complex ones must be possible.”