Where is VR going and why you should follow it

Promotional image for Oculus Rift VR headset

On November 2, video game maker Activision Blizzard announced a $5.9 billion purchase of King Digital Entertainment, maker of the mobile app game Candy Crush Saga. Activision Blizzard owns popular titles like Call of Duty, World of Warcraft and Guitar Hero—with tens of millions of copies sold—for play on game consoles and PCs. By comparison, King has more than 500 million worldwide users playing Candy Crush on TVs, computers and (mostly) mobile devices.

While it is not the largest-ever deal in the game industry—the 2008 merger of Activision and Vivendi Games, Blizzard’s parent company, was valued at nearly $19 billion—the purchase shows how much the traditional gaming industry believes that future success will be tied to mobile and social media. Other recent acquisitions indicate how the latest in gaming hardware and software have become strategically important for the largest tech companies:

Major acquisitions of gaming companies by Microsoft, Amazon and Facebook took place in 2014
  • September 2014: Microsoft acquired Mojang for $2.5 billion
    Mojang’s Minecraft game has 10 million users worldwide and an active developer community. The Lego-like Minecraft is popular on both Microsoft’s Xbox game console and Windows desktop and notebook PCs. In making the purchase, Microsoft CEO Satya Nadella said, “Gaming is a top activity spanning devices, from PCs and consoles to tablets and mobile, with billions of hours spent each year.”
  • August 2014: Amazon acquired Twitch for $970 million
    The massive online retailer has offered online video since 2006, and the purchase of Twitch—the online live-streaming game service—adds 45 million users to Amazon’s millions of Prime Video subscribers and Fire TV (stick and set-top box) owners. Amazon’s CEO Jeff Bezos said of the acquisition, “Broadcasting and watching gameplay is a global phenomenon and Twitch has built a platform that brings together tens of millions of people who watch billions of minutes of games each month.”
  • March 2014: Facebook acquired Oculus for $2 billion
    Facebook accounts for approximately 20% of all the time people spend online each day. The Facebook acquisition of Oculus—maker of virtual reality headsets—anticipates that social media will soon include an immersive experience, as opposed to scrolling through rectangular displays on PCs and mobile devices. According to Facebook CEO Mark Zuckerberg, “Mobile is the platform of today, and now we’re also getting ready for the platforms of tomorrow. Oculus has the chance to create the most social platform ever, and change the way we work, play and communicate.”

The integration of gaming companies into the world’s largest software, e-commerce and social media corporations is further proof that media and technology convergence is a powerful force drawing many different industries together. As is clear from the three CEO quotes above, a race is on to see which company can offer a mix of products and services sufficient to dominate the number of hours per day the public spends consuming information, news and entertainment on their devices.

What is VR?

Among the most important current trends is the rapid growth and widespread adoption of virtual reality (VR). Formerly of interest to hobbyists and gaming enthusiasts, VR technologies are now moving into mainstream daily use.

A short definition of VR is a computer-simulated artificial world. More broadly, VR is an immersive multisensory, multimedia experience that duplicates the real world and enables users to interact with the virtual environment and with each other. In the most comprehensive VR environments, the sight, sound, touch and smell of the real world are replicated.

The most commonly used VR technologies today include a stereoscopic headset—which tracks the movement of the viewer’s head in three dimensions—and surround-sound headphones that add a spatial audio experience. Other technologies such as wired gloves and omnidirectional treadmills can provide tactile and force feedback that enhances the recreation of the virtual environment.
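The head tracking described above ultimately reduces to updating a rotation every frame and applying it to the viewer’s line of sight, so the rendered scene counter-rotates against the head. The sketch below is illustrative only: the yaw/pitch/roll axis convention and the function names are assumptions, not any particular headset’s API.

```python
import math

# Illustrative sketch of the math behind 3-D head tracking: build a rotation
# from yaw/pitch/roll angles (in radians) and apply it to the viewer's
# forward direction. The axis convention (yaw about Y, pitch about X,
# roll about Z, applied roll-then-pitch-then-yaw) is an assumption.

def rotation_matrix(yaw, pitch, roll):
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    ry = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]   # yaw
    rx = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]   # pitch
    rz = [[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]]   # roll

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]

    return matmul(ry, matmul(rx, rz))

def rotate(m, v):
    """Apply a 3x3 rotation matrix to a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

# Turning the head 90 degrees about the vertical axis swings the forward
# vector (0, 0, -1) around to (-1, 0, 0) under this convention.
looking = rotate(rotation_matrix(math.pi / 2, 0, 0), [0, 0, -1])
```

Real headsets obtain these angles (typically as quaternions) by fusing gyroscope and accelerometer readings hundreds of times per second before handing the orientation to the renderer.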

The New York Times’ VR promotion included a Google Cardboard viewer that was sent along with the printed newspaper to 1 million subscribers

Recent events have demonstrated that VR use is becoming more practical and accessible to the general public:

  • On October 13, in a partnership between CNN and NextVR, the presidential debate was broadcast in VR as a live stream and stored for later on-demand viewing. The CNN experience made it possible for every viewer to watch the event as though they were present, including the ability to see other people in attendance and observe elements of the debate that were not visible to the TV audience. NextVR and the NBA also employed the same technology to broadcast the October 27 season opener between the Golden State Warriors and New Orleans Pelicans, the first-ever live VR sporting event.
  • On November 5, The New York Times launched a VR news initiative that included the free distribution of Google Cardboard viewers—a folded up cardboard VR headset that holds a smartphone—to 1 million newspaper subscribers. The Times’ innovation required users to download the NYTvr app to their smartphone in order to watch a series of short news films in VR.

Origins of VR

Virtual reality is the product of the convergence of theater, camera, television, science fiction and digital media technologies. The basic ideas of virtual reality go back more than two hundred years and coincide with the desire of artists, performers and educators to recreate scenes and historical events. In the early days this meant painting panoramic views, constructing dioramas and staging theatrical productions where viewers had a 360˚ visual surround experience.

In the late 19th century, hundreds of cycloramas were built—many of them depicting major battles of the Civil War—in which viewers sat at the center of a circular theater as the historical event was recreated around them in sequence. In 1899, a Broadway dramatization of the novel Ben Hur employed live horses galloping straight toward the audience on treadmills while a backdrop revolved in the opposite direction, creating the illusion of high speed. Dust clouds provided additional sensory elements.

Frederic Eugene Ives’ Kromskop viewer, invented at the beginning of the 20th century

Contemporary ideas about virtual reality are associated with the 3-D photography and motion pictures of the early twentieth century. Experimentation with color stereoscopic photography began in the late 1800s, and the first widely distributed 3-D images—taken by Frederic Eugene Ives—were of the 1906 San Francisco earthquake. As with present-day VR, Ives’ images required both a special camera and a viewing device, called the Kromskop, to see the 3-D effect.

1950s-era 3-D View-Master with reels

3-D photography expanded and won popular acceptance beginning in the late 1930s with the launch of Edwin Eugene Mayer’s View-Master. The virtual experience of the View-Master system was enhanced with the addition of sound in 1970. Mayer’s company was eventually purchased by toy maker Mattel and later by Fisher-Price, and the product remained successful until the era of digital photography in the early 2000s.

An illustration of the Teleview system that mounted a viewer containing a rotation mechanism in the armrest of theater seats

Experiments with stereoscopic motion pictures were conducted in the late 1800s. The first practical 3-D movie screening took place in 1922, using the Teleview system of Laurens Hammond (inventor of the Hammond Organ), with a rotating-shutter viewing device attached to the armrest of each theater seat.

Prefiguring the present-day inexpensive VR headset, the so-called “golden era” of 3-D film began in the 1950s and included cardboard 3-D glasses. Moviegoers got their first introduction to 3-D with stereophonic sound in 1953 with the film House of Wax starring Vincent Price. The popular enthusiasm for 3-D was eventually overtaken by the practical difficulties associated with the need to project two separate film reels in perfect synchronization.

1950s 3-D glasses and a movie audience wearing them

Subsequent waves of 3-D movies in the second half of the twentieth century—projected from a single film strip—were eventually displaced by the digital film and audio methods associated with the larger formats and Dolby Digital sound of IMAX, IMAX Dome (Omnimax) and IMAX 3D. Anyone who has experienced the latest in CGI-driven 3-D movies such as Avatar (2009) can attest to the mesmerizing impact of the immersive experience these theater techniques make possible.

Computers and VR

Recent photo of Ivan Sutherland; he invented the first head-mounted display at MIT in 1966

It is widely acknowledged that the theoretical possibility of creating virtual experiences that “convince” all the senses of their “reality” began with the work of Ivan Sutherland at MIT in the 1960s. In 1966, Sutherland invented the first head-mounted display—nicknamed the “Sword of Damocles”—designed to immerse the viewer in a simulated 3-D environment. In a 1965 essay called “The Ultimate Display,” Sutherland wrote about how computers have the ability to construct a “mathematical wonderland” that “should serve as many senses as possible.”

With increases in the performance and memory capacity of computers, along with decreases in the size of microprocessors and display technologies, Sutherland’s vision began to take hold in the 1980s and 1990s. Advances in vector-based CGI software, especially flight simulators created by government researchers for military aircraft and space exploration, brought the term “reality engine” into use. These systems, in turn, spawned notions of complete immersion in “cyberspace,” where sight, sound and touch are dominated by computer-generated sensations.

The term “virtual reality” was popularized during these years by Jaron Lanier and his company VPL Research. With VR products such as the DataGlove, the EyePhone and the AudioSphere, Lanier partnered with game makers at Mattel to create the first virtual experiences available as affordable consumer products, despite their still-limited functionality.

By the end of the first decade of the new millennium, many of the core technologies of present-day VR systems were developed enough to make simulated experiences more convincing and easy to use. Computer animation technologies employed by Hollywood and video game companies pushed the creation of 3-D virtual worlds to new levels of “realness.”

An offshoot of VR, called augmented reality (AR), took advantage of high-resolution camera technologies, allowing virtual objects to appear within the actual environment and enabling users to view and interact with them on desktop and mobile displays. AR solutions became popular with advertisers, offering unique promotional opportunities that capitalized on the ubiquity of smartphones and tablets.
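At its core, making a virtual object appear “within the actual environment” comes down to projecting a 3-D point into the 2-D camera image before drawing it over the live picture. The sketch below uses the standard pinhole-camera model; the focal length and image-center values are illustrative assumptions, not any real device’s parameters.

```python
# Minimal sketch of the projection step at the heart of AR: mapping a
# virtual 3-D point in camera space onto the 2-D camera image so it can
# be drawn over the live picture. Focal length and image center here are
# illustrative; real AR systems calibrate them per camera.

def project(point3d, focal=800.0, cx=320.0, cy=240.0):
    """Project a camera-space point (x, y, z), z > 0, to pixel coordinates."""
    x, y, z = point3d
    return (cx + focal * x / z, cy + focal * y / z)

# A virtual object two meters straight ahead lands at the image center:
print(project((0.0, 0.0, 2.0)))  # (320.0, 240.0)
```

Tracking the camera’s pose and re-running this projection every frame is what keeps the virtual object “pinned” to the real scene as the device moves.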

Expectations

Scene from the 2009 movie Avatar

Aside from news, entertainment and advertising, there are big possibilities opening up for VR in many business disciplines. Some experts expect that VR will impact almost every industry in a manner similar to that of PCs and mobile devices. Entrepreneurs and investors are creating VR companies with the aim of exploiting the promise of the new technology in education, health care, real estate, transportation, tourism, engineering, architecture and corporate communications (to name just a few).

Like consumer-level artificial intelligence (e.g. Apple’s Siri and Amazon’s Echo), present-day virtual reality technologies tend to fall frustratingly short of expectations. However, with the rapid evolution of core technologies—processors, software, video displays, sound, miniaturization and haptic feedback systems—it is conceivable that VR is ripe for a significant leap in the near future.

In many ways, VR is the ultimate product of media convergence as it is the intersection of multiple and seemingly unrelated paths of scientific development. As pointed out by Howard Rheingold in his authoritative 1991 book Virtual Reality, “The convergent nature of VR technology is one reason why it has the potential to develop very quickly from a scientific oddity into a new way of life … there is a significant chance that the deep cultural changes suggested here could happen faster than anyone has predicted.”

Genesis of the GUI

Thirty-five years ago Xerox made an important TV commercial. An office employee arrives at work and sits down at his desk while a voice-over says, “You come into your office, grab a cup of coffee and a Xerox machine presents your morning mail on a screen. … Push a button and the words and images you see on the screen, appear on paper. … Push another button and the information is sent electronically to similar units around the corner or around the world.”

Frame from the Xerox TV commercial in 1979

The speaker goes on, “This is an experimental office system; it’s in use now at the Xerox research center in Palo Alto, California.” Although it was not named, the computer system being shown was called the Xerox Alto and the TV commercial was the first time anyone outside of a few scientists had seen a personal computer. You can watch the TV ad here: http://www.youtube.com/watch?v=M0zgj2p7Ww4

The Alto is today considered among the most important breakthroughs in PC history. This is not only because it was the first computer to integrate the mouse, email, desktop printing and Ethernet networking into a single system; above all, it is because the Alto was the first computer to incorporate the desktop metaphor of “point and click” applications, documents and folders known as the graphical user interface (GUI).

Xerox Alto Office System

The real significance of the GUI achievement was that the Xerox engineers at the Palo Alto Research Center (PARC) made it possible for the computer to be brought out of the science lab and into the office and the home. With the Alto—the hardware was conceptualized by Butler Lampson and designed by Chuck Thacker at PARC in 1972—computing no longer required arcane command line entries or text-based programming skills.

The general public could use the Alto because it was based on easy-to-understand manipulation of graphical icons, windows and other objects on the display. This advance was no accident. Led by Alan Kay, a pioneer of object-oriented programming and an expert in human-computer interaction (HCI), the Alto team set out from the beginning to make a computer that was “as easy to use as a pencil and piece of paper.”

Building on the foundational computer work of Ivan Sutherland (SketchPad) and Douglas Engelbart (oN-Line System), the educational theories of Marvin Minsky and Seymour Papert (Logo) and the media philosophy of Marshall McLuhan, Kay’s team designed an HCI that could be easily learned by children. In fact, much of the PARC team’s research was based on observing students as young as six years old interacting with the Alto as both users and programmers.

Xerox Alto Smalltalk desktop
An example of an Alto graphical user interface

The invention of GUI required two important technical innovations at PARC:

  1. Bitmap computer display: The Alto monitor was oriented vertically instead of horizontally and, with a resolution of 606 by 808 pixels, measured 8 by 10 inches. Its dark pixels on a light gray background emulated a letter-size sheet of white paper. It used a bit-mapped raster scan display, as opposed to the “character generators” of previous monitors, which could render only alphanumeric characters in one size and style—often green letters on a black background. With each dot on the display corresponding to one bit of memory, the Alto monitor was very advanced for its time: it was capable of multiple fonts and could even render black-and-white motion video.
  2. Software that supported graphics: Alan Kay’s team developed the Smalltalk programming language as one of the first object-oriented software environments. They built the first GUI with windows that could be moved and resized and icons that represented different types of objects in the system. Programmers and designers on Kay’s team—especially Dan Ingalls and David C. Smith—developed bitmap graphics software that enabled computer users to click on icons, dialog boxes and drop-down menus on the desktop. These elements became the means of interacting with documents, applications, printers and folders, giving the user immediate feedback on their actions.
Alan Kay, Dan Ingalls and David C. Smith worked on the software programming and graphical user interface elements of the Xerox Alto
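The “one bit of memory per dot” design of the Alto display can be made concrete with a short sketch. The resolution comes from the text above; the class and method names are my own, and the Alto’s actual memory layout and word size differed in detail.

```python
# Illustrative sketch of a 1-bit framebuffer like the Alto's, where each
# pixel maps to a single bit of memory. Resolution is from the article;
# the byte-per-row layout and helper names are illustrative assumptions.

WIDTH, HEIGHT = 606, 808          # Alto display resolution

class BitmapDisplay:
    def __init__(self, width=WIDTH, height=HEIGHT):
        self.width = width
        # One byte holds 8 pixels; round each row up to whole bytes.
        self.row_bytes = (width + 7) // 8
        self.buffer = bytearray(self.row_bytes * height)

    def set_pixel(self, x, y, on=True):
        byte_index = y * self.row_bytes + x // 8
        mask = 0x80 >> (x % 8)    # most-significant bit is the leftmost pixel
        if on:
            self.buffer[byte_index] |= mask
        else:
            self.buffer[byte_index] &= ~mask

    def get_pixel(self, x, y):
        return bool(self.buffer[y * self.row_bytes + x // 8] & (0x80 >> (x % 8)))

display = BitmapDisplay()
display.set_pixel(10, 20)
print(len(display.buffer))  # 61408 bytes (~60 KB) for the whole screen
```

Because every on-screen dot is addressable this way, arbitrary fonts and images are just patterns of bits, which is what freed the Alto from the fixed character grids of earlier terminals.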

The Alto remained an experimental system until the end of the 1970s, with 2,000 units made and used at PARC and by a wider group of research scientists across the country. It is an irony of computer and business history that the commercial product inspired by the Alto—the Xerox 8010 Information System, or Star workstation—was launched in 1981 and did not achieve market success, due in part to its $75,000 starting price ($195,000 today). As a personal computer, the Xerox Star was rapidly eclipsed by the IBM-PC, the very successful MS-DOS-based personal computer launched in 1981 without a GUI at a price of $1,595.

It is well known that Steve Jobs and a group of Apple Computer employees made a fortuitous visit to Xerox PARC in December 1979 and received an inside look at the Alto and its GUI. Upon seeing the Alto’s user interface, Jobs has been quoted as saying, “It was like a veil being lifted from my eyes. I could see the future of what computing was destined to be.”

Much of what Jobs and his team learned at PARC—in exchange for the purchase of 100,000 Apple shares by Xerox—was incorporated into the unsuccessful Apple Lisa computer (1982) and later the popular Macintosh (1984). The Apple engineers also implemented features that further advanced the GUI in ways that the PARC researchers had not thought of or were unable to accomplish. Apple Computer was so successful at implementing a GUI-based personal computer that many of the Xerox engineers left PARC and joined Steve Jobs, including Alan Kay and several of his team members.

In response to both the popularity and ease-of-use superiority of the GUI, Microsoft launched Windows in 1985 for the IBM-PC and PC clone markets. The early Windows interface was plagued with performance issues due in part to the fact that it was running as a second layer of programming on top of MS-DOS. With Windows 95, Microsoft developed perhaps the most successful GUI-based personal computer software up to that point.

First desktops: Xerox Star (1980), Apple Macintosh (1984) and Microsoft Windows (1985)

Already by 1988, the GUI had become such an important aspect of personal computing that Apple filed a lawsuit against Microsoft for copyright infringement. Much of Apple’s case revolved around defending as its property the “look and feel” of the Mac desktop. In the end, the federal courts ruled against Apple in 1994, saying that “patent-like protection for the idea of the graphical user interface, or the idea of the desktop metaphor” was not available. While rejecting most of Apple’s arguments, the court did grant Apple ownership of the trash can icon, whereupon Microsoft adopted the recycle bin instead.

Looking back today, it is remarkable how much of the basic desktop and user experience design developed at Xerox PARC in the 1970s has remained the same over the past four decades. Color and shading have been added to make icons more photographic and folders and windows more dimensional, but the essential user elements, such as visual indicators and scroll bars, have not changed much.

With the advent of mobile (smartphone and tablet) computing, the GUI began to undergo more significant development. With the original iOS on the iPhone and iPod touch, Apple relied heavily on so-called skeuomorphic GUI design, i.e. icons and images that emulate physical objects in the real world, such as the textured bookcase used to display eBooks in the iBooks app.

Comparison of iOS 1 to iOS 7 user interface

Competitors—such as makers of Android-based smartphones and tablets—have largely copied Apple’s mobile GUI approach. Beginning with iOS 7, however, Apple has moved aggressively away from skeuomorphic elements in favor of flatter, less pictorial icons and frames.

Multi-touch and gesture-based technology—along with voice user interface (VUI)—represent practical evolutionary steps in the progress of human-computer interaction. Swipe, pinch and rotate have become just as common for mobile users today as double-click, drag-and-drop and copy-and-paste were for the desktop generation. The same can be said of the haptic experience—tactile feedback such as vibration or rumbling on a controller—of VR and gaming systems that millions of young people are familiar with all over the world.
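The pinch and rotate gestures mentioned above reduce to simple arithmetic on touch coordinates. The sketch below (function names are my own, not any platform’s API) derives a zoom factor and rotation angle from two fingers’ start and end positions.

```python
import math

# Illustrative sketch of the arithmetic behind two-finger multi-touch
# gestures: the pinch scale is the ratio of finger distances, and the
# rotation is the change in the angle of the line between the fingers.

def pinch_and_rotate(p1_start, p2_start, p1_end, p2_end):
    """Each argument is an (x, y) touch point; returns (scale, rotation)."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    scale = dist(p1_end, p2_end) / dist(p1_start, p2_start)
    rotation = angle(p1_end, p2_end) - angle(p1_start, p2_start)
    return scale, rotation

# Fingers that end twice as far apart, with no turn, give a 2x zoom:
s, r = pinch_and_rotate((0, 0), (100, 0), (0, 0), (200, 0))
print(s, r)  # 2.0 0.0
```

Real gesture recognizers add thresholds, smoothing and touch tracking on top of this core calculation, but the geometry is the same.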

It is safe to say that it was the pioneering work of the research group at Xerox PARC that made computing something that everyone can do all the time. They were people with big ideas and big goals. In a 1977 article for Scientific American Alan Kay wrote, “How can communication with computers be enriched to meet the diverse needs of individuals? If the computer is to be truly ‘personal,’ adult and child users must be able to get it to perform useful activities without resorting to the services of an expert. Simple tasks must be simple, and complex ones must be possible.”