As If By Chance: Part VII

Sketches of Disruptive Continuity in the Age of Print from Johannes Gutenberg to Steve Jobs

Johannes Gutenberg and Steve Jobs

In reviewing nearly six centuries of print technology—through the lives and inventions of significant industry innovators—it becomes clear that the invention of printing by Johannes Gutenberg on one side, and the breakthrough of desktop publishing by Steve Jobs on the other, are bookends of the age of print. While it has long been acknowledged that the hand-held type mold and printing press are the alpha of the age of manufactured ink-on-paper forms such as books, newspapers and magazines, the view that desktop publishing is the omega of this age is not widely held. When viewed within the framework of disruptive continuity, however, the innovations of Gutenberg and Jobs manifest similar attributes, both in their dramatic departure from previous methods and in their connection to the multilayered processes of cultural change across the whole of society in the fifteenth and twentieth centuries.

The advent in 1985 of desktop publishing—a term coined by Paul Brainerd, the founder of Aldus Corporation, maker of PageMaker—is associated with Steven P. Jobs because he contributed to its conceptualization, articulated its historical significance and made it a reality. With the support of publishing industry consultant John Seybold, Jobs integrated the technologies and brought together the people that represented the elements of desktop publishing: a personal computer (the Apple Macintosh), page layout software (Aldus PageMaker), a page description language (Adobe PostScript) and a digital laser printing engine (Canon LBP-CX). He demonstrated the integration of these technologies to the world for the first time at the Apple Computer annual stockholders meeting on January 23, 1985, in Cupertino, California, a truly historic moment in the development of graphic communications.

It is a fact that the basic components of desktop publishing had already been developed in the laboratory at the Xerox Palo Alto Research Center (PARC) by the late 1970s. However, due to a series of issues related to timing, cost and the corporate culture at Xerox, the remarkable achievements at PARC—which Steve Jobs saw during a visit to the lab in 1979 and which inspired his subsequent development of the Macintosh computer, released in 1984—never saw the light of day as commercial products. As is often the case in the history of technology, one innovator may be the first to theorize about a breakthrough, or even build a prototype, but never fully develop it, while another innovator creates a practical and functioning product based on a similar concept and it becomes the wave of the future. This was certainly the case with desktop publishing: many of the elements that Jobs would later integrate at the Cupertino demo in 1985—the graphical user interface (GUI), the laser printer, desktop software integrating graphics and text, what-you-see-is-what-you-get (WYSIWYG) printing—were functioning in experimental form at PARC at least six years earlier.

What became known as the desktop publishing revolution was just that. It was a transformative departure from the previous photomechanical stage of printing technology, on a par with the break that Gutenberg’s invention of mechanized metal type production made from the handwork of scribes. Desktop publishing brought to a close the era of phototypesetting that began in the 1950s. It also eventually displaced the proprietary computerized prepress systems, associated with Scitex, that had emerged in the late 1970s. Furthermore, and just as significant, desktop publishing pushed the assembly of text and graphic content beyond the limits of ink-on-paper and into the realm of electronic and digital media. Thus, desktop publishing accomplished several things simultaneously: (1) it accelerated the production of print media by integrating content creation—including the design of text and graphics in a single electronic document on a personal computer—with press manufacturing processes; (2) it expanded the democratization of print by enabling anyone with a personal computer and laser printer to produce printed material starting with a copy of one; (3) it created the basis for the mass personalization of print media; and (4) it laid the foundation for the expansion of a multiplicity of digital media forms within a decade, including electronic publishing in the form of the Portable Document Format (PDF), e-books and interactive media, and, ultimately, contributed to the global expansion and domination of the Internet and the World Wide Web.

Just as Gutenberg attempted to replicate in mechanized form the handwriting of the scriptoria, the initial transition to digital and electronic media through desktop publishing carried over the various formats of print, i.e., books, magazines, newspapers, journals, etc., into digital files stored on magnetic and optical systems such as floppy disks, hard drives and compact discs. However, the expansion of electronic media—no longer dimensionally restricted by page size or page count, but limited instead by data storage capacity and the bandwidth of processing and display systems—brought the phenomenon of hyperlinks and drove entirely new communications platforms for publishing text, graphics, photographs, audio and video, eventually on mobile wireless devices. Through websites, blogs, streaming content and social media, every individual can record and share their life story, become a reporter and publisher, or participate in, comment on and influence events anywhere in the world.

The groundbreaking significance of desktop publishing, which straddled both the previous age of printing and the new age of digital media, can be further illustrated by taking the above description by Will Durant of the impact of Gutenberg’s invention and substituting the new media for printing, along with the corresponding contemporary elements of social, intellectual and political life for those the historian, writing in the 1950s, identified in the fifteenth century:

To describe all the effects of desktop publishing and electronic media would be to chronicle well more than half the history of the modern mind. … It replaced all informational print by republishing it online as text or in more complex graphical formats like PDF, with methods for managing versions and protecting authenticity so that scholars and researchers in diverse countries may work with one another through video streaming and virtual reality tools, allowing entry of new information and data to be gathered, published and shared in real time as though they were sitting in the same room. … Online electronic media made available to the public all of the world’s manuals and procedural instructions; with the development of international collaborative projects such as Wikipedia, it became the greatest tool for learning that has ever existed, at no charge and available to all. It did not produce Modernism or the information age, but it further paved the way for a new stage of human society that had been promised by the American and French revolutions, based on democracy and where genuine equality exists as a fundamental right for everyone. It made the entire library of literature, music, fine and industrial arts, architecture, theater, athletic competition and cinema instantly available anywhere and at any time in the palm of the hand and prepared the people for an understanding of the role of mythology, mysticism and superstition in history by demonstrating the application of a scientific and materialist outlook in everyday life. It ended the monopoly of news by corporate and state publishers and the control of learning by educational institutions managed by the prevailing ruling classes. It encouraged the streaming of live video by anyone to the entire world’s audience of mobile device owners that could never have been reached through printed media. 
It facilitated global communication and cooperation of scientists and enabled the launching of the international space station and the sending of multiple probes to the surface of Mars and beyond. It affected the quality and character of all published literature and information by subjecting authors and journalists to the purse and taste of billions of regular working people in both the advanced and lesser developed countries rather than to just the middle and upper classes. And, after speech and print, desktop publishing, online and social media provided a readier instrument for the dissemination of nonsense and disinformation than the world has ever known. 

Up to the present, the new media has not displaced print the way print eventually replaced the scribes. It is likely that printing on paper will continue well into the future, much as the ancient art of pen-and-ink calligraphy has persisted alongside print for centuries after the last scriptorium was shut down. Meanwhile, electronic media such as e-books have contributed to a resurgence of printed books: after the initial fascination with devices such as Amazon’s Kindle, the public’s thirst for printed books has increased, particularly following the onset of the coronavirus pandemic. While the forced separation of people from one another has driven up the use of digital tools such as online video meetings, events and gatherings, the solitary pleasure of reading a printed book has also surged again.

Considerable effort has been made to replicate the experience of reading print media in electronic form. E-paper—a simulation of the look and feel of ink on paper pioneered at Xerox PARC in the 1970s by Nick Sheridon with the Gyricon display—is an attempt to adapt two-dimensional digital display technologies to mimic the reading experience of the printed page. Nevertheless, studies have shown that paper-based books yield superior reading retention compared with e-books. This is not so much because of the appearance of the printed page and its impact on visual perception as it is the tactile experience and spatial awareness that come with turning physical pages and navigating a volume that contains a table of contents and an index.

In 2012, during a presentation at the DRUPA International Printing and Paper Expo in Düsseldorf, Germany, Benny Landa, the pioneer of digital printing who developed the Indigo Press in 1993, said the following:

I bet there is not one person in this hall that believes that two hundred years from now man will communicate by smearing pigment onto crushed trees. The question on everyone’s mind is when will printed media be replaced by digital media. … It will take many decades before printed media is replaced by whatever it will be … many decades is way over the horizon for us and our children.

Since Landa’s talk at DRUPA was part of the introduction of a new press based on a digital printing method he called nanography, he was emphasizing that we need to live and work in the here and now and not get too far ahead of ourselves. Landa’s nanographic press is built on advanced imaging technology that transfers to almost any printing surface a film of ink pigment that is orders of magnitude thinner than that produced by offset or other digital presses. By eliminating the water of the inkjet process, the fusing of toner to paper in xerography and the petroleum-based vehicles that carry pigment in traditional offset presses, nanography dramatically reduces the cost of reproduction: ultra-fine droplets of pure pigment (nanoink) are transferred first to a blanket and then to the substrate. The aim of nanography is to keep paper-based media economically viable by providing a variable-imaging digital press that can compete with the costs of offset lithography and accommodate the needs of the hybrid digital and analog commercial printing marketplace.

While print volumes are in decline, society is not yet ready to make a full transition to electronic media and move entirely away from paper communications. This is a serious dilemma facing those working in the printing industry who are trying to navigate the difficulties of maintaining a viable business in an environment where print remains in demand—in some segments it is growing—but overall, it is a shrinking percentage of economic activity. With greater numbers of people and resources being redirected to communications and marketing products in the more promising and profitable big tech and social media sectors, the printing industry is being starved of talent and economic resources.

Rather than trying to put a date on the moment of transition to a post-printing and fully digital age of communications, it is more relevant to ask how that transition will be accomplished. Landa had it right when he said that today most people believe that two hundred years from now, man will no longer communicate by “smearing pigment onto crushed trees.” When the character of print media is put in these terms, the historical distance of this analog form of communications from the long-term potential of the present digital age becomes clearer. Still, no clear vision or roadmap has yet been articulated for what is required for civilization to elevate itself beyond the age of print.

It is difficult to discuss the complete progression of human communication methods from Gutenberg to Jobs without reference to the work of the Canadian media theorist Marshall McLuhan. Although McLuhan’s presentation lacked a coherent perspective and tended to drift about in what he called the “mosaic approach,” he made numerous prescient observations about the forms of media and the evolution of communications technology. Sharing elements of the theory of disruptive continuity, McLuhan focused on the reciprocal interaction of the modes of communication—spoken, printed and electronic—with the broader economic, cultural and ideological transformations in world history. He emphasized the way these transitions each fundamentally altered man’s consciousness and self-image. He also recognized a present-day “clash” between what he called the culture of the “electric age” and that of the age of print. During an interview with the British Broadcasting Corporation in 1965, McLuhan explained how he saw technology as an extension of man’s natural capabilities:

If the wheel is an extension of feet, and tools of hands and arms, then electromagnetism seems to be in its technological manifestations an extension of our nerves and becomes mainly an information system. It is above all a feedback or looped system. But the peculiarity, you see, after the age of the wheel, you suddenly encounter the age of the circuit. The wheel pushed to an extreme suddenly acquires opposite characteristics. This seems to happen with a good many technologies— that if they get pushed to a very distant point, they reverse their characteristics.

McLuhan’s most significant contributions are found in his 1962 work, The Gutenberg Galaxy: The Making of Typographic Man, in which he discusses the reliance of primitive oral culture upon auditory perception and the elevation of vision above hearing in the culture of print. His study, he wrote, “is intended to trace the ways in which the forms of experience and of mental outlook and expression have been modified, first by the phonetic alphabet and then by printing.” For McLuhan, the transformations from spoken-word culture to typography and from typography to the electronic age extended beyond the mental organization of experience. In the Preface to The Gutenberg Galaxy, McLuhan summarized how he saw the interactive relationship of media forms with the whole social environment:

Any technology tends to create a new human environment. Script and papyrus created the social environment we think of in connection with the empires of the ancient world. … Technological environments are not merely passive containers of people but are active processes that reshape people and other technologies alike. In our time the sudden shift from the mechanical technology of the wheel to the technology of electric circuitry represents one of the major shifts of all historical time. Printing from movable types created a quite unexpected new environment—it created the public. Manuscript technology did not have the intensity or power of extension necessary to create publics on a national scale. What we have called “nations” in recent centuries did not, and could not, precede the advent of Gutenberg technology any more than they can survive the advent of electric circuitry with its power of totally involving all people in all other people.

As early as 1962—seven years before the creation of the Internet and nearly three decades before the birth of the World Wide Web—McLuhan anticipated the historical, far-reaching and revolutionary implications of the information and electronic age on the global organization of society. Although he eschewed determinism in any form, McLuhan pointed to the potential for electronic media to drive mankind beyond the national particularism which is rooted in the technical, socio-historical and scientific eras connected with the age of print. McLuhan later used the phrase “global village” to describe his vision of a higher form of non-national organization driven by the methods of human interaction that were brought on by “the advent of electric circuitry” and “totally involve all people in all other people.” For McLuhan, the transformation from the typographic and mechanical age to the electric age began with the telegraph in the 1830s. The new media created by the properties of electricity were expanded considerably with telephone, radio, television and the computer in the nineteenth and twentieth centuries. McLuhan also wrote that the electronic media transformation revived oral culture and displaced the individualism and fragmentation of print culture with a “collective identity.”

McLuhan’s examination of the historical clash of electronic media with the social environment of print culture, and his prediction that a new collective human identity will be established through the transition to a global structure beyond the present fragmented national identities, is highly significant. It points to the societal transformations that will be required for electronic media to overcome print media thoroughly, as a completed historical process. Much as Gutenberg’s invention spread across Europe and the world and planted the seeds of foundational transformation—in technology, politics and science—that developed over the next three and a half centuries, we are today in the incubator of a new global transformation driven by electronic media. Understood in this historically dynamic way, the worldwide spread of smartphones and social media to billions of people, despite national barriers placed upon the exchange of information as well as differences of language and ethnicity, is transforming humanity through the emergence of a new homogeneous global culture. For this development to achieve its full potential, the social organization of man must be brought into alignment with it, and there is no reason to believe that this adjustment from nations to a higher form of organization will take place with any less discontinuity than the period of world history that began with the rapid development of printing technology, the Enlightenment and the American and French Revolutions.

There are scientists and futurists who either proselytize or warn about the coming of the technological singularity, i.e., the moment in history when electronic media convergence and artificial intelligence will completely overtake the native capacities of humanity. The argument goes that these extensions of man will become irreversible, and civilization will be transformed in unanticipated ways toward either a utopian or dystopian future, depending on whether one supports or opposes the promises of the singularity. The twentieth-century philosophical and intellectual movement known as transhumanism promotes the idea that the human condition will be dramatically improved through advanced technologies and cognitive enhancements. The dystopian opponents of transhumanist utopianism argue that technological advancements such as artificial intelligence should not be permitted to supplant the natural powers of the human mind on the grounds that they are morally compromising and pose an existential threat to society. Among these competing views, however, is the shared notion that the coming transformation of mankind will take place without a fundamental change in the social environment. Both the supporters and opponents of transhumanism envision that the extensions of man will evolve independently of any realignment of the economic or cultural foundations of society.

However, it is not possible to prognosticate about the future of communications technology without an understanding that the tendencies present in embryonic form nearly six centuries ago—particularly the democratization of information and knowledge, which has been vastly expanded in our time—bring with them powerful impulses for broad and fundamental societal change. In a world where every individual has the potential to communicate as both publisher and consumer of information with everyone else on the planet—regardless of geographic location, ethnicity, language or national origin—it appears entirely possible, and necessary, that new and higher forms of social organization must be achieved before this new media can carve a path to a truly post-printing age of mankind. While the existential threats are real, they do not come from the technology itself. The danger arises from the clash of the existing social structures against the expanding global integration of humanity. We have every reason to be optimistic about taking this next giant step into the future.

Concluded

Where is VR going and why you should follow it

Promotional image for Oculus Rift VR headset

On November 2, video game maker Activision Blizzard Entertainment announced a $5.9 billion purchase of King Digital Entertainment, maker of the mobile app game Candy Crush Saga. Activision Blizzard owns popular titles like Call of Duty, World of Warcraft and Guitar Hero—with tens of millions sold—for play on game consoles and PCs. By comparison, King has more than 500 million worldwide users playing Candy Crush on TVs, computers and (mostly) mobile devices.

While it is not the largest-ever deal involving a game company—the 2008 merger of Activision with Vivendi Games, Blizzard’s parent, was valued at nearly $19 billion—the purchase shows how much the traditional gaming industry believes that future success will be tied to mobile and social media. Other recent acquisitions indicate how the latest in gaming hardware and software have become strategically important for the largest tech companies:

Major acquisitions of gaming companies by Microsoft, Amazon and Facebook took place in 2014

  • September 2014: Microsoft acquired Mojang for $2.5 billion
    Mojang’s Minecraft game has 10 million users worldwide and an active developer community. The Lego-like Minecraft is popular on both Microsoft’s Xbox game console and Windows desktop and notebook PCs. In making the purchase, Microsoft CEO Satya Nadella said, “Gaming is a top activity spanning devices, from PCs and consoles to tablets and mobile, with billions of hours spent each year.”
  • August 2014: Amazon acquired Twitch for $970 million
    The massive online retailer has offered online video since 2006 and the purchase of Twitch—the online and live streaming game service—adds 45 million users to Amazon’s millions of Prime Video subscribers and FireTV (stick and set top box) owners. Amazon’s CEO Jeff Bezos said of the acquisition, “Broadcasting and watching gameplay is a global phenomenon and Twitch has built a platform that brings together tens of millions of people who watch billions of minutes of games each month.”
  • March 2014: Facebook acquired Oculus for $2 billion
    Facebook users account for approximately 20% of all the time that people spend online each day. The Facebook acquisition of Oculus—maker of virtual reality headsets—anticipates that social media will soon include an immersive experience as opposed to scrolling through rectangular displays on PCs and mobile devices. According to Facebook CEO Mark Zuckerberg, “Mobile is the platform of today, and now we’re also getting ready for the platforms of tomorrow. Oculus has the chance to create the most social platform ever, and change the way we work, play and communicate.”

The integration of gaming companies into the world’s largest software, e-commerce and social media corporations is further proof that media and technology convergence is a powerful force drawing many different industries together. As is clear from the three CEO quotes above, a race is on to see which company can offer a mix of products and services sufficient to dominate the number of hours per day the public spends consuming information, news and entertainment on their devices.

What is VR?

Among the most important current trends is the rapid growth and widespread adoption of virtual reality (VR). Formerly of interest to hobbyists and gaming enthusiasts, VR technologies are now moving into mainstream daily use.

A short definition of VR is a computer-simulated artificial world. More broadly, VR is an immersive multisensory, multimedia experience that duplicates the real world and enables users to interact with the virtual environment and with each other. In the most comprehensive VR environments, the sight, sound, touch and smell of the real world are replicated.

Current and most commonly used VR technologies include a stereoscopic headset—which tracks the movement of a viewer’s head in 3 dimensions—and surround sound headphones that add a spatial audio experience. Other technologies such as wired gloves and omnidirectional treadmills can provide tactile and force feedback that enhance the recreation of the virtual environment.

The New York Times’ VR promotion included a Google Cardboard viewer that was sent along with the printed newspaper to 1 million subscribers

Recent events have demonstrated that VR use is becoming more practical and accessible to the general public:

  • On October 13, in a partnership between CNN and NextVR, the Democratic presidential debate was broadcast in VR as a live stream and stored for later on-demand viewing. The CNN experience made it possible for every viewer to watch the event as though they were present, including the ability to see other people in attendance and observe elements of the debate that were not visible to the TV audience. NextVR and the NBA also employed the same technology to broadcast the October 27 season opener between the Golden State Warriors and New Orleans Pelicans, the first-ever live VR sporting event.
  • On November 5, The New York Times launched a VR news initiative that included the free distribution of Google Cardboard viewers—a folded up cardboard VR headset that holds a smartphone—to 1 million newspaper subscribers. The Times’ innovation required users to download the NYTvr app to their smartphone in order to watch a series of short news films in VR.

Origins of VR

Virtual reality is the product of the convergence of theater, camera, television, science fiction and digital media technologies. The basic ideas of virtual reality go back more than two hundred years and coincide with the desire of artists, performers and educators to recreate scenes and historical events. In the early days this meant painting panoramic views, constructing dioramas and staging theatrical productions where viewers had a 360˚ visual surround experience.

In the late 19th century, hundreds of cycloramas were built—many of them depicting major battles of the Civil War—in which viewers sat at the center of a circular theater as the historical event was recreated in sequence around them. In 1899, a Broadway dramatization of the novel Ben-Hur employed live horses galloping straight toward the audience on treadmills while a backdrop revolved in the opposite direction, creating the illusion of high speed. Dust clouds provided additional sensory elements.

Frederic Eugene Ives’ Kromskop viewer, invented at the beginning of the 20th century

Contemporary ideas about virtual reality are associated with the 3-D photography and motion pictures of the early twentieth century. Experimentation with color stereoscopic photography began in the late 1800s, and the first widely distributed 3-D images, taken by Frederic Eugene Ives, were of the 1906 San Francisco earthquake. As with present-day VR, Ives’ images required both a special camera and a viewing device, called the Kromskop, to see the 3-D effect.

1950s-era 3-D View-Master with reels

3-D photography expanded and won popular acceptance beginning in the late 1930s with the launch of Edwin Eugene Mayer’s View-Master. The virtual experience of the View-Master system was enhanced with the addition of sound in 1970. Mayer’s company was eventually purchased by toy maker Mattel, which later marketed the product under its Fisher-Price brand, and it remained successful until the era of digital photography in the early 2000s.

An illustration of the Teleview system that mounted a viewer containing a rotation mechanism in the armrest of theater seats

Experiments with stereoscopic motion pictures were conducted in the late 1800s. The first practical application of a 3-D movie took place in 1922 using the Teleview system of Laurens Hammond (inventor of the Hammond Organ) with a rotating shutter viewing device attached to the armrest of the theater seats.

Prefiguring the present-day inexpensive VR headset, the so-called “golden era” of 3-D film began in the 1950s and included cardboard 3-D glasses. Moviegoers got their first introduction to 3-D with stereophonic sound in 1953 with the film House of Wax starring Vincent Price. The popular enthusiasm for 3-D was eventually overtaken by the practical difficulties associated with the need to project two separate film reels in perfect synchronization.

1950s 3-D glasses and a movie audience wearing them

Subsequent waves of 3-D movies in the second half of the twentieth century—projected from a single film strip—were eventually displaced by the digital film and audio methods associated with the larger formats and Dolby Digital sound of Imax, Imax Dome, Omnimax and Imax 3D. Anyone who has experienced the latest in 3-D animated movies such as Avatar (2009) can attest to the mesmerizing impact of the immersive experience made possible by the latest in these movie theater techniques.

Computers and VR

Recent photo of Ivan Sutherland; he invented the first head-mounted display at MIT in 1966

It is widely acknowledged that the theoretical possibility of creating virtual experiences that “convince” all the senses of their “reality” began with the work of Ivan Sutherland at MIT in the 1960s. In 1966 Sutherland invented the first head-mounted display—nicknamed the “Sword of Damocles”—designed to immerse the viewer in a simulated 3-D environment. In a 1965 essay called “The Ultimate Display,” Sutherland wrote that computers have the ability to construct a “mathematical wonderland” that “should serve as many senses as possible.”

With increases in the performance and memory capacity of computers, along with decreases in the size of microprocessors and display technologies, Sutherland’s vision began to take hold in the 1980s and 1990s. Advances in vector-based CGI software—especially flight simulators created by government researchers for military aircraft and space exploration—brought the term “reality engine” into use. These systems, in turn, spawned notions of complete immersion in “cyberspace,” where sight, sound and touch are dominated by computer-generated sensations.

The term “virtual reality” was popularized during these years by Jaron Lanier and his company VPL Research. With VR products such as the DataGlove, the EyePhone and the AudioSphere, Lanier also partnered with game makers at Mattel to create the first virtual experiences with affordable consumer products, despite their still limited functionality.

By the end of the first decade of the new millennium, many of the core technologies of present-day VR systems were developed enough to make simulated experiences more convincing and easy to use. Computer animation technologies employed by Hollywood and video game companies pushed the creation of 3-D virtual worlds to new levels of “realness.”

An offshoot of VR, called augmented reality (AR), took advantage of high resolution camera technologies and allowed virtual objects to appear within the actual environment and enabled users to view and interact with them on computer desktop and mobile displays. AR solutions became popular with advertisers offering unique promotional opportunities that capitalized on the ubiquity of smartphones and tablets.

Expectations

Scene from the 2009 movie Avatar

Aside from news, entertainment and advertising, there are big possibilities opening up for VR in many business disciplines. Some experts expect that VR will impact almost every industry in a manner similar to that of PCs and mobile devices. Entrepreneurs and investors are creating VR companies with the aim of exploiting the promise of the new technology in education, health care, real estate, transportation, tourism, engineering, architecture and corporate communications (to name just a few).

Like consumer-level artificial intelligence (e.g., Apple’s Siri and Amazon’s Echo), present-day virtual reality technologies tend to fall frustratingly short of expectations. However, with the rapid evolution of core technologies—processors, software, video displays, sound, miniaturization and haptic feedback systems—it is conceivable that VR is ripe for a significant leap in the near future.

In many ways, VR is the ultimate product of media convergence as it is the intersection of multiple and seemingly unrelated paths of scientific development. As pointed out by Howard Rheingold in his authoritative 1991 book Virtual Reality, “The convergent nature of VR technology is one reason why it has the potential to develop very quickly from a scientific oddity into a new way of life … there is a significant chance that the deep cultural changes suggested here could happen faster than anyone has predicted.”

The mobile juggernaut

Mark Zuckerberg

On August 27, 2015, Mark Zuckerberg posted the following message on his personal Facebook account: “We just passed an important milestone. For the first time ever, one billion people used Facebook in a single day. On Monday, 1 in 7 people on Earth used Facebook to connect with their friends and family.”

The one-billion-users-in-a-single-day milestone, reached on August 24, 2015, is remarkable for a social network that Zuckerberg and a group of college dormitory friends started in 2004. Facebook became available for general public use less than ten years ago, and the milestone illustrates the speed and extent to which social media has penetrated the daily lives of people all over the world.

While Facebook is very popular in the US and Canada, 83.1% of its 1 billion daily active users (DAUs) come from other parts of the world. Despite being barred in China—where there are 600 million internet users—Facebook has hundreds of millions of active users in India, Brazil, Indonesia, Mexico, the UK, Turkey, the Philippines, France and Germany.

Facebook's "Mobile Only" active users.

A major driver behind the global popularity and rapid growth of Facebook is the mobile technology revolution. According to published data, Facebook reached an average of 844 million mobile active users during the month of June 2015, and industry experts expect this number to hit one billion in the very near future. Clearly, without smartphones, tablets and broadband wireless Internet access, Facebook could not have achieved the DAU milestone, since many of the one billion people are either “mobile first” or “mobile only” users.

From mobile devices to wearables

When I last wrote about mobile technologies two-and-a-half years ago, the rapid rise of smartphones and tablets and the end of the PC era of computing were dominant topics of discussion. Concerns were high that significant resources were being shifted toward mobile devices and advertising and away from older technologies and media platforms. The move from PCs and web browsers toward apps on smartphones and tablets was presenting even companies like Facebook and Google with a “mobility challenge.”

Today, while mobile device expansion has slowed and the dynamics within the mobile markets are becoming more complex, the overall trend of PC displacement continues. According to IDC, worldwide tablet market growth is falling, smartphone market growth is slowing and the PC market is shrinking. On the whole, however, smartphone sales represent more than 70% of total personal computing device shipments and, according to an IDC forecast, this will reach nearly 78% in 2019.

IDC's Worldwide Device Market 5 Year Forecast

According to IDC’s Tom Mainelli, “For more people in more places, the smartphone is the clear choice in terms of owning one connected device. Even as we expect slowing smartphone growth later in the forecast, it’s hard to overlook the dominant position smartphones play in the greater device ecosystem.”

While economic troubles in China and other market dynamics have led some analysts to conclude that the smartphone boom has peaked, it is clear that consumers all over the world prefer the mobility, performance and accessibility of their smaller devices.

Ericsson’s June 2015 Mobility Report projects 6.1 billion smartphone users by 2020.

According to the Ericsson Mobility Report, there will be 6.1 billion smartphone users by 2020. That is 70% of the world’s population.

Meanwhile, other technology experts are suggesting that wearables—smartwatches, fitness devices, smartclothing and the like—are expanding the mobile computing spectrum and making it more complex. Since many wearable electronic products integrate easily with smartphones, it is expected this new form will push mobile platforms into new areas of performance and power.

Despite the reserved consumer response to the Apple Watch and the failure of Google Glass, GfK predicts that 72 million wearables will be sold in 2015. Other industry analysts are also expecting wearables to become untethered from smartphones and usher in the dawn of “personalized” computing.

Five mobile trends to watch

With high expectations that mobile tech will continue to play a dominant role in the media and communications landscape, these are some major trends to keep an eye on:

Wireless Broadband: Long Term Evolution (LTE) connectivity reached 50% of the worldwide smartphone market by the end of 2014, and projections show this will likely be at 60% by the end of this year. A new generation of mobile data technology has appeared every ten years since 1G was introduced in 1981. The fourth generation (4G) LTE systems were first introduced in 2012. 5G development has been underway for several years now; it promises speeds of several tens of megabits per second per user, with an expected commercial introduction sometime in the early 2020s.

Apple's A8 mobile processor is 50 times faster than the original iPhone processor.

Mobile Application Processors: Mobile system-on-a-chip (SoC) development is one of the most intensely competitive sectors of computer chip technology today. Companies like Apple, Qualcomm and Samsung are all pushing the capabilities and speeds of their SoCs to get the maximum performance with the least energy consumption. Apple’s SoCs have set the industry benchmark for performance: the iPhone 6 contains an A8 processor that is 40% more powerful than the previous A7 chip and 50 times faster than the processor in the original iPhone. The A9 processor will likely be announced with the next-generation iPhone in September 2015 and is expected to bring a 29% performance boost over the A8.

Pressure Sensitive Screens: Called “Force Touch” by Apple, this new mobile display capability allows users to apply varying degrees of pressure to trigger specific functions on a device. Just like “touch” functionality—swiping, pinching, etc.—pressure-sensitive interaction adds a new dimension to the human-computer interface. Apple originally launched this feature with the Apple Watch, which has only a limited screen area on which to perform touch gestures.

Customized Experiences: With mobile engagement platforms, smartphone users can receive highly targeted promotions and offers based upon their location within a retail establishment. Also known as proximity marketing, the technology uses mobile beacons with Bluetooth communications to send marketing text messages and other notifications to a mobile device that has been configured to receive them.
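As a rough sketch of what happens under the hood of proximity marketing: a beacon broadcasts a calibrated signal strength (its measured power at one meter), and the receiving phone estimates its distance from the beacon using the received signal strength (RSSI) and a standard log-distance path-loss model. The function name and constants below are illustrative, not any particular beacon SDK’s API.

```python
def estimate_distance(rssi: float, measured_power: float = -59.0,
                      path_loss_exponent: float = 2.0) -> float:
    """Estimate beacon distance in meters from an RSSI reading (dBm).

    Uses the log-distance path-loss model:
        distance = 10 ** ((measured_power - rssi) / (10 * n))
    where measured_power is the beacon's calibrated RSSI at 1 meter
    and n is the environment-dependent path-loss exponent
    (roughly 2.0 in free space, higher indoors).
    """
    return 10 ** ((measured_power - rssi) / (10 * path_loss_exponent))

# A reading equal to the calibrated 1 m power implies roughly 1 m.
print(round(estimate_distance(-59.0), 2))   # 1.0
# A weaker signal implies the shopper is farther from the shelf beacon.
print(round(estimate_distance(-79.0), 2))   # 10.0
```

In practice an app would bucket these estimates into coarse zones (immediate, near, far) rather than trust an exact figure, since RSSI fluctuates with obstacles and interference; the marketing platform then decides which notification, if any, to push for each zone.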

Mobile Apps: The mobile revolution has been a disruptive force for the traditional desktop software industry. Microsoft is now offering its Office Suite of applications to both iOS and Android users free of charge. In August, Adobe announced that it would be releasing a mobile and full-featured version of its iconic Photoshop software in October as a free download and as part of its Creative Cloud subscription.

With mobile devices, operating systems, applications and connectivity making huge strides and expanding across the globe by the billions, it is obvious that every organization and business should be navigating in the wake of this technology juggernaut. That begins with an internal review of your mobile practices:

  • Do you have a mobile communications and/or operations strategy?
  • Is your website optimized for a mobile viewing experience?
  • Are you encouraging the use of smartphones and tablets and building a mobile culture within your organization?
  • Are you using text messaging for any aspect of your daily work?
  • Are you using social media to communicate with your members, staff, prospects or clients?

If the answer to any of these questions is no, then it is time to act.