Is your head in The Cloud or in the sand?

The Cloud is everywhere all the time; it knows who you are, where you are and it is casting its shadow upon you right now. Driven by shifts in technology and culture, The Cloud is part of our personal and professional lives whether we like it or not. If you have a Facebook account, your Timeline is in The Cloud; if you have a Flickr account, your photos are in The Cloud; if you have a Netflix account, the movies you watch are stored in The Cloud; if you have a DropBox account, your documents are in The Cloud.

The Cloud, or cloud computing, takes many forms. One can think of it as computing delivered as a utility rather than through a piece of electronic hardware, a device or a program that you own. Cloud computing is associated with sharing computer resources, such as data storage systems or applications, over the Internet.

Popular providers of cloud computing products and services: (clockwise from top left) Apple iCloud, Amazon Cloud Drive, Adobe Creative Cloud, Microsoft SkyDrive, Oracle Cloud Computing and IBM Cloud

In contrast to the personal computing model—where every system has unique copies of software and data and a large local storage volume—cloud computing distributes and replicates these assets on servers across the globe. Historically speaking, The Cloud is a return—in the age of the Internet, apps and social media—to the centralized mainframe and time-sharing terminal model of the 1950s and '60s. It maintains computer processes and data functions centrally and enables users to access them from anywhere and at any time.

The phrase “The Cloud” was originally used in the early 1990s as a metaphor for the Internet. Beginning in 2000, the technologies of cloud computing began to expand exponentially and have since become ubiquitous. Services such as Apple’s .Mac (2002), MobileMe (2008) and finally iCloud (2011) have made the public familiar with cloud computing models. Certainly the ability to access, edit and update your personal digital assets—documents, photos, music, video—from multiple devices is a key feature of The Cloud experience.

The development and proliferation of cloud file sharing (CFS) systems such as DropBox, Google Drive and Microsoft SkyDrive—offering multiple gigabytes of file storage for free—have also driven mass adoption. Some industry analysts report that there are more than 500 million CFS users today.

Besides benefits for consumers, cloud-based solutions are being offered by enterprise computing providers such as IBM and Oracle with the promise of significant financial savings from shared and distributed resources. In fact, The Cloud has become such an important subject today that every supplier of computer systems—as well as online retailers like Amazon—is hoping to cash in on the opportunity by offering cloud solutions to businesses and consumers.

For those of us in the printing and graphic arts industries, a prototypical example of cloud computing is Adobe’s Creative Cloud. Adopters of Adobe CC are becoming accustomed to paying a monthly subscription fee, rather than making a one-time purchase of a serialized copy, and to storing their creative graphics content on Adobe’s servers.

Digital Convergence

The concepts of digital convergence were developed and expanded by Ithiel de Sola Pool, Nicholas Negroponte and John Hagel III

In a more general sense, The Cloud is part of the process of digital convergence, i.e. the coming together of all media and communications technologies into a unified whole. The concept of technology convergence was pioneered at MIT by the social scientist Ithiel de Sola Pool in 1983. In his breakthrough book Technologies of Freedom, De Sola Pool postulated that digital electronics would cause the modes of communication—telephone, newspapers, radio, and text—to combine into one “grand system.”

Nicholas Negroponte, founder of the MIT Media Lab, substantially developed the theory of digital convergence in the 1980s and 1990s. Long before the emergence of the World Wide Web, Negroponte was foretelling that digital technologies were causing the “Broadcast and Motion Picture Industry,” the “Computer Industry” and the “Print and Publishing Industry” to overlap with each other and become one. As early as 1978, Negroponte was predicting that this process would reach maturity by the year 2000.

At the center of digital convergence—and the growth and expansion of The Cloud—is the acceleration of electronic technology innovation. John Hagel III of The Center for the Edge at Deloitte has identified the following technological and cultural components that are responsible for this accelerated development.

Infrastructure and Users

The cost/performance trends of core digital technologies are closely associated with Moore’s Law, i.e. the observation that the number of transistors on an affordable CPU doubles roughly every two years. By extension this law of exponential innovation can also be applied to other digital technologies such as storage devices and Internet bandwidth. In simple terms, this means that the quantity of information that can be processed, transmitted and stored per dollar spent keeps growing exponentially over time. The development of digital convergence and of cloud computing is entirely dependent upon these electronic technology shifts. The following graphs illustrate this:

The cost of computing power has decreased significantly, from $222 per million transistors in 1992 to $0.06 per million transistors in 2012. The decreasing cost-performance curve enables the computational power at the core of the digital infrastructure.
Similarly, the cost of data storage has decreased considerably, from $569 per gigabyte of storage in 1992 to $0.03 per gigabyte in 2012. The decreasing cost-performance of digital storage enables the creation of more and richer digital information.
The cost of Internet bandwidth has also steadily decreased, from $1,245 per 1000 megabits per second (Mbps) in 1999 to $23 per 1000 Mbps in 2012. The declining cost-performance of bandwidth enables faster collection and transfer of data, facilitating richer connections and interactions.
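
As a rough back-of-the-envelope check on these figures (a minimal sketch that uses only the numbers quoted above, not independently verified data), the cost-halving period implied by each trend can be computed as follows:

    import math

    # Cost figures quoted in the graphs above: (start year, starting cost,
    # end year, ending cost). The calculation asks one question: how often
    # did the cost halve, and how does that compare with the roughly
    # two-year doubling period of Moore's Law?
    trends = {
        "computing ($ per million transistors)": (1992, 222.00, 2012, 0.06),
        "storage ($ per gigabyte)": (1992, 569.00, 2012, 0.03),
        "bandwidth ($ per 1000 Mbps)": (1999, 1245.00, 2012, 23.00),
    }

    for name, (y0, cost0, y1, cost1) in trends.items():
        halvings = math.log2(cost0 / cost1)  # number of times the cost halved
        print(f"{name}: cost halved roughly every {(y1 - y0) / halvings:.1f} years")

Run as written, this prints halving periods of roughly 1.7 years for computing, 1.4 years for storage and 2.3 years for bandwidth, all broadly in line with the two-year rhythm of Moore’s Law.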

Culture: Installed Base

Tracking closely with the acceleration of computer technology innovation—and also driving it—is the rate at which people adopt these technologies. Without the social and practical implementation of innovation, digital convergence and The Cloud could not have moved from the laboratory and theoretical possibility into modern reality. Both the number of Internet users and the number of wireless subscriptions are core to the transformations in human activity that are fueling the shift from the era of the personal computer to that of mobile, social media and cloud computing.

Additionally, the use of the Internet continues to increase. From 1990 to 2012, the percent of the US population accessing the Internet at least once a month grew from near 0 percent to 71 percent. Widespread use of the Internet enables more widespread sharing of information and resources.
More and more people are connected via mobile devices. From 1985 to 2012, the number of active wireless subscriptions relative to the US population grew from 0 to 100 percent (reflecting the fact that the same household can have multiple wireless subscriptions). Wireless connectivity is further facilitated by smartphones. Smart devices made up 55 percent of total wireless subscriptions in 2012, compared to only 1 percent in 2001.

Innovation Comparison

The full implications of these changes are hard to comprehend. Some experts point out that previous generations of disruptive technology—electricity, telephone, internal combustion engine, etc.—have, after an initial period of accelerated innovation, been followed by periods of stability and calm. In our time, the cost/performance improvement of digital technologies—and the trajectory of Moore’s Law—shows no sign of slowing down in the foreseeable future.

While it is increasingly difficult to keep up with the demands of this change, we are compelled to do so. The fact that we have been in The Cloud for some time now means that our conceptions and plans must reflect this reality. We cannot hide from The Cloud in our personal and professional affairs any more than we could have hidden from the personal computer or the smartphone. The key is to embrace The Cloud and find within it new opportunities to become more effective and successful in our daily lives and in our business offerings to customers.

Nicolas Jenson: c. 1420 – 1480

Artist Robert Thom’s depiction of Nicolas Jenson at his engraving bench

The term incunabula (Latin for “cradle”) is used to denote the earliest period of printing, from its birth in 1450 up to January 1, 1501. The books, pamphlets and broadsides printed with the movable metal type method associated with Gutenberg during these first fifty years are also commonly called incunabula.

It is estimated that 35,000 editions were printed throughout Europe—over two-thirds from Germany and Italy—during the second half of the fifteenth century. Remarkably, nearly 80% of these volumes still exist today, most of which are held in large public collections such as the Bavarian State Library in Munich, the Vatican Library in Vatican City and the British Library in London.

The Lenox copy of the Gutenberg Bible on display at the New York Public Library. It was the first complete set brought to the US in 1847.

The most famous incunabulum, of course, is the 42-line Bible printed by Johannes Gutenberg in Mainz, Germany in the 1450s, of which 48 copies remain. Since they were printed in two volumes, many of these copies are incomplete. James Lenox brought the first complete set of the Gutenberg Bible to the US in 1847 after buying it for $2,500; it is now on display at the New York Public Library. The last sale of a complete Gutenberg Bible took place in 1978 and fetched $2.2 million; it is estimated that one would sell for $25-$35 million today.

The British Library maintains an international electronic bibliographic database of extant incunabula. Called the Incunabula Short Title Catalogue (ISTC), the database was begun in 1980 and currently contains 27,460 records. The ISTC is an extraordinary merger of modern and Renaissance information technology. That anyone can peruse these records—many of which have links to high-resolution images of 500-year-old incunabula—is a testament both to the lasting achievement of print and to the significance of its electronic descendant, the World Wide Web.

* * * * *

Next to Gutenberg himself, Nicolas Jenson is recognized as the most important figure of the incunabula period. Despite limited records of his life—his last will and testament, a few book introductions written by others and some document fragments—the legacy of Nicolas Jenson survives through his printed works.

According to Martin Lowry, the printing scholar and author of “Nicholas Jenson and the Rise of Venetian Publishing in Renaissance Europe,” the first official biography of Jenson was written in the late 1700s and amounted to “a two-volume potpourri of erudition and fantasy.” While arguing that Nicolas Jenson has become something of a printing cult figure, Lowry does conclude that Jenson’s “place at the very beginning of the typographic age gives him a special importance.”

It is known that Nicolas Jenson was born in Sommevoire, France, a town about 150 miles southeast of Paris. However, after reviewing Lowry’s research, it is difficult to simply repeat here the many other “facts” frequently given about Jenson’s early life: his date of birth, his employment history and the origin of his metalworking skills, the means by which he became familiar with the printing methods of Gutenberg, and his route from France to Italy. The details repeated in many accounts of Jenson’s life derive from murky historical anecdotes that are contradicted by other important facts.

An engraving depicting an early Venetian printing shop

Jenson is known to have begun printing in Venice in the late 1460s or early 1470s. Prior to his arrival in Venice, it appears that he spent some time in Vicenza, a mainland town about 30 miles to the west, where he developed his printing skills. Jenson, the first non-German printer in recorded history, arrived in Venice just as several important printing firms were being established in the Italian island city. The most notable of these was the enterprise of John and Wendelin of Speyer, who arrived in Venice from Germany in 1468 and were granted a five-year monopoly on printing by the city authorities.

Nicolas Jenson’s printer’s mark

The Venetian patrician class of scholar-statesmen considered the arrival of printing a major cultural development. It meant that the works of classical humanist teachings could be reproduced at rates that were inconceivable with the handwritten process of the scribes. The ruling elites encouraged the development of print and by the end of the century there were 150 firms operating in the highly competitive Venetian printing market.

Alongside print’s cultural impact, there was a considerable business opportunity to be exploited. It was to this side of the incunabula that Jenson devoted most of his efforts. During the ten years that he was a printer in Venice, Jenson brought more investment into the printing industry than anyone else. His businesses were very successful and he made a considerable fortune before his death in 1480.

However, the most important—and universally recognized—contribution of Nicolas Jenson to the development of printing was his design of an early roman typeface. Prior to Jenson, the style of print typography followed the blackletter example set by Gutenberg, i.e. heavy gothic forms that emulated the dominant pen and ink script of the monks of fifteenth century Germany.

The first page of Eusebius’ "Preparation for the Gospel" printed by Nicolas Jenson in 1470. It is thought to be the first appearance of a roman typeface.
The first page of Eusebius’ Preparation for the Gospel printed by Nicolas Jenson in 1470. It is thought to be the first appearance of a roman typeface.

Such were Nicolas Jenson’s metalworking skills that he cut a groundbreaking roman type in 1470. Roman type is distinct from blackletter in that it combines the square capital letters used in ancient Rome with the Carolingian minuscule (lowercase) script used during the Holy Roman Empire. The first book to appear in Jenson’s new design was an edition of Eusebius’ Preparation for the Gospel, originally written in 313 A.D.

The word roman, without a capital R, has come to denote the Italian typefaces of the Renaissance as well as later fonts derived from them, such as Times Roman. Although Jenson’s design was quite different in appearance from Gutenberg’s blackletter, it too was modeled on a scribal manuscript style, in this case the hand popular in fifteenth-century Italy.

A comparison of blackletter script (upper left) with Gutenberg’s blackletter type (lower left) and roman/Carolingian script (upper right) with Jenson’s roman type (lower right)

It is a remarkable phenomenon of printing history that the essential forms of Jenson’s roman typeface, designed more than 500 years ago, are those we continue to use most often and still recognize as the best and most readable typography. The letterforms in question are, of course, those of the Latin alphabet; but it should also be noted that Jenson designed and cut a Greek alphabet in a similar style.

Throughout the subsequent history of printing, many have noted the beauty and balance of Jenson’s roman type design. In particular, William Morris and the Arts and Crafts movement of the late nineteenth century focused upon Jenson’s creative genius. According to Lowry, Morris’ romantic affinity for medievalism led to an unjustified elevation of Nicolas Jenson’s contribution alongside those of Johannes Gutenberg and Aldus Manutius.

* * * * *

A search of the British Library’s ISTC for the term “Jenson” returns 113 hits. Many of the items in the database contain links to images of pages printed by Nicolas Jenson himself on a Gutenberg-style printing press in Venice in the 1470s. A review of these entries shows that—despite language challenges—Jenson’s books appear very similar to those found today in our libraries and bookstores. While some are adorned with ornate case-bound covers and others include hand-illuminated art alongside the printed text, the essential elements of the book are very familiar to any modern reader.

Historians have strictly defined the incunabula as the first fifty years of the printing revolution beginning with Gutenberg. The incunabula produced by the pioneers of print—including Nicolas Jenson—were devoted to recreating the handwriting of the scribes so that the reading audience could understand and relate to the new media form.

The questions that arise naturally are: should we consider the early years of the digital revolution to be our modern “incunabula,” in which the previous media generation is being replicated in electronic form? Or is the digital age leading to new media that represent a departure from the forms developed and enriched during the Renaissance?

Genesis of the GUI

Thirty-five years ago Xerox made an important TV commercial. An office employee arrives at work and sits down at his desk while a voice-over says, “You come into your office, grab a cup of coffee and a Xerox machine presents your morning mail on a screen. … Push a button and the words and images you see on the screen, appear on paper. … Push another button and the information is sent electronically to similar units around the corner or around the world.”

Frame from the Xerox TV commercial in 1979

The speaker goes on, “This is an experimental office system; it’s in use now at the Xerox research center in Palo Alto, California.” Although it was not named, the computer system being shown was called the Xerox Alto and the TV commercial was the first time anyone outside of a few scientists had seen a personal computer. You can watch the TV ad here: http://www.youtube.com/watch?v=M0zgj2p7Ww4

The Alto is today considered among the most important breakthroughs in PC history. This is not only because it was the first computer to integrate the mouse, email, desktop printing and Ethernet networking into a single system; above all, it is because the Alto was the first computer to incorporate the desktop metaphor of “point and click” applications, documents and folders known as the graphical user interface (GUI).

Xerox Alto Office System

The real significance of the GUI achievement was that the Xerox engineers at the Palo Alto Research Center (PARC) made it possible for the computer to be brought out of the science lab and into the office and the home. With the Alto—the hardware was conceptualized by Butler Lampson and designed by Chuck Thacker at PARC in 1972—computing no longer required arcane command line entries or text-based programming skills.

The general public could use the Alto because it was based on easy-to-understand manipulation of graphical icons, windows and other objects on the display. This advance was no accident. Led by Alan Kay, inventor of object-oriented programming and expert in human-computer interaction (HCI), the Alto team set out from the beginning to make a computer that was “as easy to use as a pencil and piece of paper.”

Building on the foundational computer work of Ivan Sutherland (Sketchpad) and Douglas Engelbart (the oN-Line System), the educational theories of Marvin Minsky and Seymour Papert (Logo) and the media philosophy of Marshall McLuhan, Kay’s team designed an HCI that could be easily learned by children. In fact, much of the PARC team’s research was based on observing students as young as six years old interacting with the Alto as both users and programmers.

An example of the Alto’s Smalltalk graphical user interface

The invention of the GUI required two important technical innovations at PARC:

  1. Bitmap computer display: The Alto monitor was oriented vertically rather than horizontally and, with a resolution of 606 by 808 pixels on a display roughly 8 x 10 inches, it emulated a sheet of letter-size white paper, showing dark pixels on a light gray background. It used a bit-mapped raster scan as its display method, as opposed to the “character generators” of earlier monitors, which could render only alphanumeric characters in one size and style, often as green letters on a black background. With each dot on the display corresponding to one bit of memory, the Alto monitor technology was very advanced for its time: it was capable of displaying multiple fonts and could even render black-and-white motion video (see the sketch after this list).
  2. Software that supported graphics: Alan Kay’s team developed the Smalltalk programming language as the first object-oriented software environment. They built the first GUI with windows that could be moved around and resized and icons that represented different types of objects in the system. Programmers and designers on Kay’s team—especially Dan Ingalls and David C. Smith—developed bitmap graphics software that let computer users click on icons, dialogue boxes and drop-down menus on the desktop. These elements were the means of interacting with documents, applications, printers and folders, and they gave the user immediate feedback on their actions.
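
To make the one-bit-per-dot idea in point 1 concrete, here is a minimal, purely illustrative sketch in Python (this is not Alto code, and the class and method names are invented for this example) of a bit-packed monochrome framebuffer at the Alto’s 606 by 808 resolution:

    # Illustrative only: a 1-bit-per-pixel framebuffer at the Alto's
    # 606 x 808 resolution. Each dot on the screen maps to a single bit.
    WIDTH, HEIGHT = 606, 808

    class MonochromeBitmap:
        def __init__(self, width, height):
            self.width = width
            self.height = height
            # One byte stores eight pixels; round each row up to whole bytes.
            self.bytes_per_row = (width + 7) // 8
            self.buffer = bytearray(self.bytes_per_row * height)

        def set_pixel(self, x, y, dark):
            """Turn one dot dark (ink) or light (the paper-like background)."""
            index = y * self.bytes_per_row + x // 8
            mask = 0x80 >> (x % 8)
            if dark:
                self.buffer[index] |= mask
            else:
                self.buffer[index] &= ~mask

    screen = MonochromeBitmap(WIDTH, HEIGHT)
    screen.set_pixel(10, 20, dark=True)
    print(len(screen.buffer), "bytes")  # about 61,000 bytes for the whole display

Roughly 61,000 bytes covers the entire display, which helps explain how bitmap graphics, multiple fonts and even motion video were within reach of 1970s hardware once each dot was reduced to a single bit of memory.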
Alan Kay, Dan Ingalls and David C. Smith worked on the software programming and graphical user interface elements of the Xerox Alto

The Alto remained an experimental system until the end of the 1970s, with 2,000 units made and used at PARC and by a wider group of research scientists across the country. It is an irony of computer and business history that the commercial product inspired by the Alto—the Xerox 8010 Information System, or Star workstation—was launched in 1981 and did not achieve market success, due in part to its $75,000 starting price (about $195,000 today). As a personal computer, the Xerox Star was rapidly eclipsed by the IBM-PC, the very successful MS-DOS-based personal computer launched the same year without a GUI at a price of $1,595.

It is well known that Steve Jobs and a group of Apple Computer employees made a fortuitous visit to Xerox PARC in December 1979 and received an inside look at the Alto and its GUI. Upon seeing the Alto’s user interface, Jobs has been quoted as saying, “It was like a veil being lifted from my eyes. I could see the future of what computing was destined to be.”

Much of what Jobs and his team learned at PARC—in exchange for the purchase of 100,000 Apple shares by Xerox—was incorporated into the unsuccessful Apple Lisa computer (1982) and later the popular Macintosh (1984). The Apple engineers also implemented features that further advanced the GUI in ways that the PARC researchers had not thought of or were unable to accomplish. Apple Computer was so successful at implementing a GUI-based personal computer that many of the Xerox engineers left PARC and joined Steve Jobs, including Alan Kay and several of his team members.

In response to both the popularity and ease-of-use superiority of the GUI, Microsoft launched Windows in 1985 for the IBM-PC and PC clone markets. The early Windows interface was plagued with performance issues due in part to the fact that it was running as a second layer of programming on top of MS-DOS. With Windows 95, Microsoft developed perhaps the most successful GUI-based personal computer software up to that point.

First desktops: Xerox Star (1981), Apple Macintosh (1984) and Microsoft Windows (1985)

Already by 1988, the GUI had become such an important aspect of personal computing that Apple filed a lawsuit against Microsoft for copyright infringement. In the end, the federal courts ruled against Apple in 1994, saying that “patent-like protection for the idea of the graphical user interface, or the idea of the desktop metaphor” was not available. Much of Apple’s case revolved around defending as its property the “look and feel” of the Mac desktop. While rejecting most of Apple’s arguments, the court did grant Apple ownership of the trashcan icon, whereupon Microsoft began using a recycle bin instead.

When looking back today, it is remarkable how the basic desktop and user experience design that was developed at Xerox PARC in the 1970s has remained the same over the past four decades. Color and shading have been added to make the icons more photographic and the folders and windows more dimensional. However, the essential user elements, visual indicators, scroll bars, etc. have not changed much.

With the advent of mobile (smartphone and tablet) computing, the GUI began to undergo more significant development. With the original iOS on the iPhone and iPod touch, Apple relied heavily upon so-called skeuomorphic GUI design, i.e. icons and images that emulate physical objects in the real world, such as a textured bookcase for displaying eBooks in the iBooks app.

Comparison of iOS 1 to iOS 7 user interface

Competitors—such as the makers of Android-based smartphones and tablets—have largely copied Apple’s mobile GUI approach. Beginning with iOS 7, however, Apple has moved aggressively away from skeuomorphic elements in favor of flattened, less pictorial icons and frames.

Multi-touch and gesture-based technology—along with voice user interface (VUI)—represent practical evolutionary steps in the progress of human-computer interaction. Swipe, pinch and rotate have become just as common for mobile users today as double-click, drag-and-drop and copy-and-paste were for the desktop generation. The same can be said of the haptic experience—tactile feedback such as vibration or rumbling on a controller—of VR and gaming systems that millions of young people are familiar with all over the world.

It is safe to say that it was the pioneering work of the research group at Xerox PARC that made computing something that everyone can do all the time. They were people with big ideas and big goals. In a 1977 article for Scientific American Alan Kay wrote, “How can communication with computers be enriched to meet the diverse needs of individuals? If the computer is to be truly ‘personal,’ adult and child users must be able to get it to perform useful activities without resorting to the services of an expert. Simple tasks must be simple, and complex ones must be possible.”