Archive for the Digital Media Category

How the index card launched the information age

Posted in Digital Media, People in Media History, Print Media on September 30, 2016 by multimediaman

One year ago this month, the final order of library catalog cards was printed by the Online Computer Library Center (OCLC) in Dublin, Ohio. On October 2, 2015, The Columbus Dispatch wrote, “Shortly before 3 p.m. Thursday, an era ended. About a dozen people gathered in a basement workroom to watch as a machine printed the final sheets of library catalog cards to be made …”

The fate of the printed library card, an indispensable indexing tool for more than a century, was inevitable in the age of electronic information and the Internet. It is safe to say that nearly all print with purely informational content—as opposed to items fulfilling a promotional or packaging function—will eventually be replaced by online alternatives.

Founded in 1967, the OCLC is a global cooperative with 16,000 member libraries. Although it no longer prints library cards, the OCLC continues to fulfill its mission by providing shared library resources such as catalog metadata and WorldCat.org, an international online database of library collections.

Speaking about the end of the card catalog era, Skip Prichard the CEO of the OCLC said, “The vast majority of libraries discontinued their use of the printed library catalog card many years ago. … But it is worth noting that these cards served libraries and their patrons well for generations, and they provided an important step in the continuing evolution of libraries and information science.”

The 3 x 5 card

Printed library catalog card

The library catalog card is one form of the popular 3 x 5 index card that served as a filing system for a multitude of purposes for over two hundred years. While many of us have been around long enough to have used or maybe even still use them—for addresses and phone numbers, recipes, flash cards or research paper outlines—we may not be aware of the relationship that index cards have to modern information science.

The original purpose of the index card and its subsequent development represented the early stages of information theory and practice. Additionally, as becomes clear below, without the index card as the first functional system for organizing complex categories, subcategories and cross-references, studies in the natural sciences would have never gotten off the ground.

The index card became the indispensable tool for both organizing and comprehending the expansion of human knowledge at every level. Along with several important intermediary steps, the ideas that began with index cards eventually led to relational databases, document management systems, hyperlinks and the World Wide Web.

Carl Linnaeus and natural science

Carl Linnaeus

The Swedish naturalist and physician Carl Linnaeus (1707–1778) is recognized as the creator of the index card. Linnaeus used the cards to develop his system of organizing and naming the species of all living things. Linnaean taxonomy is based on a hierarchy (kingdom, phylum, class, order, family, genus, species) and binomial species naming (Homo erectus, Tyrannosaurus rex, etc.). He published the first edition of his universal conventions in a small pamphlet called “The System of Nature” in 1735.
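For readers who think in code, here is a minimal sketch of how the Linnaean ranks nest, using the standard classification of modern humans as the example; the Python class and helper function below are purely illustrative and not part of any historical system.

from dataclasses import dataclass

RANKS = ["kingdom", "phylum", "class", "order", "family", "genus", "species"]

@dataclass
class Taxon:
    rank: str   # one of RANKS, from broadest to most specific
    name: str

# Classification of modern humans, rank by rank (standard Linnaean placement).
homo_sapiens = [
    Taxon("kingdom", "Animalia"),
    Taxon("phylum", "Chordata"),
    Taxon("class", "Mammalia"),
    Taxon("order", "Primates"),
    Taxon("family", "Hominidae"),
    Taxon("genus", "Homo"),
    Taxon("species", "sapiens"),
]

def binomial(lineage):
    """Build the two-part species name: capitalized genus plus species epithet."""
    by_rank = {t.rank: t.name for t in lineage}
    return f"{by_rank['genus']} {by_rank['species']}"

print(binomial(homo_sapiens))  # prints "Homo sapiens"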

Beginning in his early twenties, Linnaeus was interested in producing a series of books on all known species of plants and animals. At that time, so many new species were being discovered that Linnaeus knew a large amount of new information would already be available by the time a book was printed. He wanted to quickly and accurately revise his publications in subsequent editions to take the new findings into account.

As time went on, Linnaeus developed different functional methods of sorting through and organizing enormous amounts of information connected with his growing collection of plant, animal and shell specimens (eventually it rose to 40,000 samples). His biggest problem was creating a process that was both structured enough to facilitate retrieval of previously collected information and flexible enough to allow rearrangement and addition of new information.

Pages from an early edition of Linnaeus’ “The System of Nature”

Working with paper notations in the eighteenth century, he needed a system that would allow the flow of names, references, descriptions and drawings into and out of a fixed sequence for the purposes of comparison and rearrangement. This “packing” and “unpacking” of information was a continuous process that enabled Linnaeus’ research to keep up with the changes in what was known about living species.

Linear vs non-linear methods

At first, Linnaeus used notebooks. This linear method—despite his best efforts to leave pages open for updates and new information—proved to be unworkable and wasteful. As estimates of how much room to allow often proved incorrect, Linnaeus was forced to squeeze new details into ever-shrinking available space or was left with unused blank pages.

After thirty years of working with notebooks, Linnaeus began to experiment with a filing system of information recorded on separate sheets of paper. This was later converted to small sheets of thick paper that could be quickly handled, shuffled through and laid out on a table in two dimensions like a deck of playing cards. This is how the index card was born.

A stack of Linnaeus’ handwritten index cards

Linnaeus’ index card system was able to represent the variation of living organisms by showing multiple affinities in a map-like fashion. In order to accommodate the ever-expanding knowledge of new species—today taxonomists estimate there are some 8.7 million species on Earth—Linnaeus created a breakthrough method for managing complex information.

Melvil Dewey and DDC

While index cards continued to be used in Europe, an important step forward in information management was made in the US by Melvil Dewey (1851-1931), the creator of the well-known Dewey Decimal System (or Dewey Decimal Classification, DDC). Used by libraries for the cataloging of books since 1876, the DDC was based on index cards and introduced the concepts of “relative location” and “relative index” to bibliography. It also enabled libraries to add books to their collections based on subject categories and an indefinitely expandable series of decimal expressions known as “call numbers.”
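A rough sketch of the “relative location” idea, for the technically inclined: because every DDC call number has three digits followed by an optional decimal expansion, plain string sorting reproduces shelf order, and a new, more specific subject can always be slotted in without renumbering anything. The broad classes below are standard DDC; the finer subdivisions and their captions are only illustrative.

# Hypothetical slice of a shelf list keyed by DDC call number.
shelf = {
    "500":    "Natural sciences (general)",
    "510":    "Mathematics",
    "570":    "Biology",
    "576.8":  "Evolution",
    "595.7":  "Insects",
}

# A more specific subject slots in between existing numbers without
# disturbing anything already shelved: this is "relative location".
shelf["595.78"] = "Butterflies and moths"

# Every call number has exactly three digits before the optional decimal
# point, so simple string sorting reproduces decimal (shelf) order.
for number in sorted(shelf):
    print(number, "-", shelf[number])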

The young Melvil Dewey

Prior to the DDC, libraries attempted to assign books to a permanent physical location based on their order of acquisition. This linear approach proved unworkable, especially as library collections grew rapidly in the latter part of the nineteenth century. With industrialization, libraries were overflowing with paper: letters, reports, memos, pamphlets, operation manuals and schedules as well as books were flooding in, and the methods of cataloging and storing these collections needed to keep up.

In the 1870s, while working at Amherst College Library, Melvil Dewey became involved with libraries across the country. He was a founding member of the American Library Association and became editor of Library Journal, a trade publication that still exists today. In 1876, Dewey published the first edition of “A Classification and Subject Index for Cataloguing and Arranging the Books and Pamphlets of a Library,” which elaborated on the use of the library card catalog index.

Precursor to the information age

Title page of the first edition of Dewey’s bibliographic classification system

Like many others of his generation, Melvil Dewey was committed to scientific management, standardization and the democratic ideal. By the end of the nineteenth century the Dewey classification system and his 3 x 5 card catalog were being used in nearly every school and public library in the US. The basic concept was that any member of society could walk into a library anywhere in the country, go to the card catalog and be able to locate the information they were looking for.

In 1876 Dewey created a company called Library Bureau and began providing card catalog supplies, cabinets and equipment to libraries across the country. Following the enormous success of this business, Dewey expanded the Library Bureau’s information management services to government agencies and large corporations at the turn of the twentieth century.

In 1896, Dewey formed a partnership with Herman Hollerith and the Tabulating Machine Company (TMC) to provide the punch cards used for the electro-mechanical counting system of the US government census operations. Dewey’s relationship with Hollerith is significant: TMC was folded into the Computing-Tabulating-Recording Company, which was renamed International Business Machines (IBM) in 1924 and went on to become an important force in the information age and the birthplace of the relational database.

Paul Otlet and multidimensional indexing

Paul Otlet working in his office in the 1930s

While Dewey’s classification system became the standard in US libraries, others were working on bibliographic cataloging ideas, especially in Europe. In 1895, the Belgians Paul Otlet (1868-1944) and Henri La Fontaine founded the International Institute of Bibliography (IIB) and began working on something they called the Universal Bibliographic Repertory (UBR), an enormous catalog based on index cards. Funded by the Belgian government, the UBR involved the collection of books, articles, photographs and other documents in order to create a one-of-a-kind international index.

As described by Otlet, the ambition of the UBR was to build “an inventory of all that has been written at all times, in all languages, and on all subjects.” Although they used the DDC as a starting point, Otlet and La Fontaine found limitations in Dewey’s classification system while working on the UBR. Some of the issues were related to Dewey’s American perspective; the DDC lacked some categories needed for information related to other regions of the world.

A section of the Universal Bibliographic Repertory

More fundamentally, however, Otlet and La Fontaine made an important conceptual breakthrough over Dewey’s approach. In particular, they conceived of a complex multidimensional indexing system that would allow for more deeply defined subject categories and cross-referencing of related topics.

Their critique was based on Otlet’s pioneering idea that the content of bibliographic collections needed to be separated from their form and that a “universal” classification system needed to be created that included new media and information sources (magazines, photographs, scientific papers, audio recordings, etc.) and moved away from the exclusive focus on the location of books on library shelves.

Analog information links and search

After Otlet and La Fontaine received permission from Dewey to modify the DDC, they set about creating the Universal Decimal Classification (UDC). The UDC extended Dewey’s cataloging expressions to include symbols (equal sign, plus sign, colon, quotation marks and parentheses) for the purpose of establishing “links” between multiple topics. This was a very significant breakthrough that reflected the enormous growth of information taking place at the end of the nineteenth century.
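To make the idea concrete, here is a small, hypothetical sketch of how such connecting symbols can be read as links between class numbers. The connector readings follow the description above (a colon relates two subjects, a plus sign joins them); the class numbers and captions in the example are placeholders rather than authoritative UDC assignments.

import re

# Connector symbols and rough readings (assumed for illustration).
CONNECTORS = {
    ":": "in relation to",
    "+": "and also",
}

def describe(expression, captions):
    """Split a compound classification expression into its linked parts."""
    parts = re.split(r"([:+])", expression)
    readable = []
    for part in parts:
        readable.append(CONNECTORS.get(part) or captions.get(part, part))
    return " ".join(readable)

# Placeholder captions for the class numbers used in the example.
captions = {"59": "zoology", "63": "agriculture"}

print(describe("59:63", captions))  # prints "zoology in relation to agriculture"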

By 1900, the UBR had more than 3 million entries on index cards and was supported by more than 300 IIB members from dozens of countries. The project was so successful that Otlet began working on a plan to copy the UBR and distribute it to major cities around the world. However, with no effective method for reproducing the index cards, other than typing them out by hand, this project ran up against the technical limitations of the time.

Henri La Fontaine and staff members at the Mundaneum in Brussels. At its peak in 1924, the catalog contained 18 million index cards.

In 1910, Otlet and La Fontaine shifted their attention to the establishment of the Mundaneum in Brussels. Again with government support, the aim of this institution was to bring together all of the world’s knowledge in a single UDC index. They created the gigantic repository as a service where anyone in the world could submit an inquiry on any topic for a fee. This analog search service would provide information back to the requester in the form of index cards copied from the Mundaneum’s bibliographic catalog.

By 1924, the Mundaneum contained 18 million index cards housed in 15,000 catalog drawers. Plagued by financial difficulties and a reduction of support from the Belgian government during the Depression and the lead-up to World War II, the project became increasingly untenable, and Paul Otlet realized that further management of the card catalog had become impractical. He began to consider more advanced technologies—such as photomechanical recording systems and even ideas for electronic information sharing—to fulfill his vision.

Although the Mundaneum was sacked by the Nazis in 1940 and most of the index cards destroyed, the ideas of Paul Otlet anticipated the technologies of the information age that were put into practice after the war. The pioneering work of others—such as Emanuel Goldberg, Vannevar Bush, Douglas Engelbart and Ted Nelson—would lead to the creation of the Internet, World Wide Web and search engines in the second half of the twentieth century.

Steve Case and “The Third Wave” of the Internet

Posted in Digital Media, Internet, Mobile Media, People in Media History on May 25, 2016 by multimediaman

Steve Case and The Third Wave

In 1980, Alvin Toffler published The Third Wave, a sequel to his 1970 best-seller Future Shock and an elaboration of his ideas about the information age and its stressful impact on society. In contrast to his first book, Toffler sought in The Third Wave to convince readers not to dread the future but instead to embrace the potential at the heart of the information revolution.

Alvin Toffler

Actually, Alvin—and his wife and co-author, Heidi Toffler—were among the few writers to appreciate early on the transformative power of electronic communications. Long before the word “Internet” was used by anyone but a few engineers working for the US Department of Defense—and after reporting for Fortune magazine on foundational Third Wave companies like IBM, AT&T and Xerox—Toffler began to hypothesize about “information overload” and the disruptive force of networked data and communications upon manufacturing, business, government and the family.

"The Third Wave" (1980) by Alvin Toffler

“The Third Wave” (1980) by Alvin Toffler

For example, one can read in The Third Wave, “Humanity faces a quantum leap forward. It faces the deepest social upheaval and creative restructuring of all time. Without clearly recognizing it, we are engaged in building a remarkable new civilization from the ground up. This is the meaning of the Third Wave.” Even if they sound a little excessive today, in 1980 these words would certainly have seemed a wild exaggeration coming from two fanatical tech futurists.

But Alvin and Heidi were really onto something. More than 35 years later, who can deny the truth behind Toffler’s basic ideas about the global information revolution and its consequences? The Internet, networked PCs, the World Wide Web, wireless broadband, smartphones, social media and, ultimately, the Internet of Things have changed and are changing every aspect of society.

To his credit, Steve Case—who cofounded the early Internet company America Online—has written a new book called The Third Wave: An Entrepreneur’s Vision of the Future that borrows its title from Toffler’s pioneering work. As Case explains in the preface, he was motivated by Toffler’s theories as a college student because they “completely transformed the way I thought about the world—and what I imagined for the future.”

Steve Case’s The Third Wave

First Wave Internet companies

In Steve Case’s book, “The Third Wave” refers to three phases of Internet development as opposed to Toffler’s stages of civilization. For Case, the first wave was the construction of the “on-ramps” to the information superhighway by AOL and others like Sprint, Apple and Microsoft. The second wave was about building on top of first wave infrastructure by companies like Google, Amazon, eBay, Facebook, Twitter and others that have developed “software as a service” (SaaS).

Case’s Third Wave of the Internet is the promise of connecting everything to everything else, i.e. the rebuilding of entire sectors of the economy with “smart” technologies. While the ideas surrounding what he calls the Internet of Everything are not new—Case does not claim to have originated the concept—the new book does discuss important barriers to the realization of the Third Wave of Internet connectivity and how to overcome them.

Second Wave Internet companies

Case argues that Third Wave companies will require a new set of principles in order to be successful, that following the playbook of Second Wave companies will not do. He writes, “The playbook they need, instead, is one that worked during the First Wave, when the Internet was still young and skepticism was still high; when the barriers to entry were enormous, and when partnerships were a necessity to reaching your customers; when the regulatory system was coming to grips with a new reality and struggling to figure out the appropriate path forward.”

In much of the book, Case reviews his ideas about the transformation of the health care, education and food industries by applying the culture of innovation and ambition for change that is commonly found in Silicon Valley. However, he cautions that current Second Wave models of venture capital investment, views about the role of government and aversion to collaboration among entrepreneurs threaten to stall or kill Third Wave change before it can get started.

The story of AOL

In some ways, the most interesting aspects of Case’s book deal with the origin, growth and decline of America Online (AOL). Case gives a candid explication of the trials and tribulations of his innovative dial-up Internet company from 1983 to 2003. Case explains that prior to the achievement of significant consumer (27.6 million users by 2002) and Wall Street ($222 billion market cap by 1999) success, AOL and its precursors went through a series of near death experiences.

Steve Case in 1987 before the founding of America Online

For example, he tells the story of a deal that he signed with Apple in 1987 that was cancelled by the Cupertino-based company during implementation. Case had sold Apple customer service executives on a partnership with his company, then called Quantum Computer Services, to build an online support system called AppleLink Personal Edition that would be offered to customers as a software add-on. Disagreements between Apple and Quantum over how to sell the product to computer users ultimately killed the project.

Facing the termination of the investment funding that was tied to the $5 million agreement, Case and the other founders decided to sue Apple for breach of contract. Acknowledging their liability to Quantum, Apple agreed to pay $3 million to “tear up the contract.” Starting over with this new source of cash, Case and his partners relaunched their company as America Online and began marketing the service directly to consumers.

This tale and others reinforce one of the key themes of Case’s book: Third Wave entrepreneurs will need to persevere through “the long slog” to success.

The January 24, 2000 cover of Time magazine with Steve Case and Jerry Levin announcing the AOL-Time Warner merger.

The end of Steve Case’s relationship with AOL is also a lesson in the leadership skills required for Third Wave success. In a chapter entitled “Matter of Trust” (the longest in the book), Case relives the story of the merger of AOL and Time Warner. It is a cautionary tale of both the excesses of Wall Street valuations during the dot-com boom and the crisis of traditional media companies in the face of Third Wave innovation.

Case says that while the combination of AOL with Time Warner in 2000—the largest corporate merger in history up to that point—made sense at the time, two months later the dot-com bubble burst and the company lost eighty percent of its value within a year. This was followed by a series of leadership battles that proved there were deep-seated feelings of “personal mistrust and lingering resentments” among top Time Warner executives over the business potential of the Internet and the upstart start-up called AOL.

Steve Case writes that, although the dot com crash was certainly a factor, “It came down to emotions and egos and, ultimately, the culture itself. That something with the potential to be the first trillion-dollar company could end up losing $200 billion in value should tell you just how important the people factor is. It doesn’t really matter what the plan is if you can’t get your people aligned around achieving the same objectives.”

What now?

For those of us who were in the traditional media business—i.e. print, television and radio—the word “disruption” hardly describes the impact of the Internet over the past three decades. When companies like AOL were getting started with their modems and dial-up connections, most of us were looking pretty good. We had little time or interest in the tacky little AOL “You’ve Got Mail” audio message. Even as we reluctantly embraced IBM, Apple and Microsoft as partners in our front office and production operations, we made smug remarks about the absurdity of eBay and Amazon as legitimate business ideas.

Internet of Things

IoT is at the center of Case’s Third Wave of innovation.

Steve Case’s book represents a timely warning to the enterprises and business leaders of today who similarly dismiss the notion of IoT. He points to Uber and Airbnb to show that the hospitality and transportation industries are right now being turned on their sides by this new wave of information-enabled “sharing” businesses.

Actually, Case is an unlikely spokesman for the next wave of innovation, having personally made out quite well (his net worth stands at $1.37 billion) despite the shipwreck that became AOL Time Warner. If he had been born twenty-five years later, Case could possibly have been another Mark Zuckerberg of Facebook and ridden the Second Wave of the Internet (Zuckerberg got his start in coding by hacking AOL Instant Messenger) over the ruins of the dot-com bust.

But that was then and this is now. Case has decided to commit himself to investing in present-day start-ups through his Revolution Growth venture capital fund. His book is something of a roadmap for those who want to learn from his experience, bravely launch into the Third Wave of the Internet and build start-ups of a new kind. As Alvin Toffler wrote in Future Shock, “If we do not learn from history, we shall be compelled to relive it. True. But if we do not change the future, we shall be compelled to endure it. And that could be worse.”

Books, e-books and the e-paper chase

Posted in Digital Media, Mobile, Mobile Media, Paper, Print Media on March 22, 2016 by multimediaman

Last November Amazon opened its first retail book store in Seattle near the campus of the University of Washington. More than two decades after it pioneered online book sales—and initiated the e-commerce disruption of the retail industry—the $550 billion company seemed to be taking a step backward with its “brick and mortar” Amazon Books.

Amazon opened its first retail book store in Seattle on November 3, 2015

However, Amazon launched its store concept with a nod to traditional consumer shopping habits, i.e. the ability to “kick the tires.” Amazon knows very well that many customers like to browse the shelves in bookstores and fiddle with electronic gadgets like the Kindle, Fire TV and Echo before they make buying decisions.

So far, the Seattle book store has been successful and Amazon has plans to open more locations. Some unique features of the Amazon.com buying experience have been extended to the book store. Customer star ratings and reviews are posted near book displays; shoppers are encouraged to use the Amazon app and scan bar codes to check prices.

Amazon’s book store initiative was also possibly motivated by the persistence and strength of the print book market. Despite the rapid rise of e-books, print books have shown a resurgence of late. Following a sales decline of 15 million print books in 2013 to just above 500 million units, the past two years have seen an increase to 560 million in 2014 and 570 million in 2015. Meanwhile, the American Booksellers Association reported a substantial increase in independent bookstores over the past five years (1,712 member stores in 2,227 locations in 2015, up from 1,410 in 1,660 locations in 2010).

Print books and e-books

After rising rapidly since 2008, e-book sales have stabilized at between 25% and 30% of total book sales

The ratio of e-book to print book sales appears to have leveled off at around 1 to 3. This plateau is consistent with recent public perception surveys and learning studies showing that the reading experience and information retention of print books are superior to those of e-books.

The reasons for the recent uptick in print sales and the slowing of e-book expansion are complex. Changes in the overall economy, adjustments to bookstore inventory from digital print technologies and the acclimation of consumers to the differences between the two media platforms have created a dynamic and rapidly shifting landscape.

As many analysts have insisted, it is difficult to make any hard and fast predictions about future trends of either segment of the book market. However, two things are clear: (1) the printed book will undergo little further evolution and (2) the e-book is headed for rapid and dramatic innovation.

Amazon launched the e-book revolution in 2007 with the first Kindle device. Although digital books had been available for decades in various computer file formats and on media such as CD-ROMs, e-books connected to Amazon’s Kindle took off in popularity beginning in 2008. The most important technical innovation of the Kindle—and a major factor in its success—was the implementation of the e-paper display.

Distinct from the backlit LCD displays on most mobile devices and personal computers, e-paper displays are designed to mimic the appearance of ink on paper. Another important difference is that the energy requirements of e-paper devices are significantly lower than those of LCD-based systems. Even in later models that offer built-in front lighting for low-light reading conditions, e-paper devices will run for weeks on a single charge, while most LCD systems require a recharge within 24 hours.

Nick Sheridon and Gyricon

The theory behind the Kindle’s ink-on-paper emulation originated in the 1970s at the Xerox Palo Alto Research Center in California with Nick Sheridon. Sheridon developed his concepts while working to overcome limitations with the displays of the Xerox Alto, an early desktop personal computer. The Alto’s monitors could only be viewed in darkened office environments because of insufficient brightness and contrast.

Nick Sheridon and his team at Xerox PARC invented Gyricon in 1974, a thin layer of transparent plastic composed of bichromal beads that rotate with changes in voltage to create an image on the surface

Sheridon sought to develop a display that could match the contrast and readability of black ink on white paper. Along with his team of engineers at Xerox, Sheridon developed Gyricon, a substrate with thousands of microscopic plastic beads—each of which was half black and half white—suspended in a thin, transparent silicone sheet. Changes in voltage polarity caused either the white or black side of the beads to rotate up, displaying images and text without backlighting or special ambient light conditions.
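As a thought experiment only, the mechanism can be modeled in a few lines of code: imagine a grid of bichromal beads where the sign of the applied voltage at each position decides which hemisphere faces up. The grid, the values and the rendering characters below are invented for illustration and are not based on any Xerox specification.

# +1 turns a bead black-side up, -1 turns it white-side up (assumed convention).
voltages = [
    [+1, +1, +1, -1, -1],
    [+1, -1, -1, -1, +1],
    [+1, +1, +1, -1, -1],
    [+1, -1, -1, -1, +1],
    [+1, -1, -1, -1, +1],
]

def render(grid):
    """Map each bead's orientation to a printed 'pixel'. The image is stable:
    it stays put until the polarity at that spot is changed again."""
    return "\n".join(
        "".join("#" if v > 0 else "." for v in row) for row in grid
    )

print(render(voltages))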

After Xerox cancelled the Alto project in the early 1980s, Sheridon took his Gyricon technology in a new direction. By the late 1980s, he was working on methods to manufacture a new digital display system as part of the “paperless office.” As Sheridon explained later, “There was a need for a paper-like electronic display—e-paper! It needed to have as many paper properties as possible, because ink on paper is the ‘perfect display.’”

In 2000, Gyricon LLC was founded as a subsidiary of Xerox to develop commercially viable e-paper products. The startup opened manufacturing facilities in Ann Arbor, Michigan and developed several products including e-signage that utilized Wi-Fi networking to remotely update messaging. Unfortunately, Xerox shut down the entity in 2005 due to financial problems.

Pioneer of e-paper, Nick Sheridon

Among the challenges Gyricon faced was making a truly paper-like material that had sufficient contrast and resolution while keeping manufacturing costs low. Sheridon maintained that e-paper displays would only be economically viable if units were sold for less than $100, so that “nearly everyone could have one.”

As Sheridon explained in a 2009 interview: “The holy grail of e-paper will be embodied as a cylindrical tube, about 1 centimeter in diameter and 15 to 20 centimeters long, that a person can comfortably carry in his or her pocket. The tube will contain a tightly rolled sheet of e-paper that can be spooled out of a slit in the tube as a flat sheet, for reading, and stored again at the touch of a button. Information will be downloaded—there will be simple user interface—from an overhead satellite, a cell phone network, or an internal memory chip.”

E Ink

By the 1990s competitors began entering the e-paper market. E Ink, founded in 1998 by a group of scientists and engineers from MIT’s Media Lab including Russ Wilcox, developed a concept similar to Sheridon’s. Instead of using rotating beads with white and black hemispheres, E Ink introduced a method of suspending microencapsulated cells filled with both black and white particles in a thin transparent film. Electrical charges to the film caused the black or white particles to rise to the top of the microcapsules and create the appearance of a printed page.

E Ink cofounder Russ Wilcox

E Ink’s e-paper technology was first implemented commercially by Sony in 2004 in the LIBRIé, the first e-reader with an e-paper display. In 2006, Motorola integrated an E Ink display in its F3 cellular phone. A year later, Amazon included E Ink’s 6-inch display in the first Amazon Kindle, which became by far the most popular device of its kind.

Kindle Voyage (2014) and Kindle Paperwhite (2015) with the latest e-paper displays (Carta) from E Ink

Subsequent generations of Kindle devices have integrated E Ink displays with progressively improved contrast, resolution and energy consumption. By 2011, the third generation Kindle included touch screen capability (the original Kindle had an integrated hardware keyboard for input).

The current edition of the Kindle Paperwhite (3rd Generation) combines front lighting and a touch interface with E Ink Carta technology at a resolution of 300 pixels per inch. Many other e-readers such as the Barnes & Noble Nook, the Kobo, the Onyx Boox and the PocketBook also use E Ink products for their displays.

Historical parallel

The quest to replicate, as closely as possible in electronic form, the appearance of ink on paper is logical enough. In the absence of a practical and culturally established form of its own, a new medium naturally strives to emulate the one that came before it. This process is reminiscent of the evolution of the first printed books. For many decades, print carried over the characteristics of the books that were hand-copied by scribes.

It is well known that Gutenberg’s “mechanized handwriting” invention (1440-50) sought to imitate the best works of the Medieval monks. The Gutenberg Bible, for instance, has two columns of print text while everything else about the volume—paper, size, ornamental drop caps, illustrations, gold leaf accents, binding, etc.—required techniques that preceded the invention of printing. Thus, the initial impact of Gutenberg’s system was an increase in the productivity of book duplication and the displacement of scribes; it would take some time for the implications of the new process to work their way through the function, form and content of books.

Ornamented title page of the Gutenberg Bible, printed in the 1450s

More than a half century later—following the spread of Gutenberg’s invention to the rest of Europe—the book began to evolve dramatically and take on attributes specific to printing and to other changes taking place in society. For example, by the first decade of the 1500s, books were no longer stationary objects to be read in the exclusive libraries and reading rooms of the privileged few. As costs dropped, editions became more plentiful and literacy expanded, books were being read everywhere and by everybody.

By the mid-1500s, both the form and content of books had been transformed. To facilitate their newfound portability, the size of books fell from the folio (14.5” x 20”) to the octavo dimension (7” x 10.5”). By the beginning of the next century, popular literature—the first European novel is widely recognized as Cervantes’ Don Quixote of 1605—supplanted verse and classic texts. New forms of print media developed such as chapbooks, broadsheets and newspapers.

Next generation e-paper

It seems clear that the dominance of LCD displays on computers, mobile and handheld devices is a factor in the public’s persistent affinity for print books. Much of the technology investment and advancement of the past decade—coming from companies such as Apple—has been committed to computer miniaturization, touch interfaces and mobility, not the transition from print to electronic media. While first-decade e-readers have made important strides, most e-books are still being read on devices that are visually distant from print books, impeding a more substantial migration to the new media.

Additionally, most current e-paper devices have many characteristics unlike paper, such as relatively small size, inflexibility, limited bit depth and the inability to write on them. All current-model e-paper Kindles, for example, are limited to 6-inch displays with 16 grey levels beneath a heavy and fragile layer of glass, and offer no support for handwriting.

The Sony Digital Paper System (DPT-S1) is based on E Ink’s Mobius e-paper display technology: 13.3” format, flexible and supports stylus handwriting

A new generation of e-paper systems is now being developed that overcomes many of these limitations. In 2014, Sony released its Digital Paper System (DPT-S1), a letter-size e-reader and e-notebook ($1,100 at launch and currently selling for $799). The DPT-S1 is based on E Ink’s Mobius display, a 13.3” thin film transistor (TFT) platform that is flexible and can accept handwriting from a stylus.

Since it does not have any glass, the new Sony device weighs 12.6 oz, about half the weight of a similar LCD-based tablet. With the addition of stylus-based handwriting capability, the device functions like an electronic notepad, and notes can be written in the margins of e-books and other electronic documents.

These advancements and others show that e-paper is positioned for a renewed surge into applications that have yet to be conceived. Once a flat surface can be curved or even folded, and then made to transform itself into any image—including a color image—at any time, at very low cost and with very low energy consumption, then many things become possible: e-wallpaper, e-wrapping paper, e-milk cartons and e-price tags. The possibilities are enormous.