Archive for the Digital Media Category

Streaming and the era of on-demand media

Posted in Audio, Digital Media, Video on January 16, 2016 by multimediaman

On January 6, Netflix went live with its video-streaming service in 130 new countries across the globe. The expansion—covering most of the world except for China—was announced by Netflix cofounder and CEO Reed Hastings during a keynote speech at the International Consumer Electronics Show in Las Vegas. Hastings said, “Today, right now, you are witnessing the birth of a global TV network.”

Reed Hastings, CEO of Netflix, announcing the global expansion of the streaming video service on January 6

Prior to this latest announcement, Netflix had 40 million subscribers in the US and 20 million internationally, across a total of 60 countries and 17 languages. According to Hastings, the company’s goal is to reach 200 countries by the end of 2016 and to sign up 90 million subscribers in the US and 450 million worldwide.

The rapid expansion of Netflix is part of a transformation of TV and movie viewing that has been underway for a decade or more. While “linear TV” (programming presented at specific times on fixed screens) is still popular, it is being rapidly overtaken by personalized, on-demand and mobile subscription services like Netflix.

According to Netflix, the growth of Internet TV is driven by (1) advancements in Internet reliability and performance, (2) the time and place flexibility of on-demand viewing and (3) accelerating innovation in streaming video technology. A possible fourth driver of Netflix’s success is its subscription-based model. Unlike earlier on-demand solutions, which often required consumers to purchase their own copies of movies and music one title at a time (or rent them for a limited period), streaming services like Netflix give subscribers unrestricted access to an entire content library for a monthly fee.

Streaming media

Popular video and music streaming services

Streaming media refers to video or audio content that is transmitted in a compressed digital form over the Internet and played immediately, rather than being downloaded to a hard drive or other storage media for later playback. Users do not need to wait for the entire media file to arrive before playing it; the file is delivered in a continuous stream and can be watched or listened to as soon as enough data has been buffered for playback to begin.
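
As a rough illustration of progressive playback, the sketch below fetches a media file in chunks and begins “playing” once a small buffer has accumulated, rather than waiting for the full download. It is a minimal sketch only: the URL is a hypothetical placeholder, the play() function merely stands in for a decoder, and real services use adaptive protocols such as HLS or MPEG-DASH rather than a single file request.

    import urllib.request
    from collections import deque

    # Minimal sketch of progressive ("streamed") playback, for illustration only.
    # MEDIA_URL is a hypothetical placeholder, not a real endpoint.
    MEDIA_URL = "https://example.com/stream/sample-video"
    CHUNK_SIZE = 64 * 1024   # bytes fetched per network read
    START_THRESHOLD = 4      # chunks to buffer before playback begins

    def play(chunk: bytes) -> None:
        # Stand-in for handing data to a decoder/renderer.
        print(f"playing {len(chunk)} bytes")

    buffer = deque()
    playing = False
    with urllib.request.urlopen(MEDIA_URL) as response:
        while True:
            chunk = response.read(CHUNK_SIZE)
            if chunk:
                buffer.append(chunk)
            else:
                playing = True  # end of stream: drain whatever remains buffered
            # Start playback once a small buffer exists; no full download needed.
            if len(buffer) >= START_THRESHOLD:
                playing = True
            if playing and buffer:
                play(buffer.popleft())
            if not chunk and not buffer:
                break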

Media streaming originated with “elevator music” known as Muzak in the early 1950s. It was a service that transmitted music over electrical lines in retail stores and building lobbies. The first efforts to stream music and video on computers and digital networks ran up against the limitations of CPU performance, network bandwidth and data stream interruptions associated with “buffering.”

Attempts in the 1990s by Microsoft (Windows Media Player), Apple (QuickTime) and RealNetworks (RealPlayer) to develop streaming technologies for desktop computers made important breakthroughs. However, each of these solutions relied on proprietary file formats and media players, which left users with a fragmented and often frustrating experience.

By the early 2000s, broadband adoption and improvements in CPU performance and data throughput, along with efforts to create a single, unified format, led to the emergence of Adobe Flash as a de facto standard for streaming media. By 2005, when the social media and video-sharing service YouTube was established, Flash had become the dominant streaming technology on the Internet. More recently, especially since 2011, HTML5 has advanced as an international standard on computers and mobile devices, and it will eventually supplant Flash.

Music industry streaming revenue is growing fast and download revenue is falling

Streaming media has been transforming the music industry alongside TV and movies. While digital downloads still represent the largest share of music sales in the US, they are falling. Meanwhile, streaming music services like Pandora, Spotify and Apple Music have already overtaken physical CD sales and represent about one third of the industry’s income. Some analysts expect revenue from music streaming to surpass that of digital downloads in the near future.

Consumers and content

Streaming media has fundamentally shifted the relationship between consumers and entertainment content. During the era of broadcast radio (1920s) and television (1950s), consumers needed a “set” to receive the analog programs of radio stations and TV channels. Audience members had to be in front of their radio or TV, with “rabbit ears” antennas adjusted optimally, on a schedule set by the broadcasters. The cost of programming was paid for by commercial advertising and corporate sponsors.

In the cable and satellite era (1970s), consumers began paying for content with subscription fees and programming was “commercial free.” Along with home recording devices—at first analog magnetic tape systems like VCRs (1970s) and digital recording devices like DVRs (late 1990s)—came an important shift in viewing behavior. Consumers could do what is now called “time shifted viewing,” i.e. they could choose when they wanted to experience the recorded content. 

Vinyl records, magnetic tapes and optical recording formats preceded downloading and streaming

At first, music publishers mass produced and marketed analog audio recordings—records (1950s) and then audio tapes (1970s)—and consumers purchased and owned a library of recordings. These records and tapes could be enjoyed at any time and place as long as there was an audio system with a stereo turntable or cassette player available.

The same was true of mass-produced CD audio (1980s) and DVD video (late 1990s) optical discs. While these digital formats improved portability and their quality did not deteriorate with repeated play, the way analog tape and vinyl did, they required a new generation of optical devices. Portable CD players (1980s) and DVD players (late 1990s) addressed this issue, but consumers still had to maintain a library of purchased titles.

With digital downloading of music and video over the Internet, content could finally be played anywhere and at any time on portable digital players like iPods (2001) and notebook PCs. However, consumers were still required to purchase the titles they wanted to enjoy. Instead of filling bookshelves and cabinets with CD and DVD jewel cases, they now had to maintain their downloaded files on MP3 players, computer hard drives and digital media servers.

When Internet-based media streaming arrived alongside mobile and wireless computing, time- and place-independent viewing finally became a reality. Add to this the subscription model, with (potentially) the entire back catalog of recorded music, TV shows and movies available for a relatively small monthly fee, and consumers began flocking in large numbers to services like Netflix and Spotify.

Streaming media trends to watch 2016

Media industry analysts have been following the impact of streaming content and technologies; some of their recent insights and trend analyses are summarized below:

Streaming media device adoption in US households with broadband Internet

  • Streaming devices:
    • Linear TV content still dominates US households. However, there are signs that streaming media devices such as Roku, Apple TV, Chromecast and Amazon Fire TV are rapidly shifting viewing habits. Adoption of these devices went from about 17% of US households with broadband Internet in 2014 to about 28% in 2015 [Parks Associates]
On-demand music streaming includes music videos

  • Streaming vs. downloading:
    • Online music streams nearly doubled, from 164.5 billion to 317 billion songs
    • Digital song sales dropped 12.5% from 1.1 billion to 964.8 million downloads
    • Digital album sales dropped 2.9% from 106.5 million to 103.3 million downloads [Nielsen 2015 Music Report]
Cable TV subscriptions have been declining with the rise of “cord cutting” and streaming media

  • Cable TV:
    • The cord-cutting trend—households that are ending their cable TV service—is accelerating. The share of households with cable subscriptions fell from 83% in 2014 to under 80% in 2015 [Pacific Crest].
    • Scheduled “linear” TV fell and recorded “linear” TV was flat (or even increased slightly) from 2014 to 2015, while streamed on-demand video increased [Ericsson ConsumerLab].

While streaming audio and video are growing rapidly, traditional radio and TV still account for by far the largest share of consumer listening and viewing. Some of the cultural and behavioral changes involved in streaming media run up against audience demographics: some older consumers are less likely to shift their habits, while some younger consumers have had few or no “linear” experiences.

As the Ericsson ConsumerLab study shows, teenagers spend less than 20% of their TV viewing time watching a TV screen; the other 80% is spent in front of desktop and laptop computers, tablets and smartphones. Despite these differences, streaming content use is soaring and the era of “linear” media is waning. As with eBooks and print books, the electronic alternative is expanding rapidly while the analog form persists and, in some ways, is stronger than ever. Nonetheless, the era of time- and place-independent, on-demand media is fast approaching.

Where VR is going and why you should follow it

Posted in Digital Media, Mobile Media, Social Media, Video on November 15, 2015 by multimediaman
Promotional image for Oculus Rift VR headset

On November 2, video game maker Activision Blizzard announced a $5.9 billion purchase of King Digital Entertainment, maker of the mobile game Candy Crush Saga. Activision Blizzard owns popular titles like Call of Duty, World of Warcraft and Guitar Hero—with tens of millions sold—for play on game consoles and PCs. By comparison, King has more than 500 million worldwide users playing Candy Crush on TVs, computers and (mostly) mobile devices.

While it is not the largest-ever acquisition of a game company—Activision bought Blizzard in 2008 for $19 billion—the purchase shows how much the traditional gaming industry believes that future success will be tied to mobile and social media. Other recent acquisitions indicate how the latest in gaming hardware and software have become strategically important for the largest tech companies:

Major acquisitions of gaming companies by Microsoft, Amazon and Facebook took place in 2014

  • September 2014: Microsoft acquired Mojang for $2.5 billion
    Mojang’s Minecraft game has 10 million users worldwide and an active developer community. The Lego-like Minecraft is popular on both Microsoft’s Xbox game console and Windows desktop and notebook PCs. In making the purchase, Microsoft CEO Satya Nadella said, “Gaming is a top activity spanning devices, from PCs and consoles to tablets and mobile, with billions of hours spent each year.”
  • August 2014: Amazon acquired Twitch for $970 million
    The massive online retailer has offered online video since 2006, and the purchase of Twitch, the live game-streaming service, adds 45 million users to Amazon’s millions of Prime Video subscribers and Fire TV (stick and set-top box) owners. Amazon’s CEO Jeff Bezos said of the acquisition, “Broadcasting and watching gameplay is a global phenomenon and Twitch has built a platform that brings together tens of millions of people who watch billions of minutes of games each month.”
  • March 2014: Facebook acquired Oculus for $2 billion
    Facebook accounts for approximately 20% of all the time that people spend online each day. The Facebook acquisition of Oculus—maker of virtual reality headsets—anticipates that social media will soon include immersive experiences, as opposed to scrolling through rectangular displays on PCs and mobile devices. According to Facebook CEO Mark Zuckerberg, “Mobile is the platform of today, and now we’re also getting ready for the platforms of tomorrow. Oculus has the chance to create the most social platform ever, and change the way we work, play and communicate.”

The integration of gaming companies into the world’s largest software, e-commerce and social media corporations is further proof that media and technology convergence is a powerful force drawing many different industries together. As is clear from the three CEO quotes above, a race is on to see which company can offer a mix of products and services that captures the greatest share of the hours per day the public spends consuming information, news and entertainment on their devices.

What is VR?

Among the most important current trends is the rapid growth and widespread adoption of virtual reality (VR). Formerly of interest to hobbyists and gaming enthusiasts, VR technologies are now moving into mainstream daily use.

A short definition of VR is a computer-simulated artificial world. More broadly, VR is an immersive multisensory, multimedia experience that duplicates the real world and enables users to interact with the virtual environment and with each other. In the most comprehensive VR environments, the sight, sound, touch and smell of the real world are replicated.

Current and most commonly used VR technologies include a stereoscopic headset—which tracks the movement of a viewer’s head in 3 dimensions—and surround sound headphones that add a spatial audio experience. Other technologies such as wired gloves and omnidirectional treadmills can provide tactile and force feedback that enhance the recreation of the virtual environment.
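
As a rough illustration of how head tracking feeds a stereoscopic view, the sketch below converts a tracked yaw/pitch orientation into a forward view direction and offsets two eye positions by an assumed interpupillary distance. It is a minimal sketch only: real headsets report full six-degree-of-freedom poses through their own runtimes, and the angles and distances here are invented for the example.

    import math

    # Minimal sketch of how tracked head orientation drives a stereo view.
    # For illustration only; values below are invented for the example.
    IPD = 0.064  # assumed interpupillary distance, in meters

    def view_direction(yaw_deg: float, pitch_deg: float):
        """Convert head yaw/pitch (degrees) into a unit forward vector."""
        yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
        return (math.cos(pitch) * math.sin(yaw),
                math.sin(pitch),
                math.cos(pitch) * math.cos(yaw))

    def eye_positions(head_pos, yaw_deg: float):
        """Offset each eye by half the IPD along the head's right-hand axis."""
        yaw = math.radians(yaw_deg)
        right = (math.cos(yaw), 0.0, -math.sin(yaw))
        x, y, z = head_pos
        half = IPD / 2
        left_eye = (x - right[0] * half, y, z - right[2] * half)
        right_eye = (x + right[0] * half, y, z + right[2] * half)
        return left_eye, right_eye

    # Example: head turned 30 degrees to the right and tilted 10 degrees up.
    print(view_direction(30, 10))
    print(eye_positions((0.0, 1.7, 0.0), 30))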

The New York Times’ VR promotion included a Google Cardboard viewer that was sent along with the printed newspaper to 1 million subscribers

Recent events have demonstrated that VR use is becoming more practical and accessible to the general public:

  • On October 13, in a partnership between CNN and NextVR, the Democratic presidential debate was broadcast in VR as a live stream and stored for later on-demand viewing. The CNN experience made it possible for every viewer to watch the event as though they were present, including the ability to see other people in attendance and observe elements of the debate that were not visible to the TV audience. NextVR and the NBA employed the same technology to broadcast the October 27 season opener between the Golden State Warriors and New Orleans Pelicans, the first-ever live VR sporting event.
  • On November 5, The New York Times launched a VR news initiative that included the free distribution of Google Cardboard viewers—a folded up cardboard VR headset that holds a smartphone—to 1 million newspaper subscribers. The Times’ innovation required users to download the NYTvr app to their smartphone in order to watch a series of short news films in VR.

Origins of VR

Virtual reality is the product of the convergence of theater, camera, television, science fiction and digital media technologies. The basic ideas of virtual reality go back more than two hundred years and coincide with the desire of artists, performers and educators to recreate scenes and historical events. In the early days this meant painting panoramic views, constructing dioramas and staging theatrical productions where viewers had a 360˚ visual surround experience.

In the late 19th century, hundreds of cycloramas were built, many of them depicting major battles of the Civil War, in which viewers sat in the center of a circular theater as the historical event was recreated around them in sequence. In 1899, a Broadway dramatization of the novel Ben Hur employed live horses galloping straight toward the audience on treadmills while a backdrop revolved in the opposite direction, creating the illusion of high speed. Dust clouds provided additional sensory elements.

Frederic Eugene Ives’ Kromskop viewer, invented at the beginning of the 20th century

Contemporary ideas about virtual reality are associated with 3-D photography and motion pictures of the early twentieth century. Experimentation with color stereoscopic photography began in the late 1800s, and the first widely distributed 3-D images, taken by Frederic Eugene Ives, were of the 1906 San Francisco earthquake. As with present-day VR, Ives’ images required both a special camera and a viewing device, called the Kromskop, in order to see the 3-D effect.

1950s-era 3-D View-Master with reels

3-D photography expanded and won popular acceptance beginning in the late 1930s with the launch of Edwin Eugene Mayer’s View-Master. The virtual experience of the View-Master system was enhanced with the addition of sound in 1970. Mayer’s company was eventually purchased by toy maker Mattel and later by Fisher-Price, and the product remained successful until the era of digital photography in the early 2000s.

An illustration of the Teleview system that mounted a viewer containing a rotation mechanism in the armrest of theater seats

Experiments with stereoscopic motion pictures were conducted in the late 1800s. The first practical application of a 3-D movie took place in 1922 using the Teleview system of Laurens Hammond (inventor of the Hammond Organ) with a rotating shutter viewing device attached to the armrest of the theater seats.

The so-called “golden era” of 3-D film began in the 1950s; its cardboard 3-D glasses prefigured today’s inexpensive VR viewers. Moviegoers got their first introduction to 3-D with stereophonic sound in 1953 with the film House of Wax, starring Vincent Price. The popular enthusiasm for 3-D was eventually overtaken by the practical difficulties of projecting two separate film reels in perfect synchronization.

1950s 3-D glasses and a movie audience wearing them

Subsequent waves of 3-D movies in the second half of the twentieth century—projected from a single film strip—were eventually displaced by the digital film and audio methods associated with the larger formats and Dolby Digital sound of IMAX, IMAX Dome, Omnimax and IMAX 3D. Anyone who has experienced recent 3-D features such as Avatar (2009) can attest to the mesmerizing impact of the immersive experience made possible by these theater techniques.

Computers and VR

Recent photo of Ivan Sutherland; he invented the first head-mounted display at MIT in 1966

It is widely acknowledged that the theoretical possibility of creating virtual experiences that “convince” all the senses of their “reality” began with the work of Ivan Sutherland at MIT in the 1960s. In 1966, Sutherland invented the first head-mounted display, nicknamed the “Sword of Damocles,” which was designed to immerse the viewer in a simulated 3-D environment. In a 1965 essay called “The Ultimate Display,” Sutherland wrote about how computers have the ability to construct a “mathematical wonderland” that “should serve as many senses as possible.”

With increases in the performance and memory capacity of computers, along with decreases in the size of microprocessors and display technologies, Sutherland’s vision began to take hold in the 1980s and 1990s. Advances in vector-based CGI software, especially flight simulators created by government researchers for military aircraft and space exploration, brought the term “reality engine” into use. These systems, in turn, spawned notions of complete immersion in “cyberspace,” where sight, sound and touch are dominated by computer-generated sensations.

The term “virtual reality” was popularized during these years by Jaron Lanier and his company VPL Research. With VR products such as the DataGlove, the EyePhone and the AudioSphere, Lanier worked with game makers at Mattel to create the first affordable consumer virtual-reality products, despite their still limited functionality.

By the end of the first decade of the new millennium, many of the core technologies of present-day VR systems were developed enough to make simulated experiences more convincing and easy to use. Computer animation technologies employed by Hollywood and video game companies pushed the creation of 3-D virtual worlds to new levels of “realness.”

An offshoot of VR, called augmented reality (AR), took advantage of high-resolution camera technologies, allowing virtual objects to appear within the real environment and enabling users to view and interact with them on desktop and mobile displays. AR solutions became popular with advertisers, offering unique promotional opportunities that capitalized on the ubiquity of smartphones and tablets.

Expectations

A scene from the 2009 film “Avatar”

Aside from news, entertainment and advertising, there are big possibilities opening up for VR in many business disciplines. Some experts expect that VR will impact almost every industry in a manner similar to that of PCs and mobile devices. Entrepreneurs and investors are creating VR companies with the aim of exploiting the promise of the new technology in education, health care, real estate, transportation, tourism, engineering, architecture and corporate communications (to name just a few).

Like consumer-level artificial intelligence, e.g. Apple’s Siri and Amazon’s Echo, present-day virtual reality technologies tend to fall frustratingly short of expectations. However, with the rapid evolution of core technologies—processors, software, video displays, sound, miniaturization and haptic feedback systems—it is conceivable that VR is ripe for a significant leap in the near future.

In many ways, VR is the ultimate product of media convergence as it is the intersection of multiple and seemingly unrelated paths of scientific development. As pointed out by Howard Rheingold in his authoritative 1991 book Virtual Reality, “The convergent nature of VR technology is one reason why it has the potential to develop very quickly from a scientific oddity into a new way of life … there is a significant chance that the deep cultural changes suggested here could happen faster than anyone has predicted.”

Hermann Zapf (1918–2015): Digital typography

Posted in Digital Media, People in Media History, Phototypesetting, Typography on September 30, 2015 by multimediaman
Hermann Zapf: November 8, 1918 – June 4, 2015

On Friday, June 12, Apple released its San Francisco system font for OS X, iOS and watchOS. Largely overlooked amid the media coverage of other Apple product announcements, the introduction of San Francisco was a noteworthy technical event.

San Francisco is a neo-grotesque, sans-serif, pan-European typeface with characters in Latin as well as Cyrillic and Greek scripts. It is significant because it is the first font to be designed specifically for all of Apple’s display technologies. Important variations have been introduced into San Francisco to optimize its readability on Apple desktop, notebook, TV, mobile and watch devices.

It is also the first font designed by Apple in two decades. San Francisco extends Apple’s association with typographic innovation that began in the mid-1980s with desktop publishing. From a broader historical perspective, Apple’s new font confirms the ideas developed more than fifty years ago by the renowned calligrapher and type designer Hermann Zapf. Sadly, Zapf died at the age of 96 on June 4, 2015, just one week before Apple’s San Francisco announcement.

Hermann Zapf’s contributions to typography are extensive and astonishing. He designed more than 200 typefaces—the popular Palatino (1948), Optima (1952), Zapf Dingbats (1978) and Zapf Chancery (1979) among them—including fonts in Arabic, Pan-Nigerian, Sequoia and Cherokee. Meanwhile, Zapf’s exceptional calligraphic skills were such that he famously penned the Preamble of the Charter of the United Nations in four languages for the New York Pierpont Morgan Library in 1960.

Zapf’s calligraphic skills were called upon for the republication of the Preamble of the UN Charter in 1960 for the Pierpont Morgan Library in New York City.

While his extraordinary creative accomplishments are far too many to list here, Hermann Zapf’s greatest legacy is the way he thought about type and its relationship to technology as a whole. Zapf was among the first typographers, and perhaps the most important, to theorize about the need for new forms of type driven by computer and digital technologies.

Early life

Hermann Zapf was born in Nuremberg on November 8, 1918, during the turbulent times at the end of World War I. As he wrote later in life, “On the day I was born, a workers’ and soldiers’ council took political control of the city. Munich and Berlin were rocked by revolution. The war ended, and the Republic was declared in Berlin on 9 November 1918. The next day Kaiser Wilhelm fled to Holland.”

At school, Hermann took an interest in technical subjects. He spent time in the library reading scientific journals and at home, along with his older brother, experimenting with electronics. He also tried hand lettering and created his own alphabets.

Hermann left school in 1933 with the intention of becoming an engineer. However, economic crisis and upheaval in Germany—including the temporary political detention of his father in March 1933 at the prison camp in Dachau—prevented him from pursuing his plans.

Apprentice years

Barred from attending the Ohm Technical Institute in Nuremberg for political reasons, Hermann sought an apprenticeship in lithography. In February 1934 he was hired by Karl Ulrich and Company as an apprentice photo retoucher for a four-year term.

In 1935, after reading books by Rudolf Koch and Edward Johnson on lettering and illuminating techniques, Hermann taught himself calligraphy. When management saw the quality of Hermann’s lettering, the Ulrich firm began to assign him work outside of his retouching apprenticeship.

At his father’s insistence, Hermann refused to take the journeyman’s examination at the end of his apprenticeship, on the grounds that the training had been interrupted by many unrelated tasks. He never received his journeyman’s certificate and left Nuremberg for Frankfurt to find work.

Zapf’s Gilgengart designed originally in 1938

Zapf started his career in type design at the age of 20 after he was employed at the Fürsteneck Workshop House, a printing establishment run by Paul Koch, the son of Rudolf Koch. As he later explained, “It was through the print historian Gustav Mori that I first came into contact with the D. Stempel AG type foundry and Linotype GmbH in Frankfurt. It was for them that I designed my first printed type in 1938, a fraktur type called ‘Gilgengart’.”

War years

Hermann Zapf was conscripted in 1939 and called up to serve in the German army near the town of Pirmasens on the French border. After a few weeks, he developed heart trouble and was transferred from the hard labor of shovel work to the writing room where he composed camp reports and certificates.

When World War II started, Hermann was dismissed for health reasons. In April 1942 he was called up again, this time for the artillery. Hermann was quickly reassigned to the cartographic unit where he became well-known for his exceptional map drawing skills. He was the youngest cartographer in the German army through the end of the war.

An example of calligraphy from the sketchbook that Hermann Zapf kept during World War II.

Zapf was captured after the war by the French and held in a field hospital in Tübingen. As he recounted, “I was treated very well and they even let me keep my drawing instruments. They had a great deal of respect for me as an ‘artiste’ … Since I was in very poor health, the French sent me home just four weeks after the end of the war. I first went back to my parents in my home town of Nuremberg, which had suffered terrible damage.”

Post-war years

In the years following the war, Hermann taught calligraphy in Nuremberg. In 1947, he returned to Frankfurt and took a position with the Stempel AG foundry, with little qualification other than the sketchbooks he had kept during the war years.

From 1948 to 1950, while he worked at Stempel on typography designs for metal punch cutting, he developed a specialization in book design. Hermann also continued to teach calligraphy twice a week at the Arts and Crafts School in Offenbach.

Zapf’s Palatino (1948) and Optima (1952) fonts

It was during these years that Zapf designed Palatino and Optima. Working closely with the punch cutter August Rosenberg, Hermann designed Palatino and named it after the 16th-century Italian master of calligraphy Giambattista Palatino. In the Palatino face, Zapf attempted to emulate the forms of the great humanist typographers of the Renaissance.

Optima, on the other hand, expressed more directly the genius of Zapf’s vision and foreshadowed his later contributions. Optima can be described as a hybrid serif-and-sans serif typeface because it blends features of both: serif-less thick and thin strokes with subtle swelling at the terminals that suggest serifs. Zapf designed Optima during a visit to Italy in 1950 when he examined inscriptions at the Basilica di Santa Croce in Florence. It is remarkably modern, yet clearly derived from the Roman monumental capital model.

By the time Optima was released commercially by Stempel AG in 1958, the industry had begun to move away from metal casting methods and into phototypesetting. As many of his most successful fonts were reworked for the new methods, Zapf recognized—perhaps before and more profoundly than most—that phototypesetting was a transitional technology on the path from analog to an entirely new digital typography.

Digital typography

To grasp the significance of Zapf’s work, it is important to understand that, although “cold” photo type was an advance over “hot” metal type, both are analog technologies, i.e. they require the transfer of “master” shapes from manually engraved punches or hand drawn outlines to final production type by way of molds or photomechanical processes.

Due to the inherent limitations of metal and photomechanical media, analog type masters often contain design compromises. Additionally, the reproduction from one master generation to the next has variations and inconsistencies connected with the craftsmanship of punch cutting or outline drawing.

With digital type, character shapes exist as electronic files that “describe” fonts as mathematical vector outlines or as raster images plotted on an XY coordinate grid. With computer font data, typefaces can have nuances and features that could never be rendered in metal or photo type. Moreover, digital font masters can be copied precisely, without any quality degradation from one generation to the next.
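
As a rough illustration of what such a vector outline “describes,” the sketch below models a single stroke of a hypothetical glyph as one cubic Bézier segment on an em-square grid and samples points along it. The grid size, coordinates and control points are invented for the example and are not taken from any real font format.

    # Minimal sketch of a digital glyph outline described by a cubic Bezier curve.
    # The em-square size, endpoints and control points below are invented for
    # the example; real fonts store many such segments per glyph.
    UNITS_PER_EM = 1000  # a common design grid; the stroke below lies on it

    def cubic_bezier(p0, p1, p2, p3, t):
        """Evaluate a cubic Bezier outline segment at parameter t in [0, 1]."""
        u = 1.0 - t
        x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
        y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
        return (x, y)

    def flatten(p0, p1, p2, p3, steps=8):
        """Sample one outline segment into points for rasterizing or display."""
        return [cubic_bezier(p0, p1, p2, p3, i / steps) for i in range(steps + 1)]

    # One hypothetical stroke: two on-curve endpoints and two control points,
    # expressed in font units on the em grid.
    stroke = ((100, 0), (150, 400), (250, 600), (400, 700))
    for x, y in flatten(*stroke):
        print(f"{x:7.1f} {y:7.1f}")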

Hermann Zapf in 1960

From the earliest days of computers, Hermann Zapf began advocating for the advancement of digital typography. He argued that type designers needed to take advantage of the possibilities opened up by the new technologies and needed to create types that reflected the age. Zapf also combined knowledge of the rules of good type design with a recognition that fonts needed to be created specifically for electronic displays (at that time CRT-based monitors and televisions).

In 1959, at the age of 41, Zapf wrote in an industry journal, “It is necessary to combine the purpose, the simplicity and the beauty of the types, created as an expression of contemporary industrial society, into one harmonious whole. We should not seek this expression in imitations of the Middle Ages or in revivals of nineteenth century material, as sometimes seems the trend; the question for us is satisfying tomorrow’s requirements and creating types that are a real expression of our time but also represent a logical continuation of the typographic tradition of the western world.”

Warm reception in the US

Despite a very cold response in Germany—his ideas about computerized type were rejected as “unrealistic” by the Technical University in Darmstadt where he was a lecturer and by leading printing industry representatives—Hermann persevered. Beginning in the early 1960s, Zapf delivered a series of lectures in the US that were met with enthusiasm.

For example, a talk he delivered at Harvard University in October 1964 proved so popular that it led to the offer of a professorship at the University of Texas in Austin. The governor even made Hermann an “Honorary Citizen of the State of Texas.” In the end, Zapf turned down the opportunity due to family obligations in Germany.

Among his many digital accomplishments are the following:

  • Rudolf Hell

    When digital typography was born in 1964 with the Digiset system of Rudolf Hell, Hermann Zapf was involved. By the early 1970s, Zapf had created some of the first fonts ever designed for a digital system: Marconi, Edison, and Aurelia.

  • In 1976, Hermann was asked to take up a professorship in typographic computer programming at Rochester Institute of Technology (RIT) in Rochester, New York, the first of its kind in the world. Zapf taught at RIT for ten years and was able to develop his conceptions in collaboration with computer scientists and representatives of IBM and Xerox.
  • With Aaron Burns

    In 1977, Zapf partnered with graphic designers Herb Lubalin and Aaron Burns to found Design Processing International, Inc. (DPI) in New York City. The firm developed software with menu-driven typesetting features that could be used by non-professionals. The DPI software focused on automating hyphenation and justification rather than on the style of type design.

  • In 1979, Hermann began a collaboration with Professor Donald Knuth of Stanford University to develop a font that was adaptable for mathematical formulae and symbols.
  • With Peter Karow

    In the 1990s, Hermann Zapf continued to focus on professional typesetting algorithms with his “hz-program,” developed in collaboration with Peter Karow of the font company URW. The Zapf composition engine was eventually incorporated by Adobe Systems into the InDesign desktop publishing software. (A simplified sketch of the line-justification problem these systems address follows this list.)
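
As a rough illustration of that problem, here is a minimal sketch of greedy line breaking with space-stretching justification. It is not Zapf and Karow’s method: the hz-program and Adobe’s paragraph composer use paragraph-wide optimization and micro-typographic refinements, and the one-unit-per-character measure and sample text below are simplifications invented for the example.

    # Minimal sketch of greedy line breaking with full justification.
    # NOT the hz-program: real composers optimize whole paragraphs and adjust
    # letterforms and spacing far more subtly; widths here are one unit per char.

    def justify(text, width):
        words = text.split()
        lines, current = [], []
        for word in words:
            # Line length if this word were appended (one space per gap).
            tentative = sum(len(w) for w in current) + len(current) + len(word)
            if current and tentative > width:
                lines.append(pad_line(current, width))
                current = []
            current.append(word)
        if current:
            lines.append(" ".join(current))  # the last line stays ragged-right
        return lines

    def pad_line(words, width):
        """Distribute extra spaces across word gaps to reach the target width."""
        if len(words) == 1:
            return words[0]
        gaps = len(words) - 1
        extra = width - sum(len(w) for w in words)
        base, leftover = divmod(extra, gaps)
        out = ""
        for i, word in enumerate(words[:-1]):
            out += word + " " * (base + (1 if i < leftover else 0))
        return out + words[-1]

    sample = ("To produce a clear readable text that is pleasing to the eye "
              "has been the primary goal of typography in all the past centuries")
    for line in justify(sample, 36):
        print(line)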

Zapf’s legacy

Hermann Zapf actively participated—into his 70s and 80s—in some of the most important developments in type technology of the past fifty years. This was no accident. He possessed both a deep knowledge of the techniques and forms of type history and a unique appreciation for the impact of information technologies on the creation and consumption of the written word.

In 1971, Zapf gave a lecture in Stockholm called “The Electronic Screen and the Book” where he said, “The problem of legibility is as old as the alphabet, for the identification of a letterform is the basis of its practical use. … To produce a clear, readable text that is pleasing to the eye and well arranged has been the primary goal of typography in all the past centuries. With a text made visible on a CRT screen, new factors for legibility are created.”

More than 40 years before the Apple design team set out to create a font that is legible on multiple computer screens, the typography visionary Hermann Zapf was theorizing about the very same questions.