Archive for the Social Media Category

Where is VR going and why you should follow it

Posted in Digital Media, Mobile Media, Social Media, Video on November 15, 2015 by multimediaman
Promotional image for Oculus Rift VR headset

On November 2, video game maker Activision Blizzard announced a $5.9 billion purchase of King Digital Entertainment, maker of the mobile app game Candy Crush Saga. Activision Blizzard owns popular titles like Call of Duty, World of Warcraft and Guitar Hero—with tens of millions of copies sold—for play on game consoles and PCs. By comparison, King has more than 500 million worldwide users playing Candy Crush on TVs, computers and (mostly) mobile devices.

While it is not the largest-ever acquisition of a game company—Activision bought Blizzard in 2008 for $19 billion—the purchase shows how much the traditional gaming industry believes that future success will be tied to mobile and social media. Other recent acquisitions indicate how the latest in gaming hardware and software have become strategically important for the largest tech companies:

Major acquisitions of gaming companies by Microsoft, Amazon and Facebook took place in 2014

  • September 2014: Microsoft acquired Mojang for $2.5 billion
    Mojang’s Minecraft game has 10 million users worldwide and an active developer community. The Lego-like Minecraft is popular on both Microsoft’s Xbox game console and Windows desktop and notebook PCs. In making the purchase, Microsoft CEO Satya Nadella said, “Gaming is a top activity spanning devices, from PCs and consoles to tablets and mobile, with billions of hours spent each year.”
  • August 2014: Amazon acquired Twitch for $970 million
    The massive online retailer has offered online video since 2006 and the purchase of Twitch—the online and live streaming game service—adds 45 million users to Amazon’s millions of Prime Video subscribers and FireTV (stick and set top box) owners. Amazon’s CEO Jeff Bezos said of the acquisition, “Broadcasting and watching gameplay is a global phenomenon and Twitch has built a platform that brings together tens of millions of people who watch billions of minutes of games each month.”
  • March 2014: Facebook acquired Oculus for $2 billion
    Facebook users account for approximately 20% of all the time that people spend online each day. The Facebook acquisition of Oculus—maker of virtual reality headsets—anticipates that social media will soon include an immersive experience as opposed to scrolling through rectangular displays on PCs and mobile devices. According to Facebook CEO Mark Zuckerberg, “Mobile is the platform of today, and now we’re also getting ready for the platforms of tomorrow. Oculus has the chance to create the most social platform ever, and change the way we work, play and communicate.”

The integration of gaming companies into the world’s largest software, e-commerce and social media corporations is further proof that media and technology convergence is a powerful force drawing many different industries together. As is clear from the three CEO quotes above, a race is on to see which company can offer a mix of products and services sufficient to dominate the number of hours per day the public spends consuming information, news and entertainment on their devices.

What is VR?

Among the most important current trends is the rapid growth and widespread adoption of virtual reality (VR). Formerly of interest to hobbyists and gaming enthusiasts, VR technologies are now moving into mainstream daily use.

A short definition of VR is a computer-simulated artificial world. More broadly, VR is an immersive multisensory, multimedia experience that duplicates the real world and enables users to interact with the virtual environment and with each other. In the most comprehensive VR environments, the sight, sound, touch and smell of the real world are replicated.

Current and most commonly used VR technologies include a stereoscopic headset—which tracks the movement of a viewer’s head in 3 dimensions—and surround sound headphones that add a spatial audio experience. Other technologies such as wired gloves and omnidirectional treadmills can provide tactile and force feedback that enhance the recreation of the virtual environment.
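In software terms, the head tracking described above amounts to converting sensed orientation angles into a gaze direction that the renderer can use. A minimal sketch in Python (the axis convention and the function name are illustrative assumptions, not any headset vendor's SDK):

```python
import math

def view_direction(yaw_deg, pitch_deg):
    """Convert head yaw and pitch (degrees) into a 3-D unit view vector.

    Axis convention (an assumption for this sketch): yaw 0 / pitch 0
    looks down the -Z axis, positive yaw turns the head to the right,
    positive pitch tilts it upward.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    x = math.sin(yaw) * math.cos(pitch)
    y = math.sin(pitch)
    z = -math.cos(yaw) * math.cos(pitch)
    return (x, y, z)

# Looking straight ahead:
print(view_direction(0, 0))   # (0.0, 0.0, -1.0)
# Head turned 90 degrees to the right: roughly (1, 0, 0)
print(view_direction(90, 0))
```

A real headset streams orientation samples (plus roll and position) to the renderer many times per second, but the principle—mapping sensed rotation to a camera direction—is the same.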

The New York Times’ VR promotion included a Google Cardboard viewer that was sent along with the printed newspaper to 1 million subscribers

Recent events have demonstrated that VR use is becoming more practical and accessible to the general public:

  • On October 13, in a partnership between CNN and NextVR, the presidential debate was broadcast in VR as a live stream and stored for later on-demand viewing. The CNN experience made it possible for viewers to watch the event as though they were present, including the ability to see other people in attendance and observe elements of the debate that were not visible to the TV audience. NextVR and the NBA also employed the same technology to broadcast the October 27 season opener between the Golden State Warriors and New Orleans Pelicans, the first-ever live VR sporting event.
  • On November 5, The New York Times launched a VR news initiative that included the free distribution of Google Cardboard viewers—a folded up cardboard VR headset that holds a smartphone—to 1 million newspaper subscribers. The Times’ innovation required users to download the NYTvr app to their smartphone in order to watch a series of short news films in VR.

Origins of VR

Virtual reality is the product of the convergence of theater, camera, television, science fiction and digital media technologies. The basic ideas of virtual reality go back more than two hundred years and coincide with the desire of artists, performers and educators to recreate scenes and historical events. In the early days this meant painting panoramic views, constructing dioramas and staging theatrical productions where viewers had a 360˚ visual surround experience.

In the late 19th century, hundreds of cycloramas were built—many of them depicting major battles of the Civil War—in which viewers sat at the center of a circular theater while the historical event was recreated in sequence around them. In 1899, a Broadway dramatization of the novel Ben-Hur employed live horses galloping straight toward the audience on treadmills while a backdrop revolved in the opposite direction, creating the illusion of high speed. Dust clouds provided additional sensory elements.

Frederic Eugene Ives’ Kromskop viewer, invented at the beginning of the 20th century

Contemporary ideas about virtual reality are associated with the 3-D photography and motion pictures of the early twentieth century. Experimentation with color stereoscopic photography began in the late 1800s, and the first widely distributed 3-D images—taken by Frederic Eugene Ives—were of the 1906 San Francisco earthquake. As with present-day VR, Ives’ images required both a special camera and a viewing device, called the Kromskop, in order to see the 3-D effect.

1950s-era 3-D View-Master with reels

3-D photography expanded and won popular acceptance beginning in the late 1930s with the launch of Edwin Eugene Mayer’s View-Master. The virtual experience of the View-Master system was enhanced with the addition of sound in 1970. Mayer’s company was eventually purchased by toy maker Mattel, which later moved the line to its Fisher-Price brand, and the product remained successful until the era of digital photography in the early 2000s.

An illustration of the Teleview system that mounted a viewer containing a rotation mechanism in the armrest of theater seats

Experiments with stereoscopic motion pictures were conducted in the late 1800s. The first practical application of a 3-D movie took place in 1922 using the Teleview system of Laurens Hammond (inventor of the Hammond Organ) with a rotating shutter viewing device attached to the armrest of the theater seats.

Prefiguring the present-day inexpensive VR headset, the so-called “golden era” of 3-D film began in the 1950s and included cardboard 3-D glasses. Moviegoers got their first introduction to 3-D with stereophonic sound in 1953 with the film House of Wax starring Vincent Price. The popular enthusiasm for 3-D was eventually overtaken by the practical difficulties associated with the need to project two separate film reels in perfect synchronization.

1950s 3-D glasses and a movie audience wearing them

Subsequent waves of 3-D movies in the second half of the twentieth century—projected from a single film strip—were eventually displaced by the digital film and audio methods associated with the larger formats and Dolby Digital sound of Imax, Imax Dome, Omnimax and Imax 3D. Anyone who has experienced the latest in 3-D CGI films such as Avatar (2009) can attest to the mesmerizing impact of the immersive experience made possible by these movie theater techniques.

Computers and VR

Recent photo of Ivan Sutherland; he invented the first head-mounted display at MIT in 1966

It is widely acknowledged that the theoretical possibility of creating virtual experiences that “convince” all the senses of their “reality” began with the work of Ivan Sutherland at MIT in the 1960s. In 1966, Sutherland invented the first head-mounted display—nicknamed the “Sword of Damocles”—designed to immerse the viewer in a simulated 3-D environment. In a 1965 essay called “The Ultimate Display,” Sutherland wrote that computers have the ability to construct a “mathematical wonderland” that “should serve as many senses as possible.”

With increases in the performance and memory capacity of computers, along with the shrinking size of microprocessors and display technologies, Sutherland’s vision began to take hold in the 1980s and 1990s. Advances in vector-based CGI software, especially flight simulators created by government researchers for military aircraft and space exploration, brought the term “reality engine” into use. These systems, in turn, spawned notions of complete immersion in “cyberspace,” where sight, sound and touch are dominated by computer-generated sensations.

The term “virtual reality” was popularized during these years by Jaron Lanier and his company VPL Research. With VR products such as the DataGlove, the EyePhone and the AudioSphere, Lanier teamed up with game makers at Mattel to create the first virtual experiences with affordable consumer products, despite their still-limited functionality.

By the end of the first decade of the new millennium, many of the core technologies of present-day VR systems were developed enough to make simulated experiences more convincing and easy to use. Computer animation technologies employed by Hollywood and video game companies pushed the creation of 3-D virtual worlds to new levels of “realness.”

An offshoot of VR, called augmented reality (AR), took advantage of high resolution camera technologies and allowed virtual objects to appear within the actual environment and enabled users to view and interact with them on computer desktop and mobile displays. AR solutions became popular with advertisers offering unique promotional opportunities that capitalized on the ubiquity of smartphones and tablets.

Expectations

A scene from the 2009 film “Avatar”

Aside from news, entertainment and advertising, there are big possibilities opening up for VR in many business disciplines. Some experts expect that VR will impact almost every industry in a manner similar to that of PCs and mobile devices. Entrepreneurs and investors are creating VR companies with the aim of exploiting the promise of the new technology in education, health care, real estate, transportation, tourism, engineering, architecture and corporate communications (to name just a few).

Like consumer-level artificial intelligence products such as Apple’s Siri and Amazon’s Echo, present-day virtual reality technologies tend to fall frustratingly short of expectations. However, with the rapid evolution of core technologies—processors, software, video displays, sound, miniaturization and haptic feedback systems—it is conceivable that VR is ripe for a significant leap in the near future.

In many ways, VR is the ultimate product of media convergence as it is the intersection of multiple and seemingly unrelated paths of scientific development. As pointed out by Howard Rheingold in his authoritative 1991 book Virtual Reality, “The convergent nature of VR technology is one reason why it has the potential to develop very quickly from a scientific oddity into a new way of life … there is a significant chance that the deep cultural changes suggested here could happen faster than anyone has predicted.”

The mobile juggernaut

Posted in Mobile, Mobile Media, Social Media on August 31, 2015 by multimediaman
Mark Zuckerberg

On August 27, Mark Zuckerberg posted the following message on his personal Facebook account, “We just passed an important milestone. For the first time ever, one billion people used Facebook in a single day. On Monday, 1 in 7 people on Earth used Facebook to connect with their friends and family.”

Facebook’s one-billion-users-in-a-single-day milestone on August 24, 2015 is remarkable for a social network that was started by Zuckerberg and a group of college dormitory friends in 2004. Facebook became available for public use less than ten years ago, and the milestone illustrates the speed and extent to which social media has penetrated the daily lives of people all over the world.

While Facebook is very popular in the US and Canada, 83.1% of the 1 billion daily active users (DAUs) come from other parts of the world. Despite being barred in China—where there are 600 million internet users—Facebook has hundreds of millions of active users in India, Brazil, Indonesia, Mexico, UK, Turkey, Philippines, France and Germany.

Facebook's "Mobile Only" active users.

Facebook’s “Mobile Only” active users.

A major driver behind the global popularity and growth speed of Facebook is the mobile technology revolution. According to published data, Facebook reached an average of 844 million mobile active users during the month of June 2015 and industry experts are expecting this number to hit one billion in the very near future. Clearly, without smartphones, tablets and broadband wireless Internet access, Facebook could not have achieved the DAU milestone since many of the one billion people are either “mobile first” or “mobile only” users.

From mobile devices to wearables

When I last wrote about mobile technologies two-and-a-half years ago, the rapid rise of smartphones and tablets and the end of the PC era of computing was a dominant topic of discussion. Concerns were high that significant resources were being shifted toward mobile devices and advertising and away from older technologies and media platforms. The move from PCs and web browsers toward apps on smartphones and tablets was presenting even companies like Facebook and Google with a “mobility challenge.”

Today, while mobile device expansion has slowed and the dynamics within the mobile markets are becoming more complex, the overall trend of PC displacement continues. According to IDC, worldwide tablet market growth is falling, smartphone market growth is slowing and the PC market is shrinking. On the whole, however, smartphone sales represent more than 70% of total personal computing device shipments and, according to an IDC forecast, this will reach nearly 78% in 2019.

IDC’s Worldwide Device Market 5 Year Forecast

According to IDC’s Tom Mainelli, “For more people in more places, the smartphone is the clear choice in terms of owning one connected device. Even as we expect slowing smartphone growth later in the forecast, it’s hard to overlook the dominant position smartphones play in the greater device ecosystem.”

While economic troubles in China and other market dynamics have led some analysts to conclude that the smartphone boom has peaked, it is clear that consumers all over the world prefer the mobility, performance and accessibility of their smaller devices.

Ericsson’s June 2015 Mobility Report projects 6.1 billion smartphone users by 2020.

According to the Ericsson Mobility Report, there will be 6.1 billion smartphone users by 2020. That is 70% of the world’s population.

Meanwhile, other technology experts are suggesting that wearables—smartwatches, fitness devices, smartclothing and the like—are expanding the mobile computing spectrum and making it more complex. Since many wearable electronic products integrate easily with smartphones, it is expected this new form will push mobile platforms into new areas of performance and power.

Despite the reserved consumer response to the Apple Watch and the failure of Google Glass, GfK predicts that 72 million wearables will be sold in 2015. Other industry analysts are also expecting wearables to become untethered from smartphones and usher in the dawn of “personalized” computing.

Five mobile trends to watch

With high expectations that mobile tech will continue to play a dominant role in the media and communications landscape, these are some major trends to keep an eye on:

Wireless Broadband: Long Term Evolution (LTE) connectivity reached 50% of the worldwide smartphone market by the end of 2014 and projections show this will likely be at 60% by the end of this year. A new generation of mobile data technology has appeared every ten years since 1G was introduced in 1981. The fourth generation (4G) LTE systems were first introduced in 2012. 5G development has been underway for several years now and it promises speeds of several tens of megabits per user with an expected commercial introduction sometime in the early 2020s.
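Those generational rates are easiest to grasp as download times. A quick back-of-the-envelope sketch for a 1 GB video file (the per-generation rates below are illustrative round numbers chosen for the arithmetic, not measured carrier speeds):

```python
# Time to download a 1 GB file at nominal per-user rates for each
# mobile generation (round illustrative figures, not measurements).
rates_mbps = {"3G": 2, "4G LTE": 20, "5G (projected)": 50}

file_megabits = 1.0 * 8 * 1000  # 1 GB is roughly 8,000 megabits

for generation, rate in rates_mbps.items():
    minutes = file_megabits / rate / 60
    print(f"{generation}: {minutes:.1f} minutes")
```

At several tens of megabits per user, the same file that takes over an hour at a typical 3G rate arrives in a couple of minutes.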

Apple’s A8 mobile processor is 50 times faster than the original iPhone processor.

Mobile Application Processors: Mobile system-on-a-chip (SoC) development is one of the most intensely competitive sectors of computer chip technology today. Companies like Apple, Qualcomm and Samsung are all pushing the capabilities and speeds of their SoCs to get the maximum performance with the least energy consumption. Apple’s SoCs have set the industry benchmark for performance: the iPhone 6 contains an A8 processor that is 40% more powerful than the previous A7 chip and 50 times faster than the processor in the original iPhone. A new A9 processor will likely be announced with the next-generation iPhone in September 2015 and is expected to bring a 29% performance boost over the A8.
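The chip-to-chip figures quoted above compound, so the implied intermediate ratios can be recovered with simple division; a sketch using only the quoted percentages:

```python
# Implied speedup ratios from the figures quoted above.
a8_vs_a7 = 1.40        # the A8 is said to be 40% faster than the A7
a8_vs_original = 50.0  # and 50x the original iPhone's processor

# Dividing out the A8 advantage puts the A7 at roughly 36x the original.
a7_vs_original = a8_vs_original / a8_vs_a7
print(f"A7 vs original iPhone: ~{a7_vs_original:.0f}x")

# A further 29% boost would put an A9 at roughly 64-65x the original.
a9_vs_original = a8_vs_original * 1.29
print(f"A9 vs original iPhone: ~{a9_vs_original:.1f}x")
```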

Pressure-Sensitive Screens: Called “Force Touch” by Apple, this new mobile display capability allows users to apply varying degrees of pressure to trigger specific functions on a device. Just like “touch” functionality—swiping, pinching, etc.—pressure-sensitive interaction with a mobile device adds a new dimension to the human-computer interface. Apple originally launched this feature with the Apple Watch, which has a limited screen dimension on which to perform touch functions.

Customized Experiences: With mobile engagement platforms, smartphone users can receive highly targeted promotions and offers based upon their location within a retail establishment. Also known as proximity marketing, the technology uses mobile beacons with Bluetooth communications to send marketing text messages and other notifications to a mobile device that has been configured to receive them.
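Under the hood, beacon-based proximity is usually estimated from received signal strength (RSSI). A minimal sketch of the standard log-distance path-loss estimate; the function name and calibration constants are illustrative assumptions, not values from any beacon vendor's specification:

```python
def estimate_distance(rssi_dbm, measured_power_dbm=-59, path_loss_exp=2.0):
    """Estimate distance (meters) to a Bluetooth beacon from RSSI.

    Log-distance path-loss model: measured_power_dbm is the calibrated
    RSSI at 1 meter, and path_loss_exp describes how quickly the signal
    fades (2.0 is the free-space value). Both defaults are illustrative.
    """
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# At the calibrated 1-meter signal strength the estimate is 1 m:
print(estimate_distance(-59))  # 1.0
# A signal 20 dB weaker suggests the shopper is about 10 m away:
print(estimate_distance(-79))  # 10.0
```

In practice an app would bucket these noisy estimates into near/far zones before deciding whether to send a promotion to the device.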

Mobile Apps: The mobile revolution has been a disruptive force for the traditional desktop software industry. Microsoft now offers its Office suite of applications to both iOS and Android users free of charge. In August, Adobe announced that it would release a full-featured mobile version of its iconic Photoshop software in October as a free download and as part of its Creative Cloud subscription.

With mobile devices, operating systems, applications and connectivity making huge strides and expanding across the globe by the billions, it is obvious that every organization and business should be navigating its way behind this technology juggernaut. This begins with an internal review of your mobile practices:

  • Do you have a mobile communications and/or operations strategy?
  • Is your website optimized for a mobile viewing experience?
  • Are you encouraging the use of smartphones and tablets and building a mobile culture within your organization?
  • Are you using text messaging for any aspect of your daily work?
  • Are you using social media to communicate with your members, staff, prospects or clients?

If the answer to any of these questions is no, then it is time to act.

AI and the future of information

Posted in Digital Media, Mobile, Social Media on June 30, 2015 by multimediaman
Amazon Echo intelligent home assistant

Last November, Amazon revealed its intelligent home assistant called Echo. The black cylinder-shaped device is always on and ready for your voice commands. It can play music, read audio books and it is connected to Alexa, Amazon’s cloud-based information service. Alexa can answer any number of questions regarding the weather, news, sports scores, traffic reports and your schedule in a human-like voice.

Echo has an array of seven microphones and it can hear—and also learn—your voice, speech pattern and vocabulary even from across the room. With additional plugins, Echo can control your automated home devices like lights, thermostat, kitchen appliances, security system and more with just the sound of your voice. This is certainly a major leap from “Clap on, Clap off” (watch “The Clapper” video from the mid-1980s here: https://www.youtube.com/watch?v=Ny8-G8EoWOw).

As many critics have pointed out, the Echo is Amazon’s response to Siri, Apple’s voice-activated intelligent personal assistant and knowledge navigator. Siri was launched as an integrated feature of the iPhone 4S in October 2011 and came to the iPad in 2012. Siri is also now part of the Apple Watch, a wearable device that adds haptics—tactile feedback—and voice recognition, along with a digital crown control knob, to the human-computer interface (HCI).

If you have tried to use any of these technologies, you know that they are far from perfect. As New York Times reviewer Farhad Manjoo explained, “If Alexa were a human assistant, you’d fire her, if not have her committed.” Oftentimes, using any of the modern artificial intelligence (AI) systems can be an exercise in futility. However, it is important to recognize that computer interaction has come a long way since mainframe consoles and command-line interfaces were replaced by the graphical, point-and-click interaction of the desktop.

What is artificial intelligence?

The pioneers of artificial intelligence theory: Alan Turing, John McCarthy, Marvin Minsky and Ray Kurzweil

Artificial intelligence is the simulation of the functions of the human brain—such as visual perception, speech recognition, decision-making, and translation between languages—by man-made machines, especially computers. The field was started by the noted computer scientist Alan Turing shortly after WWII and the term was coined in 1956 by John McCarthy, a cognitive and computer scientist and Stanford University professor. McCarthy developed one of the first programming languages called LISP in the late 1950s and is recognized for having been an early proponent of the idea that computer services should be provided as a utility.

McCarthy worked with Marvin Minsky at MIT in the late 1950s and early 1960s and together they founded what has become known as the MIT Computer Science and Artificial Intelligence Laboratory. Minsky, a leading AI theorist and cognitive scientist, put forward a range of ideas and theories to explain how language, memory, learning and consciousness work.

The core of Minsky’s theory—what he called the society of mind—is that human intelligence is a vast complex of very simple processes that can be individually replicated by computers. In his 1986 book The Society of Mind, Minsky wrote, “What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.”

The theory, science and technology of artificial intelligence have advanced rapidly with the development of microprocessors and the personal computer. These advances have also been aided by a growing understanding of the functions of the human brain. In recent decades, the field of neuroscience has vastly expanded our knowledge of the parts of the brain, especially the neocortex and its role in the transition from sensory perception to thought and reasoning.

Ray Kurzweil has been a leading theoretician of AI since the 1980s and has pioneered the development of devices for text-to-speech, speech recognition, optical character recognition and music synthesizers (Kurzweil K250). He sees the development of AI as a necessary outcome of computer technology and has written widely—The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999), The Singularity is Near (2005) and How to Create a Mind (2012)—that this is a natural extension of the biological capacities of the human mind.

Kurzweil, who corresponded as a New York City high school student with Marvin Minsky, has postulated that artificial intelligence can solve many of society’s problems. Kurzweil believes—based on the exponential growth rate of computing power, processor speed and memory capacity—that humanity is rapidly approaching a “singularity” in which machine intelligence will be vastly more powerful than all human intelligence combined. He predicts that computers will reach human-level intelligence by 2029, ushering in a period when developments in computer technology, genetics, nanotechnology, robotics and artificial intelligence will transform the minds and bodies of humans in ways that cannot currently be comprehended.
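The exponential premise behind such predictions is easy to make concrete: anything that doubles on a fixed schedule grows by orders of magnitude within a few decades. A sketch, assuming the often-cited two-year doubling period (an assumption for illustration, not a law):

```python
# Compounding from a fixed doubling period, the arithmetic behind
# exponential-growth projections. The 2-year figure is an assumption.
DOUBLING_YEARS = 2.0

def growth_factor(years):
    """Total growth after `years`, given doubling every DOUBLING_YEARS."""
    return 2 ** (years / DOUBLING_YEARS)

for span in (10, 20, 30):
    print(f"{span} years -> {growth_factor(span):,.0f}x")
```

Thirty years of two-year doublings multiplies capacity more than 30,000-fold, which is why small disagreements about the doubling period shift singularity-style dates by decades.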

Some fear that the ideas of Kurzweil and his fellow adherents of transhumanism represent an existential threat to society and mankind. These opponents—among them the physicist Stephen Hawking and the pioneer of electric cars and private spaceflight Elon Musk—argue that artificial intelligence could produce the biggest “blowback” in history, a scenario depicted in Kubrick’s film 2001: A Space Odyssey.

While much of this discussion remains speculative, anyone who watched in 2011 as the IBM supercomputer Watson defeated two very successful Jeopardy! champions (Ken Jennings and Brad Rutter) knows that AI has already advanced a long way. Unlike the human contestants, Watson was able to store 200 million pages of structured and unstructured content, including the full text of Wikipedia, in four terabytes of memory.
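The Watson figures above imply a modest footprint per page, which a quick sanity check makes explicit:

```python
# Back-of-the-envelope check on the Watson figures quoted above.
pages = 200_000_000           # 200 million pages of content
storage_bytes = 4 * 10**12    # four terabytes (decimal TB assumed)

bytes_per_page = storage_bytes / pages
print(f"{bytes_per_page / 1000:.0f} KB per page")  # 20 KB per page
```

Roughly 20 KB per page is consistent with mostly-text content, which is what made it feasible to hold the entire corpus in fast storage.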

Media and interface obsolescence

Today, the advantages of artificial intelligence are available to great numbers of people in the form of personal assistants like Echo and Siri. Even with their limitations, these tools allow instant access to information almost anywhere and anytime with a series of simple voice commands. When combined with mobile, wearable and cloud computing, AI is making all previous forms of information access and retrieval—analog and digital alike—obsolete.

There was a time not that long ago when gathering important information required a trip—with pen and paper in hand—to the library or to the family encyclopedia in the den, living room or study. Can you think of the last time you picked up a printed dictionary? The last complete edition of the Oxford English Dictionary—all 20 volumes—was printed in 1989. Anyone born after 1993 is likely to have never seen an encyclopedia (the last edition of the Encyclopedia Britannica was printed in 2010). Further still, GPS technologies have driven most printed maps into bottom drawers and the library archives.

Among teenagers, instant messaging has overtaken email as the primary form of electronic communications

But that is not all.  The technology convergence embodied in artificial intelligence is making even more recent information and communication media forms relics of the past. Optical discs have all but disappeared from computers and the TV viewing experience as cloud storage and time-shifted streaming video have become dominant. Social media (especially photo apps) and instant messaging have also made email a legacy form of communication for an entire generation of young people.

Meanwhile, the advance of the touch/gesture interface is rapidly replacing the mouse, and, with improvements in speech-to-text technology, is it so hard to imagine the disappearance of the QWERTY keyboard (a relic of the mechanical limitations of the 19th century typewriter)? Even the desktop computer display is due for replacement by cameras and projectors that can make any surface an interactive workspace.

In his epilogue to How to Create a Mind, Ray Kurzweil writes, “I already consider the devices I use and the cloud computing resources to which they are virtually connected as extensions of myself, and feel less than complete if I am cut off from these brain extenders.” While some degree of skepticism is justified toward Kurzweil’s transhumanist theories as a form of technological utopianism, there is no question that artificial intelligence is a reality and that it will be with us—increasingly integrated into us and as an extension of us—for now and evermore.