Update from National Association of Broadcasters Conference

The National Association of Broadcasters (NAB) Conference is one of the biggest media conferences in North America. Several representatives from Towson University, including myself and staff and faculty from the Office of Technology Services and the College of Fine Arts and Communications, visited the convention in Nevada to learn about trends in technology, hardware, software and innovations in media. My goal for the conference was to attend as many lectures on emerging media as possible and to gauge any major trends in technology that might apply to us at Towson. This April, the theme carried over from last year (my first trip to NAB) with its slogan “The MET Effect: Media, Entertainment and Technology.”

I visited many workshops and panels on Virtual Reality and Augmented Reality that included major players such as Google and Intel. One common theme that caught my ear during the talks is that experts see Augmented Reality rising in prominence over Virtual Reality. Currently, the popular belief is that Virtual Reality is king because of its immersive and captivating qualities, but some experts believe there will soon be an industry shift that prioritizes augmented reality media creation. That belief stems from observations suggesting that users simply are not turning around to see the full “world” in 360-degree VR headsets. Also, most people have not made the commitment to purchase a headset for their mobile phones, let alone a whole system such as the Oculus Rift.

On the other hand, augmented reality can be experienced easily on mobile phones. Newer phones such as the iPhone X and Google Pixel are being manufactured with future augmented reality capabilities in mind. Google has responded to this trend by shifting their focus to 180-degree video as opposed to 360-degree video. Their argument is that scaling content down to a 180-degree field of view will bring immersive media “back to the storyteller.” That is to say, creators will once again have control of the viewer’s gaze while still providing an immersive experience. Google has partnered with multiple manufacturers to create dual-lens camera systems built specifically for 180-degree video. That content can already be experienced on YouTube with a headset.

3D Render of Stephens Hall

Another topic that generated interest at NAB was photogrammetry. Photogrammetry is the technique of stitching together 2D images to create a 3D rendering of physical objects large and small. See the rendering above, which the Office of Academic Innovation created of Stephens Hall in the spring. Some news organizations have started to document events using this technique, including Aftermath VR, which is documenting artifacts and even entire city blocks relating to the Euromaidan revolution in Kyiv, Ukraine in 2014. USA Today has also launched a new project entitled “The Wall: Unknown Stories, Unintended Consequences,” which documents the proposed wall on the southern border. Photogrammetry is a simple concept with powerful possibilities for documentation and instruction; a minimal sketch of the underlying idea follows below. Is there an artifact or object that you would like to document for a class at Towson? OAI will be hosting Structure From Motion – Build Your Own 3D Files in August.
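
For the technically curious, here is a minimal sketch of the core idea behind photogrammetry (structure from motion): match features between two photos of the same object, estimate how the camera moved between them, and triangulate the matched points into 3D. It uses Python with OpenCV and NumPy; the image file names and camera intrinsics are placeholder assumptions, not values from the Stephens Hall project.

```python
# A minimal two-view structure-from-motion sketch (Python + OpenCV + NumPy).
# File names and the camera matrix K are illustrative assumptions.
import cv2
import numpy as np

# Two overlapping photos of the same object, taken from slightly different positions.
img1 = cv2.imread("stephens_hall_view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("stephens_hall_view2.jpg", cv2.IMREAD_GRAYSCALE)

# Approximate camera intrinsics (focal length and principal point) for the camera used.
K = np.array([[2800.0,    0.0, 1512.0],
              [   0.0, 2800.0, 2016.0],
              [   0.0,    0.0,    1.0]])

# 1. Detect and describe distinctive features in each image.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match features between the two images and keep only confident matches (ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Estimate the relative camera motion (rotation R, translation t) from the matches.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4. Triangulate the matched 2D points into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at the origin
P2 = K @ np.hstack([R, t])                          # second camera, relative pose
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T

print(f"Reconstructed {len(pts3d)} 3D points from {len(good)} feature matches.")
```

Dedicated photogrammetry software repeats this across dozens or hundreds of photos and then builds a dense, textured mesh from the sparse points, which is what produces renders like the Stephens Hall model above.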

Finally, a massive topic of debate and conversation at NAB was artificial intelligence in media. Last year, Adobe debuted AI capabilities in its video editing software, Premiere Pro, that allow for automated audio editing, which saves a lot of time for video editors like myself. This year, other broadcasters and media companies introduced some impressive AI capabilities. Yuko Yamanouchi from Japan’s NHK gave a presentation on the broadcaster’s artificial intelligence initiatives. NHK has advanced on-screen “readers” that can detect and analyze image content and then immediately generate descriptive audio that, in theory, would assist visually impaired viewers. Their systems can also analyze live audio efficiently enough to generate 3D computer-graphics simulations of people signing on screen, all in sync with the broadcast. IBM also introduced some of its technology for video editing: its Watson interface can analyze on-screen content to auto-edit clips faster and more efficiently than any human. Will this lead to fewer jobs for editors? I am not sure, but I welcome technology that cuts my editing time down!
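
To give a flavor of what “analyzing on-screen content to auto-edit clips” can mean, here is a small illustrative sketch, and emphatically not Adobe’s or IBM Watson’s actual method: it flags likely shot changes in a video by comparing color histograms of consecutive frames, points where an automated pipeline (or a human editor) could place cuts. It assumes Python with OpenCV, and the video path and threshold are placeholders.

```python
# A toy shot-boundary detector (Python + OpenCV): flag frames where the color
# histogram changes sharply from the previous frame, a crude stand-in for the
# content analysis that commercial AI editing tools perform.
# The video path and threshold are illustrative assumptions.
import cv2

VIDEO_PATH = "lecture_recording.mp4"  # placeholder file name
CUT_THRESHOLD = 0.6                   # correlation below this suggests a shot change

def frame_histogram(frame):
    """Compute a normalized HSV color histogram for one frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
    cv2.normalize(hist, hist)
    return hist

capture = cv2.VideoCapture(VIDEO_PATH)
fps = capture.get(cv2.CAP_PROP_FPS) or 30.0
cuts, prev_hist, frame_index = [], None, 0

while True:
    ok, frame = capture.read()
    if not ok:
        break
    hist = frame_histogram(frame)
    if prev_hist is not None:
        similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
        if similarity < CUT_THRESHOLD:
            cuts.append(frame_index / fps)  # record the cut point in seconds
    prev_hist = hist
    frame_index += 1

capture.release()
print(f"Detected {len(cuts)} likely shot changes at (seconds): {[round(t, 2) for t in cuts]}")
```

Real systems go much further, recognizing faces, speech and on-screen action, but the basic loop of scoring each frame and deciding where to cut is the same idea.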

To receive more information on emerging technologies, use the widget to the right to Subscribe by Email to TU’s Fresh Tech blog.


Merino, David A.