The Future of AV Programming: Part 3

This is one part in a series of posts about my journey through the AV world.  I’ve broken these up into bite-sized portions that shouldn’t take more than 10 minutes to read.  I’m hoping to explore the future of systems programming in the Audio/Visual sense.  Let me know if I wander off on a tangent somewhere; I tend to forget where I’m going.

I went to ITT Tech and studied subjects designed so I could land a job in IT.  Instead, I took a left turn and ended up in AV.  Well, joke’s on me, because AV and IT ended up merging anyway.  I may have traded away my knowledge of ASN.1 for the Inverse Square Law, but I think you can agree I’m the winner there.

Convergence

Here’s a problem solved by that monster AMX system we sold: multi-site dialing.  Back in the day, you needed a separate device called an MCU (multipoint control unit) if you wanted to bring a bunch of people together over video.  Unfortunately, it wasn’t cheap.  We’re talking about $25k to buy one that would allow 8 sites to link up.  And it pretty much required a T1 installation to provide enough bandwidth.  What our system did was allow you to place up to two simultaneous video calls using TWO codecs.  People on Call A could talk to people on Call B because the system routed audio and video between the people in the room and the TWO codecs.  It was a clever way to get around a feature missing from the hardware endpoint.  Similarly, computer content shared from one leg of the call could be pushed to everyone else.  Of course, it meant having to spend twice as much money on hardware, but these were customers who must have really needed that dialing ability.

It wouldn’t be until later that TANDBERG added a software MCU to their endpoints to handle the low-end requirements for conference calling.  I’m not sure how many systems we sold with TWO codecs, but after the built-in software “multi-site” became available, I’m sure we sold zero.  Video conferencing had also been moving away from circuit-switched ISDN networks for quite some time.  H.323 was a well-established protocol, and bandwidth finally got cheap enough to do HD-quality calls over the Internet.  The competing SIP standard muscled its way in as well, and the whole Unified Communications boom was born.

Cisco bought out TANDBERG in 2010 and suddenly became a huge contender in the AV world.  They developed new hardware, introduced their own touch panel controls, etc.  While IT still didn’t like video traffic moving across their network, they likely warmed to the idea a little since the equipment bore a recognized name.  Once you get people to adopt video collaboration, it becomes another one of those necessities, like email.  We started to see more systems installed that weren’t much beyond a hardware codec, a touchpanel, and a display.  Control systems were still necessary in larger rooms that had integrated audio, multiple displays, an A/V switcher, etc.  So this led to a few confusing designs where our engineers placed a Cisco and a Crestron touchpanel at the conference table, side-by-side.  How is a user supposed to make sense of this?  Which panel does what?

Cisco absorbed some of the qualities of a programmable touchpanel so they could edge companies like Crestron off the table.  And it worked.  Video endpoint touchpanels started to be the sole user interface we sold.  The control programming happened more and more in the background, making sure that enough of the room operation was still automated.  Our users liked this consistent look across all their video rooms, and it looked like programming might not be needed as much going forward.

The convergence of AV and IT brought us more equipment sales, but many of these low-end systems were self-contained: they didn’t require a programmer.  These systems are pushing into more and more rooms as the demand for video everywhere increases.  What seems to happen is that customers accept a system that isn’t personalized, even though it may not deliver everything they’d like in the space.  A room that previously had a control system interacting with projectors, screens, shades, lights, climate, integrated audio, and A/V switchers now has only a hardware codec with a consistent UI that can turn a couple of displays on and off.  I may sound bitter that quantity won out over quality, but I think the true winner here is the standardized user interface.

Custom programming has long been fraught with poor user interface design.  “Cluttered” and “non-intuitive”: those adjectives come to mind after looking at some of the touchpanel layouts I’ve run across.  What really brought this issue into focus, though, was the iPhone.  Apple created a UI that is globally recognized, easy to use, and now expected from most touch devices.  As a result, user interfaces in our industry have simplified a lot, look more consistent from page to page, and feel far more polished than they did 10 years ago.

So what is the future for the AV programmer?  How can expensive programming time be leveraged to capitalize on cheap(er) hardware?  What goals are we trying to achieve with this technology?  As somebody who has invested years into this field, where do I go from here?

Find out, in the next (and last) part!
