For a while now, we’ve been using our tablets and smartphones to enhance what we’re watching on TV. All it takes is a simple hashtag or web address shown on screen, and viewers can interact with each other, or even with the TV show itself. That’s about to change.
Thanks to ACR (automatic content recognition), in the future your devices will know what second-screen content to serve based on what they see or hear around them. For example, if the TV show or movie you’re watching starts trending on Twitter, your phone could alert you to the conversation taking place and prompt you to join in.
It doesn’t stop with social media. If you’re watching a fashion program, your phone could search for the best deals on the items being shown and give you the option to buy, right there and then.
ACR is nothing new. As far back as 2002, Shazam created a service where you could identify music playing around you by dialing a phone number and holding your phone up to the source. The service would then send you a text message with the artist name and title of the track. In 2008, Shazam became an iPhone app and since then has been extended to allow users to get more information and offers relating to what they are watching on TV.
Gracenote is also becoming a big player in the ACR business, supplying the technology and metadata for content recognition across multiple platforms and devices, including musiXmatch, Amazon and Xbox Music.
Mass adoption and privacy concerns
Smart TV manufacturers have been using ACR for a while. Samsung Smart TVs have had ACR functionality since 2012, allowing people to purchase products directly whilst watching TV adverts.
This adoption is not without its critics. A side effect of ACR is that it allows your viewing and listening data to be collected and analysed by advertisers, broadcasters and other third parties, which many feel is intrusive. While this may be the case, we’ve become used to the privacy trade-off: giving up personal information to gain access to content is the norm.
The future of ACR
Over the next few years, the use of ACR technology will become more commonplace, and things will get really interesting when other technologies, such as location awareness and wearables, come into play.
Picture the scene: You’re in the cinema and your movie is about to start, but a trailer for the new Star Wars film catches your attention. It would appear that it’s not only your attention that has been caught. Your phone knows you’re a Star Wars fan because of all the Star Wars related content you consume and when it recognises the Star Wars trailer combined with your location, it puts two and two together, and sends a message to your smartwatch asking if you’d like to follow this up after your movie. You tap “yes” and your watch goes back to sleep, allowing you to watch the movie.
As soon as the movie finishes, your watch pipes up again, telling you that the cinema has availability for the Star Wars movie you showed interest in. It shows you the available times and asks if you’d like to book.
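The decision logic in a scenario like this reduces to combining three signals: what ACR recognised, where you are, and what your viewing history says you care about. A minimal sketch, in which the content tags, the "cinema" check and the interest profile are all hypothetical stand-ins for real ACR metadata and history data:

```python
# Hypothetical mapping from recognised content to topic tags.
CONTENT_TAGS = {
    "star-wars-trailer": {"star-wars", "trailer"},
    "soap-opera-recap": {"soap-opera"},
}


def should_prompt(content: str, location: str, interests: set[str]) -> bool:
    """Ping the user's watch only when the recognised content overlaps
    with their interest profile and the location suggests they could
    act on it (here: they're at a cinema that could sell a ticket)."""
    tags = CONTENT_TAGS.get(content, set())
    return location == "cinema" and bool(tags & interests)


# Star Wars fan, at the cinema, trailer recognised -> prompt.
print(should_prompt("star-wars-trailer", "cinema", {"star-wars"}))  # True
# Same trailer recognised at home -> stay quiet.
print(should_prompt("star-wars-trailer", "home", {"star-wars"}))    # False
```

The point of the sketch is that ACR supplies only the first signal; the "two and two together" step is ordinary rules or learned ranking layered on top of the match.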
That’s just the near future. Further out, ACR will not only recognise content it has been programmed to recognise; it will learn to recognise the sights and sounds you encounter day to day, from the song of a blackbird to the sound of a police siren. The combination of ACR and artificial intelligence is where this gets exciting.
ACR is here to stay
Whether you find it useful or creepy, ACR is here to stay and will soon be one of those technologies we take for granted. Making devices aware of their environment and able to react/interact accordingly will be a huge game changer. On devices of the future, environmental interfaces will be as important as human interfaces.