Department of Engineering

Sports calibrated

The view from the top of the stands of Lee Valley VeloPark, London.

New methods of gathering quantitative data from video – whether shot on a mobile phone or an ultra-high definition camera – may change the way that sport is experienced, for athletes and fans alike. 

The bat makes contact with the ball; the ball flies back, back, back; and a thousand mobile phones capture it live as the ball soars over the fence and into the cheering crowd. Baseball is America’s pastime and, as for many other spectator sports, mobile phones have had a huge effect on the experience of spending an afternoon at the ballpark.

But what to do with that video of a monster home run or a spectacular diving catch once the game is over? What did that same moment look like from the other end of the stadium? How many other people filmed exactly the same thing but from different vantage points? Could something useful be saved from what would otherwise be simply a sporting memory?

Dr Joan Lasenby of the Department of Engineering’s Signal Processing and Communications Group has been working on ways of gathering quantitative information from video, and thanks to an ongoing partnership with Google, a new method of digitally ‘reconstructing’ shared experiences such as sporting events or concerts is being explored at YouTube.

The goal is for users to upload their videos in collaboration with the event coordinator; a cloud-based system then identifies where in the venue each video was shot, building a map of the different cameras all over the stadium. Users can then choose which camera they want to watch, allowing them to experience the same event from dozens or even hundreds of different angles.

But although stitching together still images is reasonably straightforward, doing the same thing with video, especially when the distance between cameras can be on a scale as massive as a sports stadium, is much more difficult.

“There’s a lot of information attached to the still images we take on our phones or cameras, such as the type of camera, the resolution, the focus, and so on,” explained Joan. “But the videos we upload from our phones have none of that information attached, so patching them together is much more difficult.”

Using a series of videos taken on mobile phones during a baseball game, the researchers developed a method that uses visual information contained in the videos, such as a specific advertisement or another distinctive static feature of the stadium, as a sort of ‘anchor’ that enables each video’s location to be pinpointed.
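
The idea can be illustrated with standard computer-vision tools. The sketch below (not the group’s actual pipeline) matches a single video frame against a reference image of a distinctive static feature – an advertisement board, say – using ORB features and a RANSAC-fitted homography; the file names and parameters are placeholders.

```python
# A minimal sketch of the 'anchor' idea: match a video frame against a
# reference image of a known static feature to constrain where the frame
# was shot from. File names are placeholders for illustration only.
import cv2
import numpy as np

anchor = cv2.imread("advert_board.png", cv2.IMREAD_GRAYSCALE)  # known static feature
frame = cv2.imread("phone_frame.png", cv2.IMREAD_GRAYSCALE)   # frame from a fan's video

orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(anchor, None)
kp_f, des_f = orb.detectAndCompute(frame, None)

# Match binary descriptors and keep the strongest correspondences
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_f), key=lambda m: m.distance)[:100]

src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# The homography relating the anchor to its appearance in the frame
# constrains the camera's viewpoint (assumes at least four good matches).
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(inliers.sum())} inlier matches; homography:\n{H}")
```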

“Another problem we had to look at was a way to separate the good frames from the bad,” said Dr Stuart Bennett, a postdoctoral researcher in Joan’s group who developed this new method of three-dimensional reconstruction while a PhD student. “With the videos you take on your phone, usually you’re not paying attention to the quality of each frame as you would with a still image. We had to develop a way of efficiently, and automatically, choosing the best frames and deleting the rest.”

To identify where in the space each frame originated, the technology automatically selects the best frames via measures of sharpness and edge or corner content, and then matches them across videos. The system works with as few as two cameras, and the team has tested it with as many as ten. YouTube has been stress testing it further, anticipating that the technology could improve fan engagement in the sports and music entertainment sectors.
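
As an illustration of this kind of frame triage, the hedged sketch below scores frames by sharpness (variance of the Laplacian) and corner content (Shi–Tomasi corner count), then keeps only the highest-scoring ones. The weighting, threshold and video path are arbitrary placeholders, not the published method.

```python
# Score each frame for quality, then keep the best ones. The combined
# score and the 10% cut-off are illustrative placeholders.
import cv2

def frame_quality(gray):
    # Sharpness: variance of the Laplacian (higher = sharper)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    # Corner content: Shi-Tomasi corners (more corners = more usable structure)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=8)
    n_corners = 0 if corners is None else len(corners)
    return sharpness + 0.5 * n_corners  # placeholder weighting

cap = cv2.VideoCapture("fan_video.mp4")  # placeholder path
scored = []
ok, frame = cap.read()
while ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    scored.append((frame_quality(gray), frame))
    ok, frame = cap.read()
cap.release()

# Keep the best 10% of frames (at least one) for reconstruction
scored.sort(key=lambda s: s[0], reverse=True)
best = [f for _, f in scored[:max(1, len(scored) // 10)]]
```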

Although the technology is primarily intended for use in an entertainment context, Joan points out that it could potentially be applied for surveillance purposes as well. “It is a possible application down the road, and could one day be used by law enforcement to help provide information at the crime scene,” she said. “At the moment, a lot of surveillance is done with fixed cameras, and you know everything about the camera. But this sort of technology might be able to give you information about what’s going on in a particular video shot on a phone by making locations in that video identifiable.”

Another area where Joan’s group is extracting quantitative data from video is in its partnership with British Cycling. Over the past decade, the UK has become a dominant force in international cycling, thanks to the quality of its riders and equipment, its partnerships with industry and academia, and its use of technology to help improve speeds on the track and on the road.

“In sport, taking qualitative videos and photographs is commonplace, which is extremely useful, as athletes aren’t robots,” said Professor Tony Purnell, Head of Technical Development for the Great Britain Cycling Team and Royal Academy of Engineering Visiting Professor at the Department of Engineering. “But what we wanted was to start using image processing not just to gather qualitative information, but to get some good quantitative data as well.”

Currently, elite cyclists are filmed on a turbo trainer, which is essentially a stationary bicycle in a lab or in a wind tunnel. The resulting videos are then assessed to improve aerodynamics or help prevent injuries. “But for cyclists, especially sprinters, sitting on a constrained machine just isn’t realistic,” said Joan. “When you look at a sprinter on a track, they’re throwing their bikes all over the place to get even the tiniest advantage. So we thought that if we could get quantitative data from video of them actually competing, it would be much more valuable than anything we got from a stationary turbo trainer.”

To obtain this sort of data, the researchers turned to techniques used in the gaming industry, where markers provide quantitative information about what’s happening – an approach similar to the team’s work with Google. One thing that simplifies the gathering of quantitative information from these videos is the ability to ‘subtract’ the background, so that only the athlete remains. But doing this is no easy task, especially as the boards of the velodrome and the legs of the cyclist are close to the same colour. In addition, things that might appear minor to the human eye, such as shadows or changes in the light, make the maths behind this type of subtraction extremely complicated.

Working with undergraduate students, graduate students and postdoctoral researchers, Joan’s team has nevertheless managed to develop real-time subtraction methods to extract the data that may give the British team the edge as they prepare for the Rio Olympics in 2016.
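
As a rough baseline for what such subtraction involves, the sketch below uses OpenCV’s stock MOG2 background model with shadow detection – a standard off-the-shelf starting point, not the bespoke real-time method developed by Joan’s team. The video path is a placeholder.

```python
# Baseline background subtraction with a stock Gaussian-mixture model.
# The real velodrome problem (boards and legs of similar colour, moving
# shadows, changing light) is much harder than this off-the-shelf demo.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
cap = cv2.VideoCapture("velodrome.mp4")  # placeholder path
ok, frame = cap.read()
while ok:
    mask = subtractor.apply(frame)  # 255 = foreground, 127 = shadow
    mask[mask == 127] = 0           # drop detected shadows
    rider = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("foreground", rider)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
    ok, frame = cap.read()
cap.release()
cv2.destroyAllWindows()
```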

“Technology is massively important in sport,” said Joan. “The techniques we’re developing here are helping to advance how we experience sport, both as athletes and as fans.”

Inset images: credit British Cycling

The text in this work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Images, including our videos, are Copyright © University of Cambridge and licensors/contributors as identified. All rights reserved. We make our image and video content available in a number of ways that permit your use and sharing of our content under their respective Terms.