AI-based ‘no-touch touchscreen’ could reduce risk of pathogen spread from surfaces

A ‘no-touch touchscreen’ developed for use in cars could also have widespread applications in a post-COVID-19 world, by reducing the risk of transmission of pathogens on surfaces.

The patented technology, known as ‘predictive touch’, was developed by engineers at the University of Cambridge as part of a research collaboration with Jaguar Land Rover. It uses a combination of artificial intelligence and sensor technology to predict a user’s intended target on touchscreens and other interactive displays or control panels, selecting the correct item before the user’s hand reaches the display.

More and more passenger cars have touchscreen technology to control entertainment, navigation or temperature control systems. However, users can often miss the correct item – for example due to acceleration or vibrations from road conditions – and have to reselect, meaning that their attention is taken off the road, increasing the risk of an accident.

In lab-based tests, driving simulators and road-based trials, the predictive touch technology was able to reduce interaction effort and time by up to 50% due to its ability to predict the user’s intended target with high accuracy early in the pointing task.

As lockdown restrictions around the world continue to ease, the researchers say the technology could also be useful in a post-COVID-19 world. Many everyday consumer transactions are conducted using touchscreens: ticketing at rail stations or cinemas, ATMs, check-in kiosks at airports, self-service checkouts in supermarkets, as well as many industrial and manufacturing applications. Eliminating the need to actually touch a touchscreen or other interactive display could reduce the risk of spreading pathogens – such as the common cold, influenza or even coronavirus – from surfaces.

The technology could also be incorporated into smartphones, where it could be useful while walking or jogging, allowing users to select items easily and accurately without any physical contact. It even works in situations such as a moving car on a bumpy road, or if the user has a motor disability that causes tremors or sudden hand jerks, such as Parkinson’s disease or cerebral palsy.

“Touchscreens and other interactive displays are something most people use multiple times per day, but they can be difficult to use while in motion, whether that’s driving a car or changing the music on your phone while you’re running,” said Professor Simon Godsill from Cambridge’s Department of Engineering, who led the project. “We also know that certain pathogens can be transmitted via surfaces, so this technology could help reduce the risk for that type of transmission.”

The technology uses machine intelligence to determine the item the user intends to select on the screen early in the pointing task, speeding up the interaction. It combines a gesture tracker, including vision-based or RF-based sensors, which are increasingly common in consumer electronics; contextual information such as the user profile, interface design and environmental conditions; and data available from other sensors, such as an eye-gaze tracker, to infer the user’s intent in real time.
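To give a rough sense of the underlying idea, intent prediction can be framed as a probabilistic update over candidate on-screen items as a partial pointing trajectory is observed. The following short Python sketch is illustrative only: it is not the patented predictive touch algorithm, and every function name, parameter and assumption (for example, that the fingertip simply drifts towards its intended target) is hypothetical.

import numpy as np

def predict_intended_target(trajectory, targets, noise_std=0.05, threshold=0.9):
    """Assign a probability to each candidate target given a partial pointing
    trajectory, and pre-select a target once one becomes confident enough.

    trajectory : (T, 3) array of observed fingertip positions over time
    targets    : (K, 3) array of on-screen item centre positions
    """
    # Prior: all targets equally likely before any movement is observed.
    log_post = np.zeros(len(targets))

    for point in trajectory:
        # Simplistic likelihood: the fingertip is assumed to move towards its
        # intended target, so targets it stays closer to accumulate more weight.
        sq_dist = np.sum((targets - point) ** 2, axis=1)
        log_post += -sq_dist / (2 * noise_std ** 2)

    # Normalise to a posterior distribution over candidate targets.
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    best = int(np.argmax(post))
    if post[best] >= threshold:
        return best, post   # confident enough to select the item early
    return None, post       # keep observing the gesture

# Example: three buttons on a display, fingertip moving towards the middle one.
targets = np.array([[0.2, 0.5, 0.0], [0.5, 0.5, 0.0], [0.8, 0.5, 0.0]])
trajectory = np.array([[0.5, 0.10, 0.3], [0.5, 0.25, 0.2], [0.5, 0.40, 0.1]])
print(predict_intended_target(trajectory, targets))

A real system of this kind would fuse further cues, such as eye gaze and the interface layout, and would need to cope with vibration and acceleration, but the sketch shows how a target can be selected before the hand ever reaches the display.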

“This technology also offers us the chance to make vehicles safer by reducing the cognitive load on drivers and increasing the amount of time they can spend focused on the road ahead. This is a key part of our Destination Zero journey,” said Lee Skrypchuk, Human Machine Interface Technical Specialist at Jaguar Land Rover.

It could also be used for displays that do not have a physical surface, such as 2D or 3D projections or holograms. It also promotes inclusive design and offers additional design flexibility, since the interface functionality can be seamlessly personalised for individual users and the display size or location is no longer constrained by the user’s ability to reach out and touch it.

“Our technology has numerous advantages over more basic mid-air interaction techniques or conventional gesture recognition, because it supports intuitive interactions with legacy interface designs and doesn’t require any learning on the part of the user,” said Dr Bashar Ahmad, who led the development of the technology and the underlying algorithms with Professor Godsill. “It fundamentally relies on the system to predict what the user intends and can be incorporated into both new and existing touchscreens and other interactive display technologies.”

This software-based solution for contactless interactions has reached high technology readiness levels and can be seamlessly integrated into existing touchscreens and interactive displays, provided the required sensor data is available to support the machine learning algorithm.

The technology was developed between 2012 and 2018 by the Centre for Advanced Photonics and Electronics (CAPE) as part of the CAPE Motion Adaptive Touchscreen System for Automotive (MATSA 1 and 2) projects.
