The Covid-19 pandemic has led to the adoption of face masks in physical teaching spaces across the world. This has in turn presented a number of challenges for practitioners in the face-to-face delivery of content and in effectively engaging learners in practical settings where face coverings are an ongoing requirement. Being unable to see a speaker's mouth movements because the lower portion of the face is obscured can lead to issues with clarity, attention, emotional recognition, and trust attribution, negatively affecting the learning experience. These issues are further exacerbated for learners who require specialist support and those with impairments, particularly hearing impairments. EmotiMask embeds an LED matrix within a face mask to replicate mouth movements and emotional state through speech detection and intelligent processing. By cycling through different LED configurations, the matrix can approximate speech in progress, as well as various mouth patterns linked to distinct emotional states. An initial study placed EmotiMask within a higher education (HE) practical session with 10 students; the results suggest a positive effect on clarity and emotional recognition compared with typical face masks. Further feedback noted that it was easier to identify the current speaker with EmotiMask; however, speech amplification, additional LED configurations, and improved portability were identified as desired refinements. This study represents a step towards a ubiquitous solution for tackling some of the challenges of teaching during a pandemic, or in similar situations where face coverings are required, and has potential value in other sectors where such scenarios arise.
Stuart O'Connor, De Montfort University, United Kingdom
Salim Hasshu, De Montfort University, United Kingdom
Simon Colreavy-Donelly, University of Limerick, Ireland
Stefan Kuhn, De Montfort University, United Kingdom
Fabio Caraffini, De Montfort University, United Kingdom
Alan Ryan, University of Limerick, Ireland