Supporting Clinical Practice with Ethical Emotion AI
Two weeks ago I attended CES 2024, a massive trade fair where over 130,000 people came to see the latest in consumer electronics. BLUESKEYE AI had a stand there, in the Digital Health Zone, with the aim of finding new (business) customers. For four days the rest of the team and I would explain what we do. Invariably, I'd start with 'BLUESKEYE specialises in recognising medically relevant face and voice behaviour, to help pharmaceutical companies and the automotive industry detect conditions such as depression, fatigue, pain, and many others'.
There are always a fair few well-connected clinicians there, often acting as technology scouts for companies in the health sector, and as you can imagine they got really excited about the possibilities for healthcare! We'd have good conversations about how BLUESKEYE AI could help them, and I came away really inspired about how we can support clinical practice with our Ethical Emotion AI. To give everyone working in healthcare the benefit of those conversations without having to fly to CES, I will set out some of the ways in which BLUESKEYE AI can support clinical practice in this article.
To do so, I want to start with what our products currently can, and can't, do. We are specialists in analysing just about every aspect of the face and facial behaviour. Key capabilities that we're particularly proud of include detection of the following behaviours (a sketch of what these outputs could look like as data follows the list):
apparent emotion in terms of valence and arousal (how positive/negative you appear to feel, and how much energy you appear to have),
facial muscle action intensity,
facial points: fast, accurate, and really robust to variations in head pose, illumination, and partial occlusions of the face,
gaze direction and gaze patterns, and
head pose and head actions.
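To make those outputs a bit more concrete for the technically minded, here is a minimal sketch (in Kotlin) of what a single frame of behaviour output could look like as a data structure. The class and field names are my illustrative assumptions for this article, not the actual BLUESKEYE SDK types.

```kotlin
// Hypothetical per-frame output of a face-behaviour analysis SDK.
// All names and value ranges are illustrative; these are not the real BLUESKEYE SDK types.
data class FrameBehaviour(
    val timestampMs: Long,                      // frame timestamp
    val valence: Float,                         // apparent pleasantness, e.g. -1.0 .. 1.0
    val arousal: Float,                         // apparent energy, e.g. -1.0 .. 1.0
    val actionUnitIntensities: Map<Int, Float>, // facial muscle action (AU) id -> intensity
    val landmarks: List<Pair<Float, Float>>,    // 2D facial points in image coordinates
    val gazeYawPitch: Pair<Float, Float>,       // gaze direction in radians
    val headPose: Triple<Float, Float, Float>   // head yaw, pitch, roll in radians
)
```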
These behaviours have been shown to be indicators of e.g. depression, acute pain, ADHD, facial tics, facial muscle atrophy, and many other medical conditions. We call these behaviomedical conditions. One thing that most behaviomedical conditions have in common is that they're currently measured using really subjective, noisy measurement tools such as self-report questionnaires. What BLUESKEYE brings to the table is objective, engaging, repeatable measures that ultimately increase the signal-to-noise ratio of all the data you collect for your studies!
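To show why repeatable measurement matters, here is a toy sketch with completely made-up numbers: averaging many noisy readings of the same underlying signal shrinks the noise by roughly the square root of the number of readings, whereas a single questionnaire keeps its full measurement error.

```kotlin
import kotlin.math.sqrt
import kotlin.random.Random

// Toy illustration (made-up numbers): frequent objective measurements of the same
// underlying severity average out noise, while a one-off self-report keeps its full error.
fun main() {
    val trueSeverity = 12.0        // hypothetical "true" symptom severity
    val noiseHalfWidth = 4.0       // each reading may be off by up to ±4 points
    val rng = Random(42)

    fun noisyReading() = trueSeverity + rng.nextDouble(-noiseHalfWidth, noiseHalfWidth)

    val singleQuestionnaire = noisyReading()
    val dailyBehaviourScores = List(30) { noisyReading() }  // e.g. 30 days of phone sessions
    val averagedBehaviour = dailyBehaviourScores.average()

    println("True severity:          $trueSeverity")
    println("One questionnaire:      ${"%.2f".format(singleQuestionnaire)}")
    println("Average of 30 sessions: ${"%.2f".format(averagedBehaviour)}")
    println("Expected noise reduction from averaging: ~${"%.1f".format(sqrt(30.0))}x")
}
```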
And this behaviour detection can run on a person's own Android or Apple mobile phone, even quite old ones. It is available as a Software Development Kit (SDK) so that people can create their own app, and we have created a Health Foundation Platform with off-the-shelf, pick-and-mix modules of app information screens, interactive tasks, questionnaires, and of course our state-of-the-art behaviour detection AI, to make the whole process easy to set up, compliant, and quick to get going. The Health Foundation Platform comes with a safe-to-use back-end for storing the behaviour data and a dashboard that gives insight into the behaviour data collected so far, and BLUESKEYE AI provides data analysis services if you require them.
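For readers who like to see things as code, here is a hypothetical sketch of what composing a study app from such pick-and-mix modules could look like. The module names, the PHQ-9 example, and the builder shown here are my illustrative assumptions, not the actual Health Foundation Platform interface.

```kotlin
// Hypothetical sketch of composing a study app from pick-and-mix modules.
// All names below are illustrative assumptions, not the real platform API.
sealed interface StudyModule
data class InfoScreen(val title: String, val body: String) : StudyModule
data class Questionnaire(val id: String, val questions: List<String>) : StudyModule
data class InteractiveTask(val id: String, val durationSec: Int) : StudyModule
data class BehaviourCapture(val durationSec: Int) : StudyModule  // face/voice analysis on the device

data class StudyApp(val studyId: String, val modules: List<StudyModule>)

fun buildMoodStudy(): StudyApp = StudyApp(
    studyId = "example-mood-study",
    modules = listOf(
        InfoScreen("Welcome", "What this study involves and how your data is handled."),
        Questionnaire("phq-9", questions = List(9) { "PHQ-9 item ${it + 1}" }),
        InteractiveTask("reading-task", durationSec = 60),
        BehaviourCapture(durationSec = 60)  // behaviour detection runs locally on the phone
    )
)
```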
Both the SDK and the Health Foundation Platform are compliant with all relevant industry standards and are increasingly built under a quality management system.
Ok, so I know what you do. How is this useful to me, a clinician?
At the moment, the SDK is not a medical device, so its use in clinical pathways is limited to being a tool in your research. Whilst consumer apps may be created using the SDK for wellbeing and self-help purposes, using it in your clinical practice to triage or aid diagnosis firmly puts it in the Software as a Medical Device category, and thus requires medical device certification.
At BLUESKEYE AI we are running clinical trials for selected medical conditions so that we can obtain medical device certification for detecting them, which unlocks use for diagnosis and monitoring. Our current clinical trial aims to prove the safety of our behaviour sensing for detecting perinatal depression, in women from 3 months pregnant to 6 months postpartum.
Is all this use of Emotion AI ethical?
Yes, at least the way BLUESKEYE does it is. All our algorithms process face and voice data on a person's mobile phone, so this private data never has to go to the cloud. We advocate giving the user/patient the option to share information about their medical condition with their clinical team, so that they have the ultimate say about what data is shared. The EU AI Act requires providers of systems with Emotion AI (which is what our SDK is classed as) to notify users that their behaviour is being measured, and GDPR and HIPAA safeguard how their private data is used. Broadly speaking, I think there are ample consumer protection measures to make sure that Emotion AI is used ethically, at least in Europe and the US.
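To make that consent principle concrete, here is a minimal sketch of the idea: behaviour is analysed on the device, and only the derived summaries the user explicitly agrees to share ever leave the phone. The types and method names are illustrative assumptions for this article, not our actual implementation.

```kotlin
// Minimal sketch of consent-gated sharing. Names are illustrative assumptions,
// not the actual BLUESKEYE implementation.
data class BehaviourSummary(val weeklyValenceMean: Float, val weeklyArousalMean: Float)

interface ClinicalBackend {
    fun upload(patientId: String, summary: BehaviourSummary)
}

class SharingController(private val backend: ClinicalBackend) {
    // Raw video/audio never reaches this class; only on-device summaries do.
    fun maybeShare(patientId: String, summary: BehaviourSummary, userHasConsented: Boolean) {
        if (!userHasConsented) {
            // The user keeps the data: nothing is sent to the clinical team.
            return
        }
        backend.upload(patientId, summary)
    }
}
```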
I see great use for what you’re making. How do I get started?
If you have a study in mind that would benefit from using our behaviour detection SDK, or you want to work with us to build a model for detecting one of the behaviomedical conditions I listed, just connect with me on LinkedIn with your idea, and we'll get in touch!
PS
Note to self: I should really write an explainer about valence and arousal! Soon, my friends, soon.
Written by BlueSkeye Founding CEO, Prof Michel Valstar