Scientific Rigour

Very few people in the world have the knowledge necessary to design our AI solutions. Building an emotion recognition system that actually works requires a PhD and several years of subsequent experience in the field, and detecting medically relevant behaviour from face and voice analysis requires an entire team.

BlueSkeye AI has:

  • A dedicated data team of 14 people

  • 10 people with relevant PhDs

  • A dedicated R&D team of 5 people with over 110 peer-reviewed publications in relevant areas

  • Over 100 years of combined research experience in machine learning, face and voice analysis, and medicine

  • Our Co-founder and Chief Scientific Officer, Michel Valstar, is the second most cited researcher in the world in social signal processing, with an h-index of 51 and over 17,000 citations

Our Science

Below is a selection of the important peer-reviewed publications that have been integral to the development of our technology.

Understanding human behaviour based on face and voice analysis

Spectral representation of behaviour primitives for depression analysis

S Song, S Jaiswal, L Shen, M Valstar

IEEE Transactions on Affective Computing

Automatic detection of ADHD and ASD from expressive behaviour in RGBD data

S Jaiswal, MF Valstar, A Gillott, D Daley

2017 12th IEEE International Conference on Automatic Face & Gesture …

Design and Evaluation of Virtual Human Mediated Tasks for Assessment of Depression and Anxiety

JO Egede, D Price, DB Krishnan, S Jaiswal, N Elliott, R Morriss, ...

Proceedings of the 21st ACM International Conference on Intelligent Virtual …

Designing an Adaptive Embodied Conversational Agent for Health Literacy: a User Study

J Egede, MJG Trigo, A Hazzard, M Porcheron, E Bodiaj, JE Fischer, ...

Proceedings of the 21st ACM International Conference on Intelligent Virtual …

Self-supervised Learning of Person-specific Facial Dynamics for Automatic Personality Recognition

S Song, S Jaiswal, E Sanchez, G Tzimiropoulos, L Shen, M Valstar

IEEE Transactions on Affective Computing

Affective Processes: stochastic modelling of temporal context for emotion and facial expression recognition

E Sanchez, MK Tellamekala, M Valstar, G Tzimiropoulos

Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …

How to distinguish posed from spontaneous smiles using geometric features

MF Valstar, H Gunes, M Pantic

Proceedings of the 9th International Conference on Multimodal Interfaces, 38-45

Personality Recognition by Modelling Person-specific Cognitive Processes using Graph Representation

Z Shao, S Song, S Jaiswal, L Shen, M Valstar, H Gunes

Proceedings of the 29th ACM International Conference on Multimedia, 357-366

Fundamental face analysis

Fully automatic recognition of the temporal phases of facial actions

MF Valstar, M Pantic

IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 42 …

Action unit detection using sparse appearance descriptors in space-time video volumes

B Jiang, MF Valstar, M Pantic

2011 IEEE International Conference on Automatic Face & Gesture Recognition …

Audio-Visual Predictive Coding for Self-Supervised Visual Representation Learning

MK Tellamekala, M Valstar, M Pound, T Giesbrecht

2020 25th International Conference on Pattern Recognition (ICPR), 9912-9919

Deep learning the dynamic appearance and shape of facial action units

S Jaiswal, M Valstar

2016 IEEE Winter Conference on Applications of Computer Vision (WACV), 1-8

Local evidence aggregation for regression-based facial point detection

B Martinez, MF Valstar, X Binefa, M Pantic

IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (5), 1149-1163

Cascaded continuous regression for real-time incremental face tracking

E Sánchez-Lozano, B Martinez, G Tzimiropoulos, M Valstar

European Conference on Computer Vision, 645-661

A transfer learning approach to heatmap regression for action unit intensity estimation

I Ntinou, E Sanchez, A Bulat, M Valstar, Y Tzimiropoulos

IEEE Transactions on Affective Computing

Social robotics and virtual assistants

Building autonomous sensitive artificial listeners

M Schroder, E Bevacqua, R Cowie, F Eyben, H Gunes, D Heylen, ...

IEEE Transactions on Affective Computing 3 (2), 165-183

Databases, benchmarks, and tools

The SEMAINE database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent

G McKeown, M Valstar, R Cowie, M Pantic, M Schroder

IEEE Transactions on Affective Computing 3 (1), 5-17

The first facial expression recognition and analysis challenge

MF Valstar, B Jiang, M Mehu, M Pantic, K Scherer

2011 IEEE International Conference on Automatic Face & Gesture Recognition …

AVEC 2013: the continuous audio/visual emotion and depression recognition challenge

M Valstar, B Schuller, K Smith, F Eyben, B Jiang, S Bilakhia, S Schnieder, ...

Proceedings of the 3rd ACM International Workshop on Audio/Visual Emotion …

The NoXi database: multimodal recordings of mediated novice-expert interactions

A Cafaro, J Wagner, T Baur, S Dermouche, M Torres Torres, C Pelachaud, ...

Proceedings of the 19th ACM International Conference on Multimodal …

Web-based database for facial expression analysis

M Pantic, M Valstar, R Rademaker, L Maat

2005 IEEE International Conference on Multimedia and Expo, 5 pp.


For a complete list of relevant publications, please see Prof Valstar’s Google Scholar profile.