FaceAI Web SDK is a fast and lightweight SDK that allows you to add facial expression and facial emotion recognition AI to any website or app. It works in any HTML5 web browser and does not store any biometric data; i.e. none of your personal data goes to a server.

Released: November 27, 2020

How to use FaceAI Web SDK?

  • Once you have downloaded the SDK, you will find the following .js file.
    • EnxFaceAI.js – This is a standard JavaScript library.
  • Now include the EnxFaceAI.js file in your HTML file to use the SDK.
<html>
<head>
<script src="path/EnxFaceAI.js"></script>
</head>
<body></body>
</html>

Note: You must enable FaceAI while defining a Room for Facial Expression Analysis to work. Use { "settings": { "facex": true } } in the room definition.
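
For reference, a minimal sketch of a room-definition payload with FaceAI enabled; apart from the facex flag shown in the note above, the field names here ("name", "mode") are assumptions, so consult the EnableX Room API reference for the exact schema:

{
	"name": "FaceAI Demo Room",
	"settings": {
		"mode": "group",
		"facex": true
	}
}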

Initialise FaceAI

FaceAI analyses Video Streams within an ongoing EnableX video session. To get started with analysis, you must bind the FaceAI Object to the Room in which the Video Session is running.

Method: EnxFaceAI.init(connectedRoomInfo, stream, config, callback) – To start analysis on given Stream

Parameters:

  • connectedRoomInfo – JSON Object. Pass the Response-JSON returned as Callback of EnxRtc.joinRoom() or EnxRoom.connect() method.
  • stream – The Stream Object to be analyzed. You may analyze a Local Stream Object or a Remote Stream Object (the Stream Reference may be found in the Active Talkers List)
  • config – JSON Object. This configures or customises the parameters with which FaceAI analyses. Each FaceAI method's configuration needs to be passed during the initiation process itself. If you skip a method's configuration, that method still works with its default behaviour.
    • faceDetector – JSON Object. Configures or customises the parameters with which the Face Detector analyses.
      • maxInputFrameSize – Number. Default 160 (pixels). Input Frame Size in pixels for Face Detection.
      • multiFace – Boolean. Default true. Enables multi-face detection, i.e. allows detecting more than one face. It can slow down performance on lower-end devices, since the face tracker is disabled and a full detection occurs for each frame.
    • facePose – JSON Object. Configures or customises the parameters with which the Face Pose is analysed.
      • smoothness – Number. Default 0.65. Range 0-1. A value closer to 1 provides greater smoothing and slower response time. Lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceAge – JSON Object. Configures or customises the parameters with which the Face Age is analysed.
      • rawOutput – Boolean. Default false. It disables all the filters and fires the event even if the prediction has very poor quality. Set it to true only if you want the raw signal, for example to analyse a single photo.
    • faceGender – JSON Object. Configures or customises the parameters with which the Face Gender is analysed.
      • smoothness – Number. Default 0.95. Range 0-1. A value closer to 1 provides greater smoothing and slower response time. Lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
      • threshold – Number. Default 0.70. Range 0.5-1. It controls the minimum confidence for which the mostConfident output returns the predicted gender name instead of undefined.
    • faceEmotion – JSON Object. Configures or customises the parameters with which the Face Emotion is analysed.
      • enableBalancer – Boolean. Default false. Experimental filter able to adjust emotions according to the emotional baseline of each person.
      • smoothness – Number. Default 0.95. Range 0-1. A value closer to 1 provides greater smoothing and slower response time. Lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceFeatures – JSON Object. Configures or customises the parameters with which the Face Features are analysed.
      • smoothness – Number. Default 0.90. Range 0-1. A value closer to 1 provides greater smoothing and slower response time. Lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceArousalValence – JSON Object. Configures or customises the parameters with which the Face Arousal Valence is analysed.
      • smoothness – Number. Default 0.70. Range 0-1. A value closer to 1 provides greater smoothing and slower response time. Lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceAttention – JSON Object. Configures or customises the parameters with which the Face Attention is analysed.
      • smoothness – Number. Default 0.83. Range 0-1. A value closer to 1 provides greater smoothing and slower response time. Lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
      • riseSmoothness – Number. Same as smoothness, but applied only when the attention value is increasing. By default it has the same value as the smoothness parameter.
      • fallSmoothness – Number. Same as smoothness, but applied only when the attention value is decreasing. By default it has the same value as the smoothness parameter.
    • faceWish – JSON Object. Configures or customises the parameters with which the Face Wish is analysed.
      • smoothness – Number. Default 0.80. Range 0-1. A value closer to 1 provides greater smoothing and slower response time. Lower values provide less smoothing but faster response time.
  • callback: Callback to know if the room is enabled for FaceAI Analysis and that the client endpoint is connected to an active session

Example:

faceDetectorConfig = {
	maxInputFrameSize: 200,
	multiFace: true
};

facePoseConfig = {
	smoothness: 0.65
};

faceAgeConfig = {};

faceGenderConfig = {
	smoothness: 0.95,
	threshold: 0.70
};

faceEmotionConfig = {
	smoothness: 0.95,
	enableBalancer: false
};

faceFeaturesConfig = {
	smoothness: 0.90 
};

faceArousalValenceConfig = {
	smoothness: 0.70 
};

faceAttentionConfig = {
	smoothness: 0.85
};

faceWishConfig = {
	smoothness: 0.80
};

config = {
	faceDetector: faceDetectorConfig,
	facePose: facePoseConfig,
	faceAge: faceAgeConfig,
	faceGender: faceGenderConfig,
	faceEmotion: faceEmotionConfig,
	faceFeatures: faceFeaturesConfig,
	faceArousalValence: faceArousalValenceConfig,
	faceAttention: faceAttentionConfig,
	faceWish: faceWishConfig
};

localStream = EnxRtc.joinRoom(token, config, (response, error) => {
	if (error && error != null) {
		// Handle the error
	}

	if (response && response != null) {
		const faceAI = new EnxFaceAI(); // Construct the FaceAI Object
		faceAI.init(response, localStream, config, function (event) {
			// event.result == 0 - All OK to process
		});
	}
});

Face Detector

This detects how many faces are present in a Video Stream. The Event Listener continuously receives data in JSON as the detector tries to detect faces in the changing video frames.

Method: EnxFaceAI.startFaceDetector(callback) – To start analysis

Parameters:

  • callback: Callback to know that the processing request has been accepted.

Event Listener:

  • face-detector – This event fires repeatedly with the Face Detection analysis report as a JSON Object. JSON Object reference appended below:
{
    faces: Array(n),
    rects: Array(n),
    status: string,
    fullFrameDetection: Boolean,
    totalFaces: Number,
    totalFacesChangedFrom: Number | undefined
}

JSON Object Explanation:

  • faces: Array. The detected faces in the form of ImageData objects (zero or one; or multiple faces, if fullFrameDetection is true)
  • rects: Array of objects. Describes the bounding boxes (zero or one; or multiple rects, if fullFrameDetection is true)
    • x: Upper-left point x coordinate
    • y: Upper-left point y coordinate
    • width: Width of the bounding box
    • height: Height of the bounding box
  • status: String. The status of the face tracker:
    • INIT: Detector initializing; zero or many faces could be returned
    • TRACK_OK: Detector is correctly tracking one face; one face is returned
    • RECOVERING: Detector lost a face and is attempting to recover and continue tracking; zero faces are returned
  • fullFrameDetection: Boolean. It is true when detection was full-frame and multiple faces can be returned, false otherwise.
  • totalFaces: Number. It represents the filtered total number of faces detected, smoothened over an interval of time. By default, one face is the maximum. If multi-face is enabled, the maximum is 6. This output is not synchronized with the faces and rects arrays, so do not use it to count their lengths!
  • totalFacesChangedFrom: Number. Optional. When there is a significant change in the number of faces, it is defined and represents the previous number of faces. If no change occurred, it is undefined. This output is not synchronized with the faces and rects arrays.

Note: If you ever notice false positives in the events, i.e. a face is detected as present even when there is no one, you can further filter the results by the confidence property of the elements contained in the rects array (e.g. rects[0].confidence > 10).

Example:

// Start Face Detector
faceAI.startFaceDetector((res) => {
	if (res.result === 0) {
		window.addEventListener("face-detector", (evt) => {
			console.log(evt.detail, "face-detector");
		});
	}
});
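
Building on the note above, a minimal sketch of an event listener that ignores low-confidence detections; the cut-off of 10 comes from the note and may need tuning for your use case:

// Filter out low-confidence detections before using the report
window.addEventListener("face-detector", (evt) => {
	const { rects, totalFaces } = evt.detail;
	const confidentRects = rects.filter((rect) => rect.confidence > 10);
	if (confidentRects.length > 0) {
		console.log("Confident faces:", confidentRects, "totalFaces:", totalFaces);
	}
});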

Face Pose

This analyses face rotation and position in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI detects face rotation in the video stream. Face Rotation angle data is expressed in radians as Pitch, Roll and Yaw.

Method: EnxFaceAI.startFacePose(callback) – To start analysis

Parameters:

  • callback: Callback to know that the processing request has been accepted.

Event Listener:

  • face-pose – This event fires repeatedly with the Face Rotation & Position analysis report as a JSON Object. JSON Object reference appended below:
{
	output: {
		pose: {
			pitch: Number,
			roll: Number,
			yaw: Number
		}
	}
}

JSON Object Explanation:

  • output: Face Rotation & Position Report
    • pose: Filtered (smoothened) pose rotation angles expressed in radians as pitch, roll and yaw.

Example:

// Start Face Pose
faceAI.startFacePose((res) => {
	if (res.result === 0) {
		window.addEventListener("face-pose", (evt) => {
			console.log(evt.detail, "face-pose");
		});
	}
});

Notes:

  • The max and min ranges for rotation angles are currently limited to ±π/2 radians, corresponding to ±90°, for each of the 3 axes.
  • The ZERO point is when a face looks straight at the camera.
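
Since the pose angles arrive in radians, a small sketch of converting them to degrees for display; the toDegrees helper is ours, not part of the SDK:

// Convert the filtered pose angles from radians to degrees for display
const toDegrees = (rad) => rad * (180 / Math.PI);

window.addEventListener("face-pose", (evt) => {
	const { pitch, roll, yaw } = evt.detail.output.pose;
	console.log(
		`pitch: ${toDegrees(pitch).toFixed(1)}°,`,
		`roll: ${toDegrees(roll).toFixed(1)}°,`,
		`yaw: ${toDegrees(yaw).toFixed(1)}°`
	);
});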

Face Age

This analyses and predicts face age in a Video Stream. Face Age is predicted within an age range. The Event Listener continuously gets data in JSON as FaceAI analyses face age. If the prediction quality is poor, the event is not fired.

Method: EnxFaceAI.startFaceAge(callback) – To start analysis

Parameters:

  • callback: Callback to know that the processing request has been accepted.

Event Listener:

  • face-age – This event fires repeatedly with the Face Age analysis report as a JSON Object. JSON Object reference appended below:
{
	output: {
		age: {
			"-18": Number,
			"18-35": Number,
			"35-51": Number,
			"51-": Number
		},
		numericAge: Number
	}
}

JSON Object Explanation:

  • output: Face Age Analysis Report
    • age: Filtered (smoothened) age prediction:
      • -18: Probability Weightage suggesting less than 18 years old.
      • 18-35: Probability Weightage suggesting between 18 and 35 years old.
      • 35-51: Probability Weightage suggesting between 35 and 51 years old.
      • 51-: Probability Weightage suggesting equal to or greater than 51 years old.
    • numericAge: Number. Estimated Age.

Example:

// Start Face Age
faceAI.startFaceAge((res) => {
	if (res.result === 0) {
		window.addEventListener("face-age", (evt) => {
			console.log(evt.detail, "face-age");
		});
	}
});

Note: In case of a poor-quality prediction, by default the event is not fired (i.e. skipped for that frame).
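
A minimal sketch of consuming the report, picking the bracket with the highest probability weightage alongside the numeric estimate:

window.addEventListener("face-age", (evt) => {
	const { age, numericAge } = evt.detail.output;
	// Pick the age bracket with the highest probability weightage
	const bracket = Object.keys(age).reduce((a, b) => (age[a] >= age[b] ? a : b));
	console.log(`Estimated age: ${numericAge} (most likely bracket: ${bracket})`);
});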

Face Gender

This analyses face gender in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI analyses face gender.

Method: EnxFaceAI.startFaceGender(callback) – To start analysis

Parameters:

  • callback: Callback to know that the processing request has been accepted.

Event Listener:

  • face-gender – This event fires repeatedly with the Face Gender analysis report as a JSON Object. JSON Object reference appended below:
{
	output: {
		gender: {
			Female: Number,
			Male: Number
		},
		mostConfident: String
	}
}

JSON Object Explanation:

  • output: Face Gender Report
    • gender: Filtered (smoothened) probabilities of the gender prediction:
      • Female: Probability weightage for gender is female
      • Male: Probability weightage for gender is male
    • mostConfident: Gender name of the most likely result if its smoothened probability is above the threshold, otherwise it is undefined.

Example:

// Start Face Gender
faceAI.startFaceGender((res) => {
	if (res.result === 0) {
		window.addEventListener("face-gender", (evt) => {
			console.log(evt.detail, "face-gender");
		});
	}
});
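
A minimal sketch that acts only when mostConfident is defined, i.e. when the smoothened probability is above the configured threshold:

window.addEventListener("face-gender", (evt) => {
	const { gender, mostConfident } = evt.detail.output;
	if (mostConfident !== undefined) {
		const probability = (gender[mostConfident] * 100).toFixed(0);
		console.log(`Predicted gender: ${mostConfident} (${probability}%)`);
	}
});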

Face Emotion

This analyses face emotions in a Video Stream. It analyses the 7 basic emotions in a human face, viz. Angry, Disgust, Fear, Happy, Sad, Surprise and Neutral. It also returns the most dominant emotion on the face. The Event Listener continuously gets data in JSON as FaceAI analyses face emotion.

Method: EnxFaceAI.startFaceEmotion(callback) – To start analysis

Parameters:

  • callback: Callback to know that the processing request has been accepted.

Event Listener:

  • face-emotion – This event fires repeatedly with the Face Emotion analysis report as a JSON Object. JSON Object reference appended below:
{
	output: {
		dominantEmotion: String,
		emotion: {
			Angry: Number, 
			Disgust: Number, 
			Fear: Number, 
			Happy: Number, 
			Neutral: Number, 
			Sad: Number, 
			Surprise: Number
		}
	}
}

JSON Object Explanation:

  • output: Face Emotion Report
    • dominantEmotion: Name of Dominant Emotion if present, otherwise it is undefined.
    • emotion: Filtered (smoothened) values of the probability distribution of emotions. The sum of all the probabilities is always 1, each probability in the distribution has a value between 0 and 1.
      • Angry: Probability for Angry.
      • Disgust: Probability for Disgust.
      • Fear: Probability for Fear.
      • Happy: Probability for Happy.
      • Sad: Probability for Sad.
      • Surprise: Probability for Surprise.
      • Neutral: Probability for Neutral.

Example:

// Start Face Emotion
faceAI.startFaceEmotion((res) => {
	if (res.result === 0) {
		window.addEventListener("face-emotion", (evt) => {
			console.log(evt.detail, "face-emotion");
		});
	}
});
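
A minimal sketch that reads the dominant emotion when one is present:

window.addEventListener("face-emotion", (evt) => {
	const { dominantEmotion, emotion } = evt.detail.output;
	if (dominantEmotion !== undefined) {
		console.log(`Dominant emotion: ${dominantEmotion} (${emotion[dominantEmotion].toFixed(2)})`);
	}
});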

Face Features

This analyses face features in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI analyses face features.

Method: EnxFaceAI.startFaceFeatures(callback) – To start analysis

Parameters:

  • callback: Callback to know that the processing request has been accepted.

Event Listener:

  • face-features – This event fires repeatedly with the Face Features analysis report as a JSON Object. JSON Object reference appended below:
{
	output: {
		features: {
			ArchedEyebrows: Number,
			Attractive: Number,
			....
			....
		}
	}
}

JSON Object Explanation:

  • output: Face Features Report
    • features: Filtered (smoothened) probabilities of each face independent feature in range 0.0 – 1.0. The following features are evaluated:
      • Arched Eyebrows
      • Attractive
      • Bags Under Eyes
      • Bald
      • Bangs
      • Beard 5 O’Clock Shadow
      • Big Lips
      • Big Nose
      • Black Hair
      • Blond Hair
      • Brown Hair
      • Chubby
      • Double Chin
      • Earrings
      • Eyebrows Bushy
      • Eyeglasses
      • Goatee
      • Gray Hair
      • Hat
      • Heavy Makeup
      • High Cheekbones
      • Lipstick
      • Mouth Slightly Open
      • Mustache
      • Narrow Eyes
      • Necklace
      • Necktie
      • No Beard
      • Oval Face
      • Pale Skin
      • Pointy Nose
      • Receding Hairline
      • Rosy Cheeks
      • Sideburns
      • Straight Hair
      • Wavy Hair

Example:

// Start Face Features
faceAI.startFaceFeatures((res) => {
	if (res.result === 0) {
		window.addEventListener("face-features", (evt) => 
			console.log(evt.detail, "face-features");
		});
	}
});
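
A minimal sketch that lists only the features whose smoothened probability exceeds a cut-off; the 0.8 threshold is an arbitrary choice for illustration:

window.addEventListener("face-features", (evt) => {
	const features = evt.detail.output.features;
	// Keep only features with probability above an illustrative 0.8 cut-off
	const prominent = Object.entries(features)
		.filter(([, probability]) => probability > 0.8)
		.map(([name]) => name);
	console.log("Prominent features:", prominent);
});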

Face Arousal Valence

This analyses face arousal valence in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI analyses face arousal valence.

Method: EnxFaceAI.startFaceArousalValence(callback) – To start analysis

Parameters:

  • callback: Callback to know that the processing request has been accepted.

Event Listener:

  • face-arousal-valence – This event fires repeatedly with the Face Arousal Valence analysis report as a JSON Object. JSON Object reference appended below:
{
	output: {
		arousalvalence: {
			arousal: Number,
			valence: Number,
			affects38: { "Afraid": Number, "Amused": Number, .. },
			affects98: { "Adventurous": Number, "Afraid": Number, .. },
			quadrant: String
		}
	}
}

JSON Object Explanation:

  • output: Face Arousal Valence Report
    • arousalvalence: Filtered (smoothened) values.
      • arousal: Range -1.0 to 1.0. It represents the degree of engagement (positive arousal) or disengagement (negative arousal).
      • valence: Range -1.0 to 1.0. It represents the degree of pleasantness (positive valence), or unpleasantness (negative valence).
      • affects38: An object containing the smoothened probabilities of the 38 affects in range [0.00, 1.00]: Afraid, Amused, Angry, Annoyed, Uncomfortable, Anxious, Apathetic, Astonished, Bored, Worried, Calm, Conceited, Contemplative, Content, Convinced, Delighted, Depressed, Determined, Disappointed, Discontented, Distressed, Embarrassed, Enraged, Excited, Feel Well, Frustrated, Happy, Hopeful, Impressed, Melancholic, Peaceful, Pensive, Pleased, Relaxed, Sad, Satisfied, Sleepy, Tired
      • affects98: An object containing the smoothened probabilities of the 98 affects in range [0.00, 1.00]: Adventurous, Afraid, Alarmed, Ambitious, Amorous, Amused, Wavering, Angry, Annoyed, Anxious, Apathetic, Aroused, Ashamed, Worried, Astonished, At Ease, Attentive, Bellicose, Bitter, Bored, Calm, Compassionate, Conceited, Confident, Conscientious, Contemplative, Contemptuous, Content, Convinced, Courageous, Defient, Dejected, Delighted, Depressed, Desperate, Despondent, Determined, Disappointed, Discontented, Disgusted, Dissatisfied, Distressed, Distrustful, Doubtful, Droopy, Embarrassed, Enraged, Enthusiastic, Envious, Excited, Expectant, Feel Guilt, Feel Well, Feeling Superior, Friendly, Frustrated, Glad, Gloomy, Happy, Hateful, Hesitant, Hopeful, Hostile, Impatient, Impressed, Indignant, Insulted, Interested, Jealous, Joyous, Languid, Light Hearted, Loathing, Longing, Lusting, Melancholic, Miserable, Passionate, Peaceful, Pensive, Pleased, Polite, Relaxed, Reverent, Sad, Satisfied, Selfconfident, Serene, Serious, Sleepy, Solemn, Startled, Suspicious, Taken Aback, Tense, Tired, Triumphant, Uncomfortable
      • quadrant: A string representing one of the four quadrants in the circumplex model of affect (“High Control”, “Obstructive”, “Low Control”, “Conductive”), or “Neutral”.

Example:

// Start Face Arousal Valence
faceAI.startFaceArousalValence((res) => {
	if (res.result === 0) {
		window.addEventListener("face-arousal-valence", (evt) => {
			console.log(evt.detail, "face-arousal-valence");
		});
	}
});
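
A minimal sketch that logs the two core dimensions and the quadrant:

window.addEventListener("face-arousal-valence", (evt) => {
	const { arousal, valence, quadrant } = evt.detail.output.arousalvalence;
	console.log(
		`arousal: ${arousal.toFixed(2)}, valence: ${valence.toFixed(2)}, quadrant: ${quadrant}`
	);
});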

Face Attention

This analyses face attention in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI analyses face attention.

Method: EnxFaceAI.startFaceAttention(callback) – To start analysis

Parameters:

  • callback: Callback to know that the processing request has been accepted.

Event Listener:

  • face-attention – This event fires repeatedly with the Face Attention analysis report as a JSON Object. JSON Object reference appended below:
{
	output: {
		attention: Number
	}
}

JSON Object Explanation:

  • output: Face Attention Report
    • attention: Filtered value (smoothened) in range [0.0, 1.0]. A value close to 1.0 represents attention, a value close to 0.0 represents distraction.

Example:

// Start Face Attention
faceAI.startFaceAttention((res) => {
	if (res.result === 0) {
		window.addEventListener("face-attention", (evt) => {
			console.log(evt.detail, "face-attention");
		});
	}
});
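
A minimal sketch that flags probable distraction; the 0.3 threshold is an arbitrary illustration, not an SDK recommendation:

window.addEventListener("face-attention", (evt) => {
	const attention = evt.detail.output.attention;
	// 0.3 is an illustrative cut-off; tune it for your use case
	if (attention < 0.3) {
		console.log("Participant appears distracted:", attention.toFixed(2));
	}
});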

Face Wish

This analyses face wish in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI analyses face wish.

Method: EnxFaceAI.startFaceWish(callback) – To start analysis

Parameters:

  • callback: Callback to know that the processing request has been accepted.

Event Listener:

  • face-wish – This event fires repeatedly with the Face Wish analysis report as a JSON Object. JSON Object reference appended below:
{
	output: {
		wish: Number
	}
}

JSON Object Explanation:

  • output: Face Wish Report
    • wish: Filtered value (smoothened) in range [0, 1.0]. A value closer to 0 represents a lower wish, a value closer to 1.0 represents a higher wish.

Example:

// Start Face Wish
faceAI.startFaceWish((res) => {
	if (res.result === 0) {
		window.addEventListener("face-wish", (evt) => {
			console.log(evt.detail, "face-wish");
		});
	}
});

Stop Face AI

This stops any further face analysis. An initialized face analysis must be stopped explicitly to end FaceAI usage during a session; otherwise it ends implicitly when the current Video Session ends.

Method: EnxFaceAI.stopFaceAI(callback) – To stop analysis

Parameters:

  • callback: Callback to know that analysis has stopped.

Example:

// Stop Face AI Analysis
faceAI.stopFaceAI((evt) => {
	if (evt.result === 0) {	
		// Stopped
	}
});