FaceAI iOS SDK is a fast and lightweight SDK that lets you add facial expression and facial emotion recognition AI to an iOS app. It works on the app’s UI itself and does not send or store any biometric data on a server.
Released: October 10, 2022
Table of Contents
- How to use EnxFaceAIiOS iOS SDK?
- Initialize FaceAI
- Face Detector
- Face Pose
- Face Age
- Face Gender
- Face Emotion
- Face Features
- Face Arousal Valence
- Face Attention
- Face Wish
- Stop FaceAI
How to use EnxFaceAIiOS iOS SDK?
The EnxFaceAIiOS directory contains the EnxFaceAIiOS.framework iOS SDK. Add this framework to your project. The EnxFaceAIiOS iOS SDK supports iOS 10 or later and Xcode 9 or later.
- Install CocoaPods as described in CocoaPods Getting Started
- In Terminal, go to the project directory and run
pod init
- To integrate EnxFaceAIiOS into your Xcode project using CocoaPods, specify the pod name EnxFaceAIiOS in your Podfile (see the Podfile sketch below)
- After adding all required libraries to the Podfile, return to Terminal and run
pod install
- Re-open your project in Xcode using the new .xcworkspace file
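For reference, a minimal Podfile might look like the following. This is a sketch: the target name is a placeholder, and the exact pod name or version constraint may differ depending on how EnxFaceAIiOS is distributed.

# Minimal Podfile sketch; 'YourApp' is a placeholder target name
platform :ios, '10.0'
use_frameworks!

target 'YourApp' do
  pod 'EnxFaceAIiOS'
end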
Important: You must enable FaceAI while defining a Room for Facial Expression Analysis to work. Use { "settings": { "facex": true } } to define the room.
Initialize FaceAI
FaceAI analyses video streams within an ongoing EnableX video session. To get started with analysis, you must bind the FaceAI object to the room in which the video session is running.
Class: EnxFaceAI
Method: -(void)initFaceAI:(NSDictionary *)roomMeta room:(EnxRoom *_Nonnull)room stream:(EnxStream *)stream delegate:(id _Nonnull)delegate;
– To start analysis on the given stream
Parameters:
- roomMeta – NSDictionary object. It contains information about the connected room.
- room – Instance of the connected room obtained using the EnxRTCiOS native SDK.
- stream – The local stream object which will be analyzed.
- delegate – An EnxFaceAIDelegate object acting as the delegate for the EnxFaceAI object.
- faceDetector – JSON object. Configures or customises the parameters which the Face Detector analyses.
  - maxInputFrameSize – Number. Default 160 (pixels). Input frame size in pixels for face detection.
  - multiFace – Boolean. Default true. Enables multi-face detection, i.e. allows detecting more than one face. It can slow down performance on lower-end devices, since the face tracker is disabled and a full detection occurs for each frame.
- facePose – JSON object. Configures or customises the parameters used to analyse Face Pose.
  - smoothness – Number. Default 0.65. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
- faceAge – JSON object. Configures or customises the parameters used to analyse Face Age.
  - rawOutput – Boolean. Default false. Disables all filters and fires the event even if the prediction has very poor quality. Set it to true only if you want the raw signal, for example to analyse a single photo.
- faceGender – JSON object. Configures or customises the parameters used to analyse Face Gender.
  - smoothness – Number. Default 0.95. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
  - threshold – Number. Default 0.70. Range 0.5-1. Controls the minimum confidence for which the mostConfident output returns the predicted gender name instead of undefined.
- faceEmotion – JSON object. Configures or customises the parameters used to analyse Face Emotion.
  - enableBalancer – Boolean. Default false. Experimental filter able to adjust emotions according to the emotional baseline of each person.
  - smoothness – Number. Default 0.95. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
- faceFeatures – JSON object. Configures or customises the parameters used to analyse Face Features.
  - smoothness – Number. Default 0.90. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
- faceArousalValence – JSON object. Configures or customises the parameters used to analyse Face Arousal Valence.
  - smoothness – Number. Default 0.70. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
- faceAttention – JSON object. Configures or customises the parameters used to analyse Face Attention.
  - smoothness – Number. Default 0.83. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
  - riseSmoothness – Number. Same as smoothness, but applied only when the attention value is increasing. By default it has the same value as the smoothness parameter.
  - fallSmoothness – Number. Same as smoothness, but applied only when the attention value is decreasing. By default it has the same value as the smoothness parameter.
- faceWish – JSON object. Configures or customises the parameters used to analyse Face Wish.
  - smoothness – Number. Default 0.80. Range 0-1. A value closer to 1 provides greater smoothing and slower response time; lower values provide less smoothing but faster response time.
- callback – Callback to know whether the room is enabled for FaceAI analysis and that the client endpoint is connected to an active session.
faceDetectorConfig = { maxInputFrameSize: 200, fullFrameDetection: true };
facePoseConfig = { smoothness: 0.65 };
faceAgeConfig = {};
faceGenderConfig = { smoothness: 0.95, threshold: 0.70 };
faceEmotionConfig = { smoothness: 0.95, threshold: 0.70 };
faceFeaturesConfig = { smoothness: 0.90 };
faceArousalValenceConfig = { smoothness: 0.70 };
faceAttentionConfig = { smoothness: 0.85 };
faceWishConfig = { smoothness: 0.80 };

config = {
  faceDetector: faceDetectorConfig,
  facePose: facePoseConfig,
  faceAge: faceAgeConfig,
  faceGender: faceGenderConfig,
  faceEmotion: faceEmotionConfig,
  faceFeatures: faceFeaturesConfig,
  faceArousalValence: faceArousalValenceConfig,
  faceAttention: faceAttentionConfig,
  faceWish: faceWishConfig
};

localStream = EnxRtc.joinRoom(token, config, (response, error) => {
  if (error && error != null) {
    // Handle error
  }
  if (response && response != null) {
    const FaceAI = new EnxFaceAI(); // Construct the object
    FaceAI.init(response, localStream, config, function (event) {
      // event.result == 0 - All OK to process
    });
  }
});
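The example above follows the JavaScript style of the EnableX Web toolkit. On iOS, the documented initFaceAI:room:stream:delegate: method would be bound to an already connected room and published local stream roughly as follows. This is a sketch only: roomMetaData, connectedRoom and localStream are assumed to come from the EnxRTCiOS SDK after joining the room.

// Sketch: bind FaceAI to a connected room and local stream.
// `roomMetaData`, `connectedRoom` and `localStream` are assumed to be
// obtained from the EnxRTCiOS SDK once the room is connected.
EnxFaceAI *faceAI = [[EnxFaceAI alloc] init];
[faceAI initFaceAI:roomMetaData
              room:connectedRoom
            stream:localStream
          delegate:self]; // self conforms to EnxFaceAIDelegate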
Face Detector
This detects how many faces are present in a video stream. The event listener continuously receives JSON data as the detector tries to detect faces in the changing video frames.
Class: EnxFaceAI
Method: -(void)enableFaceDetector:(BOOL)enable;
– To start/stop analysis
Parameters:
- enable: Boolean. Set to true to enable/start Face Detector analysis. Set to false to disable/stop analysis.
Delegate Method:
– FaceAI:didFaceDetectorData:value:
Gets called repeatedly with the Face Detection analysis report as a JSON object. JSON object reference appended below:
{
  faces: Array(n),
  rects: Array(n),
  status: String,
  fullFrameDetection: Boolean,
  totalFaces: Number,
  totalFacesChangedFrom: Number | undefined
}
- faces: Array. The detected faces in the form of ImageData objects (zero or one; or multiple faces, if fullFrameDetection is true).
- rects: Array of objects. Describes the bounding boxes (zero or one; or multiple rects, if fullFrameDetection is true).
  - x: Upper-left point x coordinate
  - y: Upper-left point y coordinate
  - width: Width of the bounding box
  - height: Height of the bounding box
- status: String. The status of the face tracker:
  - INIT: Detector initializing; zero or many faces could be returned
  - TRACK_OK: Detector is correctly tracking one face; one face is returned
  - RECOVERING: Detector lost a face and is attempting to recover and continue tracking; zero faces are returned
- fullFrameDetection: Boolean. True when detection was full-frame and multiple faces can be returned; false otherwise.
- totalFaces: Number. The total number of filtered faces detected, smoothened over an interval of time. By default, one face is the maximum; if multi-face is enabled, the maximum is 6. This output is not synchronized with the faces and rects arrays, so do not use it to count their lengths.
- totalFacesChangedFrom: Number. Optional. Defined when there is a significant change in the number of faces, in which case it represents the previous number of faces; if no change occurred, it is undefined. This output is not synchronized with the faces and rects arrays.
Note: if you ever notice false positives in the events, i.e. a face is detected as present even when no one is there, you can further filter the results by the confidence property of the elements contained in the rects array (e.g. rects[0].confidence > 10).
// Start Face Detector
[FaceAI enableFaceDetector:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceDetectorData:(NSString *_Nullable)type value:(NSString *_Nullable)value {
    // Handle the Face Detector analysis report here
}
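The value argument of each delegate callback carries the report as a JSON string (an assumption based on the NSString parameter type). A small helper can turn it into a dictionary before reading fields such as totalFaces or status; the same helper applies to all the delegate methods below. A sketch with minimal error handling:

// Sketch: parse the delegate's JSON string into a dictionary.
- (NSDictionary *)reportFromJSONString:(NSString *)value {
    if (value == nil) return nil;
    NSData *data = [value dataUsingEncoding:NSUTF8StringEncoding];
    NSError *error = nil;
    id json = [NSJSONSerialization JSONObjectWithData:data options:0 error:&error];
    return (error == nil && [json isKindOfClass:[NSDictionary class]]) ? json : nil;
}

// Usage inside the Face Detector callback:
NSDictionary *report = [self reportFromJSONString:value];
NSLog(@"Faces: %@, tracker status: %@", report[@"totalFaces"], report[@"status"]);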
Face Pose
This analyses face rotation and position in a video stream. The event listener continuously receives JSON data as FaceAI detects face rotation in the video stream. Face rotation angle data is expressed in radians as pitch, roll and yaw.
Class: EnxFaceAI
Method: -(void)enableFacePose:(BOOL)enable;
– To start/stop analysis
Parameters:
- enable: Boolean. Set to true to enable/start Face Pose analysis. Set to false to disable/stop analysis.
Delegate Method:
– FaceAI:didFacePoseData:value:
Called repeatedly with the face rotation & position analysis report as a JSON object. JSON object reference appended below:
{
  output: {
    pose: { pitch: Number, roll: Number, yaw: Number }
  }
}
- output: Face Rotation & Position Report
  - pose: Filtered (smoothened) pose rotation angles expressed in radians as pitch, roll and yaw.

// Start Face Pose
[FaceAI enableFacePose:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFacePoseData:(NSString *_Nullable)type value:(NSString *_Nullable)value {
    // Handle the Face Pose analysis report here
}
Notes:
- max and min ranges for rotation angles are currently limited to ±π/2 radians, corresponding to ±90°, for each of the 3 axes
- the zero point is when a face looks straight at the camera
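Since the angles arrive in radians, converting them to degrees inside the callback is straightforward. A sketch, assuming the value string has been parsed into a report dictionary with a JSON helper such as the one shown under Face Detector:

// Sketch: convert pose angles from radians to degrees.
NSDictionary *pose = report[@"output"][@"pose"];
double pitchDeg = [pose[@"pitch"] doubleValue] * 180.0 / M_PI;
double rollDeg  = [pose[@"roll"]  doubleValue] * 180.0 / M_PI;
double yawDeg   = [pose[@"yaw"]   doubleValue] * 180.0 / M_PI;
NSLog(@"Pitch %.1f°, Roll %.1f°, Yaw %.1f°", pitchDeg, rollDeg, yawDeg);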
Face Age
This analyses and predicts face age in a video stream. Face age is predicted within an age range. The event listener continuously receives JSON data as FaceAI analyses face age. If the prediction quality is poor, the event is not fired.
Class: EnxFaceAI
Method: -(void)enableFaceAge:(BOOL)enable;
– To start/stop analysis
Parameters:
- enable: Boolean. Set to true to enable/start Face Age analysis. Set to false to disable/stop analysis.
Delegate Method:
– FaceAI:didFaceAgeData:value:
– Gets called repeatedly with the Face Age analysis report as a JSON object. JSON object reference appended below:
{
  output: {
    age: { -18: Number, 18-35: Number, 35-51: Number, 51-: Number },
    numericAge: Number
  }
}
- output: Face Age Analysis Report
  - age: Filtered (smoothened) age prediction:
    - -18: Probability weightage suggesting less than 18 years old.
    - 18-35: Probability weightage suggesting between 18 and 35 years old.
    - 35-51: Probability weightage suggesting between 35 and 51 years old.
    - 51-: Probability weightage suggesting 51 years old or older.
  - numericAge: Number. Estimated age.
// Start Face Age
[FaceAI enableFaceAge:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceAgeData:(NSString *_Nullable)type value:(NSString *_Nullable)value {
    // Handle the Face Age analysis report here
}
Note: in case of poor quality of the prediction, by default, the event is not fired (i.e. skipped for that frame).
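Inside the callback you can take numericAge directly, or pick the age bracket with the highest probability weightage. A sketch, again assuming value has been parsed into a report dictionary:

// Sketch: find the most probable age bracket.
NSDictionary *age = report[@"output"][@"age"];
NSString *bestBracket = nil;
double bestWeight = -1.0;
for (NSString *bracket in age) {
    double w = [age[bracket] doubleValue];
    if (w > bestWeight) { bestWeight = w; bestBracket = bracket; }
}
NSLog(@"Estimated age %@ (bracket %@, weight %.2f)",
      report[@"output"][@"numericAge"], bestBracket, bestWeight);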
Face Gender
This analyses face gender in a video stream. The event listener continuously receives JSON data as FaceAI analyses face gender.
Class: EnxFaceAI
Method: -(void)enableFaceGender:(BOOL)enable;
– To start/stop analysis
Parameters:
- enable: Boolean. Set to true to enable/start Face Gender analysis. Set to false to disable/stop analysis.
Delegate Method:
– FaceAI:didFaceGenderData:value:
– Gets called repeatedly with the Face Gender analysis report as a JSON object. JSON object reference appended below:
{
  output: {
    gender: { Female: Number, Male: Number },
    mostConfident: String
  }
}
- output: Face Gender Report
  - gender: Filtered (smoothened) probabilities of the gender prediction:
    - Female: Probability weightage that the gender is female
    - Male: Probability weightage that the gender is male
  - mostConfident: Gender name of the most likely result if its smoothened probability is above the threshold; otherwise it is undefined.
// Start Face Gender
[FaceAI enableFaceGender:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceGenderData:(NSString *_Nullable)type value:(NSString *_Nullable)value {
    // Handle the Face Gender analysis report here
}
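Because mostConfident is undefined when the smoothened probability is below the threshold, the callback should handle both cases. A sketch, assuming value has been parsed into a report dictionary:

// Sketch: prefer mostConfident, fall back to raw probabilities.
NSDictionary *output = report[@"output"];
id mostConfident = output[@"mostConfident"];
if ([mostConfident isKindOfClass:[NSString class]]) {
    NSLog(@"Predicted gender: %@", mostConfident);
} else {
    // Below threshold: mostConfident is undefined (absent or null) in the report.
    NSDictionary *gender = output[@"gender"];
    NSLog(@"Undecided (Female %@, Male %@)", gender[@"Female"], gender[@"Male"]);
}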
Face Emotion
This analyses face emotions in a video stream. It analyses the 7 basic emotions in a human face, viz. Angry, Disgust, Fear, Happy, Sad, Surprise and Neutral. It also returns the most dominant emotion on the face. The event listener continuously receives JSON data as FaceAI analyses face emotion.
Class: EnxFaceAI
Method: -(void)enableFaceEmotion:(BOOL)enable;
– To start/stop analysis
Parameters:
- enable: Boolean. Set to true to enable/start Face Emotion analysis. Set to false to disable/stop analysis.
Delegate Method:
– FaceAI:didFaceEmotionData:value:
– Called repeatedly with the Face Emotion analysis report as a JSON object. JSON object reference appended below:
{
  output: {
    dominantEmotion: String,
    emotion: {
      Angry: Number,
      Disgust: Number,
      Fear: Number,
      Happy: Number,
      Neutral: Number,
      Sad: Number,
      Surprise: Number
    }
  }
}
- output: Face Emotion Report
  - dominantEmotion: Name of the dominant emotion if present; otherwise it is undefined.
  - emotion: Filtered (smoothened) values of the probability distribution of emotions. The sum of all the probabilities is always 1; each probability in the distribution has a value between 0 and 1.
    - Angry: Probability for Angry.
    - Disgust: Probability for Disgust.
    - Fear: Probability for Fear.
    - Happy: Probability for Happy.
    - Sad: Probability for Sad.
    - Surprise: Probability for Surprise.
    - Neutral: Probability for Neutral.
// Start Face Emotion
[FaceAI enableFaceEmotion:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceEmotionData:(NSString *_Nullable)type value:(NSString *_Nullable)value {
    // Handle the Face Emotion analysis report here
}
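Since the probabilities form a distribution summing to 1, you can also pick the strongest emotion yourself, for example as a fallback when dominantEmotion is undefined. A sketch, assuming value has been parsed into a report dictionary:

// Sketch: find the emotion with the highest probability.
NSDictionary *emotion = report[@"output"][@"emotion"];
NSString *topEmotion = nil;
double topProb = -1.0;
for (NSString *name in emotion) {
    double p = [emotion[name] doubleValue];
    if (p > topProb) { topProb = p; topEmotion = name; }
}
NSLog(@"Top emotion: %@ (%.2f)", topEmotion, topProb);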
Face Features
This analyses face features in a video stream. The event listener continuously receives JSON data as FaceAI analyses face features.
Class: EnxFaceAI
Method: -(void)enableFaceFeatures:(BOOL)enable;
– To start/stop analysis
Parameters:
- enable: Boolean. Set to true to enable/start Face Features analysis. Set to false to disable/stop analysis.
Delegate Method:
– FaceAI:didFaceFeaturesData:value:
– Gets called repeatedly with the Face Features analysis report as a JSON object. JSON object reference appended below:
{
  output: {
    features: {
      ArchedEyebrows: Number,
      Attractive: Number,
      ...
    }
  }
}
- output: Face Features Report
  - features: Filtered (smoothened) probabilities of each independent face feature in the range 0.0 – 1.0. The following features are evaluated:
- Arched Eyebrows
- Attractive
- Bags Under Eyes
- Bald
- Bangs
- Beard 5 O’Clock Shadow
- Big Lips
- Big Nose
- Black Hair
- Blond Hair
- Brown Hair
- Chubby
- Double Chin
- Earrings
- Eyebrows Bushy
- Eyeglasses
- Goatee
- Gray Hair
- Hat
- Heavy Makeup
- High Cheekbones
- Lipstick
- Mouth Slightly Open
- Mustache
- Narrow Eyes
- Necklace
- Necktie
- No Beard
- Oval Face
- Pale Skin
- Pointy Nose
- Receding Hairline
- Rosy Cheeks
- Sideburns
- Straight Hair
- Wavy Hair
// Start Face Features
[FaceAI enableFaceFeatures:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceFeaturesData:(NSString *_Nullable)type value:(NSString *_Nullable)value {
    // Handle the Face Features analysis report here
}
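A typical use is to keep only the features whose probability crosses a chosen cut-off. A sketch: the 0.8 threshold is arbitrary, and value is assumed to have been parsed into a report dictionary:

// Sketch: collect features with probability above 0.8.
NSDictionary *features = report[@"output"][@"features"];
NSMutableArray<NSString *> *present = [NSMutableArray array];
for (NSString *name in features) {
    if ([features[name] doubleValue] > 0.8) {
        [present addObject:name];
    }
}
NSLog(@"Likely features: %@", [present componentsJoinedByString:@", "]);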
Face Arousal Valence
This analyses face arousal and valence in a video stream. The event listener continuously receives JSON data as FaceAI analyses Face Arousal Valence.
Class: EnxFaceAI
Method: -(void)enableFaceArousalValence:(BOOL)enable;
– To start/stop analysis
Parameters:
- enable: Boolean. Set to true to enable/start Face Arousal Valence analysis. Set to false to disable/stop analysis.
Delegate Method:
– FaceAI:didFaceArousalValenceData:value:
Gets called repeatedly with the Face Arousal Valence analysis report as a JSON object. JSON object reference appended below:
{
  output: {
    arousalvalence: {
      arousal: Number,
      valence: Number,
      affects38: { "Afraid": Number, "Amused": Number, .. },
      affects98: { "Adventurous": Number, "Afraid": Number, .. },
      quadrant: String
    }
  }
}
- output: Face Arousal Valence Report
  - arousalvalence: Filtered (smoothened) values.
    - arousal: Range -1.0 to 1.0. It represents the degree of engagement (positive arousal) or disengagement (negative arousal).
    - valence: Range -1.0 to 1.0. It represents the degree of pleasantness (positive valence) or unpleasantness (negative valence).
    - affects38: An object containing the smoothened probabilities of the 38 affects in range [0.00, 1.00]: Afraid, Amused, Angry, Annoyed, Uncomfortable, Anxious, Apathetic, Astonished, Bored, Worried, Calm, Conceited, Contemplative, Content, Convinced, Delighted, Depressed, Determined, Disappointed, Discontented, Distressed, Embarrassed, Enraged, Excited, Feel Well, Frustrated, Happy, Hopeful, Impressed, Melancholic, Peaceful, Pensive, Pleased, Relaxed, Sad, Satisfied, Sleepy, Tired
    - affects98: An object containing the smoothened probabilities of the 98 affects in range [0.00, 1.00]: Adventurous, Afraid, Alarmed, Ambitious, Amorous, Amused, Wavering, Angry, Annoyed, Anxious, Apathetic, Aroused, Ashamed, Worried, Astonished, At Ease, Attentive, Bellicose, Bitter, Bored, Calm, Compassionate, Conceited, Confident, Conscientious, Contemplative, Contemptuous, Content, Convinced, Courageous, Defiant, Dejected, Delighted, Depressed, Desperate, Despondent, Determined, Disappointed, Discontented, Disgusted, Dissatisfied, Distressed, Distrustful, Doubtful, Droopy, Embarrassed, Enraged, Enthusiastic, Envious, Excited, Expectant, Feel Guilt, Feel Well, Feeling Superior, Friendly, Frustrated, Glad, Gloomy, Happy, Hateful, Hesitant, Hopeful, Hostile, Impatient, Impressed, Indignant, Insulted, Interested, Jealous, Joyous, Languid, Light Hearted, Loathing, Longing, Lusting, Melancholic, Miserable, Passionate, Peaceful, Pensive, Pleased, Polite, Relaxed, Reverent, Sad, Satisfied, Selfconfident, Serene, Serious, Sleepy, Solemn, Startled, Suspicious, Taken Aback, Tense, Tired, Triumphant, Uncomfortable
    - quadrant: A string representing one of the four quadrants in the circumplex model of affect (“High Control”, “Obstructive”, “Low Control”, “Conductive”, or “Neutral”)

// Start Face Arousal Valence
[FaceAI enableFaceArousalValence:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceArousalValenceData:(NSString *_Nullable)type value:(NSString *_Nullable)value {
    // Handle the Face Arousal Valence analysis report here
}
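Inside the callback, the headline numbers and the strongest of the 38 affects can be read as follows. A sketch, assuming value has been parsed into a report dictionary:

// Sketch: read arousal, valence, quadrant and the strongest affect.
NSDictionary *av = report[@"output"][@"arousalvalence"];
double arousal = [av[@"arousal"] doubleValue];
double valence = [av[@"valence"] doubleValue];
NSDictionary *affects = av[@"affects38"];
NSString *topAffect = nil;
double topProb = -1.0;
for (NSString *name in affects) {
    double p = [affects[name] doubleValue];
    if (p > topProb) { topProb = p; topAffect = name; }
}
NSLog(@"Arousal %.2f, valence %.2f, quadrant %@, top affect %@",
      arousal, valence, av[@"quadrant"], topAffect);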
Face Attention
This analyses face attention in a video stream. The event listener continuously receives JSON data as FaceAI analyses face attention.
Class: EnxFaceAI
Method: -(void)enableFaceAttention:(BOOL)enable;
– To start/stop analysis
Parameters:
- enable: Boolean. Set to true to enable/start Face Attention analysis. Set to false to disable/stop analysis.
Delegate Method:
– FaceAI:didFaceAttentionData:value:
– Gets called repeatedly with the Face Attention analysis report as a JSON object. JSON object reference appended below:
{ output: { attention: Number } }
- output: Face Attention Report
  - attention: Filtered (smoothened) value in range [0.0, 1.0]. A value close to 1.0 represents attention; a value close to 0.0 represents distraction.
// Start Face Attention
[FaceAI enableFaceAttention:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceAttentionData:(NSString *_Nullable)type value:(NSString *_Nullable)value {
    // Handle the Face Attention analysis report here
}
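One common use is flagging sustained distraction rather than reacting to a single low reading. A sketch: the 0.4 threshold and 30-callback window are arbitrary, and value is assumed to have been parsed into a report dictionary:

// Sketch: flag distraction only when it persists across callbacks.
static NSInteger lowAttentionCount = 0;
double attention = [report[@"output"][@"attention"] doubleValue];
if (attention < 0.4) {
    if (++lowAttentionCount >= 30) {
        NSLog(@"Participant appears distracted");
        lowAttentionCount = 0;
    }
} else {
    lowAttentionCount = 0; // reset once attention recovers
}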
Face Wish
This analyzes face wish in a video stream. The event listener continuously receives JSON data as FaceAI analyzes face wish.
Class: EnxFaceAI
Method: -(void)enableFaceWish:(BOOL)enable;
– To start/stop analysis
Parameters:
- enable: Boolean. Set to true to enable/start Face Wish analysis. Set to false to disable/stop analysis.
Delegate Method:
– FaceAI:didFaceWishData:value:
– Gets called repeatedly with the Face Wish analysis report as a JSON object. JSON object reference appended below:
{ output: { wish: Number } }
- output: Face Wish Report
  - wish: Filtered (smoothened) value in range [0, 1.0]. A value closer to 0 represents a lower wish; a value closer to 1.0 represents a higher wish.
// Start Face Wish
[FaceAI enableFaceWish:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceWishData:(NSString *_Nullable)type value:(NSString *_Nullable)value {
    // Handle the Face Wish analysis report here
}
Stop FaceAI
This stops any further face analysis. An initialized face analysis must be stopped to end FaceAI usage during a session; failing this, it ends implicitly at the end of the current video session.
Class: EnxFaceAI
Method: EnxFaceAI.stopFaceAI(callback)
– To stop analysis
Parameters:
- callback: Callback to know that analysis has stopped.
// Stop FaceAI analysis
faceAI.stopFaceAI((evt) => {
  if (evt.result === 0) {
    // Stopped
  }
});