FaceAI iOS SDK is a fast and lightweight SDK that allows you to add facial expression and facial emotion recognition AI to an iOS app. It works on the app's UI itself and does not send or store any biometric data on a server.

Released: May 13, 202

Table of Contents

How to use EnxFaceAIiOS iOS SDK?

The EnxFaceAIiOS directory contains the EnxFaceAIiOS.framework iOS SDK. Add this framework to your project. The EnxFaceAIiOS iOS SDK supports iOS 10 or later and requires Xcode 9 or later.

  • Install CocoaPods as described in CocoaPods Getting Started
  • In Terminal, go to your project directory and run pod init
  • To integrate EnxFaceAIiOS into your Xcode project using CocoaPods, specify the pod name EnxFaceAIiOS in your Podfile
  • After adding all required libraries to the Podfile, run pod install in Terminal
  • Re-open your project in Xcode using the new .xcworkspace file

Note: To use the EnxFaceAIiOS SDK, add GoogleWebRTC, EnxRTCiOS and Socket.IO-Client-Swift 15.0.0 to your project using their respective pod names.
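Putting the steps above together, a minimal Podfile might look like the following sketch. The pod names are those listed above; the platform line and target name are assumptions for illustration:

```ruby
# Podfile sketch — pod names from the note above; platform and target are assumptions.
platform :ios, '10.0'
use_frameworks!

target 'YourApp' do
  pod 'EnxFaceAIiOS'
  pod 'EnxRTCiOS'
  pod 'GoogleWebRTC'
  pod 'Socket.IO-Client-Swift', '15.0.0'
end
```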

Important: You must enable FaceAI while defining a Room for Facial Expression Analysis to work. Use { "settings": { "facex": true } } in the Room definition.

Initialise FaceAI

FaceAI analyses video streams within an ongoing EnableX video session. To get started with analysis, you must bind the FaceAI object to the Room in which the video session is running.

Class: EnxFaceAI

Method: -(void)initFaceAI:(NSDictionary *)roomMeta room:(EnxRoom *_Nonnull)room stream:(EnxStream *)stream delegate:(id _Nonnull)delegate; – To start analysis on the given stream

Parameters:

  • roomMeta – NSDictionary object. It contains information about the connected Room.
  • room – Instance of the connected Room created using the EnxRTCiOS native SDK
  • stream – The local stream object which will be analyzed.
  • delegate – EnxFaceAIDelegate object; the delegate for the EnxFaceAI object
    • faceDetector – JSON Object. This is to configure or customise the parameters which the Face Detector would analyse.
      • maxInputFrameSize – Number. Default 160 (pixels). Input frame size in pixels for face detection.
      • fullFrameDetection – Boolean. Set to true to run detection on the full frame so that multiple faces can be returned; set to false otherwise.
    • facePose – JSON Object. This is to configure or customise parameters using which the Face Pose would be analysed.
      • smoothness: Number. Default 0.65. Range 0-1. A value closer to 1 provides greater smoothing and slower response time. Lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceAge – JSON Object. This is to configure or customise parameter using which the Face Age would be analysed.
      • Currently EnableX does not support any parameters here, so pass an empty object when calling this method.
    • faceGender – JSON Object. This is to configure or customise parameter using which the Face Gender would be analysed.
      • smoothness: Number. Default 0.95. Range 0-1. A value closer to 1 provides greater smoothing and slower response time. Lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
      • threshold: Number. Default 0.70. Range 0.5-1. It controls the minimum value of confidence for which mostConfident output returns the predicted gender name instead of undefined.
    • faceEmotion – JSON Object. This is to configure or customise parameters using which the Face Emotion would be analysed.
      • smoothness: Number. Default 0.95. Range 0-1. A value closer to 1 provides greater smoothing and slower response time. Lower values provide less smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
      • threshold: Number. Default 0.70. Range 0.5-1. It controls the minimum value of confidence for which mostConfident output returns the predicted emotion name instead of undefined.
    • faceFeatures – JSON Object. This is to configure or customize parameter using which the Face Features would be analyzed.
      • smoothness: Number. Default 0.90. Range 0-1. Value closer to 1 provides greater smoothing and slower response time. Lower values provide lesser smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceArousalValence – JSON Object. This is to configure or customize parameter using which the Face Arousal Valence would be analyzed.
      • smoothness: Number. Default 0.70. Range 0-1. Value closer to 1 provides greater smoothing and slower response time. Lower values provide lesser smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceAttention – JSON Object. This is to configure or customize parameter using which the Face Attention would be analyzed.
      • smoothness: Number. Default 0.83. Range 0-1. Value closer to 1 provides greater smoothing and slower response time. Lower values provide lesser smoothing but faster response time. Set it to 0 (zero) if you need the raw signal.
    • faceWish – JSON Object. This is to configure or customize parameter using which the Face Wish would be analyzed.
      • smoothness: Number. Default 0.80. Range 0-1. Value closer to 1 provides greater smoothing and slower response time. Lower values provide lesser smoothing but faster response time.
  • callback: Callback to know whether the room is enabled for FaceAI analysis and that the client endpoint is connected to an active session
faceDetectorConfig = {
	maxInputFrameSize: 200,
	fullFrameDetection: true
};  

facePoseConfig = {
	smoothness: 0.65
};

faceAgeConfig = {};

faceGenderConfig = {
	smoothness: 0.95,
	threshold: 0.70
};

faceEmotionConfig = {
	smoothness: 0.95,
	threshold: 0.70
};

faceFeaturesConfig = {
	smoothness: 0.90 
};

faceArousalValenceConfig = {
	smoothness: 0.70 
};

faceAttentionConfig = {
	smoothness: 0.85
};

faceWishConfig = {
	smoothness: 0.80
};

config = {
	faceDetector: faceDetectorConfig,
	facePose: facePoseConfig,
	faceAge: faceAgeConfig,
	faceGender: faceGenderConfig,
	faceEmotion: faceEmotionConfig,
	faceFeatures: faceFeaturesConfig,
	faceArousalValence: faceArousalValenceConfig,
	faceAttention: faceAttentionConfig,
	faceWish: faceWishConfig
}

localStream = EnxRtc.joinRoom(token, config, (response, error) => {
	if (error != null) {
		// Handle error
	}

	if (response != null) {
		const FaceAI = new EnxFaceAI(); // Construct the object
		FaceAI.init(response, localStream, config, function (event) {
			// event.result == 0 - All OK to process
		})
	}
})
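The smoothness parameters above behave like an exponential moving average; the following sketch (an illustrative assumption, not the SDK's actual filter) shows why values closer to 1 respond more slowly:

```javascript
// Illustrative exponential smoothing. This is an assumption about how the
// smoothness parameters behave, not the SDK's actual implementation.
function makeSmoother(smoothness) {
  let prev = null;
  return (raw) => {
    prev = prev === null ? raw : smoothness * prev + (1 - smoothness) * raw;
    return prev;
  };
}

const smooth = makeSmoother(0.95); // e.g. the faceGender default
const raw = makeSmoother(0);       // smoothness 0 passes the raw signal through

smooth(0);                              // signal starts at 0
console.log(smooth(1).toFixed(2));      // "0.05" — reacts slowly to the jump to 1
console.log(raw(1));                    // 1 — follows the raw signal immediately
```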

Face Detector

This is to detect how many faces are present in a Video Stream. The Event Listener continuously receives data in JSON as the detector tries to detect faces in the changing video frames.

Class: EnxFaceAI

Method: -(void)enableFaceDetector:(BOOL)enable; – To start/stop analysis

Parameters:

  • enable: Boolean. Set to true to enable/start Face Detector analysis. Set to false to disable/stop analysis

Delegate Method:

  • –FaceAI:didFaceDetectorData:value: Gets called repeatedly with Face Detection analysis report with JSON Object. JSON Object Reference appended below:
{
    faces: Array(n)
    rects: Array(n)
    status: string
}
  • faces: Array. The detected faces in form of ImageData objects (zero or one; or multiple faces, if fullFrameDetection is true)
  • rects: Array of objects. Describes the bounding boxes (zero or one; or multiple rects, if fullFrameDetection is true)
    • x: Upper left point x coordinate
    • y: Upper left point y coordinate
    • width: Width of the bounding box
    • height: Height of the bounding box
  • status: String. The status of the face tracker:
    • INIT: Detector initializing; zero or many faces could be returned
    • TRACK_OK: Detector is correctly tracking one face; one face is returned
    • RECOVERING: Detector lost a face and attempting to recover and continue tracking; zero faces are returned
// Start Face Detector
[FaceAI enableFaceDetector:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceDetectorData:(NSString *_Nullable)type value:(NSString *_Nullable)value{

}
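As an illustration, a client could summarise a detector report once the JSON payload has been parsed. The following JavaScript sketch assumes the report shape documented above; describeDetection is a hypothetical helper name, not part of the SDK:

```javascript
// Summarise a Face Detector report (shape per the JSON reference above).
// "describeDetection" is a hypothetical helper, not an SDK call.
function describeDetection(report) {
  if (report.status !== "TRACK_OK") {
    return `tracker ${report.status}: ${report.rects.length} face(s)`;
  }
  const r = report.rects[0];
  return `face at (${r.x}, ${r.y}), ${r.width}x${r.height}`;
}

const report = {
  faces: [{}],
  rects: [{ x: 40, y: 32, width: 120, height: 120 }],
  status: "TRACK_OK"
};
console.log(describeDetection(report)); // "face at (40, 32), 120x120"
```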

Face Pose

This is to analyse face rotation and position in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI detects face rotation in the video stream. Face rotation angle data is expressed in radians as Pitch, Roll and Yaw.

Class: EnxFaceAI

Method: -(void)enableFacePose:(BOOL)enable; – To start/stop analysis

Parameters:

  • enable: Boolean. Set to true to enable/start Face Pose analysis. Set to false to disable/stop analysis

Delegate Method:

  • –FaceAI:didFacePoseData:value: Called repeatedly with Face Rotation & Position analysis report with JSON Object. JSON Object Reference appended below:
{	output: {
		pose: {
			pitch: Number, 
			roll: Number, 
			yaw: Number
		}
	}
}
  • output: Face Rotation & Position Report
    • pose: Filtered (smoothened) pose rotation angles expressed in radians as pitch, roll and yaw.
// Start Face Pose
[FaceAI enableFacePose:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFacePoseData:(NSString *_Nullable)type value:(NSString *_Nullable)value{

}
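Since the pose angles are reported in radians, converting them to degrees for display is a common client-side step. A short sketch (poseInDegrees is a hypothetical helper name, not an SDK call):

```javascript
// Convert a pose report's pitch/roll/yaw from radians to degrees.
// "poseInDegrees" is a hypothetical helper, not part of the SDK.
function poseInDegrees(pose) {
  const toDeg = (r) => r * (180 / Math.PI);
  return { pitch: toDeg(pose.pitch), roll: toDeg(pose.roll), yaw: toDeg(pose.yaw) };
}

console.log(poseInDegrees({ pitch: Math.PI / 6, roll: 0, yaw: -Math.PI / 2 }));
// pitch ≈ 30, roll 0, yaw ≈ -90
```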

Face Age

This is to analyse and predict face age in a Video Stream. Face Age predicts within an age range. The Event Listener continuously gets data in JSON as FaceAI analyses face age. If the prediction quality is poor, the event is not fired.

Class: EnxFaceAI

Method: -(void)enableFaceAge:(BOOL)enable; – To start/stop analysis

Parameters:

  • enable: Boolean. Set to true to enable/start Face Age analysis. Set to false to disable/stop analysis

Delegate Method:

  • –FaceAI:didFaceAgeData:value: – Gets called repeatedly with Face Age analysis report with JSON Object. JSON Object Reference appended below:
{	output: {
		age: {
			-18: Number, 
			18-35: Number, 
			35-51: Number, 
			51-: Number
		}, 
		numericAge: Number
	}
}
  • output: Face Age Analysis Report
    • age: Filtered (smoothened) age prediction:
      • -18: Probability weightage suggesting less than 18 years old.
      • 18-35: Probability weightage suggesting between 18 and 35 years old.
      • 35-51: Probability weightage suggesting between 35 and 51 years old.
      • 51-: Probability weightage suggesting 51 years old or older.
    • numericAge: Numeric. Estimated Age
// Start Face Age
[FaceAI enableFaceAge:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceAgeData:(NSString *_Nullable)type value:(NSString *_Nullable)value{

}

Face Gender

This is to analyse face gender in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI analyses face gender.

Class: EnxFaceAI

Method: -(void)enableFaceGender:(BOOL)enable; – To start/stop analysis

Parameters:

  • enable: Boolean. Set to true to enable/start Face Gender analysis. Set to false to disable/stop analysis

Delegate Method:

  • –FaceAI:didFaceGenderData:value: – Gets called repeatedly with Face Gender analysis report with JSON Object. JSON Object Reference appended below:
{	output: {
		gender: {
			Female: Number, 
			Male: Number
		}, 
		mostConfident: String
	}
}
  • output: Face Gender Report
    • gender: Filtered (smoothened) probabilities of the gender prediction:
      • Female: Probability weightage for gender is female
      • Male: Probability weightage for gender is male
    • mostConfident: Gender name of the most likely result if its smoothened probability is above the threshold, otherwise it is undefined.
// Start Face Gender
[FaceAI enableFaceGender:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceGenderData:(NSString *_Nullable)type value:(NSString *_Nullable)value{

}
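The threshold rule described above can be sketched as follows. This JavaScript helper is a hypothetical re-implementation of the documented mostConfident behaviour, not an SDK call:

```javascript
// Re-implementation sketch of the documented mostConfident rule: return the
// gender name only when its smoothed probability clears the threshold.
function mostConfident(gender, threshold = 0.70) {
  const [name, p] = Object.entries(gender)
    .reduce((best, cur) => (cur[1] > best[1] ? cur : best));
  return p >= threshold ? name : undefined;
}

console.log(mostConfident({ Female: 0.82, Male: 0.18 })); // "Female"
console.log(mostConfident({ Female: 0.55, Male: 0.45 })); // undefined (below 0.70)
```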

Face Emotion

This is to analyse face emotions in a Video Stream. It analyses 7 basic emotions in a human face, viz. Angry, Disgust, Fear, Happy, Sad, Surprise and Neutral. It also returns the most dominant emotion on the face. The Event Listener continuously gets data in JSON as FaceAI analyses face emotion.

Class: EnxFaceAI

Method: -(void)enableFaceEmotion:(BOOL)enable; – To start/stop analysis

Parameters:

  • enable: Boolean. Set to true to enable/start Face Emotion analysis. Set to false to disable/stop analysis

Delegate Method:

  • –FaceAI:didFaceEmotionData:value: – Called repeatedly with Face Emotion analysis report with JSON Object. JSON Object Reference appended below:
{	output: {
		dominantEmotion: String,
		emotion: {
			Angry: Number, 
			Disgust: Number, 
			Fear: Number, 
			Happy: Number, 
			Neutral: Number, 
			Sad: Number, 
			Surprise: Number
		}
	}
}
  • output: Face Emotion Report
    • dominantEmotion: Name of Dominant Emotion if present, otherwise it is undefined.
    • emotion: Filtered (smoothened) values of the probability distribution of emotions. The sum of all the probabilities is always 1, each probability in the distribution has a value between 0 and 1.
      • Angry: Probability for Angry.
      • Disgust: Probability for Disgust.
      • Fear: Probability for Fear.
      • Happy: Probability for Happy.
      • Sad: Probability for Sad.
      • Surprise: Probability for Surprise.
      • Neutral: Probability for Neutral.
// Start Face Emotion
[FaceAI enableFaceEmotion:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceEmotionData:(NSString *_Nullable)type value:(NSString *_Nullable)value{

}
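For reference, deriving the dominant emotion from the probability distribution could look like this. The JavaScript sketch assumes the JSON shape documented above; dominantEmotion is a hypothetical helper name:

```javascript
// Pick the emotion with the highest smoothed probability from the
// "output.emotion" map of a Face Emotion report (shape per the reference above).
function dominantEmotion(emotion) {
  return Object.entries(emotion)
    .reduce((best, cur) => (cur[1] > best[1] ? cur : best))[0];
}

const emotion = {
  Angry: 0.05, Disgust: 0.01, Fear: 0.02, Happy: 0.70,
  Neutral: 0.15, Sad: 0.04, Surprise: 0.03
};
console.log(dominantEmotion(emotion)); // "Happy"
```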

Face Features

This is to analyse face features in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI analyses face features.

Class: EnxFaceAI

Method: -(void)enableFaceFeatures:(BOOL)enable; – To start/stop analysis

Parameters:

  • enable: Boolean. Set to true to enable/start Face Features analysis. Set to false to disable/stop analysis

Delegate Method:

  • –FaceAI:didFaceFeaturesData:value: – Gets called repeatedly with Face Features analysis report with JSON Object. JSON Object Reference appended below:
{	output: {
		features: {
			ArchedEyebrows: Number, 
			Attractive: Number,
			....
			....
		}
	}
}
  • output: Face Features Report
    • features: Filtered (smoothened) probabilities of each face independent feature in range 0.0 – 1.0. The following features are evaluated:
      • Arched Eyebrows
      • Attractive
      • Bags Under Eyes
      • Bald
      • Bangs
      • Beard 5 O’Clock Shadow
      • Big Lips
      • Big Nose
      • Black Hair
      • Blond Hair
      • Brown Hair
      • Chubby
      • Double Chin
      • Earrings
      • Eyebrows Bushy
      • Eyeglasses
      • Goatee
      • Gray Hair
      • Hat
      • Heavy Makeup
      • High Cheekbones
      • Lipstick
      • Mouth Slightly Open
      • Mustache
      • Narrow Eyes
      • Necklace
      • Necktie
      • No Beard
      • Oval Face
      • Pale Skin
      • Pointy Nose
      • Receding Hairline
      • Rosy Cheeks
      • Sideburns
      • Straight Hair
      • Wavy Hair
// Start Face Features
[FaceAI enableFaceFeatures:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceFeaturesData:(NSString *_Nullable)type value:(NSString *_Nullable)value{

}

Face Arousal Valence

This is to analyse face arousal valence in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI analyses Face Arousal Valence.

Class: EnxFaceAI

Method: -(void)enableFaceArousalValence:(BOOL)enable; – To start/stop analysis

Parameters:

  • enable: Boolean. Set to true to enable/start Face Arousal Valence analysis. Set to false to disable/stop analysis

Delegate Method:

  • –FaceAI:didFaceArousalValenceData:value: Gets called repeatedly with Face Arousal Valence analysis report with JSON Object. JSON Object Reference appended below:
{	output: {
		arousalvalence: {
			arousal: Number, 
			valence: Number
		}
	}
}
  • output: Face Arousal Valence Report
    • arousalvalence: Filtered (smoothened) values.
      • arousal: Range -1.0 to 1.0. It represents the degree of engagement (positive arousal) or disengagement (negative arousal).
      • valence: Range -1.0 to 1.0. It represents the degree of pleasantness (positive valence), or unpleasantness (negative valence).
// Start Face Arousal Valence
[FaceAI enableFaceArousalValence:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceArousalValenceData:(NSString *_Nullable)type value:(NSString *_Nullable)value{

}
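Since arousal and valence each range from -1.0 to 1.0, applications often map the pair onto quadrants of the affect circumplex. A sketch follows; the quadrant labels are illustrative assumptions, as the SDK only reports the two numbers:

```javascript
// Map an arousal/valence pair (each in [-1, 1]) to an affect quadrant.
// The labels are illustrative assumptions, not SDK output.
function affectQuadrant({ arousal, valence }) {
  if (arousal >= 0) {
    return valence >= 0 ? "engaged-pleasant" : "engaged-unpleasant";
  }
  return valence >= 0 ? "disengaged-pleasant" : "disengaged-unpleasant";
}

console.log(affectQuadrant({ arousal: 0.4, valence: 0.6 }));   // "engaged-pleasant"
console.log(affectQuadrant({ arousal: -0.3, valence: -0.2 })); // "disengaged-unpleasant"
```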

Face Attention

This is to analyse face attention in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI analyses face attention.

Class: EnxFaceAI

Method: -(void)enableFaceAttention:(BOOL)enable; – To start/stop analysis

Parameters:

  • enable: Boolean. Set to true to enable/start Face Attention analysis. Set to false to disable/stop analysis

Delegate Method:

  • –FaceAI:didFaceAttentionData:value: – Gets called repeatedly with Face Attention analysis report with JSON Object. JSON Object Reference appended below:
{	output: {
		attention: Number
	}
}
  • output: Face Attention Report
    • attention: Filtered value (smoothened) in range [0.0, 1.0]. A value close to 1.0 represents attention, a value close to 0.0 represents distraction.
// Start Face Attention
[FaceAI enableFaceAttention:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceAttentionData:(NSString *_Nullable)type value:(NSString *_Nullable)value{

}

Face Wish

This is to analyze face wish in a Video Stream. The Event Listener continuously gets data in JSON as FaceAI analyzes face wish.

Class: EnxFaceAI

Method: -(void)enableFaceWish:(BOOL)enable; – To start/stop analysis

Parameters:

  • enable: Boolean. Set to true to enable/start Face Wish analysis. Set to false to disable/stop analysis

Delegate Method:

  • –FaceAI:didFaceWishData:value: – Gets called repeatedly with Face Wish analysis report with JSON Object. JSON Object Reference appended below:
{	output: {
		wish: Number
	}
}
  • output: Face Wish Report
    • wish: Filtered value (smoothened) in range [0, 1.0]. A value closer to 0 represents a lower wish, a value closer to 1.0 represents a higher wish.
// Start Face Wish
[FaceAI enableFaceWish:true];

- (void)FaceAI:(EnxFaceAI *_Nullable)FaceAI didFaceWishData:(NSString *_Nullable)type value:(NSString *_Nullable)value{

}

Stop Face AI

This is to stop analyzing faces any further. An initialized face analysis must be stopped explicitly to end FaceAI usage during a session; otherwise it ends implicitly at the end of the current video session.

Class: EnxFaceAI

Method: EnxFaceAI.stopFaceAI(callback) – To stop analysis

Parameters:

  • callback: Callback to acknowledge that analysis has stopped

// Stop Face AI Analysis
faceAI.stopFaceAI((evt) => {
	if (evt.result === 0) {	
		// Stopped
	}
});