feel.ai – Overview
- feel.ai is an API for detecting and analyzing human faces in pictures and videos. Face analysis includes the recognition of age group, attentiveness, emotions, and gender.
- The API’s typical use cases include marketing, retail, healthcare, senior care, and online education.
- The API is based on state-of-the-art patent-pending technologies developed by Neuromorphic using deep learning and biologically inspired neural computing architectures.
- The API is available through a REST HTTP interface, which can be used for images and videos.
feel.ai – Facial analysis in pictures
- Users can post pictures to the API and receive classification results for each detected face, covering emotion, attentiveness, gender, and age group.
- Pictures can be in any of the popular formats, including JPEG, PNG, GIF, and TIFF.
- The locations of the detected faces and the analysis results are returned in the response, as illustrated in the sketch below.
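A minimal sketch of posting a picture is shown below, using Python and the requests library. The base URL, endpoint path, authorization header, and response fields (faces, box, emotion, attentiveness, age_group, gender) are illustrative assumptions, not the documented interface; consult the API reference for the actual names.

    import requests

    API_KEY = "YOUR_EVALUATION_KEY"          # evaluation key from Neuromorphic
    BASE_URL = "https://api.feel.ai/v1"      # hypothetical base URL

    # Post a JPEG picture for facial analysis.
    with open("storefront.jpg", "rb") as image_file:
        response = requests.post(
            f"{BASE_URL}/images/analyze",                    # hypothetical endpoint
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": ("storefront.jpg", image_file, "image/jpeg")},
        )
    response.raise_for_status()

    # Each detected face is assumed to carry a bounding box plus the four classifications.
    for face in response.json().get("faces", []):            # hypothetical response field
        print(face["box"], face["emotion"], face["attentiveness"],
              face["age_group"], face["gender"])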
feel.ai – Facial analysis in videos
Users can post videos for facial analysis in two modes:
Batch mode
- Each request posted by a user is queued for execution by the API.
- The user can query the status of the job; once it has completed, the analysis results are returned (see the submit-and-poll sketch after this list).
- The API also provides video analytics, including a frame-by-frame graph of the post-processed aggregate emotion state, attentiveness, age group, and gender across the video.
- The user also has access to media information extracted from the video, which facilitates visualizing the video in common browsers.
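A submit-and-poll sketch of batch mode follows, again in Python with requests. The job endpoints, status field and value, and result fields are assumptions made for illustration only.

    import time
    import requests

    API_KEY = "YOUR_EVALUATION_KEY"          # evaluation key from Neuromorphic
    BASE_URL = "https://api.feel.ai/v1"      # hypothetical base URL
    HEADERS = {"Authorization": f"Bearer {API_KEY}"}

    # 1. Submit the video; the request is queued and a job identifier is returned.
    with open("lecture.mp4", "rb") as video_file:
        job = requests.post(
            f"{BASE_URL}/videos/jobs",                       # hypothetical endpoint
            headers=HEADERS,
            files={"video": ("lecture.mp4", video_file, "video/mp4")},
        ).json()

    # 2. Poll the job status until the analysis completes.
    while True:
        status = requests.get(
            f"{BASE_URL}/videos/jobs/{job['id']}",           # hypothetical endpoint
            headers=HEADERS,
        ).json()
        if status["state"] == "completed":                   # hypothetical field and value
            break
        time.sleep(5)

    # 3. Fetch the frame-by-frame analytics (emotion, attentiveness, age group,
    #    gender) and the extracted media information.
    results = requests.get(
        f"{BASE_URL}/videos/jobs/{job['id']}/results",       # hypothetical endpoint
        headers=HEADERS,
    ).json()
    print(results["frames"][0], results["media_info"])       # hypothetical fields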
Immediate mode
- In this mode, the request is not queued; the API processes it immediately, as sketched below.
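By contrast, an immediate-mode call would look like a single synchronous request. The endpoint name below is an assumption for illustration.

    import requests

    API_KEY = "YOUR_EVALUATION_KEY"          # evaluation key from Neuromorphic
    BASE_URL = "https://api.feel.ai/v1"      # hypothetical base URL

    # The video is analyzed as part of the request itself; no job identifier
    # is issued and no status polling is required.
    with open("clip.mp4", "rb") as video_file:
        response = requests.post(
            f"{BASE_URL}/videos/analyze",                    # hypothetical endpoint
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"video": ("clip.mp4", video_file, "video/mp4")},
        )
    print(response.json())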
Contact us for more information or for an evaluation key.

