Rehman Mizna et al. [1] implement a technique known as the Emotion Sensory World of Blue Eyes Technology, which identifies human emotions (sad, happy, excited, or surprised) using image-processing techniques: the eye portion is extracted from the captured image and compared with stored images in a database. The paper reports two key results of the emotional sensory world. First, observation reveals that differences in eye color and intensity correspond to changes in emotion, without relying on information about shape or the actually detected emotion. Second, the technique successfully recognizes four different emotions from the eyes.
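The database-comparison step described above can be sketched as nearest-template matching on eye-region intensities. This is an illustrative assumption, not the paper's actual implementation: the histogram metric, bin count, and toy database below are all made up.

```python
# Sketch: classify an extracted eye region by comparing its intensity
# histogram against stored per-emotion templates (illustrative only).

def intensity_histogram(pixels, bins=8):
    """Normalized intensity histogram for a list of 0-255 pixel values."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def histogram_distance(h1, h2):
    """Sum of absolute differences between two normalized histograms."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def classify_emotion(eye_pixels, database):
    """Return the emotion label of the closest stored eye template."""
    query = intensity_histogram(eye_pixels)
    best_label, best_dist = None, float("inf")
    for label, template_pixels in database.items():
        d = histogram_distance(query, intensity_histogram(template_pixels))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Toy database: one stored eye template per emotion (flat pixel lists).
db = {
    "happy": [200, 210, 220, 215] * 4,
    "sad": [30, 40, 35, 25] * 4,
}
print(classify_emotion([205, 212, 218, 210] * 4, db))  # → happy
```

A real system would use 2D eye images and a more robust similarity measure, but the structure — extract, describe, compare against every stored sample — is the same.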

S.R. Vinotha et al. [2] use a feature-extraction technique to extract the eyes, a support vector machine (SVM) classifier, and a hidden Markov model (HMM) to build a human emotion recognition system. The proposed system analyzes the human eye region from video sequences: the eyes are extracted from the frames of the video stream using the well-known Canny edge operator and classified using a non-linear SVM classifier. Finally, a standard learning tool, the hidden Markov model (HMM), is used to recognize emotions from the human eye expressions.
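The HMM recognition stage above can be illustrated with Viterbi decoding over per-frame eye observations. The states, observations, and probabilities below are made-up toy values, not numbers from the paper, and the SVM stage is assumed to have already produced one discrete eye-expression label per frame.

```python
# Minimal Viterbi-decoding sketch of the HMM recognition stage
# (illustrative parameters, not the paper's trained model).

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for an observation list."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            # Pick the best previous state leading into s.
            prob, path = max(
                (V[-1][prev][0] * trans_p[prev][s] * emit_p[s][o],
                 V[-1][prev][1] + [s])
                for prev in states
            )
            row[s] = (prob, path)
        V.append(row)
    return max(V[-1].values())[1]

states = ("neutral", "happy")
start_p = {"neutral": 0.7, "happy": 0.3}
trans_p = {"neutral": {"neutral": 0.8, "happy": 0.2},
           "happy":   {"neutral": 0.3, "happy": 0.7}}
# Probability of each per-frame eye observation given the emotion state.
emit_p = {"neutral": {"open": 0.7, "narrowed": 0.3},
          "happy":   {"open": 0.2, "narrowed": 0.8}}

print(viterbi(["open", "narrowed", "narrowed"],
              states, start_p, trans_p, emit_p))
# → ['neutral', 'happy', 'happy']
```

The temporal smoothing is the point: a single "narrowed" frame is ambiguous, but a run of them shifts the decoded state toward "happy".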

Mohammad Soleymani et al. [3] present a methodology for instantaneous detection of a user's emotions from facial expressions and electroencephalogram (EEG) signals. A set of videos with different emotional content was shown to a group of people, and their physiological responses and facial expressions were recorded. Five annotators annotated the valence (from negative to positive) in the videos of the users' faces, and continuous annotations of the valence and arousal dimensions were also collected for the stimulus videos. Continuous conditional random fields (CCRF) and long short-term memory recurrent neural networks (LSTM-RNN) were used to detect emotions continuously and automatically. An analysis of the interference of facial muscle activity with the EEG signals shows that most of the emotionally valuable content in the EEG features results from this interference. However, statistical analysis showed that EEG signals carry complementary information in the presence of facial expressions.
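Multiple annotators producing continuous valence traces must be merged into one ground-truth signal before training. A common approach, sketched here as an assumption rather than the paper's exact procedure, is frame-wise averaging of the aligned traces.

```python
# Sketch: aggregate several annotators' continuous valence traces into one
# ground-truth signal by frame-wise averaging (illustrative procedure).

def aggregate_valence(traces):
    """Frame-wise mean of equally long annotator valence traces (-1..1)."""
    n_frames = len(traces[0])
    assert all(len(t) == n_frames for t in traces), "traces must align"
    return [sum(t[i] for t in traces) / len(traces) for i in range(n_frames)]

# Five toy annotators rating three frames from negative to positive valence.
annotators = [
    [-0.2, 0.1, 0.5],
    [-0.4, 0.0, 0.6],
    [-0.3, 0.2, 0.4],
    [-0.1, 0.1, 0.5],
    [-0.5, 0.1, 0.5],
]
print(aggregate_valence(annotators))  # frame-wise means, roughly [-0.3, 0.1, 0.5]
```

Averaging smooths out individual annotator bias; more elaborate schemes additionally correct for per-annotator reaction lag before combining.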

T. Moriyama et al. [4] present a system capable of giving a detailed analysis of eye-region images in terms of the position of the iris, the angle of eyelid opening, and the texture, shape, and complexity of the eyelids. The system uses an eye-region model that parameterizes the motion and fine structure of an eye. The structural factors represent the structural individuality of the eye, including the color and size of the iris; the width, boldness, and complexity of the eyelids; the width of the bulge below the eye; and the width of the illumination reflection on the bulge. The motion factors represent movement of the eye, including the up-down motion and position of the upper and lower eyelids and the 2D position of the iris.
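The split into per-person structural factors and per-frame motion factors can be made concrete with two parameter records. The field names and value ranges below are illustrative assumptions, not the paper's actual parameterization.

```python
# Sketch of the two factor groups in the eye-region model
# (field names and units are assumptions for illustration).
from dataclasses import dataclass

@dataclass
class StructuralFactors:
    """Structural individuality of an eye (fixed per person)."""
    iris_color: str
    iris_size: float          # relative to eye-region width
    eyelid_width: float
    eyelid_boldness: float
    eyelid_complexity: int    # e.g. number of eyelid furrow lines
    bulge_width: float        # bulge below the eye
    reflection_width: float   # illumination reflection on the bulge

@dataclass
class MotionFactors:
    """Frame-by-frame movement of the eye."""
    upper_eyelid_position: float   # 0 = closed, 1 = fully open
    lower_eyelid_position: float
    iris_x: float                  # 2D iris position within the eye region
    iris_y: float

# One eye: structure stays fixed, motion is re-estimated every frame.
structure = StructuralFactors("brown", 0.35, 0.1, 0.8, 2, 0.15, 0.05)
frame_motion = MotionFactors(0.9, 0.1, 0.5, 0.45)
print(structure.iris_color, frame_motion.upper_eyelid_position)
```

Keeping the two groups separate is what lets tracking fit the structural factors once per subject and then search only over the low-dimensional motion factors per frame.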

Renu Nagpal et al. [5] present the world's first publicly available dataset of labeled videos of people naturally viewing online media, recorded over the Internet. The AM-FED contains: 1) more than 200 webcam videos recorded in real-world conditions; 2) more than 150,000 frames labeled for the presence of 10 symmetrical FACS action units, 4 asymmetric (unilateral) FACS action units, 2 head movements, smile, general expressiveness, feature-tracker failures, and gender; 3) locations of 22 automatically detected landmark points; 4) baseline performance of detection algorithms on this dataset and baseline classifier outputs for smile; and 5) self-report responses of familiarity with, liking of, and desire to watch again the stimulus videos. This represents a rich and extensively coded resource for researchers working in the domains of facial expression recognition, affective computing, psychology, and marketing. Because the videos were recorded in real-world conditions, they exhibit non-uniform frame rates and non-uniform lighting. The camera position relative to the viewer varies from video to video, and in some cases the screen of the laptop is the only source of illumination. The videos contain viewers of a range of ages and ethnicities, some with glasses and facial hair. The dataset contains a large number of frames with agreed presence of facial action units and other labels.
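"Agreed presence" of an action unit can be read as a majority vote among human coders per frame. The sketch below shows that reading; the threshold and toy labels are illustrative assumptions, not the AM-FED labeling protocol.

```python
# Sketch: frames where a majority of coders marked an action unit present
# (illustrative agreement rule, not the dataset's exact protocol).

def agreed_frames(labels_per_coder, min_agreement=0.5):
    """Return frame indices where the fraction of coders marking the AU
    present exceeds min_agreement. Each coder gives one 0/1 label per frame."""
    n_coders = len(labels_per_coder)
    n_frames = len(labels_per_coder[0])
    agreed = []
    for f in range(n_frames):
        votes = sum(coder[f] for coder in labels_per_coder)
        if votes / n_coders > min_agreement:
            agreed.append(f)
    return agreed

# Three toy coders labeling AU12 (smile) over five frames.
coders = [
    [0, 1, 1, 0, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 0, 1],
]
print(agreed_frames(coders))  # → [1, 4]
```

Filtering to agreed frames trades label quantity for label reliability, which matters when training detectors on noisy crowd-recorded video.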
