Face Recognition Through Simulation
Abstract
The use of machines in society has increased widely over the last decades, and machines are now deployed across many industries. As their exposure to humans increases, the interaction also has to become smoother and more natural. To achieve this, machines must be given the capability to understand their surrounding environment. Here, the term "machines" encompasses both computers and robots; a distinction between the two is that robots involve interaction abilities to a more advanced extent, since their design entails some degree of autonomy. When machines are able to appreciate their surroundings, some form of machine perception has been developed. Humans use their senses to gain insight into their environment; machine perception therefore aims to mimic the human senses so that machines can interact with their environment. Machines today have several ways to capture the state of their environment through cameras and sensors, and feeding this information to suitable algorithms allows machine perception to be generated. In recent years, deep learning algorithms have proven very successful in this regard. For instance, Jeremy Howard showed in his 2014 TEDx Brussels talk how computers trained with deep learning techniques were able to accomplish remarkable tasks, including learning Chinese, recognizing objects in images, and assisting in medical diagnosis. Affective computing holds that emotion detection is necessary for machines to better serve their purpose; for example, the use of robots in areas such as elderly care, or as porters in hospitals, demands a deep understanding of the environment. Facial expressions convey information about the subject's inner state. If a machine can obtain a sequence of facial images, deep learning techniques can help it become aware of its interlocutor's mood.
In this context, deep learning has the potential to become a key factor in building better interaction between humans and machines, providing machines with a kind of awareness of their human peers and of how to improve their communication with natural intelligence.
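The pipeline the abstract describes, capturing facial images and mapping them to an emotion label with a learned model, can be illustrated with a deliberately minimal sketch. The following is not the paper's method: it substitutes a toy linear softmax classifier (trained by gradient descent on cross-entropy) for a deep network, and the `EMOTIONS` label set, image size, and synthetic data are all illustrative assumptions.

```python
import numpy as np

EMOTIONS = ["neutral", "happy", "sad", "angry"]  # assumed, illustrative label set

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class EmotionClassifier:
    """Toy linear softmax classifier over flattened face images (not a deep net)."""

    def __init__(self, image_size=(8, 8), n_classes=len(EMOTIONS), lr=0.1):
        self.dim = image_size[0] * image_size[1]
        self.W = np.zeros((self.dim, n_classes))
        self.b = np.zeros(n_classes)
        self.lr = lr

    def fit(self, X, y, epochs=200):
        Y = np.eye(self.b.size)[y]            # one-hot targets
        for _ in range(epochs):
            P = softmax(X @ self.W + self.b)  # predicted class probabilities
            grad = P - Y                      # cross-entropy gradient w.r.t. logits
            self.W -= self.lr * X.T @ grad / len(X)
            self.b -= self.lr * grad.mean(axis=0)

    def predict(self, X):
        return [EMOTIONS[i] for i in softmax(X @ self.W + self.b).argmax(axis=1)]

rng = np.random.default_rng(0)
# Synthetic "faces": each emotion class gets a distinct mean pixel pattern
# plus small noise, standing in for real captured facial images.
means = rng.normal(size=(len(EMOTIONS), 64))
y = np.repeat(np.arange(len(EMOTIONS)), 20)
X = means[y] + 0.1 * rng.normal(size=(len(y), 64))

clf = EmotionClassifier()
clf.fit(X, y)
preds = clf.predict(X)
accuracy = np.mean([p == EMOTIONS[t] for p, t in zip(preds, y)])
```

A deep model, for example a convolutional network over raw pixels, would replace the single linear layer here, but the interface (images in, an emotion label out) is the same one the abstract envisions for machine perception of mood.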
How to cite this article:
Thamizhmaran K. Face Recognition Through Simulation. J Adv Res Instru Control Engg 2021; 8(1&2): 12-17