|dc.description.abstract||This thesis presents a novel idea: teaching young children with Autism Spectrum Disorder (ASD) to recognize human facial expressions with the help of computer vision and image processing. There are seven universally recognized facial expression categories: anger, disgust, happiness, sadness, fear, surprise, and neutral. Recognizing these expressions and inferring a person's current mood is a difficult task for any child; for a child with ASD, the task is more complex still due to the nature of the disorder. The main goal of this thesis was to develop a deep Convolutional Neural Network (DCNN) for facial expression recognition that can help young children with ASD recognize facial expressions on mobile devices. Previous work has presented various neural network models and classifiers that achieve state-of-the-art accuracy in this area, and separate studies have examined the ability of children with ASD to recognize facial expressions. In this thesis, the DCNN model is trained with additional features so that it can correctly classify facial expressions under different lighting conditions and from different viewpoints. Building on the DCNN model, an iOS app was developed that implements the deep learning model, both as a product of this work and as a medium for clinical trials aimed at enhancing the communication abilities of children with autism. Implementation began with finding datasets containing images of faces showing different expressions from different angles.
Additional datasets were produced from the original dataset by varying image contrast and brightness through image processing, and the performance of the DCNN model was evaluated on these datasets. Once optimal accuracy with good generalizability was achieved, an iOS app was developed to run both the DCNN model and the image processing algorithms. The app opens the device camera, detects a face, classifies the facial expression, and displays the expression as an emoticon on the screen. As a product of this work, the app can be used by speech-language pathologists, teachers, caregivers, and parents as a technological tool when working with children with ASD. The model and application are designed to help children with ASD recognize and identify facial expressions in real time, allowing them to practice social skills during everyday social interaction.||