| # | EXP ID | EXP Title | Abstract |
|---|---|---|---|
1 | EXP 01 | Face Recognition using 2D CNN | |
2 | EXP 02 | Face Recognition using 3D CNN | |
3 | EXP 03 | Image-to-Image Translation using Generative Adversarial Networks | |
4 | EXP 04 | Thermal/NIR to Visible Face Synthesis using Generative Adversarial Networks | |
5 | EXP 05 | Interactive Indoor Scene Description to Aid in Navigation for Visually Impaired Individuals using | This project introduces a methodology to help visually impaired persons avoid obstacles in an indoor environment. We use the DepthNet-MiDaS large model to obtain a depth map of the scene and, in parallel, sparse optical flow to predict the paths of objects of interest. This is done in order to recognise objects that might cross the user's path and pose a potential danger; a minimal sketch of this pipeline appears below the table. Index Terms: Object Recognition, Transfer Learning, Convolutional Neural Networks (CNN), Monocular Depth, Optical Flow. |
6 | EXP 06 | Classification of COVID-19 patients using Chest X-ray Images using Image Super Resolution | COVID-19 is the illness caused by a novel coronavirus, now known as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which was declared an outbreak of respiratory infection. The coronavirus pandemic is ongoing, has spread rapidly between people, and has reached roughly 257 million people worldwide; the second wave caused the utmost destruction in many countries. There is a need for a mechanism that is scalable, reliable, and fast, as currently available methods suffer from limited and low-quality samples. We propose an accurate and efficient deep learning model for detection of COVID-19 and, for better results, apply image super-resolution to the dataset. Image super-resolution is the technique of restoring high-resolution images from lower-resolution ones. The analysis is performed on a publicly available dataset. The model classifies COVID-19 patients with an accuracy of 97.36%, which can help radiology specialists reduce the false detection rate. A hedged sketch of the super-resolution and classification steps appears below the table. |
7 | EXP 07 | Optimizing Resource Consumption of Capsule Networks on End Devices using Tucker Decomposition and Truncated SVD | Capsule networks improve over CNNs for specific computer vision tasks, but their resource requirements are still too high for deployment on resource-constrained devices. To make capsule network architectures more viable on such devices, we introduce two optimizations: Tucker decomposition and truncated singular value decomposition (SVD). Tucker decomposition reduces inference time by 50% and the number of network parameters by 20%, a significant improvement over the baseline capsule network implementation. The truncated-SVD compression step is sketched below the table. |
8 | EXP 08 | Text Sentiment Analysis using deep learning | In this experiment, a systematic and well-structured method for text sentiment analysis is presented, based on the BERT (Bidirectional Encoder Representations from Transformers) model. BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. We use a pretrained transformer as our embedding layer and train only the remainder of the model, a multi-layer bidirectional GRU that learns from the representations produced by the transformer. This architecture is sketched below the table. |
9 | EXP 09 | Handwritten word recognition in JPEG compressed domain using deep learning | The ability of an automated system to receive and interpret messages in handwritten form has been a field of interest for the past few decades. In this experiment, a hybrid approach combining aspects of both RNNs and CNNs is proposed to recognise handwritten words directly in the JPEG compressed image domain; a sketch of such a hybrid model follows the table. |
10 | EXP 10 | Incorrect Face Mask Detection using Deep Learning | The coronavirus disease (COVID-19) has spread rapidly around the world, causing a worldwide catastrophe whose impact is felt not just economically but also socially and in terms of human lives. Of the many mechanisms to fight this disease, wearing a face mask was found to be most effective, but its effectiveness is diminished mostly by improper wearing. In this study we develop a face-mask-wearing identification system that classifies whether a face in a given 2D image is wearing a mask correctly, incorrectly, or not at all. To solve this three-class classification problem we use MT-CNN for face detection and a convolutional neural network (CNN) as the classification network. The proposed method can be outlined in three steps: image pre-processing, face detection and cropping, and face-mask condition classification (sketched below the table). The dataset used for this study is the Medical Mask Dataset, a publicly available dataset of 3835 images, each containing multiple faces; it holds 8490 faces in total, with 1741 wearing no mask, 6520 wearing a mask correctly, and 223 wearing a mask incorrectly. The proposed model achieved 91.40% accuracy. The findings of our study indicate that the proposed model can identify face-mask-wearing conditions with high accuracy, and because it detects faces automatically it can be used alongside a video surveillance system, giving it potential applications in COVID-19 pandemic prevention. |
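
For EXP 05, the abstract describes combining the MiDaS large depth model with sparse optical flow. Below is a minimal sketch of such a pipeline, assuming the public intel-isl/MiDaS torch.hub entry point (the DPT_Large variant) and OpenCV's Lucas-Kanade tracker; the thresholds and the webcam source are illustrative assumptions, not the authors' settings.

```python
# Hypothetical sketch for EXP 05: monocular depth (MiDaS) + sparse Lucas-Kanade optical flow.
# The DPT_Large variant, the thresholds, and the webcam source are assumptions for illustration.
import cv2
import numpy as np
import torch

midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

def depth_map(frame_bgr):
    """Predict a relative (inverse) depth map for one BGR frame; larger values mean closer."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = midas(transform(rgb))
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=rgb.shape[:2], mode="bicubic", align_corners=False
        ).squeeze()
    return pred.numpy()

def track_motion(prev_gray, gray):
    """Track corner features between consecutive frames with sparse optical flow."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    good = status.ravel() == 1
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)

cap = cv2.VideoCapture(0)                      # webcam (or a video file path)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    depth = depth_map(frame)
    pts_prev, pts_now = track_motion(prev_gray, gray)
    for (x0, y0), (x1, y1) in zip(pts_prev, pts_now):
        xi = int(np.clip(x1, 0, depth.shape[1] - 1))
        yi = int(np.clip(y1, 0, depth.shape[0] - 1))
        moving = np.hypot(x1 - x0, y1 - y0) > 2.0          # noticeable motion between frames
        close = depth[yi, xi] > np.percentile(depth, 90)   # among the closest 10% of pixels
        if moving and close:
            print("possible nearby moving object at", (xi, yi))
    prev_gray = gray
```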
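
For EXP 06, here is a hedged sketch of the "super-resolve, then classify" idea. The abstract does not name the super-resolution model or the classifier, so the SRCNN-style upscaler and the ResNet-18 backbone below are assumptions used only to make the pipeline concrete.

```python
# Hypothetical sketch for EXP 06: super-resolve chest X-rays, then classify COVID-19 vs. non-COVID.
# The SRCNN-style upscaler and ResNet-18 classifier are assumptions; the abstract does not name them.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class SRCNN(nn.Module):
    """Small super-resolution network: bicubic upscale followed by a residual refinement."""
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2),
        )

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return x + self.body(x)            # refine the upscaled image

def build_classifier(num_classes=2):
    """ResNet-18 adapted to 1-channel X-ray input with a 2-way (COVID / non-COVID) head."""
    net = models.resnet18(weights=None)
    net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

sr = SRCNN()
clf = build_classifier()
xray = torch.rand(4, 1, 112, 112)          # toy batch of low-resolution X-rays
logits = clf(sr(xray))                     # super-resolve, then classify
print(logits.shape)                        # -> torch.Size([4, 2])
```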
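
For EXP 07, the truncated-SVD step can be illustrated on a single dense layer; the capsule-routing details and the Tucker decomposition of convolutional kernels are omitted. This is only a sketch of the low-rank factorisation, not the authors' implementation.

```python
# Hypothetical sketch for EXP 07: compressing a dense layer with truncated SVD.
import torch
import torch.nn as nn

def truncated_svd_linear(layer: nn.Linear, rank: int) -> nn.Sequential:
    """Replace W (out x in) with two smaller layers: (rank x in) followed by (out x rank)."""
    W = layer.weight.data                                  # shape (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r, S_r, Vh_r = U[:, :rank], S[:rank], Vh[:rank, :]

    first = nn.Linear(layer.in_features, rank, bias=False)
    second = nn.Linear(rank, layer.out_features, bias=layer.bias is not None)
    first.weight.data = Vh_r.clone()                       # (rank, in)
    second.weight.data = (U_r * S_r).clone()               # (out, rank), columns scaled by singular values
    if layer.bias is not None:
        second.bias.data = layer.bias.data.clone()
    return nn.Sequential(first, second)

dense = nn.Linear(1024, 1024)
compressed = truncated_svd_linear(dense, rank=128)
x = torch.randn(8, 1024)
print(torch.dist(dense(x), compressed(x)))                 # approximation error of the low-rank layer
params_before = sum(p.numel() for p in dense.parameters())
params_after = sum(p.numel() for p in compressed.parameters())
print(params_before, "->", params_after)                   # roughly 1.05M -> 0.26M parameters
```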
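
For EXP 08, the following is a minimal sketch of the described architecture: a frozen pretrained BERT encoder feeding a trainable multi-layer bidirectional GRU head. The checkpoint name (bert-base-uncased), hidden sizes, and dropout value are assumptions; the abstract only specifies BERT plus a multi-layer bidirectional GRU.

```python
# Hypothetical sketch for EXP 08: frozen BERT embeddings + a trainable bidirectional GRU classifier.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertGRUSentiment(nn.Module):
    def __init__(self, hidden_dim=256, num_layers=2, num_classes=2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        for p in self.bert.parameters():        # freeze BERT: only the GRU head is trained
            p.requires_grad = False
        self.gru = nn.GRU(
            self.bert.config.hidden_size, hidden_dim,
            num_layers=num_layers, bidirectional=True, batch_first=True, dropout=0.25,
        )
        self.fc = nn.Linear(hidden_dim * 2, num_classes)

    def forward(self, input_ids, attention_mask):
        with torch.no_grad():
            embedded = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        _, hidden = self.gru(embedded)           # hidden: (num_layers * 2, batch, hidden_dim)
        final = torch.cat([hidden[-2], hidden[-1]], dim=1)  # last layer's forward + backward states
        return self.fc(final)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertGRUSentiment()
batch = tokenizer(["great movie!", "terrible plot."], padding=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)                              # -> torch.Size([2, 2])
```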
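
For EXP 09, the abstract only states that CNN and RNN aspects are combined in the JPEG compressed domain. The sketch below assumes the input is the per-block DCT-coefficient planes and uses a CRNN-style model trained with CTC; both the input representation and the CTC objective are assumptions, not the authors' design.

```python
# Hypothetical sketch for EXP 09: a CNN + RNN (CRNN-style) word recogniser trained with CTC.
# Feeding 8x8-block DCT-coefficient planes as input is an assumption about the compressed-domain setup.
import torch
import torch.nn as nn

class CompressedDomainCRNN(nn.Module):
    def __init__(self, in_channels=64, num_chars=27):    # 64 = one channel per DCT coefficient (assumption)
        super().__init__()
        self.cnn = nn.Sequential(                         # convolutional feature extractor
            nn.Conv2d(in_channels, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d((2, 1)),                         # shrink height, keep horizontal resolution
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((1, None)),              # collapse height into a sequence of column features
        )
        self.rnn = nn.GRU(256, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, num_chars + 1)           # +1 for the CTC blank symbol

    def forward(self, x):                                 # x: (batch, 64, H/8, W/8) DCT planes
        feats = self.cnn(x).squeeze(2).permute(0, 2, 1)   # -> (batch, width, 256)
        seq, _ = self.rnn(feats)
        return self.fc(seq).log_softmax(-1)               # (batch, width, num_chars + 1)

model = CompressedDomainCRNN()
dct_blocks = torch.randn(2, 64, 4, 32)                    # toy batch of DCT-coefficient planes
log_probs = model(dct_blocks).permute(1, 0, 2)            # CTC expects (time, batch, classes)
targets = torch.randint(1, 27, (2, 5))                    # two 5-character words
ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), log_probs.size(0), dtype=torch.long),
           target_lengths=torch.full((2,), 5, dtype=torch.long))
print(loss.item())
```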
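
For EXP 10, here is a minimal sketch of the three-step pipeline: detect faces with an MTCNN implementation, crop them, and classify each crop as correct mask, incorrect mask, or no mask with a small CNN. The facenet-pytorch MTCNN package, the class names, and the network layout are assumptions; the abstract only specifies MT-CNN detection followed by a CNN classifier.

```python
# Hypothetical sketch for EXP 10: MTCNN face detection + a small CNN classifying mask condition.
# Using facenet-pytorch's MTCNN and this particular 3-class CNN head are assumptions.
import torch
import torch.nn as nn
from PIL import Image
from facenet_pytorch import MTCNN

CLASSES = ["correct_mask", "incorrect_mask", "no_mask"]

class MaskConditionCNN(nn.Module):
    """Small classifier over 160x160 face crops (the default MTCNN crop size)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, len(CLASSES))

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

detector = MTCNN(keep_all=True)            # keep_all=True returns every face in the image
classifier = MaskConditionCNN().eval()

def classify_faces(image_path):
    img = Image.open(image_path).convert("RGB")
    faces = detector(img)                  # cropped faces: (n_faces, 3, 160, 160), or None
    if faces is None:
        return []
    with torch.no_grad():
        preds = classifier(faces).argmax(dim=1)
    return [CLASSES[p] for p in preds]

print(classify_faces("crowd.jpg"))         # e.g. ['correct_mask', 'no_mask', ...]
```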