Mihir Chauhan

PhD Candidate

Advised by Sargur Srihari
Computer Science and Engineering Department
State University of New York at Buffalo


Office: 338Z, AI Lab, Barbara And Jack Davis Hall
            Computer Science and Engineering Department
            University at Buffalo
            North Campus, 12, Capen Hall,
            Buffalo, New York 14260

Ph: (716) 275-6231

Email: mihirhem@buffalo.edu

Bio: I received my Bachelor's in Electronics and Communication Engineering from the Veermata Jijabai Technological Institute (VJTI) in 2016, and my M.S. in Computer Science and Engineering from the State University of New York at Buffalo. I have been a doctoral student at the State University of New York at Buffalo since 2018 and expect to graduate in February 2022.

Detailed CV (pdf) - Updated: 11/22/20

Research Abstract: Most machine learning tasks are formulated as either supervised or unsupervised learning problems. Given input data x and human-annotated labels y, the supervised learning approach estimates a conditional probability distribution p(y|x) using labeled examples from the joint distribution p(x,y). In contrast, the unsupervised learning approach determines the distribution p(x) of unlabeled examples x. Semi-supervised learning represents a middle ground between the two, where labeled examples from p(x,y) and unlabeled examples from p(x) are used together to determine p(y|x). Self-supervised learning determines a feature representation h of the input x using only unlabeled examples from p(x), by exploiting specific data patterns h = f(x). The result of self-supervised learning can be used for many downstream tasks: regression and classification, where we determine p(y|x), as well as comparison, where we determine p(y|x1,x2), i.e., whether the two inputs (x1,x2) originate from the same class distribution or from different ones. An advantage of self-supervised learning is that it requires no labeled samples, yet it can support the same tasks as supervised learning.
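To make the supervised setting above concrete, here is a minimal, purely illustrative sketch (not part of the research itself) of estimating p(y|x) with a tiny logistic-regression model trained on labeled pairs (x, y) drawn from a synthetic joint distribution:

```python
import numpy as np

# Illustrative only: estimate p(y=1|x) with logistic regression trained
# by gradient descent on labeled examples (x, y) from a synthetic p(x, y).
rng = np.random.default_rng(0)

# Two 1-D Gaussian classes: labeled samples from the joint distribution.
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])

w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # model's estimate of p(y=1|x)
    w -= 0.1 * np.mean((p - y) * x)         # gradient of the cross-entropy loss
    b -= 0.1 * np.mean(p - y)

print(1.0 / (1.0 + np.exp(-(w * 2 + b))))   # p(y=1 | x=2), close to 1
```

The self-supervised setting replaces the labels y with a pretext signal derived from x itself, so that h = f(x) is learned without any annotation.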
   Self-supervised learning models are mainly categorized as generative or contrastive. Generative models maximize the likelihood of the unlabeled data p(x) by reconstructing the input x from h, either in parts or in its entirety. Generative models have the disadvantage of focusing on low-level features that are inappropriate for the downstream tasks, and they are highly sensitive to outliers. Contrastive models address these problems with a discriminative approach that learns representations by exploiting rich similarities between parts of the input data. The contrastive objective function is interpretable at a human level, but it suffers from an early-degeneration problem: the objective over-fits early in training, losing the ability to generalize well. To date, self-supervised models for computer vision have largely been limited to the classification task p(y|x).
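As an illustration of the contrastive family described above (an InfoNCE-style loss is assumed here; it is not necessarily the objective used in this research), the loss rewards high similarity between embeddings of two augmented views of the same input and low similarity across different inputs:

```python
import numpy as np

# Illustrative sketch of an InfoNCE-style contrastive loss over
# L2-normalized embeddings of two augmented views of the same batch.
def info_nce(z1, z2, tau=0.1):
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                          # pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # positives on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
view = z + 0.01 * rng.normal(size=(8, 16))            # slightly perturbed views
aligned = info_nce(z, view)                           # matched positive pairs
mismatched = info_nce(z, np.roll(view, 1, axis=0))    # deliberately misaligned
print(aligned < mismatched)  # True: loss is lower when positives line up
```

The early-degeneration issue arises when such an objective is minimized too aggressively, collapsing the embedding space before useful structure is learned.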
   My research aims to combine generative and contrastive self-supervised methods to learn representations h for classification and comparison tasks. The combined model is an adversarial model with a generator network and a discriminator network. The generator network uses a Variational AutoEncoder (VAE), which helps generate disentangled representations of the input data, while the discriminator network aims to maximize the similarity between a randomly transformed view xt of the data x and its generated adversarial example xadv.
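The VAE building blocks that the generator relies on can be sketched as follows (a minimal illustration with assumed dimensions, not the proposed architecture): a reparameterized latent sample, and the closed-form KL term that pushes the approximate posterior q(h|x) toward the N(0, I) prior, which encourages disentangled latent factors:

```python
import numpy as np

# Illustrative VAE pieces: reparameterized sampling and the KL regularizer.
rng = np.random.default_rng(0)

mu = rng.normal(size=4)        # encoder output: latent mean of q(h|x)
log_var = rng.normal(size=4)   # encoder output: latent log-variance of q(h|x)

# Reparameterization trick: h = mu + sigma * eps keeps sampling differentiable.
eps = rng.normal(size=4)
h = mu + np.exp(0.5 * log_var) * eps

# KL( q(h|x) || N(0, I) ) in closed form for diagonal Gaussians.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
print(kl >= 0)  # True: KL divergence is non-negative
```

In the combined model, this generative term would be trained jointly with the adversarial, contrastive-style discriminator objective described above.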
   The representations h generated by the adversarial self-supervised framework are given as input to a semi-supervised learning network that is trained on the downstream tasks. We propose to use a fine-tuned ResNet-50 architecture to evaluate the downstream task of image classification, and a deep Siamese network for the task of image comparison. The proposed model will be trained on multiple image classification and comparison benchmark datasets. Furthermore, we also apply the proposed model to the domain-specific tasks of handwriting identification and verification. Finally, we will evaluate and compare the performance of our self-supervised learning approach against other self-supervised and supervised learning approaches on the classification and comparison tasks, using accuracy, F-score, precision, and recall metrics.
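At inference time, a Siamese comparison of the kind described above typically reduces p(y|x1,x2) to a distance between the two embeddings and a decision threshold. The sketch below is purely illustrative (the embeddings are random stand-ins and the 0.5 threshold is an assumption, not a value from this research):

```python
import numpy as np

# Illustrative Siamese-style verification: decide whether two embeddings
# h1, h2 come from the same source via cosine distance and a threshold.
def same_source(h1, h2, threshold=0.5):
    h1 = h1 / np.linalg.norm(h1)
    h2 = h2 / np.linalg.norm(h2)
    cosine_distance = 1.0 - float(h1 @ h2)
    return cosine_distance < threshold

rng = np.random.default_rng(0)
h = rng.normal(size=32)
print(same_source(h, h + 0.01 * rng.normal(size=32)))  # near-duplicate embeddings
print(same_source(h, -h))                              # opposite embeddings
```

In a trained system, the threshold would be chosen on a validation set, and the embeddings would come from the shared Siamese encoder rather than random vectors.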