I am interested in machine learning and its applications in computer vision. Specifically, I work on representation learning algorithms for the unsupervised discovery of predominant patterns embedded in data, which can subsequently be used for supervised tasks. Learning good data representations is necessary to accurately draw inferences and make predictions from a given set of data samples. Such models should reflect the compact global structure of the data, capture its behaviour, and be robust in the presence of noise. My early work focused on modelling data embedded in a union of independent linear subspaces. For the last few years, I have been working on deep learning algorithms, with a focus on analysing their manifold learning properties to gain insight into what leads to better data representations. I am currently also researching deep learning models that are closer to their biologically plausible counterparts.
There are many existing algorithms for every problem domain. One may be better than another, but many such algorithms seem to be based on very different intuitions and fundamental concepts. My goal is not just to propose yet another algorithm that improves on state-of-the-art performance. Instead, I am interested in understanding the fundamental commonalities among existing algorithms and in improving the state of the art by overcoming their drawbacks at that fundamental level. I believe there may be multiple ways of solving any particular real-world problem, but that they should ultimately share similar underlying fundamentals, and that biologically inspired solutions hold the key to discovering them.