Nishad Gothoskar

I am a second-year PhD student at MIT, working in the Probabilistic Computing Project in CSAIL. I am co-advised by Vikash Mansinghka and Josh Tenenbaum. Previously, I was a researcher at Vicarious AI, where I worked with Miguel Lázaro-Gredilla and Dileep George. In Dec. 2017, I received a BS in Computer Science and Math from CMU.

My research aims to build robots that can learn and generalize as rapidly and efficiently as humans. Humans have rich prior knowledge and inductive biases about the structure of the world, and they leverage this understanding when making inferences from data. To build AI systems as flexible as humans, we must understand what these priors and biases are, how our brains represent them, and how we use them. In my research, I study probabilistic generative models and how they can improve the data efficiency, robustness, and generalizability of robotic systems.

Contact me at nishad AT mit DOT edu

[Google Scholar] [LinkedIn]

3DP3: 3D Scene Perception via Probabilistic Programs
Nishad Gothoskar, Marco Cusumano-Towner, Ben Zinberg, Matin Ghavamizadeh, Falk Pollok, Austin Garrett, Dan Gutfreund, Joshua B. Tenenbaum, Vikash Mansinghka
NeurIPS, 2021
[Coming soon!]

We propose a generative probabilistic programming-based architecture for modeling 3D objects and scenes, and use it to perform accurate and robust object pose estimation from RGBD images.

Clone-structured graph representations enable flexible learning and vicarious evaluation of cognitive maps
Dileep George, Rajeev V. Rikhye, Nishad Gothoskar, J. Swaroop Guntupalli, Antoine Dedieu, Miguel Lázaro-Gredilla
Nature Communications, 2021
[BibTeX] [PDF]

Cognitive maps are mental representations of spatial and conceptual relationships in an environment, and are critical for flexible behavior. To form these abstract maps, the hippocampus has to learn to separate or merge aliased observations appropriately in different contexts in a manner that enables generalization and efficient planning. Here we propose a specific higher-order graph structure, clone-structured cognitive graph (CSCG), which forms clones of an observation for different contexts as a representation that addresses these problems.

Query Training: Learning a Worse Model to Infer Better Marginals in Undirected Graphical Models with Hidden Variables
Miguel Lázaro-Gredilla, Wolfgang Lehrach, Nishad Gothoskar, Guangyao Zhou, Antoine Dedieu, Dileep George
AAAI, 2021
[BibTeX] [PDF]

Probabilistic graphical models (PGMs) provide a compact representation of knowledge that can be queried in a flexible way: after learning the parameters of a graphical model once, new probabilistic queries can be answered at test time without retraining. However, when using undirected PGMs with hidden variables, two sources of error typically compound in all but the simplest models: (a) learning error (both computing the partition function and integrating out the hidden variables is intractable); and (b) prediction error (exact inference is also intractable). Here we introduce query training (QT), a mechanism to learn a PGM that is optimized for the approximate inference algorithm that will be paired with it.

Template from here.