• Neural Networks,  Research

    Hardening Deep Neural Networks via Adversarial Model Cascades

    Deep neural networks (DNNs) are vulnerable to malicious inputs crafted by an adversary to produce erroneous outputs. Prior work on securing neural networks against adversarial examples achieves high empirical robustness on simple datasets such as MNIST. However, these techniques prove inadequate when empirically tested on complex datasets such as CIFAR10 and SVHN. Further, existing techniques are designed to target specific attacks and fail to generalize across attacks. We propose Adversarial Model Cascades (AMC) as a way to tackle the above inadequacies. Our approach trains a cascade of models sequentially, where each model is optimized to be robust towards a mixture of multiple attacks. Ultimately, it yields a single model which…
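    The cascade idea in the abstract can be illustrated with a minimal sketch: each stage trains a fresh model on clean data augmented with adversarial examples crafted against the previous stage. This is an illustrative toy (NumPy logistic regression with an FGSM-style perturbation), not the authors' implementation; the names `train_stage` and `fgsm` and all hyperparameters are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_stage(X, y, lr=0.5, epochs=200):
        """Train one cascade stage: logistic regression via gradient descent."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            p = sigmoid(X @ w + b)
            w -= lr * (X.T @ (p - y)) / len(y)
            b -= lr * np.mean(p - y)
        return w, b

    def fgsm(X, y, w, b, eps=0.3):
        """FGSM-style attack: perturb inputs along the sign of the loss gradient."""
        p = sigmoid(X @ w + b)
        grad_X = np.outer(p - y, w)  # d(logistic loss)/dX, per sample
        return X + eps * np.sign(grad_X)

    # Toy linearly separable data.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    # Cascade: each new stage sees clean data plus adversarial examples
    # generated against the previous stage's model.
    w, b = train_stage(X, y)
    for _ in range(3):
        X_adv = fgsm(X, y, w, b)
        w, b = train_stage(np.vstack([X, X_adv]), np.concatenate([y, y]))

    acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
    ```

    A full AMC would use deep networks and a mixture of several attacks per stage rather than a single FGSM attack, but the training loop has the same shape.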

  • Experiences,  Students

    Sojourn of an introvert at PreCog

    “The Whole is Greater than the Sum of its Parts” This was just another saying for me until the day I joined PreCog. It all began when my friends convinced me to take part in OSM-Palooza, a hackathon organized by PreCog in Spring 2016. The task was to perform sentiment analysis on Twitter code-mixed data. The experience was fun: learning the basics of machine learning, text analysis, APIs, web scraping, automation, and what not. Finally, after working for several hours, our team made a submission that ended up winning the first prize! While munching on pizza slices bought with the prize money, I started thinking about this experience and how much I loved it. After…