READ WHAT WE'VE DISCOVERED SO FAR

A new method to train an AI to spot keywords in speech and dialog, without the fuss of creating a large dataset.

Abstract

To analyze a speech or a dialog between people, we need to identify the keywords used in the continuous flow of communication. One way is to use heavyweight ASR (automatic speech recognition) systems to transcribe the entire speech. Apart from being heavy, these models are very hard to tune for detecting domain-specific or rare keywords (which are often the most important). We solve this by building a keyword spotting algorithm that detects keywords in flowing speech. Keyword spotting algorithms are what virtual assistants like Alexa and Cortana use to trigger their heavy ASR pipelines (via trigger words like "Alexa!" or "Ok Google"). We adapt the method used to train these algorithms so that they can detect keywords in running speech rather than in a distinct trigger phrase like "Hi Alexa". The other constraint we set was making the algorithm easy and fuss-free to train: the training dataset is just a few utterances of each target keyword by a set of 40 people. In the open demo we present here, our algorithm can detect a large list of keywords (related to Samsung) in sentences. For example, if you say "Is UHD Television available?" or "UHD TV hai kya?" (in Hindi) or the equivalent in any other language, it should catch the "UHD" keyword. Detecting specific words like UHD (and many other words listed in the demo) is hard for normal ASR systems.
This technology can be used to analyze interviews, audio transcripts of discussions, mystery-shopper interactions, sales pitches, and smart IVRS. Our algorithm fine-tunes open models with a combination of metric loss and prototype loss on a synthetic dataset to achieve the desired results.
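The prototype-loss component of such training can be sketched in a few lines: average the few utterance embeddings per keyword into a class prototype, then score a query embedding against each prototype with a softmax over negative distances. A minimal pure-Python illustration, assuming fixed-size embeddings produced by some upstream encoder (the vectors, class names, and `sq_dist` helper are hypothetical, not the paper's):

```python
import math

def prototypes(support):
    """Mean embedding per keyword class, from a few utterances each."""
    return {k: [sum(dim) / len(vecs) for dim in zip(*vecs)]
            for k, vecs in support.items()}

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def prototype_loss(query, label, protos):
    """-log softmax over negative squared distances to each prototype."""
    logits = {k: -sq_dist(query, p) for k, p in protos.items()}
    z = sum(math.exp(v) for v in logits.values())
    return -math.log(math.exp(logits[label]) / z)

# Two toy keyword classes with two support utterances each.
support = {"UHD": [[1.0, 0.0], [0.9, 0.1]],
           "TV":  [[0.0, 1.0], [0.1, 0.9]]}
protos = prototypes(support)
loss_close = prototype_loss([0.95, 0.05], "UHD", protos)  # query near UHD
loss_far = prototype_loss([0.95, 0.05], "TV", protos)     # wrong label
```

Minimizing this loss pulls utterances of a keyword toward their prototype, which is what lets a handful of recordings per keyword suffice as training data.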

Authors

Harshita Seth, Pulkit Kumar, Muktabh Mayank Srivastava

Accepted at SOCO 2019

SOCO 2019 conference

Multidomain Document Layout Understanding using Few Shot Object Detection

Abstract

We address the problem of document layout understanding using a simple algorithm that generalizes across multiple domains while training on just a few examples per domain. We approach the problem as supervised object detection and propose a methodology to overcome the need for large datasets. We use transfer learning by pre-training our object detector on a simple artificial (source) dataset and fine-tuning it on a tiny domain-specific (target) dataset. We show that this methodology works for multiple domains with as few as 10 training documents. We demonstrate the effect of each component of the methodology on the end result and show its superiority over simple object detectors.
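The artificial source dataset for pre-training can be as simple as labeled rectangles standing in for layout elements. A toy generator along those lines, purely illustrative (label names, sizes, and layout rules are assumptions, not the paper's actual source data):

```python
import random

def synthetic_layout(width=100, height=140, n_blocks=4, seed=0):
    """Generate one artificial 'document': vertically stacked labeled
    rectangles that mimic titles, paragraphs, tables, and figures."""
    rng = random.Random(seed)
    labels = ["title", "paragraph", "table", "figure"]
    boxes = []
    y = 0
    for _ in range(n_blocks):
        h = rng.randint(10, 30)  # block height
        boxes.append({"label": rng.choice(labels),
                      "box": (0, y, width, min(y + h, height))})
        y += h + rng.randint(2, 5)  # gap before the next block
    return boxes

doc = synthetic_layout()
```

An object detector pre-trained on thousands of such cheap samples already knows "boxes stacked on a page", so the tiny real target set only has to teach domain-specific appearance.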

Authors

Pranaydeep Singh, Srikrishna Varadarajan, Ankit Narayan Singh, Muktabh Mayank Srivastava

Published

arxiv.org

Example Mining for Incremental Learning in Medical Imaging

Abstract

Incremental learning is a well-known machine learning approach in which the weights of a learned model are dynamically and gradually updated so it generalizes to new, unseen data without forgetting existing knowledge. Incremental learning is a time- and resource-efficient solution for deploying deep learning algorithms in the real world, as the model can automatically and dynamically adapt to new data as annotated data becomes available. The development and deployment of Computer Aided Diagnosis (CAD) tools in the medical domain is a scenario where incremental learning becomes crucial, since collecting and annotating a comprehensive dataset spanning multiple pathologies and imaging machines might take years. However, not much has been explored in this direction so far. In the current work, we propose a robust and efficient method for incremental learning in the medical imaging domain. Our approach uses Hard Example Mining (commonly used as a solution to heavy class imbalance) to automatically select a subset of the dataset for fine-tuning the existing network weights, so that the model adapts to new data while retaining existing knowledge. We develop our approach for incremental learning of our model already under test for detecting dental caries. Further, we apply our approach to a publicly available dataset and demonstrate that it reaches the accuracy of training on the entire dataset at once, while retaining the benefits of the incremental learning scenario.
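The selection step of hard example mining reduces to: score each new example with the current model's loss and keep only the hardest fraction for fine-tuning. A minimal sketch, where the function names and the keep-fraction are illustrative assumptions rather than the paper's exact recipe:

```python
def mine_hard_examples(examples, loss_fn, fraction=0.5):
    """Rank new examples by how badly the current model handles them
    (higher loss = harder) and keep the top fraction for fine-tuning."""
    ranked = sorted(examples, key=loss_fn, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k]

# Toy illustration: pretend the current model's loss per example id
# has already been computed.
losses = {"a": 0.1, "b": 0.9, "c": 0.4, "d": 0.7}
hard = mine_hard_examples(list(losses), losses.get, fraction=0.5)
```

Fine-tuning only on this subset is what makes the update cheap while still steering the weights toward the new data the model currently gets wrong.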

Authors

Pratyush Kumar, Muktabh Mayank Srivastava

Published

IEEE SSCI 2018

Binarizer at SemEval-2018 Task 3: Parsing dependency and deep learning for irony detection

Abstract

In this paper, we describe the system submitted by team Binarizer for SemEval 2018 Task 3 (Irony detection in English tweets), Subtask A. Irony detection is a key task for many natural language processing applications. Our method treats an ironic tweet as consisting of smaller parts carrying different emotions. We break tweets down into separate phrases using a dependency parser. We then embed those phrases using an LSTM-based neural network model pre-trained to predict emoticons for tweets. Finally, we train a fully-connected network for classification.

Authors

Nishant Nikhil, Muktabh Mayank Srivastava

Published

SemEval-2018

Supervised Mover's Distance: A simple model for sentence comparison

Abstract

We propose a simple neural network model that learns the relation between sentences by modeling the task as an Earth Mover's Distance (EMD) calculation. The underlying hypothesis is that a neural module can learn to approximate the flow optimization in the EMD calculation for sentence comparison. Our model is simple to implement, light in terms of parameters, and works across multiple supervised sentence comparison tasks. We show good results for the model on two datasets. Our model combines an LSTM with a relational unit to model sentence comparison.
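For reference, the discrete EMD objective that the neural module is trained to approximate is the standard minimal-cost flow between two weighted point sets (the textbook formulation, not reproduced from the paper), where for sentence comparison $f_{ij}$ is the flow from word $i$ of one sentence to word $j$ of the other, $d_{ij}$ is the ground distance between their embeddings, and $w_{p_i}, w_{q_j}$ are word weights:

```latex
\mathrm{EMD}(P, Q) \;=\; \min_{f_{ij} \ge 0}
\frac{\sum_{i}\sum_{j} f_{ij}\, d_{ij}}{\sum_{i}\sum_{j} f_{ij}}
\quad \text{s.t.} \quad
\sum_{j} f_{ij} \le w_{p_i}, \qquad
\sum_{i} f_{ij} \le w_{q_j}, \qquad
\sum_{i}\sum_{j} f_{ij} = \min\!\Big(\sum_i w_{p_i}, \sum_j w_{q_j}\Big)
```

Solving this exactly is a linear program per sentence pair; learning a module that approximates the flow is what keeps the model light at inference time.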

Authors

Muktabh Mayank

Accepted at AINL 2018

ainlconf.ru

Weakly Supervised Object Localization on grocery shelves using simple FCN and Synthetic Dataset

Abstract

We propose a weakly supervised method using two algorithms to predict object bounding boxes given only an image classification dataset. The first algorithm is a simple Fully Convolutional Network (FCN) trained to classify object instances. We use the property that an FCN returns a mask for images larger than its training images to obtain a primary segmentation mask at test time by passing it an image pyramid. We refine the FCN output mask into final bounding boxes with a Convolutional Encoder-Decoder (ConvAE), the second algorithm, which is trained to localize objects on an artificially generated dataset of output segmentation masks. We demonstrate the effectiveness of this method in localizing objects on grocery shelves, where annotating data for object detection is hard due to the variety of objects. The method can be extended to any problem domain where collecting images of objects is easy but annotating their coordinates is hard.
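The image-pyramid step can be sketched as generating successively downscaled input sizes to feed the FCN; since an FCN has no fixed input size, each scale yields its own response mask. A minimal sketch with assumed scale factor and stopping size:

```python
def image_pyramid(size, scale=0.5, min_size=32):
    """Successively downscaled (width, height) pairs for one input image.
    Passing each scale through an FCN yields a mask per scale; objects of
    different sizes respond most strongly at different pyramid levels."""
    sizes = []
    w, h = size
    while w >= min_size and h >= min_size:
        sizes.append((w, h))
        w, h = int(w * scale), int(h * scale)
    return sizes

pyramid = image_pyramid((256, 256))
```

The per-scale masks are then merged into the primary segmentation mask that the ConvAE stage cleans up into boxes.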

Authors

Srikrishna Varadarajan, Muktabh Mayank Srivastava

Published

ICVGIP 2018

Towards Automated Tuberculosis detection using Deep Learning

Abstract

India has the world's largest tuberculosis (TB) epidemic. TB leads to 480,000 deaths every year. Between 2006 and 2014, the Indian economy lost US$340 billion to TB. Combined with the emergence of drug-resistant bacteria in India, this makes the problem worse. The government of India has therefore come up with a new strategy requiring a high-sensitivity, microscopy-based TB diagnosis mechanism. We propose a new deep neural network based drug-sensitive TB detection methodology with recall of 83.78% and precision of 67.55% for bacillus detection. The method takes a microscopy image at the proper zoom level as input and returns the locations of suspected TB germs as output. Its high accuracy gives it the potential to evolve into a high-sensitivity system for diagnosing TB when trained at scale.

Authors

Sonaal Kant, Muktabh Mayank Srivastava

Published

IEEE SSCI 2018

Train Once, Test Anywhere: Zero-Shot Learning for Text Classification

Abstract

Zero-shot learners are models capable of predicting unseen classes. In this work, we propose a zero-shot learning approach for text categorization. Our method involves training a model on a large corpus of sentences to learn the relationship between a sentence and the embedding of its tags. Learning this relationship lets the model generalize to unseen sentences, tags, and even new datasets, provided they can be put into the same embedding space. The model learns to predict whether a given sentence is related to a tag or not, unlike classifiers that learn to assign a sentence to one of a fixed set of classes. We propose three different neural networks for the task and report their accuracy on the test set of the dataset used for training them as well as on two other standard datasets for which no retraining was done. We show that our models generalize well across new, unseen classes in both cases. Although the models do not reach the accuracy of state-of-the-art supervised models, they are evidently a step toward general intelligence in natural language processing.

Authors

Pushpankar Kumar Pushp, Muktabh Mayank Srivastava

Published

arxiv.org

Visual aesthetic analysis using deep neural network: model and techniques to increase accuracy without transfer learning

Abstract

We train a deep Convolutional Neural Network (CNN) from scratch for visual aesthetic analysis of images and discuss the techniques we adopt to improve accuracy. We avoid the prevalent transfer learning approach of using pretrained weights and instead train a model from scratch, achieving 78.7% accuracy on the AVA2 dataset, close to the best available models (85.6%). We further show that accuracy rises to 81.48% when the training set is increased by an incremental 10 percent of the entire AVA dataset, showing that our algorithm improves with more data.

Authors

Muktabh Mayank Srivastava, Sonaal Kant

Accepted at IEEE I2CT

http://ieeepune.i2ct.in/

Boosted Cascaded Convnets for Multilabel Classification of Thoracic Diseases in Chest Radiographs

Abstract

Chest X-ray is one of the most accessible medical imaging techniques for diagnosing multiple diseases. With the availability of ChestX-ray14, a massive dataset of chest X-ray images annotated for 14 thoracic diseases, it is possible to train Deep Convolutional Neural Networks (DCNN) to build Computer Aided Diagnosis (CAD) systems. In this work, we experiment with a set of deep learning models and present a cascaded deep neural network that diagnoses all 14 pathologies better than the baseline and is competitive with other published methods. Our work provides quantitative answers to the following research questions for the dataset: 1) Which loss functions to use for training a DCNN from scratch on ChestX-ray14, which exhibits high class imbalance and label co-occurrence? 2) How to use cascading to model label dependency and improve the accuracy of the deep learning model?

Authors

Pulkit Kumar, Monika Grewal, Muktabh Mayank Srivastava

Published

ICIAR 2018

Detection of Tooth caries in Bitewing Radiographs using Deep Learning

Abstract

We develop a Computer Aided Diagnosis (CAD) system that enhances the performance of dentists in detecting a wide range of dental caries. The CAD system achieves this by acting as a second opinion for dentists, with far higher sensitivity on the task of detecting cavities than the dentists themselves. We build an annotated dataset of more than 3000 bitewing radiographs and use it to develop a system for automated diagnosis of dental caries. Our system consists of a deep fully convolutional neural network (FCNN) with more than 100 layers, trained to mark caries on bitewing radiographs. We compared the performance of our system with three certified dentists on marking dental caries and exceed the dentists' average performance in both recall (sensitivity) and F1-score (agreement with ground truth) by a very large margin. A working example of our system is shown in Figure 1.

Authors

Muktabh Mayank Srivastava, Pratyush Kumar, Lalit Pradhan, Srikrishna Varadarajan

Published

NIPS 2017 Workshop

Anatomical labeling of brain CT scan anomalies using multi-context nearest neighbor relation networks

Abstract

This work develops a deep learning methodology for automated anatomical labeling of a given region of interest (ROI) in brain computed tomography (CT) scans. We combine both local and global context to obtain a representation of the ROI, then use Relation Networks (RNs) to predict the anatomy corresponding to the ROI based on its relationship score for each class. Further, we propose a novel training strategy for RNs employing a nearest neighbors approach: we train RNs to learn the relationship of the target ROI with the joint representation of its nearest neighbors in each class, instead of all data points in each class. The proposed strategy leads to better training of RNs and increased performance compared to a baseline RN.

Authors

Srikrishna Varadarajan, Muktabh Mayank Srivastava, Monika Grewal, Pulkit Kumar

Published

ISBI 2018

RADNET: Radiologist Level Accuracy using Deep Learning for HEMORRHAGE detection in CT Scans

Abstract

We describe a deep learning approach for automated brain hemorrhage detection from computed tomography (CT) scans. Our model emulates the procedure radiologists follow to analyze a 3D CT scan in the real world. Like radiologists, the model sifts through 2D cross-sectional slices while paying close attention to potential hemorrhagic regions. Further, the model uses 3D context from neighboring slices to improve the prediction at each slice, then aggregates the slice-level predictions into a diagnosis at the CT level. We call our approach Recurrent Attention DenseNet (RADnet), as it employs the original DenseNet architecture with added attention components for slice-level predictions and a recurrent neural network layer for incorporating 3D context. RADnet's real-world performance has been benchmarked against independent analysis by three senior radiologists on 77 brain CTs. RADnet demonstrates 81.82% hemorrhage prediction accuracy at the CT level, comparable to the radiologists, and, remarkably, achieves higher recall than two of the three radiologists.
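The slice-to-CT aggregation idea can be sketched as follows, assuming per-slice hemorrhage probabilities are already available. The neighbor-blending weights and the any-slice decision rule are hypothetical simplifications standing in for the paper's recurrent layer and aggregation, not its actual mechanism:

```python
def smooth_with_neighbors(probs, weight=0.25):
    """Blend each slice's probability with its immediate neighbors,
    a crude stand-in for using 3D context from adjacent slices."""
    out = []
    for i, p in enumerate(probs):
        left = probs[i - 1] if i > 0 else p
        right = probs[i + 1] if i < len(probs) - 1 else p
        out.append((1 - 2 * weight) * p + weight * (left + right))
    return out

def ct_level_diagnosis(slice_probs, threshold=0.5):
    """Aggregate slice-level predictions: flag the scan if any
    context-smoothed slice crosses the decision threshold."""
    return max(smooth_with_neighbors(slice_probs)) >= threshold

flagged = ct_level_diagnosis([0.1, 0.2, 0.95, 0.2])  # one strong slice
clean = ct_level_diagnosis([0.1, 0.1, 0.1])          # nothing suspicious
```

Smoothing suppresses isolated single-slice noise while a hemorrhage spanning adjacent slices reinforces itself, mirroring how a radiologist scrolls through neighboring slices before deciding.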

Authors

Monika Grewal, Muktabh Mayank Srivastava, Pulkit Kumar, Srikrishna Varadarajan

Published

ISBI 2018

Testing the limits of unsupervised learning for semantic similarity

Abstract

Semantic similarity between two sentences can be defined as a measure of how related or unrelated the two sentences are. In terms of distributed representations, the task amounts to generating sentence embeddings (dense vectors) that take both the context and the meaning of a sentence into account. Such embeddings can be produced by multiple methods; in this paper we evaluate LSTM autoencoders for generating them. Unsupervised algorithms (autoencoders, to be specific) merely try to recreate their inputs, but they can be forced to learn word order (and, to some extent, inherent meaning) through properly designed bottlenecks. We evaluate how well algorithms trained only on plain English sentences can learn to judge semantic similarity, without being given any explicit sense of what a sentence means.

Authors

Richa Sharma, Muktabh Mayank Srivastava

Published

arxiv.org

Content Based Document Recommender using Deep Learning

Abstract

With recent advancements in information technology there has been a huge surge in the amount of data available, but information retrieval technology has not kept pace with this rate of information generation, so users spend too much time retrieving relevant information. Although systems exist for helping users search a database, and for filtering and recommending relevant information, recommendation systems that use the content of documents still have a long way to mature. Here we present a deep learning based supervised approach to recommend similar documents based on content similarity. We combine the C-DSSM model with Word2Vec distributed representations of words in a novel model that classifies a document pair as relevant/irrelevant by assigning it a score. With our model, document retrieval takes O(1) time and memory complexity is O(n), where n is the number of documents.
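The O(1)-retrieval, O(n)-memory claim suggests a precomputed index: score document pairs once offline, keep each document's best match, and answer queries with a single lookup. A toy sketch of that design, where the word-overlap scorer stands in for the C-DSSM similarity model (everything here is illustrative, not the paper's pipeline):

```python
def build_index(docs, score):
    """Offline: store each document's highest-scoring partner.
    The scoring work is paid once here; memory is one entry per
    document, i.e. O(n)."""
    index = {}
    for d in docs:
        others = [o for o in docs if o != d]
        index[d] = max(others, key=lambda o: score(d, o))
    return index

def recommend(index, doc):
    """Online: a single dict lookup per query, O(1)."""
    return index[doc]

# Toy corpus with word overlap as a stand-in similarity score.
docs = ["deep learning for text", "text mining basics",
        "cooking pasta at home"]
overlap = lambda a, b: len(set(a.split()) & set(b.split()))
idx = build_index(docs, overlap)
```

The trade-off is the usual one: query time is constant because all pairwise scoring has been moved into the offline build step.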

Authors

Nishant Nikhil, Muktabh Mayank Srivastava

Published

ICICI 2017

Copyright © ParallelDots, Inc. 2018. All Rights Reserved.