The DeepSLAM project aims to develop a single end-to-end deep neural network that simultaneously performs:
(a) Object Detection using RGB images/videos.
(b) Object Re-identification over multiple frames.
This all-in-one solution improves information integrity and reuse. A local belief about the surrounding area is trained as an occupancy grid, yielding a local implicit map that captures dynamic road conditions. This coarse local implicit map is then combined with accurate global road information to achieve the goals above.
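A local occupancy-grid belief of this kind is commonly maintained with log-odds updates. The sketch below is a minimal illustration of that mechanism only; the grid size, sensor probabilities, and update rule are assumptions for the example, not details taken from the project.

```python
import math

# Minimal log-odds occupancy grid sketch (sensor model probabilities assumed).
L_OCC = math.log(0.7 / 0.3)   # log-odds increment for an "occupied" observation
L_FREE = math.log(0.3 / 0.7)  # log-odds decrement for a "free" observation

def make_grid(width, height):
    # Prior log-odds of 0.0 corresponds to probability 0.5 (unknown).
    return [[0.0] * width for _ in range(height)]

def update(grid, cell, occupied):
    """Fold one observation of `cell` (x, y) into the local belief."""
    x, y = cell
    grid[y][x] += L_OCC if occupied else L_FREE

def probability(grid, cell):
    """Convert a cell's log-odds back to an occupancy probability."""
    x, y = cell
    return 1.0 - 1.0 / (1.0 + math.exp(grid[y][x]))

grid = make_grid(4, 4)
for _ in range(3):               # three "occupied" hits on the same cell
    update(grid, (1, 2), True)
print(round(probability(grid, (1, 2)), 3))  # → 0.927
```

Because updates are additive in log-odds space, repeated observations of the same cell accumulate evidence without re-normalizing the whole grid.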
In practice, processing chains are generally developed for a particular sensor or set of sensors. The goal of this work is to translate streams of data from individual sensors into a shared manifold space for joint understanding and processing. The work includes investigation of computational topology for manifold learning, data summarization, and intrinsic dimensionality estimation.
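One simple intrinsic-dimensionality estimator consistent with this direction is the PCA eigen-spectrum heuristic: count how many principal components are needed to explain most of the variance. The pure-Python sketch below (power iteration with deflation; the 95% explained-variance threshold is an assumed heuristic) is an illustration, not the project's method.

```python
def covariance(data):
    """Sample covariance matrix of a list of equal-length vectors."""
    n, d = len(data), len(data[0])
    mean = [sum(x[j] for x in data) / n for j in range(d)]
    cov = [[0.0] * d for _ in range(d)]
    for x in data:
        c = [xj - mj for xj, mj in zip(x, mean)]
        for i in range(d):
            for j in range(d):
                cov[i][j] += c[i] * c[j] / n
    return cov

def top_eigen(cov, iters=200):
    """Largest eigenvalue/eigenvector of a symmetric matrix (power iteration)."""
    d = len(cov)
    v = [i + 1.0 for i in range(d)]  # asymmetric start to avoid dead directions
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(d)) for i in range(d))
    return lam, v

def intrinsic_dim(data, threshold=0.95):
    """Smallest k whose top-k eigenvalues explain `threshold` of the variance."""
    cov = covariance(data)
    d = len(cov)
    total = sum(cov[i][i] for i in range(d))  # trace == total variance
    explained, k = 0.0, 0
    while explained / total < threshold and k < d:
        lam, v = top_eigen(cov)
        explained += lam
        k += 1
        # Deflate: remove the found component before extracting the next one.
        for i in range(d):
            for j in range(d):
                cov[i][j] -= lam * v[i] * v[j]
    return k
```

For 3-D points lying on a line, `intrinsic_dim` returns 1; for points on a plane, it returns 2, even though the ambient dimension is 3 in both cases.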
Current synthetic aperture radar (SAR) image recognition systems degrade significantly when trained on synthetic images but tested on real measured images. To address this issue, this project aims to develop a quasi-supervised learning approach for SAR image recognition. The key idea is transfer learning with quasi-supervised training procedures.
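The source does not spell out the quasi-supervised procedure; one common transfer-learning reading is self-training, where a model trained on labeled synthetic data pseudo-labels its most confident predictions on unlabeled real data and retrains on both. The toy nearest-centroid sketch below illustrates that reading only; the classifier, the margin-based confidence measure, and the `keep` fraction are all assumptions.

```python
def centroid(points):
    d = len(points[0])
    return [sum(p[j] for p in points) / len(points) for j in range(d)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def fit_centroids(samples, labels):
    classes = sorted(set(labels))
    return {c: centroid([s for s, l in zip(samples, labels) if l == c])
            for c in classes}

def predict(cents, x):
    return min(cents, key=lambda c: dist2(cents[c], x))

def confidence(cents, x):
    # Margin between the two nearest centroids, a crude confidence proxy.
    ds = sorted(dist2(cents[c], x) for c in cents)
    return ds[1] - ds[0]

def quasi_supervised(synthetic, syn_labels, real_unlabeled, rounds=3, keep=0.5):
    """Train on synthetic data, then fold in confident pseudo-labeled real data."""
    samples, labels = list(synthetic), list(syn_labels)
    for _ in range(rounds):
        cents = fit_centroids(samples, labels)
        ranked = sorted(real_unlabeled, key=lambda x: -confidence(cents, x))
        confident = ranked[: int(len(ranked) * keep)]
        samples = list(synthetic) + confident
        labels = list(syn_labels) + [predict(cents, x) for x in confident]
    return fit_centroids(samples, labels)
```

With real data whose clusters are shifted relative to the synthetic ones, the pseudo-labeled points pull the centroids toward the real distribution, which is the effect the transfer step is after.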
Principal Component Analysis (PCA) has been widely used in computer vision and machine learning applications due to its excellent performance in compression, feature extraction, and feature representation. However, PCA's performance degrades severely when outliers exist in a dataset. To address this issue, this project develops a robust PCA algorithm that mitigates the influence of outliers. The key idea is to assign a popularity index to each sample so that outliers contribute little to the estimation of the PCA projection matrix.
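One plausible realization of the popularity index (an assumption for illustration, not the project's published algorithm) is to down-weight each sample by its distance from a robust center, so that outliers barely influence the weighted covariance from which the projection is computed:

```python
def robust_first_pc(data, iters=200):
    """First principal direction with per-sample 'popularity' weights.

    Each sample's weight shrinks with its squared distance from the
    coordinate-wise median, so outliers contribute little to the weighted
    covariance -- one plausible reading of the popularity-index idea.
    """
    n, d = len(data), len(data[0])
    med = [sorted(x[j] for x in data)[n // 2] for j in range(d)]
    w = [1.0 / (1.0 + sum((xj - mj) ** 2 for xj, mj in zip(x, med)))
         for x in data]
    tw = sum(w)
    mean = [sum(wi * x[j] for wi, x in zip(w, data)) / tw for j in range(d)]
    cov = [[sum(wi * (x[i] - mean[i]) * (x[j] - mean[j])
                for wi, x in zip(w, data)) / tw
            for j in range(d)] for i in range(d)]
    # Power iteration for the top eigenvector of the weighted covariance.
    v = [1.0] * d
    for _ in range(iters):
        u = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(ui * ui for ui in u) ** 0.5
        v = [ui / norm for ui in u]
    return v
```

On data lying along the x-axis plus one large off-axis outlier, standard PCA is dragged toward the outlier, while the weighted version still recovers the x-direction because the outlier's popularity weight is tiny.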
Conventional video coding methods optimize each component separately, which can lead to sub-optimal solutions. Motivated by the success of deep learning on computer vision tasks, we propose learning the entire video compression pipeline end to end.
We propose to tackle the denoising problem with deep learning in two stages: outlier removal and denoising. We use a two-stage deep learning pipeline in which the first stage acts as a binary classifier that labels each 3D point as an outlier or a non-outlier. The second stage receives the non-outlier (but still noisy) points from the first stage and learns the underlying manifold to predict each point's residual noise relative to the reference (true) surface.
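On toy scalar data, the two-stage pipeline can be sketched with hand-written rules standing in for the two learned networks; the neighbour count, outlier threshold, and smoothing window below are illustrative assumptions, not the trained models.

```python
def stage1_outlier_filter(points, k=3, threshold=1.0):
    """Stage-1 stand-in: flag a point as an outlier when it sits far from
    the mean of its k nearest neighbours (a learned binary classifier in
    the actual pipeline)."""
    kept = []
    for i, p in enumerate(points):
        others = sorted(abs(p - q) for j, q in enumerate(points) if j != i)
        nearest_mean = sum(p - d if p - d in points else p for d in []) # unused
        neighbours = sorted(((abs(p - q), q) for j, q in enumerate(points) if j != i))[:k]
        mean = sum(q for _, q in neighbours) / k
        if abs(p - mean) <= threshold:
            kept.append(p)
    return kept

def stage2_denoise(points, window=2):
    """Stage-2 stand-in: estimate each point's residual noise against a
    local-average reference surface and subtract it."""
    out = []
    for i, p in enumerate(points):
        lo, hi = max(0, i - window), min(len(points), i + window + 1)
        reference = sum(points[lo:hi]) / (hi - lo)
        residual = p - reference   # what the second network would predict
        out.append(p - residual)
    return out

noisy = [0.1, -0.05, 0.02, 9.0, 0.03, -0.02]   # 9.0 is the outlier
clean = stage2_denoise(stage1_outlier_filter(noisy))
```

The key structural point survives even in this toy: stage 1 discards points no surface could explain, and stage 2 only ever sees inliers, so it can focus on learning small residuals.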
The objectives of this research are: (1) develop a unified and accurate deep-learning-based regression model for predicting the energy usage of different building types; (2) develop a zero-shot learning approach (using an ontology of building types) to accurately predict the energy usage of building types whose data are not available during training; and (3) evaluate the deep learning model against traditional machine learning models. If successful, the proposed research will allow architects and engineers to accurately model, at design time, the energy usage of a variety of building types.
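Objective (2) can be illustrated with a minimal attribute-based zero-shot scheme: describe each building type by ontology attributes, then predict an unseen type from the seen types weighted by attribute similarity. Every attribute, building type, and number below is made up for the sketch and is not from the project's ontology or data.

```python
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# Illustrative ontology: binary attributes per building type, in the order
# (hvac_24h, high_occupancy, refrigeration, daytime_only) -- all assumed.
ATTRIBUTES = {
    "office":    [0, 1, 0, 1],
    "hospital":  [1, 1, 1, 0],
    "warehouse": [0, 0, 1, 0],
    "retail":    [0, 1, 1, 1],
}

# Per-type energy intensity (kWh/m^2/yr) learned on seen types
# (made-up numbers for the sketch).
SEEN = {"office": 150.0, "hospital": 400.0, "warehouse": 60.0}

def zero_shot_energy(unseen_type):
    """Predict an unseen type's energy intensity as a similarity-weighted
    average over seen types, using ontology attributes as the bridge."""
    sims = {t: cosine(ATTRIBUTES[unseen_type], ATTRIBUTES[t]) for t in SEEN}
    total = sum(sims.values())
    return sum(sims[t] * SEEN[t] for t in SEEN) / total
```

The ontology serves exactly the role described in the objective: it gives the model a representation of a building type it has never seen training data for, so a prediction can still be formed from related, seen types.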
We propose a novel “Semantic Deep Learning” method to analyze the electronic health records of real patients. Our previous work has successfully applied a hypergraph-based approach to the clinical text notes from Stanford Hospital’s Clinical Data Warehouse (STRIDE).
The goal of this project is to evaluate contemporary techniques for explaining deep learning models and to apply these explanation approaches to improve model interpretability.
In this project, we will investigate ontology-based explainable deep learning (OBDL) algorithms for textual data to identify the key factors, concepts, and hypotheses that contribute most to the decisions made by deep neural networks.
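A simple, model-agnostic instance of such an explanation is ablation (occlusion): remove each input factor and measure how the model's output changes. For the toy linear text scorer below (the vocabulary and weights are assumed for the example), each token's ablation contribution equals its weight, which makes the technique easy to verify.

```python
# Toy linear "sentiment" scorer over bag-of-words features (weights assumed).
WEIGHTS = {"excellent": 2.0, "good": 1.0, "slow": -1.5, "broken": -2.5}

def score(tokens):
    return sum(WEIGHTS.get(t, 0.0) for t in tokens)

def explain(tokens):
    """Ablation explanation: each token's contribution is the change in the
    model's score when that single occurrence is removed."""
    base = score(tokens)
    contributions = {}
    for i, t in enumerate(tokens):
        ablated = tokens[:i] + tokens[i + 1:]
        contributions.setdefault(t, 0.0)
        contributions[t] += base - score(ablated)
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

print(explain(["service", "excellent", "but", "delivery", "slow"]))
# "excellent" and "slow" rank first; neutral tokens contribute 0.0
```

For deep networks the same idea applies with spans or ontology concepts ablated instead of single tokens, and a forward pass replacing the linear score.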