The DeepSLAM project aims to develop an end-to-end, all-in-one deep neural network capable of performing the following tasks simultaneously, using only road-scene RGB images:
(a) Object Detection in RGB images and videos;
(b) Object Re-identification across multiple frames, even under occlusion;
(c) Prediction of multiple objects’ future trajectories.
This all-in-one solution improves information integrity and reuse. A local belief about the surrounding area is trained with grid cells, a navigation mechanism found in the human brain, to generate a local implicit map that captures dynamic road conditions. This coarse local implicit map is then combined with accurate global road information to serve the three goals above.
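The grid-cell idea can be illustrated with a periodic multi-scale position encoding. The sketch below is a hypothetical, hand-coded stand-in (the scales, the sin/cos form, and the function name are assumptions for illustration; DeepSLAM learns its map representation rather than hard-coding it):

```python
import numpy as np

def grid_cell_encoding(position, scales=(1.0, 2.0, 4.0)):
    """Encode a 2D position with periodic (sin/cos) responses at several
    spatial scales, loosely inspired by grid-cell firing patterns.
    Hypothetical sketch; the learned map in DeepSLAM is not hand-coded."""
    x, y = position
    features = []
    for s in scales:
        for coord in (x, y):
            features.append(np.sin(2 * np.pi * coord / s))
            features.append(np.cos(2 * np.pi * coord / s))
    return np.array(features)

code = grid_cell_encoding((3.2, -1.5))  # 12-dimensional periodic code
```

The periodicity at multiple scales is what lets such codes represent position over a large area with a compact, reusable vocabulary of responses.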
The goal of this project is to design and develop an intelligent task management system for online education and business purposes. The project includes the OneTask online platform, self-adaptive neural question generation for study-quality evaluation and study companionship, a GCN-based DRL model for self-adaptive neural question generation and study-content recommendation, and an entity-related open-domain question answering system.
The goal of this project is to develop a deep-neural-network-based risk management model that helps financial companies predict loan-default likelihood with higher accuracy when a customer applies for a loan.
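The prediction task can be sketched as binary classification of default probability. Below is a minimal logistic-regression sketch on synthetic data (the feature names and labels are invented for illustration, and the project's actual model is a deeper network):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy applicant features: [income, debt_ratio, credit_history] (hypothetical)
X = rng.normal(size=(100, 3))
# Synthetic labels: higher debt ratio -> more likely to default
y = (2.0 * X[:, 1] + rng.normal(scale=0.1, size=100) > 0).astype(float)

# Train a logistic-regression baseline by gradient descent
w = np.zeros(3)
b = 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

default_prob = sigmoid(X @ w + b)  # predicted default likelihood per applicant
```

A deeper network replaces the single linear layer with stacked nonlinear layers, but the output stays a probability in [0, 1] trained against observed default outcomes.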
The goal of this project is to explore potential vulnerabilities in federated learning applications. Federated learning is a form of distributed machine learning over decentralized data: clients train locally and share only model updates, so raw data never needs to be shared.
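The decentralized-data property can be made concrete with the standard FedAvg aggregation step, where the server combines client models by a data-size-weighted mean and never sees the clients' raw data (which is also why vulnerabilities must be probed through the shared updates). A minimal sketch:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate client model weights by a
    data-size-weighted mean; only weights cross the network, not data."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))

# Three clients with different local models and local dataset sizes
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 20, 70]
global_model = fedavg(clients, sizes)  # -> [4.2, 5.2]
```

Because the server observes per-client updates, attack surfaces such as update poisoning or gradient-based data reconstruction operate entirely through this aggregation interface.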
In practice, processing chains are generally developed for a particular sensor or set of sensors. The goal of this work is to translate streams of data from individual sensors into a shared manifold space for joint understanding and processing. This work includes investigation of computational topology for manifold learning, data summarization, and intrinsic dimensionality estimation.
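One ingredient, intrinsic dimensionality estimation, can be illustrated with the simplest linear estimator: count the principal components needed to explain most of the variance. This PCA-based sketch is only an illustration (the project investigates topological methods that also handle nonlinear manifolds):

```python
import numpy as np

def intrinsic_dim_pca(X, var_threshold=0.95):
    """Estimate intrinsic dimensionality as the number of principal
    components needed to explain `var_threshold` of the variance."""
    Xc = X - X.mean(axis=0)
    eigvals = np.linalg.svd(Xc, compute_uv=False) ** 2
    ratios = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratios, var_threshold) + 1)

# Data lying on a 2D plane embedded in a 5D ambient space
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
A = np.array([[1.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 0.0, 0.0]])
X = latent @ A        # 5D observations, intrinsically 2D
dim = intrinsic_dim_pca(X)  # -> 2
```

For genuinely curved manifolds a linear estimator overestimates the dimension, which is one motivation for the topological tools the project studies.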
Deep learning (DL) has in recent years been widely used in computer vision and natural language processing (NLP) applications due to its superior performance. However, while images and natural languages are known to be rich in structures expressed, for example, by grammar rules, DL has so far not been capable of explicitly representing and enforcing such structures. In this project, we propose an approach to bridging this gap by exploiting tensor product representations (TPR), a structured neural-symbolic model developed in cognitive science, aiming to integrate DL with explicit language rules, logical rules, or rules that summarize human knowledge about the subject.
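The core TPR mechanism can be shown in a few lines: a symbolic structure is encoded as a sum of filler ⊗ role outer products, and querying with a role vector recovers the bound filler. A minimal numpy sketch with hand-picked orthonormal roles (in the project both fillers and roles would be learned embeddings):

```python
import numpy as np

def bind(fillers, roles):
    """Tensor product representation: T = sum_i f_i (outer) r_i."""
    return sum(np.outer(f, r) for f, r in zip(fillers, roles))

def unbind(T, role):
    """With orthonormal roles, T @ r_i recovers filler f_i exactly."""
    return T @ role

# Fillers (e.g., word embeddings) bound to orthonormal role vectors
# (e.g., syntactic positions)
fillers = [np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, 1.0])]
roles = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

T = bind(fillers, roles)
recovered = unbind(T, roles[0])  # -> fillers[0]
```

Because binding and unbinding are differentiable linear-algebra operations, this symbolic machinery can sit inside a network trained end to end, which is what makes TPR a natural bridge between DL and explicit rules.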
This project advances the state of the art in image/video compression by adopting deep learning methods for prediction, transform, entropy coding, and post-processing. It develops fresh new coding tools based on deep learning for post-processing and reconstruction enhancement, and investigates new pipelines for end-to-end learned image/video compression. The aim is to achieve significant coding improvements at applicable computational complexity, as well as to deliver insights into deep-learning video compression for machine consumption, e.g., tracking, segmentation, and recognition.
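The transform/quantization/entropy-coding stages being targeted can be illustrated with their classical counterparts: an 8×8 DCT, scalar quantization, and the empirical entropy of the resulting symbols (a lower bound on the bits an entropy coder spends). This is a sketch of the conventional pipeline the project replaces with learned components:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] /= np.sqrt(2)
    return D * np.sqrt(2.0 / n)

def entropy_bits(symbols):
    """Empirical entropy in bits/symbol of the quantized coefficients."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
block = rng.normal(size=(8, 8)).cumsum(axis=1)  # smooth-ish toy signal
D = dct_matrix()
coeffs = D @ block @ D.T            # transform (decorrelation)
q_fine = np.round(coeffs / 0.1)     # fine quantization: high rate
q_coarse = np.round(coeffs / 2.0)   # coarse quantization: low rate
```

Coarser quantization collapses coefficients into fewer symbols and so lowers the entropy (and the quality); learned transforms and entropy models aim to shift this rate-distortion trade-off beyond what fixed transforms achieve.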
Enable efficient 3D sensing and information sharing for autonomous driving and smart cities through 3D point cloud denoising and end-to-end compression using deep learning architectures. We explore deep learning architectures for point cloud processing, and formulate a multi-scale CNN-based 3D point cloud feature extraction technique.
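The multi-scale idea can be sketched without any learning: for each point, pool simple statistics over neighborhoods of increasing radius. The radii and statistics below are assumptions for illustration; the project's technique learns such features with CNN layers instead:

```python
import numpy as np

def multiscale_point_features(points, radii=(0.5, 1.0, 2.0)):
    """Per-point neighborhood statistics at several radii -- a hand-coded
    stand-in for learned multi-scale point cloud features."""
    # Pairwise distances between all points, shape (N, N)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    feats = []
    for r in radii:
        mask = d <= r
        counts = mask.sum(axis=1)                  # neighbors per point
        mean_dist = (d * mask).sum(axis=1) / counts
        feats.append(np.stack([counts, mean_dist], axis=1))
    return np.concatenate(feats, axis=1)           # (N, 2 * len(radii))

pts = np.random.default_rng(0).uniform(size=(50, 3))
F = multiscale_point_features(pts)
```

Combining scales this way gives each point both fine local geometry and coarse context, which is the property the learned multi-scale CNN features also exploit.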
This project aims to design a deep learning based end-to-end solution for dark-image denoising and enhancement using an image decomposition method, achieving significant improvement in output image quality. Further, it aims to improve image super-resolution by exploiting temporal redundancy. The project also develops an end-to-end solution for multi-frame image super-resolution, which involves image registration using optical flow and deep learning for noise removal, achieving significant gains in performance over state-of-the-art methods.
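The registration step of the multi-frame pipeline can be illustrated with classical phase correlation, which recovers a global integer shift between two frames (the project's optical-flow approach generalizes this to dense, sub-pixel motion). A minimal sketch:

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer translation taking `ref` to `moved` via
    phase correlation (normalized FFT cross-power spectrum)."""
    R = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-12
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Wrap indices in the upper half back to negative shifts
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

rng = np.random.default_rng(0)
frame = rng.normal(size=(32, 32)).cumsum(axis=0).cumsum(axis=1)
moved = np.roll(frame, (3, -2), axis=(0, 1))       # circularly shifted copy
dy, dx = estimate_shift(frame, moved)              # -> (3, -2)
aligned = np.roll(moved, (-dy, -dx), axis=(0, 1))  # registered back to frame
```

Once frames are registered, the aligned stack exposes the temporal redundancy that the deep denoising and super-resolution stages then exploit.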
We propose ModelKB, a system that automates end-to-end model management in deep learning. We will develop a ModelKB prototype that can automatically (1) extract and store a model’s metadata, including its architecture, weights, and configuration; (2) visualize, query, and compare experiments; and (3) reproduce experiments.
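Step (1) can be sketched as turning a model description into a queryable record. The toy model format, field names, and JSON schema below are hypothetical (the prototype would extract this metadata from actual DL framework objects):

```python
import json

def extract_metadata(model):
    """Extract ModelKB-style metadata from a toy model description.
    Hypothetical schema for illustration only."""
    return {
        "architecture": [layer["type"] for layer in model["layers"]],
        "num_parameters": sum(layer["params"] for layer in model["layers"]),
        "config": model["config"],
    }

model = {
    "layers": [
        {"type": "conv2d", "params": 1792},
        {"type": "relu", "params": 0},
        {"type": "dense", "params": 16010},
    ],
    "config": {"optimizer": "adam", "lr": 1e-3, "epochs": 20},
}

# Serialized record as it might be stored and later queried or compared
record = json.dumps(extract_metadata(model), sort_keys=True)
```

Storing experiments as structured records like this is what makes the querying, comparison, and reproduction in steps (2) and (3) possible.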