Team Members

Current

Anirban Samaddar

Visiting Student

Bayesian Inference

Anurag Daram

Research Aid

Neuromorphic Computing

Jaehoon Koo

Postdoc (Joint with Prasanna)

Deep Learning

Ray Sinurat

Ph.D. Candidate

Machine learning for I/O

Sanket Jantre

Visiting Student

Bayesian Inference

Sumegha Premchandar

Givens Fellow

Bayesian Inference

Alumni

Tianchen Zhao

Givens Fellow

Deep Generative Models

Kelvin Kan

NSF-MSGI Fellow

Physics-informed Learning

Neer Bharadwaj

NSF-MSGI Fellow

Deep Generative Models

Pankaj Chauhan

Summer Intern

Deep Learning

Peihong Jiang

NSF-MSGI Fellow

Reinforcement Learning

Projects

A Transformative Co-Design Approach to Materials and Computer Architecture Research (Threadwork)

A co-design approach that encompasses neuromorphic computing, systems architecture, and data-centric applications, with a focus on high energy physics (HEP) and nuclear physics (NP) detector experiments.

Accelerating HEP Science: Inference and Machine Learning at Extreme Scales

This project brings together ASCR and HEP researchers to develop and apply new methods and algorithms in the area of extreme-scale inference and machine learning. The research program melds high-performance computing and techniques for "big data" analysis to enable new avenues of scientific discovery.

Atoms to Manufacturing

Deep transfer learning to automatically segment the precipitate from the matrix in 3D Atom Probe Tomography data.

Dynamic architectures through introspection and neuromodulation (DARPA’s Lifelong Learning Machines Program)

Employing architectures inspired by the insect brain to devise efficient, lifelong learning machines.

Foundations for Correctness Checkability and Performance Predictability of Systems at Scale (ScaleSTUDS)

Multi-dimensional automated scalability tests, program analysis, performance learning and prediction at various levels of the software/hardware stack.

High-Velocity Artificial Intelligence for HEP

Develop a cross-cutting artificial intelligence framework for fast inference and training on heterogeneous computing resources, as well as algorithmic advances in AI explainability and uncertainty quantification.

Improving Computational Science Throughput via Model-Based I/O Optimization (SciDAC SUPER-SDAV)

Machine learning-based probabilistic I/O performance models that take background traffic and system state into account while predicting application performance on HPC systems.

ML Assisted Equilibrium Reconstruction for Tokamak Experiments and Burning Plasmas

Develop a framework for efficient and accurate equilibrium reconstructions by automating and maximizing the information extracted from measurements, and by leveraging physics-informed ML models constructed from experimental and synthetic solution databases to guide the search for the solution vector.

RAPIDS2: SciDAC Institute for Computer Science, Data, and Artificial Intelligence

The objective of RAPIDS2 is to assist the Office of Science (SC) application teams in overcoming computer science, data, and AI challenges in the use of DOE supercomputing resources to achieve scientific breakthroughs.

RAPIDS: A SciDAC Institute for Computer Science and Data

The goal of RAPIDS (a SciDAC Institute for Resource and Application Productivity through Computation, Information, and Data Science) is to assist Office of Science (SC) application teams in overcoming computer science and data challenges in the use of DOE supercomputing resources to achieve science breakthroughs.

Self-Aware Adaptive Workflow and Data Management Services for Future HPC Systems

Develop modular characterization approaches that examine key performance parameters and similarities in application execution.

Recent Publications