Sandeep Madireddy is a Computer Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory. His research spans probabilistic machine learning, bio-inspired and energy-efficient learning, high-performance computing, and generative AI, with an emphasis on safety and robustness. He integrates algorithmic research in these areas with applied research aimed at advancing scientific discovery in critical areas such as fusion energy sciences, cosmology and high-energy physics, weather and climate, and materials science.
He was previously a postdoc and assistant computer scientist in the Mathematics and Computer Science Division, advised by Prasanna Balaprakash and Stefan Wild. Before joining Argonne, he obtained his Ph.D. in mechanical and materials engineering (focusing on probabilistic machine learning) from the University of Cincinnati, as part of the UC Simulation Center (a Procter & Gamble collaboration). Before that, he obtained his master's degree from Utah State University and his bachelor's degree from the Birla Institute of Technology and Science (BITS-Pilani) in India.
Sandeep serves as a co-investigator (and AI lead) on the following DOE- and NSF-funded projects:
Sandeep also provides professional service to various machine learning, high-performance computing, and domain science conferences and journals.
A co-design approach that encompasses neuromorphic computing, systems architecture, and data-centric applications, with a focus on high-energy physics (HEP) and nuclear physics (NP) detector experiments.
This project brings together ASCR and HEP researchers to develop and apply new methods and algorithms in the area of extreme-scale inference and machine learning. The research program melds high-performance computing and techniques for “big data” analysis to enable new avenues of scientific discovery.
Deep transfer learning to automatically segment precipitates from the matrix in 3D atom probe tomography data.
The project's overarching objective is to develop the simulation capability and to perform extended MHD and (drift-gyro) kinetic simulations of non-ELMing (and some ELMing) regime operating points to close gaps in the understanding, prediction, and optimization of edge stability for an FPP. I am leading a team developing ML techniques for reduced-order modeling, data reduction, and feature extraction from the existing non-ELM database (based on interpolation of data), and for extrapolation to new parameter regimes (such as coil currents for negative-triangularity shaping) for ELM-free optimization.
Employing architectures inspired by the insect brain to devise efficient, lifelong learning machines.
The overarching objective of this SciDAC-5 project is to create consistent predictions of the dark and visible Universe across redshifts, length scales and wavebands based on state-of-the-art cosmological simulations. The simulation suite will encompass large-volume, high-resolution gravity-only simulations and hydrodynamical simulations equipped with a comprehensive set of subgrid models covering both small and large volumes. The simulations will be coupled to a powerful analysis framework and associated tools to maximize the analysis flexibility and science return.
Multi-dimensional automated scalability tests, program analysis, performance learning and prediction at various levels of the software/hardware stack.
Develop a cross-cutting artificial intelligence framework for fast inference and training on heterogeneous computing resources, as well as algorithmic advances in AI explainability and uncertainty quantification.
Machine learning-based probabilistic I/O performance models that take background traffic and system state into account when predicting application performance on HPC systems.
Develop a framework for efficient and accurate equilibrium reconstructions by automating and maximizing the information extracted from measurements and by leveraging physics-informed ML models constructed from experimental and synthetic solution databases to guide the search for the solution vector.
The project's goal is to develop a probabilistic ML framework, PRISM, to improve manufacturing efficiency and demonstrate the technologies on wing spars.
The objective of RAPIDS2 is to assist the Office of Science (SC) application teams in overcoming computer science, data, and AI challenges in the use of DOE supercomputing resources to achieve scientific breakthroughs.
The goal of the RAPIDS institute (a SciDAC Institute for Resource and Application Productivity through Computation, Information, and Data Science) is to assist Office of Science (SC) application teams in overcoming computer science and data challenges in the use of DOE supercomputing resources to achieve scientific breakthroughs.
Develop modular characterization approaches that examine key performance parameters and similarities in application execution.