Sandeep Madireddy is a Computer Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory. His research interests span the broader areas of theoretical and applied machine learning, probabilistic modeling, and high-performance computing, with applications across science and engineering. His current research aims to develop deep learning algorithms and architectures tailored for scientific machine learning, with a particular focus on improving training efficiency, model robustness, uncertainty quantification, and feature representation learning. He has applied these approaches to diverse problems in domains ranging from the physical sciences (materials science, high-energy physics, climate science) to computer systems modeling and neuromorphic computing.
Before joining Argonne, he obtained his Ph.D. in mechanical and materials engineering from the University of Cincinnati, as part of the UC Simulation Center (a UC Engineering and Procter & Gamble collaboration). Before that, he obtained his master's degree from Utah State University and his bachelor's degree from the Birla Institute of Technology and Science (BITS-Pilani) in India.
A co-design approach that encompasses neuromorphic computing, systems architecture, and data-centric applications, with a focus on high-energy physics (HEP) and nuclear physics (NP) detector experiments.
This project brings together ASCR and HEP researchers to develop and apply new methods and algorithms in the area of extreme-scale inference and machine learning. The research program melds high-performance computing and techniques for “big data” analysis to enable new avenues of scientific discovery.
Deep transfer learning to automatically segment precipitates from the matrix in 3D atom probe tomography data.
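A minimal sketch of the transfer-learning setup in PyTorch (the architecture, data shapes, and checkpoint path are hypothetical placeholders, not the project's actual model): a 3D encoder whose weights would come from pretraining is frozen, and only a lightweight voxel-wise head is trained to separate precipitate from matrix.

```python
import torch
import torch.nn as nn

# 3D encoder; in the transfer setting its weights would be loaded from a
# model pretrained on related (e.g., synthetic) data and then frozen.
encoder = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
)
# encoder.load_state_dict(torch.load("pretrained_encoder.pt"))  # hypothetical path
for p in encoder.parameters():
    p.requires_grad = False  # reuse learned features; train only the head

# Lightweight head producing a per-voxel precipitate-vs-matrix logit.
head = nn.Conv3d(32, 1, 1)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Toy batch: one 32^3 volume with a random binary mask standing in for
# labeled APT data.
x = torch.randn(1, 1, 32, 32, 32)
y = (torch.rand(1, 1, 32, 32, 32) > 0.5).float()

for step in range(5):
    loss = loss_fn(head(encoder(x)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```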
Employing architectures inspired by the insect brain to devise efficient, lifelong-learning machines.
Multi-dimensional automated scalability testing, program analysis, and performance learning and prediction at various levels of the software/hardware stack.
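As one concrete illustration of performance learning and prediction, the sketch below (using synthetic timings and a made-up scaling exponent) fits a log-log linear model of runtime versus process count from small-scale runs and extrapolates to an unmeasured scale.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

procs = np.array([1, 2, 4, 8, 16, 32])       # measured scales
runtime = 100.0 / procs**0.9 + 1.0           # synthetic timings (seconds)

# Fit log(runtime) ~ log(procs); the slope approximates the scaling exponent.
X = np.log(procs).reshape(-1, 1)
model = LinearRegression().fit(X, np.log(runtime))

# Predict runtime at a scale that was never measured.
pred = np.exp(model.predict(np.log([[64]])))
print(f"predicted runtime at 64 processes: {pred[0]:.2f} s")
```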
Develop a cross-cutting artificial intelligence framework for fast inference and training on heterogeneous computing resources, as well as algorithmic advances in AI explainability and uncertainty quantification.
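As a small, self-contained example of the uncertainty-quantification side (the model and data are toy placeholders, not the framework's actual components), Monte Carlo dropout keeps dropout active at prediction time and uses the spread of repeated forward passes as an uncertainty estimate:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 1))

x = torch.randn(8, 4)   # a toy batch of inputs
net.train()             # keep dropout stochastic even at prediction time
with torch.no_grad():
    samples = torch.stack([net(x) for _ in range(100)])

mean = samples.mean(dim=0)  # predictive mean per input
std = samples.std(dim=0)    # spread across passes serves as the uncertainty
```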
Machine learning-based probabilistic I/O performance models that take background traffic and system state into account when predicting application performance on HPC systems.
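A minimal sketch of such a probabilistic model (the features, units, and data are synthetic, not drawn from any real system's logs): a Gaussian process maps request size and background traffic to achieved bandwidth, returning an uncertainty alongside each prediction.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Columns: [I/O request size (MiB), background traffic (MiB/s)]
X = rng.uniform([1, 0], [512, 1000], size=(50, 2))
# Synthetic achieved bandwidth, degraded by background traffic, plus noise.
y = 800 - 0.5 * X[:, 1] + 0.1 * X[:, 0] + rng.normal(0, 10, 50)

gp = GaussianProcessRegressor(kernel=RBF([100.0, 200.0]) + WhiteKernel(),
                              normalize_y=True)
gp.fit(X, y)

# Predictive mean and standard deviation for a new workload/system state.
mean, std = gp.predict([[256, 400]], return_std=True)
print(f"predicted bandwidth: {mean[0]:.0f} +/- {std[0]:.0f} MiB/s")
```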
Develop a framework for efficient and accurate equilibrium reconstructions by automating and maximizing the information extracted from measurements and by leveraging physics-informed ML models, constructed from experimental and synthetic solution databases, to guide the search for the solution vector.
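A minimal sketch of the physics-informed ingredient in PyTorch (the network, data, and a toy Laplace residual are placeholders for the actual equilibrium equation and diagnostics): the training loss combines a misfit to synthetic "measurements" with a PDE residual evaluated by automatic differentiation at collocation points.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Synthetic measurement locations/values standing in for diagnostics.
x_meas = torch.rand(32, 2)
y_meas = torch.sin(x_meas.sum(dim=1, keepdim=True))

for step in range(100):
    # Data misfit at the measurement locations.
    data_loss = ((net(x_meas) - y_meas) ** 2).mean()

    # Physics residual at collocation points; a toy Laplace operator stands
    # in for the equilibrium equation.
    x_col = torch.rand(64, 2, requires_grad=True)
    u = net(x_col)
    grads = torch.autograd.grad(u.sum(), x_col, create_graph=True)[0]
    lap = sum(
        torch.autograd.grad(grads[:, i].sum(), x_col, create_graph=True)[0][:, i]
        for i in range(2)
    )
    phys_loss = (lap ** 2).mean()

    loss = data_loss + 0.1 * phys_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```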
The objective of RAPIDS2 is to assist the Office of Science (SC) application teams in overcoming computer science, data, and AI challenges in the use of DOE supercomputing resources to achieve scientific breakthroughs.
The goal of the RAPIDS institute (a SciDAC Institute for Resource and Application Productivity through Computation, Information, and Data Science) is to assist Office of Science (SC) application teams in overcoming computer science and data challenges in the use of DOE supercomputing resources to achieve science breakthroughs.
Develop modular characterization approaches for examining key performance parameters and application execution similarities.
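One way to make "application execution similarity" concrete (the counters and values here are synthetic placeholders): normalize per-application performance features and cluster them, so applications that share a cluster can be characterized together.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Rows: applications; columns: e.g., [FLOP rate, memory bandwidth, I/O volume]
features = rng.uniform(size=(20, 3))

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # applications with the same label behave similarly
```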