Home


Chandrika Kamath is a data scientist at Lawrence Livermore National Laboratory, where she is involved in the analysis of data from scientific simulations, experiments, and observations. Her early interest in mathematical algorithms, and their efficient implementation, led her to the fields of numerical methods and high-performance computing. A few years into her career, the need for scalable algorithms to cluster Web documents, and the connection between SVD and PCA, provided a serendipitous introduction to the field of data mining. Realizing the potential of this field in science and engineering applications, Chandrika changed her career focus. For the last twenty-five years, she has enjoyed applying her interests and expertise in mathematics and computer science to solve problems where the challenges posed by the size of the data, whether tiny or massive, are matched by the complexity of the data.

The image above is a rangoli or kolam. Learn more about its connection with math.

Publications are available from Google Scholar.


Latest updates:

  • April 2023: Honored to be selected as a 2023 Fellow of the Society for Industrial and Applied Mathematics for “community leadership and contributions to data mining and its application to real-world problems in science and engineering.”
  • November 2022: My paper on “Classification of orbits in Poincaré maps using machine learning” was published in the International Journal of Data Science and Analytics (open access). The paper is based on work completed several years ago. It is a very interesting problem: the smallest data set I have ever analyzed, and yet the most challenging. I thought it was worth completing the half-written paper.
  • August 2022: My paper on “Intelligent sampling for surrogate modeling, hyper-parameter optimization and data analysis” was published in Machine Learning with Applications (open access). The paper summarizes ideas I have explored for nearly a decade on how best to generate sample points in high-dimensional spaces while meeting the constraints of real problems. It wasn’t a surprise that simple methods worked quite well.