Soumyadeep Pal

Hello! I am a second-year Ph.D. student in the OPTML Group at Michigan State University, under the supervision of Prof. Sijia Liu. I am broadly interested in AI safety and trustworthy AI, as well as the development of backpropagation-free (BP-free) AI algorithms.

Research Thrusts

:heavy_check_mark: Thrust 1: AI Safety: Machine Unlearning (MU): During the first year of my Ph.D., I worked mostly on LLM unlearning. I view unlearning as one potential way to ensure safety, essentially by removing hazardous knowledge from the model.

Currently, MU faces a variety of challenges as a safety mechanism. I believe this warrants more research in the area, precisely because of its promise: it is probably better for hazardous knowledge to be absent from a model altogether than to rely on other mitigation strategies.

:heavy_check_mark: Thrust 2: BP-free Training: I developed a keen interest in training ML models without gradients during my summer internship at Lawrence Livermore National Laboratory. In many applications we cannot take gradients at all, for example when a non-parameterized, non-differentiable system (such as an MFEM solver) sits inside the end-to-end training loop. How do we train a neural network in such a setting?

Current zeroth-order (gradient-free) methods suffer from scalability issues with respect to both model size and dataset size. I am working on developing scalable zeroth-order (ZO) algorithms; a minimal sketch of the core idea is shown below.
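As an illustration of the core idea behind ZO optimization (a generic sketch, not the specific algorithms I am developing), the snippet below estimates gradients from black-box loss evaluations via a standard two-point randomized finite-difference estimator. The function names, smoothing parameter `mu`, and query budget `n_queries` are illustrative choices for this toy example.

```python
import numpy as np

def zo_gradient_estimate(loss_fn, theta, mu=1e-3, n_queries=10, rng=None):
    """Two-point randomized (zeroth-order) gradient estimate.

    loss_fn   : black-box function theta -> scalar loss (no gradients needed)
    theta     : current parameter vector (1-D float array)
    mu        : smoothing radius for the finite-difference perturbation
    n_queries : number of random directions averaged over
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(theta)
    for _ in range(n_queries):
        u = rng.standard_normal(theta.shape)           # random direction
        delta = loss_fn(theta + mu * u) - loss_fn(theta - mu * u)
        grad += (delta / (2.0 * mu)) * u               # directional derivative estimate
    return grad / n_queries

# Usage: plain gradient descent with ZO estimates on a toy quadratic.
loss = lambda w: float(np.sum((w - 1.0) ** 2))
w = np.zeros(5)
for _ in range(200):
    w -= 0.05 * zo_gradient_estimate(loss, w)
print(w)  # approaches the minimizer [1, 1, 1, 1, 1]
```

Each estimate costs 2 × `n_queries` forward evaluations and its variance grows with the parameter dimension, which is exactly why query efficiency and variance reduction become the bottleneck at modern model and dataset scales.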

Collaboration Opportunities

I am always open to collaborations with researchers, as well as undergraduate and graduate students seeking Ph.D. positions. If you have exciting research ideas or are looking for opportunities to conduct research under professional guidance, feel free to reach out to me!

News

Jul 07, 2025 :tada: My first-author paper LLM Unlearning Reveals a Stronger-Than-Expected Coreset Effect in Current Benchmarks was accepted at COLM 2025!
Jun 25, 2025 :tada: Excited to serve as a volunteer for ICML 2025!
Jun 10, 2025 :tada: A short version of my first-author paper Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs was accepted as an oral at MUGen @ ICML'25! This is an exciting work in progress 🤩
Jun 02, 2025 🤩 I am starting a summer internship as a Computing Scholar Intern at Lawrence Livermore National Laboratory.
Apr 17, 2025 :tada: Our paper Invariance Makes LLM Unlearning Resilient Even to Unanticipated Downstream Fine-Tuning was accepted at ICML 2025!

Publications

Please refer to my publications here.