👋 Hi! My principal research interest lies at the intersection of Foundation Models and Agentic Behaviour. [P1] I have recently worked on efficient and effective self-taught reasoning training. My past work includes [C1], which makes multi-agent systems more generalizable and significantly more reliable, and [W1], which improves agents in noisy environments. I have also built [C2], an agentic framework for scalable LM-based chart generation (code and image), with robust improvements across 27B, 70B, and frontier-level LMs.
My current research revolves around: (1) LM pretraining, (2) LM reasoning, and (3) LM evaluation. Nevertheless, I am interested in a wide range of open problems.
In the past, I worked on [J1] personalization and user-centric machine learning, and on [W2] improving time-series representation learning under non-stationarity.
I am perfectly bilingual in English and Korean. I love to chat about research and downstream impact; feel free to reach out via email 📧.
Graduate Student
KAIST AI
Bachelor's Degree
Yonsei University
Diploma Programme
International Baccalaureate
*First Author(s), ^Advisor(s)
I have been fortunate to mentor the individuals listed below, just as I have benefited from the guidance of numerous mentors and advisors throughout my own journey.