👋 Hi! My principal research interest lies at the intersection of Language Models and Agentic Behaviour. [P1] I have recently worked on efficient and effective LM reasoning training. My past work makes [C1] multi-agent systems more generalizable and significantly more reliable, and [W1] improves agents in noisy environments. I have also built an [C2] agentic framework for scalable LM-based chart generation (code and image), with robust improvements across 27B, 70B, and frontier-level LMs.
I am currently looking primarily into (1) LM reasoning, and (2) LM agents and agentic systems, but I remain interested in a wide range of open problems related to LMs.
In the past, I worked on [J1] personalization and user-centric machine learning, and [W2] improving time-series representation learning under non-stationarity.
I am fully bilingual in English and Korean. I love to chat about research and downstream impact; feel free to reach out via email 📧.
Undergraduate, Yonsei University
Diploma Programme, International Baccalaureate
*First Author(s), ^Advisor(s)
I have been fortunate to mentor the individuals listed below, just as I have benefited from the guidance of many mentors and advisors throughout my own journey.