Mathematician Ernest Ryu is among the more than 1 million people who use ChatGPT each week for advanced topics in science and math, according to a new OpenAI report. Ryu picked up ChatGPT out of curiosity in 2023 and watched it advance until, last year, it helped him produce a publishable result. His academic work has focused on optimization: the math behind the efficient, reliable algorithms that support modern economies, from planning logistics to keeping aircraft wings stable.
When large language models surged in popularity in 2023, Ryu ran his first experiments: could a model translate a real-world “word problem” into a precise optimization model, including all the hidden constraints, and then hand it to a solver? Scheduling a baseball season, for example, involves hard constraints (e.g., no team plays two games at the same time) and softer ones (e.g., travel rest days that can be violated if necessary). Those early models struggled with the careful constraint handling this kind of modeling demands, sometimes omitting constraints and failing on larger, realistic schedules.
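(For the technically curious, here is a minimal sketch of what such a formulation looks like. It is illustrative only, not Ryu's code, and it assumes Google's OR-Tools CP-SAT solver; the article does not say which tools he used, and the team and slot counts are toy values.)

```python
# Illustrative sketch: a tiny round-robin schedule with one hard constraint
# (no team plays twice in a slot) and one soft constraint (consecutive-slot
# play is allowed but penalized), solved with OR-Tools CP-SAT.
from ortools.sat.python import cp_model

NUM_TEAMS, NUM_SLOTS = 4, 4
teams, slots = range(NUM_TEAMS), range(NUM_SLOTS)

model = cp_model.CpModel()

# play[i, j, s] == 1 if team i hosts team j in time slot s.
play = {(i, j, s): model.NewBoolVar(f"play_{i}_{j}_{s}")
        for i in teams for j in teams if i != j for s in slots}

def games_in_slot(t, s):
    """Number of games team t plays in slot s (0 or 1 once constrained)."""
    return sum(v for (i, j, ss), v in play.items() if ss == s and t in (i, j))

# Hard constraint: every pair of teams meets exactly once.
for i in teams:
    for j in teams:
        if i < j:
            model.Add(sum(play[i, j, s] + play[j, i, s] for s in slots) == 1)

# Hard constraint: no team plays two games in the same slot.
for t in teams:
    for s in slots:
        model.Add(games_in_slot(t, s) <= 1)

# Soft constraint: playing in consecutive slots (no rest) is permitted,
# but each occurrence costs a penalty the solver tries to minimize.
penalties = []
for t in teams:
    for s in range(NUM_SLOTS - 1):
        tired = model.NewBoolVar(f"no_rest_{t}_{s}")
        # tired is forced to 1 whenever team t plays in both slot s and s + 1.
        model.Add(games_in_slot(t, s) + games_in_slot(t, s + 1) <= 1 + tired)
        penalties.append(tired)

model.Minimize(sum(penalties))  # violate soft constraints only when unavoidable

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for (i, j, s), v in sorted(play.items(), key=lambda kv: kv[0][2]):
        if solver.Value(v):
            print(f"slot {s}: team {i} hosts team {j}")
```

The hard part is not the solver call but the translation step: going from the English description to a formulation like this without dropping a hidden constraint is exactly what Ryu was testing.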
The inflection point came last year, after the arrival of reasoning models and OpenAI’s gold-medal result at the International Mathematical Olympiad. The same class of scheduling problems Ryu had tested before was now reliably solved. That success led Ryu to fold LLMs into everyday mathematical work: while preparing lectures, he began asking ChatGPT for proofs of results he knew were true but couldn’t recall offhand.
Finally, he tried it on research. Ryu chose a problem related to Nesterov acceleration, a well-known technique for speeding up optimization, and picked a version that had been open long enough that others had attempted it, yet simple enough that a short proof might exist. For three consecutive evenings, after his son went to bed, he worked from 8 p.m. to midnight, and by the third night he had guided the AI to the point where it cracked the problem.
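(Again for the curious: the snippet below shows a textbook form of Nesterov’s accelerated gradient method on a toy quadratic, alongside plain gradient descent, to illustrate what “acceleration” means. The objective, dimensions, and iteration counts are illustrative assumptions; it does not reproduce the open problem Ryu tackled.)

```python
# Illustration only: textbook Nesterov acceleration vs. plain gradient descent
# on a toy ill-conditioned quadratic f(x) = 0.5 * x^T A x (minimum value 0).
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M.T @ M + 1e-3 * np.eye(50)        # positive definite, poorly conditioned
L = np.linalg.eigvalsh(A).max()        # Lipschitz constant of the gradient
x0 = rng.standard_normal(50)

def grad(x):
    return A @ x

def f(x):
    return 0.5 * x @ A @ x

# Plain gradient descent with the standard 1/L step size.
x = x0.copy()
for _ in range(200):
    x = x - grad(x) / L
print(f"gradient descent after 200 steps: f = {f(x):.3e}")

# Nesterov's accelerated gradient method: take the gradient step at an
# extrapolated ("momentum") point, using the classic t_k schedule.
x, y, t = x0.copy(), x0.copy(), 1.0
for _ in range(200):
    x_next = y - grad(y) / L                         # gradient step at extrapolated point
    t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = x_next + ((t - 1) / t_next) * (x_next - x)   # extrapolation (momentum) step
    x, t = x_next, t_next
print(f"Nesterov acceleration after 200 steps: f = {f(x):.3e}")
```

On a problem like this, the accelerated method typically drives the objective far lower in the same number of iterations, which is why the technique is a fixture of modern optimization.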
Their collaboration looked like real research. The model produced an initial proof with a calculation mistake, so Ryu began to iterate: he corrected the error, preserved the correct intermediate steps in a growing prompt, abandoned dead ends, and pushed the model toward other approaches. Ryu describes the process as maze running: you turn down corridors and open doors, sometimes only to find them empty, while keeping a mental map of what fails and what seems promising. By his estimate, ChatGPT let him run the maze 3 to 10 times faster.
On the third night, the model made a small but meaningful leap that “looked different” enough to unlock the proof. Ryu said he “more than triple-checked” the argument, then had a student verify it, before sharing it publicly with an optimization community that reacted with surprise and excitement. From there, a single prompt translated the continuous-time result into the discrete-time algorithm statement, leaving a novel core of about one page that met the standard for a publishable advance.
Since then, Ryu has joined OpenAI’s synthetic data team, where his core focus is improving the model’s mathematical capability.