Alex Lupsasca came to AI the way many physicists come to bold claims: with polite skepticism and a bunch of tests. In early 2025 he tried ChatGPT, and found it useful for the routine administrative tasks that regularly pop up in academia, but he did not see it as a tool for the hard part of the job: turning the laws of physics into concrete and verifiable predictions.
A common reality of academic publishing, Lupsasca says, is that a research project often yields a draft of equations and rough connective tissue, and a professor’s work becomes that of a copy editor, especially when collaborators are writing outside their first language. ChatGPT can turn such a rough draft into cleaner scientific prose, saving substantial time. For all his experience, Lupsasca says, the AI tools are almost always better at writing than he is.
But as the models improved, he kept testing for greater capabilities. What changed his mind was watching the system finally do graduate-level physics, and do it at speed. Lupsasca described a simple-to-state question from general relativity: starting from a well-known textbook model for electromagnetic fields near a black hole (the so-called “Wald solution”), what is the magnetic field strength right at the black hole’s edge, the event horizon? A beginning graduate student might need hours to work it through; the model produced the full derivation in seconds.
Then he noticed something stranger: the model often arrived at correct answers but expressed them in unfamiliar mathematical language, as if it had learned multiple dialects of physics and math and could choose whichever compressed the result best.
The bigger test came from Lupsasca’s own research. He had recently derived new “hidden symmetries” of an equation that governs a black hole’s tidal response, roughly the black hole analogue of ocean tides raised by the Moon. Those symmetries explain why a famous tidal effect vanishes. When he handed that equation to GPT‑5 Pro with minimal guidance, it thought for about 18 minutes and returned the same symmetry generators that he had spent years building the skills to find, and months of direct work to derive. “I think this is just incredible, and it’s clearly going to change everything that we do,” he said.
He has since repeated the pattern with skeptical colleagues: at CERN, a colleague offered a homework-style problem he would normally give PhD students a week to solve, and the model produced a detailed solution in minutes; in Aspen, astrophysicist Elliot Quataert tried to trick it with a puzzle-like transient signal, and it correctly identified a magnetar (a neutron star with an extreme magnetic field), then laid out follow-up observations and a draft abstract.
That arc, moving from skepticism to contagious enthusiasm, led Lupsasca to join OpenAI. He is now pushing beyond one-off wins toward repeatable scientific acceleration: better tools for reading and explaining papers, stronger workflows than a single chat window, and training setups that embed frontier physics in the model’s capabilities. His goal is to unfold the consequences of physics insights faster, so researchers spend less time stuck in algebra and more time identifying the next question worth asking, to eventually crack the biggest mysteries haunting his discipline.