{"id":4409,"date":"2026-01-17T10:44:10","date_gmt":"2026-01-17T10:44:10","guid":{"rendered":"https:\/\/salarydistribution.com\/machine-learning\/2026\/01\/17\/ai-copilot-keeps-berkeleys-x-ray-particle-accelerator-on-track\/"},"modified":"2026-01-17T10:44:10","modified_gmt":"2026-01-17T10:44:10","slug":"ai-copilot-keeps-berkeleys-x-ray-particle-accelerator-on-track","status":"publish","type":"post","link":"https:\/\/salarydistribution.com\/machine-learning\/2026\/01\/17\/ai-copilot-keeps-berkeleys-x-ray-particle-accelerator-on-track\/","title":{"rendered":"AI Copilot Keeps Berkeley\u2019s X-Ray Particle Accelerator on Track"},"content":{"rendered":"<div>\n\t\t<span class=\"bsf-rt-reading-time\"><span class=\"bsf-rt-display-label\"><\/span> <span class=\"bsf-rt-display-time\"><\/span> <span class=\"bsf-rt-display-postfix\"><\/span><\/span><\/p>\n<p>In the rolling hills of Berkeley, California, an AI agent is supporting high-stakes physics experiments at the Advanced Light Source (ALS) particle accelerator.<\/p>\n<p>Researchers at the Lawrence Berkeley National Laboratory ALS facility recently deployed the Accelerator Assistant, a large language model (LLM)-driven system to keep X-ray research on track.<\/p>\n<p>The Accelerator Assistant \u2014 powered by an NVIDIA H100 GPU harnessing CUDA for accelerated inference \u2014 taps into institutional knowledge data from the ALS support team and routes requests through Gemini, Claude or ChatGPT. It writes Python and solves problems, either autonomously or with a human in the loop.<\/p>\n<p>This is no small task. The ALS particle accelerator sends electrons traveling near the speed of light in a 200-yard circular path, emitting ultraviolet and X-ray light, which is directed through 40 beamlines for 1,700 scientific experiments per year. 
Scientists worldwide use this light to study materials science, biology, chemistry, physics and environmental science.

At the ALS, beam interruptions can last minutes, hours or days depending on their complexity, halting the scientific experiments in progress. And much can go wrong: the ALS control system has more than 230,000 process variables.

"It's really important for such a machine to be up, and when we go down, there are 40 beamlines that do X-ray experiments, and they are waiting," said Thorsten Hellert, a staff scientist in the Accelerator Technology and Applied Physics Division at Berkeley Lab and lead author of a [research paper](https://arxiv.org/pdf/2509.17255) on the work.

Until now, facility staff troubleshooting an issue had to quickly identify the affected areas, retrieve data and gather the right personnel for analysis, all under intense time pressure to get the system back up and running.

"The novel approach offers a blueprint for securely and transparently applying large language model-driven systems to particle accelerators, nuclear and fusion reactor facilities, and other complex scientific infrastructures," said Hellert.

The research team demonstrated that the Accelerator Assistant can autonomously prepare and run a multistage physics experiment, cutting setup time and effort by roughly 100x.

## Applying Context-Engineered Prompts to the Accelerator Assistant

ALS operators interact with the system through either a command line interface or Open WebUI, which supports multiple LLMs and is accessible from control room stations as well as remotely. Under the hood, the system uses Osprey, a framework developed at Berkeley Lab to apply agent-based AI safely in complex control systems.

Each user is authenticated, and the framework maintains personalized context and memory across sessions. Multiple sessions can be managed simultaneously, letting users organize distinct tasks or experiments into separate threads. Inputs are routed through the Accelerator Assistant, which connects to the database of more than 230,000 process variables, a historical archive service and Jupyter Notebook-based execution environments.

"We try to engineer the context of every language model call with whatever prior knowledge we have from this execution up to this point," said Hellert.

Inference runs either locally, using Ollama (an open-source tool for running LLMs on a personal computer) on an H100 GPU node inside the control room network, or externally through the CBorg gateway, a lab-managed interface that routes requests to external services such as ChatGPT, Claude or Gemini.

This hybrid architecture balances secure, low-latency, on-premises inference with access to the latest foundation models. Integration with EPICS (Experimental Physics and Industrial Control System), the distributed control system used at large-scale scientific facilities such as particle accelerators, enforces operator-standard safety constraints for direct interaction with accelerator hardware; engineers can write Python code in Jupyter Notebooks that communicates with it.

Conversational input is first distilled into a clear natural-language description of the task objectives, free of redundancy.
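The context-engineering and hybrid-routing flow described above can be sketched in a few lines of Python. This is an illustrative sketch only: the function names, the deduplication strategy and the routing rule are assumptions for exposition, not the actual Osprey or CBorg interfaces, which the article does not detail.

```python
# Hypothetical sketch: engineer the context of each LLM call from session
# memory and facility documentation, then route inference either to a local
# Ollama-served model or an external gateway. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Session:
    user: str
    memory: list[str] = field(default_factory=list)  # prior knowledge from this execution

def build_context(session: Session, docs: list[str], task: str) -> str:
    """Assemble a deduplicated context block: memory + docs + the task itself."""
    seen, parts = set(), []
    for chunk in session.memory + docs:
        if chunk not in seen:          # drop redundant context
            seen.add(chunk)
            parts.append(chunk)
    return "\n".join(parts + [f"TASK: {task}"])

def route_inference(prompt: str, sensitive: bool) -> str:
    """Send sensitive prompts to the on-premises model, others to the gateway."""
    if sensitive:
        return f"[local-ollama] {prompt.splitlines()[-1]}"
    return f"[cborg-gateway] {prompt.splitlines()[-1]}"

session = Session(user="operator1", memory=["BPM noise seen at sector 4"])
prompt = build_context(session, ["BPM noise seen at sector 4", "PV naming guide"],
                       "Plot beam current for the last hour")
print(route_inference(prompt, sensitive=True))
# prints "[local-ollama] TASK: Plot beam current for the last hour"
```

In a real deployment the routing decision would weigh data sensitivity, latency and model capability; here a single boolean stands in for that policy.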
External knowledge, such as per-user memory, documentation and accelerator databases, is then integrated to supply terminology and context.

"It's a large facility with a lot of specialized expertise," said Hellert. "Much of that knowledge is scattered across teams, so even finding something simple — like the address of a temperature sensor in one part of the machine — can take time."

## Tapping the Accelerator Assistant to Aid Engineers and Fusion Energy Development

Using the Accelerator Assistant, engineers can start with a simple prompt describing their goal. Behind the scenes, the system draws on carefully prepared examples and keywords from accelerator operations to guide the LLM's reasoning.

"Each prompt is engineered with relevant context from our facility, so the model already knows what kind of task it's dealing with," said Hellert. Each agent, he added, is an expert in its field.

Once the task is defined, the agent brings together its specialized capabilities, such as finding process variables or navigating the control system, and can automatically generate and run Python scripts to analyze data, visualize results or interact safely with the accelerator itself.

"This is something that can save you serious time — in the paper, we say two orders of magnitude for such a prompt," said Hellert.

Looking ahead, Hellert aims to have ALS engineers put together a wiki documenting the many processes that support the experiments.
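One of the specialized capabilities mentioned above, finding a process variable (PV) such as a temperature sensor's address, can be illustrated with a toy keyword lookup. The PV names, descriptions and the `find_pv` helper here are invented for illustration; the real ALS database holds more than 230,000 entries with its own naming scheme.

```python
# Toy catalog standing in for the ALS process-variable database.
# Names and descriptions are invented for illustration.
PV_CATALOG = {
    "SR04:TEMP:GIRDER1": "temperature sensor, storage ring sector 4, girder 1",
    "SR04:BPM:X": "horizontal beam position monitor, sector 4",
    "BL0402:SHUTTER:STATUS": "photon shutter status, beamline 4.2.2",
}

def find_pv(query: str) -> list[str]:
    """Return PV addresses whose description contains every query word."""
    words = query.lower().split()
    return [pv for pv, desc in PV_CATALOG.items()
            if all(w in desc.lower() for w in words)]

print(find_pv("temperature sector 4"))  # prints ['SR04:TEMP:GIRDER1']
```

An agent wrapping a lookup like this lets an operator ask for "the temperature sensor in sector 4" instead of first hunting down the exact PV address.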
Those documents could help the agents run the facility autonomously, with a human in the loop to approve the course of action.

"On these high-stakes scientific experiments, even if it's just a TEM microscope or something that might cost $1 million, a human in the loop can be very important," said Hellert.

The work has already expanded beyond the ALS as part of the DOE's Genesis Mission, with the framework being deployed across U.S. particle accelerator facilities. Hellert has also begun collaborating with engineers at [ITER](https://www.iter.org/) in France, the world's largest fusion reactor, to implement the framework there, and a collaboration with the Extremely Large Telescope (ELT) in northern Chile is in the works.

## Benefiting Humanity: Scientific Impact of Experiments Supported by the ALS

Beyond optimizing accelerator operations, the work at the ALS directly enables scientific breakthroughs with global impact. The facility's stable X-ray beams underpin research in health, climate resilience and planetary science.

During the COVID-19 pandemic, ALS researchers helped characterize a rare antibody that could neutralize SARS-CoV-2. Structural biology experiments at Beamline 4.2.2 revealed how six molecular loops of the antibody latch onto and disable the viral spike protein. The findings supported the rapid development of a therapeutic that remained effective through multiple variants.

ALS science also contributes to climate-focused research. Metal-organic frameworks (MOFs), a class of porous materials capable of capturing water or carbon dioxide from air, have been studied extensively across several ALS beamlines.
These experiments supported foundational work that ultimately led to the 2025 Nobel Prize in Chemistry, which recognized the transformative potential of MOFs for sustainable water harvesting and carbon management.

In planetary science, ALS measurements of samples returned by NASA's OSIRIS-REx mission helped trace the chemical history of asteroid Bennu. X-ray analyses provided evidence that such asteroids carried water and molecular precursors of life to early Earth, deepening our understanding of the origins of the planet's habitable conditions.

*Originally published at https://blogs.nvidia.com/blog/ai-copilot-berkeley-x-ray-particle-accelerator/*