Last year, climate scientist Zeke Hausfather was experimenting with climate data visualizations, searching for fresh and striking ways to illustrate just how rapidly the planet is heating up. He was brainstorming with an artificial-intelligence tool, having it quickly code and generate new designs. Together, they created inventive tree-ring-style charts — with months arranged around each ring, annual circles expanding outward over time, and colors representing temperature. Then Hausfather posed a question to the AI: what if these charts were rendered in 3D?
The outcome was what Hausfather calls a thermal helix animation — a visualization of temperature spiraling upward through time, forming a shape that resembles a tornado (see ‘A new view’). In a world where most people have already seen the familiar ‘hockey-stick’ graph of rising global temperatures, this is a welcome new graphic: captivating and visually stunning. And despite being a skilled coder himself, Hausfather had no clue how to build it on his own.
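For readers curious what such a figure involves, here is a minimal sketch in Python's matplotlib. It uses made-up monthly anomalies rather than Hausfather's data or code: each year's months are wrapped around a ring, the rings stack upward through time and the points are coloured by temperature.

```python
# A minimal, illustrative sketch (not Hausfather's code) of a thermal-helix plot:
# months wind around each ring, years stack upward, colour shows the anomaly.
# Synthetic data stand in for a real record such as Berkeley Earth's.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1850, 2025)
months = np.arange(12)
rng = np.random.default_rng(0)

# Fake monthly anomalies: a slow warming trend plus a seasonal cycle and noise
anomaly = (0.01 * (years[:, None] - 1850)
           + 0.2 * np.sin(2 * np.pi * months[None, :] / 12)
           + 0.1 * rng.standard_normal((years.size, months.size)))

theta = 2 * np.pi * months / 12                  # month's position around the ring
height = years[:, None] + months[None, :] / 12   # fractional year sets the vertical position
radius = 1.0 + anomaly                           # warmer months bulge outward

fig = plt.figure(figsize=(6, 8))
ax = fig.add_subplot(projection='3d')
ax.scatter((radius * np.cos(theta)).ravel(),
           (radius * np.sin(theta)).ravel(),
           height.ravel(),
           c=anomaly.ravel(), cmap='coolwarm', s=4)
ax.set_zlabel('Year')
plt.show()
```

Animating the camera angle frame by frame would turn a static helix like this into the tornado-like animation described above.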
Hausfather, a researcher at the climate-data nonprofit Berkeley Earth in California, is far from the only scientist using AI tools this way. Thanks to large language models (LLMs), people can now simply tell their computers to write and execute code for graphics, applications, data processing, and virtually anything else they can dream up.
This relaxed, conversational approach is commonly known as vibe coding. Andrej Karpathy, co-founder of the US company OpenAI, coined the term last year. It means asking an LLM-powered tool to build or accomplish something using code behind the scenes, with the user refining the output through successive prompts until the results look correct. In its purest form, vibe coding doesn’t involve examining the code at all — only the end product. But the term has no rigid definition, so the boundaries of what qualifies as vibe coding are blurry. Plenty of people with programming skills start a project by vibing and then review the code manually, or begin coding themselves and then turn to an AI tool to fill in the missing pieces.
Credit: Zeke Hausfather
Nature spoke with a range of scientists — from highly experienced coders to absolute beginners, and those in between, like Hausfather, who are leveraging AI to push the boundaries of what they can achieve. Many incorporate AI-assisted coding into their daily work, and some are deliberately stress-testing its limits. All of them said the AI tools currently available are remarkable, helping them dramatically accelerate their workflows or experiment with novel ideas. But they also caution that these tools should be used carefully, and some had alarming stories to share.
All aboard
In many ways, vibe coding represents the culmination of a long evolution in how humans interact with computers. In the 1960s, people communicated with machines using punch cards. Computer scientists soon created programming languages — such as BASIC and later Python — that made instructing computers feel more intuitive. And developers built software platforms that allowed non-programmers to create confidently within defined boundaries: Microsoft Word, for instance, lets users format documents without any coding knowledge. What’s new is the unmatched speed and flexibility that LLMs bring to code generation, along with their peculiar habit of fabricating things and making errors.

Although any LLM can generate code, specialized systems have emerged over the past few years tailored specifically to this task, including GitHub Copilot, Anysphere’s Cursor, Anthropic’s Claude Code, Google’s Gemini Code Assist, and OpenAI’s Codex. These systems can produce a working application from as little as a one-sentence prompt. The results can be buggy, however. For instance, Anthropic’s Claude Opus 4.7 currently leads on Vibe Code Bench — a benchmark that tests the functionality of web applications built autonomously by AI tools — but with an accuracy score of just 71%.
Like other AI products, AI coding tools are steadily improving. Those released in the past year or so behave like amiable project managers, according to Hausfather. You can feed them lengthy descriptions of goals and requirements, and they’ll respond with things like a coding plan, proposed verification tests, multiple-choice options for interface design, and thousands of lines of well-documented code complete with explanations. The progress has impressed Hausfather: today, he says, AI-generated code is “as bug-free as my code.”
Among professional software developers, reliance on AI is now nearly universal, whether through vibing or other approaches. A survey this year by DX — a company based in Salt Lake City, Utah, that measures developer productivity — found that more than 90% of developers use AI coding assistants at least once a month, and that fully AI-written code now accounts for more than a quarter of customer-facing code.
It’s difficult to gauge how many researchers are hopping on the vibe-coding bandwagon, but the interest is clearly there. When Argonne National Laboratory in Lemont, Illinois, organized a one-day vibe-coding hackathon last June for its researchers, it reached its cap of 200 participants.
Manuel Corpas, a genomicist and health-data scientist at the University of Westminster in London, says there’s a genuine appetite for vibing in his field. He vibe-coded a project called ClawBio in just two days and unveiled it at an Imperial College London hackathon in early March. It functions as a kind of library of code snippets useful in bioinformatics — such as instructions for extracting data from scientific figures or for generating personalized medication recommendations based on a genome sequence stored on your computer. AI agents can draw code from the library to integrate these ‘skills’ into their own tasks.
After its launch, Corpas says, ClawBio accumulated an impressive 5,000 downloads in just two weeks, and the community contributed dozens of new skills — which were themselves vibe-coded, he adds.
Good news first
Rosemarie Wilton, a molecular biologist at Argonne National Laboratory, has zero coding experience. But she uses established software packages to compile and analyze her datasets on viruses found in wastewater, so she attended the lab’s hackathon last June to explore what AI coding tools might offer her.

Wilton was impressed. She doesn’t have a graduate student, but AI tools acted like one. She could ask them to handle straightforward tasks such as running data through one software package after another, cross-checking results, or producing graphs of outputs in specific ways. The AI tools could operate independently throughout the day.
To test new data-processing pipelines, Wilton would normally initiate each step by hand or ask Argonne’s Data Science and Learning division to code the pipeline for her, but now AI can accelerate this exploratory phase. If she discovers an effective approach, she says, she would ask the division to code it properly before, for example, submitting processed data to the state health department.
As a side benefit, says Wilton, the ease of vibe coding makes it less intimidating for her to learn some coding herself. “I can learn a lot from it, not having done a lot of Python coding,” says Wilton. “It has opened up my world.”

Molecular biologists Rosemarie Wilton (right) and Sarah Owens test AI workflows at Argonne National Laboratory in Lemont, Illinois. Credit: Argonne National Laboratory
Speed and agility are key advantages of AI coding for everyone Nature spoke to. Hausfather says that the code for, say, converting one axis of a graph to a log scale or adding extra information to a chart is often non-intuitive. The ability to instruct his computer to do such things in plain English is, he says, “magical”. Vibe coding has also enabled him to build and host websites in a day (including a dashboard that creates constantly updated, visually appealing charts of global temperature), something he had never done before.
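As a generic illustration (not Hausfather's actual code), the kind of matplotlib boilerplate he is describing looks something like this: switching one axis of a chart to a log scale and adding an annotation, here on a synthetic dataset.

```python
# A small, generic matplotlib example of the chart tweaks described above:
# putting the y axis on a log scale and annotating a point of interest.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(1, 100, 200)
y = x ** 2                        # synthetic data

fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_yscale('log')              # convert the y axis to a log scale
ax.annotate('example annotation',
            xy=(50, 2500), xytext=(20, 6000),
            arrowprops={'arrowstyle': '->'})
ax.set_xlabel('x')
ax.set_ylabel('y (log scale)')
plt.show()
```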
Tim Hobbs, a theoretical physicist at Argonne who also attended the hackathon, says he uses AI “all the time”, because coding is a big part of his job. He explores physics that goes beyond the standard model of particle physics, for example, by analyzing vast amounts of data from particle accelerators to test underlying theories or attempt to discover new ones. There are plenty of mathematical approaches he could take, so he has used vibe coding to see which ones seem most promising.
“It’s like handing off a problem to an extremely competent graduate student,” he says. “I can just quickly try a variety of ideas and maybe discard some that are suboptimal.” He says he checks the code behind anything important, such as a research paper intended for publication.
Hobbs is impressed by how cleanly AI code is written, with plenty of helpful annotations, often to a higher standard than the human-generated code he sees accompanying published papers. “Human code has a messiness to it, because we’re human,” he says.
For a paper1 published earlier this year, Jesse Meyer, an analytical chemist and expert in computational biomedicine at Cedars-Sinai Medical Center in Los Angeles, California, attempted to demonstrate the power of AI. His team had previously developed software packages for biological data processing. This time, he used an LLM-powered app builder to vibe-code a pipeline for analyzing proteomics and other ‘omics’ data.

He found that it took less than ten minutes, four well-written prompts and less than US$2 in fees to create something that might reasonably take professional coders months or even years to develop without AI. “The barrier to trying something new is very low,” Meyer says. He sees a future in which, instead of publishing code, researchers publish their prompts or ‘vibe blueprints’ for others to use.
But Meyer emphasizes that his work is just a demonstration of what can be done: he wouldn’t recommend vibe-coding anything important without substantial checks. When he published this experiment, he included a disclaimer in the paper’s introduction: “Vibe coding is not a substitute for understanding statistical analysis or computational logic.”



