Feed aggregator
Connecticut reveals details about every property's climate risk
New Hampshire considers classifying fossil fuels as ‘green energy’
Plan to relax fuel efficiency standards sparks safety debate
New York lawmakers grapple with data center demand
Germany pledges swift climate action after loss in top court
Trump’s Greenland threats put crucial climate research at risk
Data firms bet regulation will revive market for climate tools
How generative AI can help scientists synthesize complex materials
Generative artificial intelligence models have been used to create enormous libraries of theoretical materials that could help solve all kinds of problems. Now, scientists just have to figure out how to make them.
In many cases, materials synthesis is not as simple as following a recipe in the kitchen. Factors like the temperature and length of processing can yield huge changes in a material’s properties that make or break its performance. That has limited researchers’ ability to test millions of promising model-generated materials.
Now, MIT researchers have created an AI model that guides scientists through the process of making materials by suggesting promising synthesis routes. In a new paper, they showed the model delivers state-of-the-art accuracy in predicting effective synthesis pathways for a class of materials called zeolites, which could be used to improve catalysis, adsorption, and ion exchange processes. Following its suggestions, the team synthesized a new zeolite material that showed improved thermal stability.
The researchers believe their new model could break the biggest bottleneck in the materials discovery process.
“To use an analogy, we know what kind of cake we want to make, but right now we don’t know how to bake the cake,” says lead author Elton Pan, a PhD candidate in MIT’s Department of Materials Science and Engineering (DMSE). “Materials synthesis is currently done through domain expertise and trial and error.”
The paper describing the work appears today in Nature Computational Science. Joining Pan on the paper are Soonhyoung Kwon ’20, PhD ’24; DMSE postdoc Sulin Liu; chemical engineering PhD student Mingrou Xie; DMSE postdoc Alexander J. Hoffman; Research Assistant Yifei Duan SM ’25; DMSE visiting student Thorben Prein; DMSE PhD candidate Killian Sheriff; MIT Robert T. Haslam Professor in Chemical Engineering Yuriy Roman-Leshkov; Valencia Polytechnic University Professor Manuel Moliner; MIT Paul M. Cook Career Development Professor Rafael Gómez-Bombarelli; and MIT Jerry McAfee Professor in Engineering Elsa Olivetti.
Learning to bake
Massive investments in generative AI have led companies like Google and Meta to create huge databases filled with material recipes that, at least theoretically, have properties like high thermal stability and selective adsorption of gases. But making those materials can require weeks or months of careful experiments that test specific reaction temperatures, times, precursor ratios, and other factors.
“People rely on their chemical intuition to guide the process,” Pan says. “Humans are linear. If there are five parameters, we might keep four of them constant and vary one of them linearly. But machines are much better at reasoning in a high-dimensional space.”
Synthesis is now often the most time-consuming step in a material’s journey from hypothesis to use.
To help scientists navigate that process, the MIT researchers trained a generative AI model on over 23,000 material synthesis recipes described over 50 years of scientific papers. The researchers iteratively added random “noise” to the recipes during training, and the model learned to de-noise and sample from the random noise to find promising synthesis routes.
The result is DiffSyn, which uses an approach in AI known as diffusion.
“Diffusion models are basically a generative AI model like ChatGPT, but more like the DALL-E image generation model,” Pan says. “During inference, it converts noise into meaningful structure by subtracting a little bit of noise at each step. In this case, the ‘structure’ is the synthesis route for a desired material.”
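The denoising loop Pan describes can be illustrated with a toy sketch. The paper's actual model is a trained neural network; everything here, including the target recipe values and the stand-in `predict_noise` function, is a hypothetical placeholder for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target recipe: temperature (K), time (h), precursor ratio.
target = np.array([900.0, 48.0, 0.3])

def predict_noise(x, t):
    """Stand-in for a trained noise-prediction network: here it simply
    points from the current sample toward the target recipe, scaled by
    the noise level t."""
    return (x - target) * t

def sample(steps=50):
    """Start from a heavily noised point and subtract a little predicted
    noise at each step, as in diffusion-model inference."""
    x = target + rng.normal(size=3) * 100.0  # heavily noised starting point
    for i in range(steps, 0, -1):
        t = i / steps                        # noise level shrinks each step
        x = x - 0.2 * predict_noise(x, t)
    return x

recipe = sample()
# recipe ends up close to the target synthesis conditions
```

Because the stand-in denoiser contracts toward the target, repeated sampling from different noise seeds yields many nearby recipes, which loosely mirrors how a real diffusion model can propose multiple candidate synthesis routes for one material.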
When a scientist using DiffSyn enters a desired material structure, the model offers some promising combinations of reaction temperatures, reaction times, precursor ratios, and more.
“It basically tells you how to bake your cake,” Pan says. “You have a cake in mind, you feed it into the model, the model spits out the synthesis recipes. The scientist can pick whichever synthesis path they want, and there are simple ways to quantify the most promising synthesis path from what we provide, which we show in our paper.”
To test their system, the researchers used DiffSyn to suggest novel synthesis paths for a zeolite, a class of materials that is complex and slow to form into testable samples.
“Zeolites have a very high-dimensional synthesis space,” Pan says. “Zeolites also tend to take days or weeks to crystallize, so the impact [of finding the best synthesis pathway faster] is much higher than other materials that crystallize in hours.”
The researchers were able to make the new zeolite material using synthesis pathways suggested by DiffSyn. Subsequent testing revealed the material had a promising morphology for catalytic applications.
“Scientists have been trying out different synthesis recipes one by one,” Pan says. “That makes the process very time-consuming. This model can sample 1,000 of them in under a minute. It gives you a very good initial guess on synthesis recipes for completely new materials.”
Accounting for complexity
Researchers have previously built machine-learning models that map a material to a single recipe. Those approaches do not account for the fact that there are different ways to make the same material.
DiffSyn is trained to map material structures to many different possible synthesis paths. Pan says that is better aligned with experimental reality.
“This is a paradigm shift away from one-to-one mapping between structure and synthesis to one-to-many mapping,” Pan says. “That’s a big reason why we achieved strong gains on the benchmarks.”
Moving forward, the researchers believe the approach should work to train other models that guide the synthesis of materials outside of zeolites, including metal-organic frameworks, inorganic solids, and other materials that have more than one possible synthesis pathway.
“This approach could be extended to other materials,” Pan says. “Now, the bottleneck is finding high-quality data for different material classes. But zeolites are complicated, so I can imagine they are close to the upper-bound of difficulty. Eventually, the goal would be interfacing these intelligent systems with autonomous real-world experiments, and agentic reasoning on experimental feedback to dramatically accelerate the process of materials design.”
The work was supported by MIT International Science and Technology Initiatives (MISTI), the National Science Foundation, Generalitat Valenciana, the Office of Naval Research, ExxonMobil, and the Agency for Science, Technology and Research in Singapore.
A portable ultrasound sensor may enable earlier detection of breast cancer
For people who are at high risk of developing breast cancer, frequent screenings with ultrasound can help detect tumors early. MIT researchers have now developed a miniaturized ultrasound system that could make it easier for breast ultrasounds to be performed more often, either at home or at a doctor’s office.
The new system consists of a small ultrasound probe attached to an acquisition and processing module that is a little larger than a smartphone. This system can be used on the go when connected to a laptop computer to reconstruct and view wide-angle 3D images in real time.
“Everything is more compact, and that can make it easier to be used in rural areas or for people who may have barriers to this kind of technology,” says Canan Dagdeviren, an associate professor of media arts and sciences at MIT and the senior author of the study.
With this system, she says, more tumors could potentially be detected earlier, which increases the chances of successful treatment.
Colin Marcus PhD ’25 and former MIT postdoc Md Osman Goni Nayeem are the lead authors of the paper, which appears in the journal Advanced Healthcare Materials. Other authors of the paper are MIT graduate students Aastha Shah, Jason Hou, and Shrihari Viswanath; MIT summer intern and University of Central Florida undergraduate Maya Eusebio; MIT Media Lab Research Specialist David Sadat; MIT Provost Anantha Chandrakasan; and Massachusetts General Hospital breast cancer surgeon Tolga Ozmen.
Frequent monitoring
While many breast tumors are detected through routine mammograms, which use X-rays, tumors can develop in between yearly mammograms. These tumors, known as interval cancers, account for 20 to 30 percent of all breast cancer cases, and they tend to be more aggressive than those found during routine scans.
Detecting these tumors early is critical: When breast cancer is diagnosed in the earliest stages, the survival rate is nearly 100 percent. However, for tumors detected in later stages, that rate drops to around 25 percent.
For some individuals, more frequent ultrasound scanning in addition to regular mammograms could help to boost the number of tumors that are detected early. Currently, ultrasound is usually done only as a follow-up if a mammogram reveals any areas of concern. Ultrasound machines used for this purpose are large and expensive, and they require highly trained technicians to use them.
“You need skilled ultrasound technicians to use those machines, which is a major obstacle to getting ultrasound access to rural communities, or to developing countries where there aren’t as many skilled radiologists,” Viswanath says.
By creating ultrasound systems that are portable and easier to use, the MIT team hopes to make frequent ultrasound scanning accessible to many more people.
In 2023, Dagdeviren and her colleagues developed an array of ultrasound transducers that were incorporated into a flexible patch that can be attached to a bra, allowing the wearer to move an ultrasound tracker along the patch and image the breast tissue from different angles.
Those 2D images could be combined to generate a 3D representation of the tissue, but there could be small gaps in coverage, making it possible that small abnormalities could be missed. Also, that array of transducers had to be connected to a traditional, costly, refrigerator-sized processing machine to view the images.
In their new study, the researchers set out to develop a modified ultrasound array that would be fully portable and could create a 3D image of the entire breast by scanning just two or three locations.
The new system they developed is a chirped data acquisition system (cDAQ) that consists of an ultrasound probe and a motherboard that processes the data. The probe, which is a little smaller than a deck of cards, contains an ultrasound array arranged in the shape of an empty square, a configuration that allows the array to take 3D images of the tissue below.
This data is processed by the motherboard, which is a little bit larger than a smartphone and costs only about $300 to make. All of the electronics used in the motherboard are commercially available. To view the images, the motherboard can be connected to a laptop computer, so the entire system is portable.
“Traditional 3D ultrasound systems require power-hungry and bulky electronics, which limits their use to high-end hospitals and clinics,” Chandrakasan says. “By redesigning the system to be ultra-sparse and energy-efficient, this powerful diagnostic tool can be moved out of the imaging suite and into a wearable form factor that is accessible for patients everywhere.”
This system also uses much less power than a traditional ultrasound machine, so it can be powered with a 5V DC supply (a battery or an AC/DC adapter used to plug in small electronic devices such as modems or portable speakers).
“Ultrasound imaging has long been confined to hospitals,” says Nayeem. “To move ultrasound beyond the hospital setting, we reengineered the entire architecture, introducing a new ultrasound fabrication process, to make the technology both scalable and practical.”
Earlier diagnosis
The researchers tested the new system on one human subject, a 71-year-old woman with a history of breast cysts. They found that the system could accurately image the cysts and create a 3D image of the tissue with no gaps.
The system can image as deep as 15 centimeters into the tissue, and it can image the entire breast from two or three locations. And, because the ultrasound device sits on top of the skin without having to be pressed into the tissue like a typical ultrasound probe, the images are not distorted.
“With our technology, you simply place it gently on top of the tissue and it can visualize the cysts in their original location and with their original sizes,” Dagdeviren says.
The research team is now conducting a larger clinical trial at the MIT Center for Clinical and Translational Research and at MGH.
The researchers are also working on an even smaller version of the data processing system, which will be about the size of a fingernail. They hope to connect this to a smartphone that could be used to visualize the images, making the entire system smaller and easier to use. They also plan to develop a smartphone app that would use an AI algorithm to help guide the patient to the best location to place the ultrasound probe.
While the current version of the device could be readily adapted for use in a doctor’s office, the researchers hope that in the future, a smaller version can be incorporated into a wearable sensor that could be used at home by people at high risk for developing breast cancer.
Dagdeviren is now working on launching a company to help commercialize the technology, with assistance from an MIT HEALS Deshpande Momentum Grant, the Martin Trust Center for MIT Entrepreneurship, and the MIT Media Lab WHx Women’s Health Innovation Fund.
The research was funded by a National Science Foundation CAREER Award, a 3M Non-Tenured Faculty Award, the Lyda Hill Foundation, and the MIT Media Lab Consortium.
Enhanced effect of warming on the leaf-onset date of boreal deciduous broadleaf forest
Nature Climate Change, Published online: 02 February 2026; doi:10.1038/s41558-025-02528-2
The authors consider the changing sensitivity of the leaf-onset date to temperature (ST) for boreal deciduous broadleaf forests. ST increased between 1982–1996 and 1998–2012—potentially linked to enhanced chilling accumulation—but this increase is underestimated in phenology models.
Friday Squid Blogging: New Squid Species Discovered
A new species of squid pretends to be a plant:
Scientists have filmed a never-before-seen species of deep-sea squid burying itself upside down in the seafloor—a behavior never documented in cephalopods. They captured the bizarre scene while studying the depths of the Clarion-Clipperton Zone (CCZ), an abyssal plain in the Pacific Ocean targeted for deep-sea mining.
The team described the encounter in a study published Nov. 25 in the journal Ecology, writing that the animal appears to be an undescribed species of whiplash squid. At a depth of roughly 13,450 feet (4,100 meters), the squid had buried almost its entire body in sediment and was hanging upside down, with its siphon and two long ...
The philosophical puzzle of rational artificial intelligence
To what extent can an artificial system be rational?
A new MIT course, 6.S044/24.S00 (AI and Rationality), doesn’t seek to answer this question. Instead, it challenges students to explore this and other philosophical problems through the lens of AI research. For the next generation of scholars, concepts of rationality and agency could prove integral in AI decision-making, especially when influenced by how humans understand their own cognitive limits and their constrained, subjective views of what is or isn’t rational.
This inquiry is rooted in a deep relationship between computer science and philosophy, which have long collaborated in formalizing what it is to form rational beliefs, learn from experience, and make rational decisions in pursuit of one's goals.
“You’d imagine computer science and philosophy are pretty far apart, but they’ve always intersected. The technical parts of philosophy really overlap with AI, especially early AI,” says course instructor Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering at MIT, calling to mind Alan Turing, who was both a computer scientist and a philosopher. Kaelbling herself holds an undergraduate degree in philosophy from Stanford University, noting that computer science wasn’t available as a major at the time.
Brian Hedden, a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS), who teaches the class with Kaelbling, notes that the two disciplines are more aligned than people might imagine, adding that the “differences are in emphasis and perspective.”
Tools for further theoretical thinking
Kaelbling and Hedden created AI and Rationality, offered for the first time in fall 2025, as part of the Common Ground for Computing Education, a cross-cutting initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.
With over two dozen students registered, AI and Rationality is one of two Common Ground classes with a foundation in philosophy, the other being 6.C40/24.C40 (Ethics of Computing).
While Ethics of Computing explores concerns about the societal impacts of rapidly advancing technology, AI and Rationality examines the disputed definition of rationality by considering several components: the nature of rational agency, the concept of a fully autonomous and intelligent agent, and the ascription of beliefs and desires onto these systems.
Because AI is extremely broad in its implementation and each use case raises different issues, Kaelbling and Hedden brainstormed topics that could provide fruitful discussion and engagement between the two perspectives of computer science and philosophy.
“It's important when I work with students studying machine learning or robotics that they step back a bit and examine the assumptions they’re making,” Kaelbling says. “Thinking about things from a philosophical perspective helps people back up and understand better how to situate their work in actual context.”
Both instructors stress that this isn’t a course that provides concrete answers to questions on what it means to engineer a rational agent.
Hedden says, “I see the course as building their foundations. We’re not giving them a body of doctrine to learn and memorize and then apply. We’re equipping them with tools to think about things in a critical way as they go out into their chosen careers, whether they’re in research or industry or government.”
The rapid progress of AI also presents a new set of challenges in academia. Predicting what students may need to know five years from now is something Kaelbling sees as an impossible task. “What we need to do is give them the tools at a higher level — the habits of mind, the ways of thinking — that will help them approach the stuff that we really can’t anticipate right now,” she says.
Blending disciplines and questioning assumptions
So far, the class has drawn students from a wide range of disciplines — from those firmly grounded in computing to others interested in exploring how AI intersects with their own fields of study.
Throughout the semester’s reading and discussions, students grappled with different definitions of rationality and how they pushed back against assumptions in their fields.
On what surprised her about the course, Amanda Paredes Rioboo, a senior in EECS, says, “We’re kind of taught that math and logic are this golden standard or truth. This class showed us a variety of examples that humans act inconsistently with these mathematical and logical frameworks. We opened up this whole can of worms as to whether, is it humans that are irrational? Is it the machine learning systems that we designed that are irrational? Is it math and logic itself?”
Junior Okoroafor, a PhD student in the Department of Brain and Cognitive Sciences, was appreciative of the class’s challenges and the ways in which the definition of a rational agent could change depending on the discipline. “Representing what each field means by rationality in a formal framework makes it clear exactly which assumptions are shared, and which are different, across fields.”
The co-teaching, collaborative structure of the course, as with all Common Ground endeavors, gave students and the instructors opportunities to hear different perspectives in real-time.
For Paredes Rioboo, this is her third Common Ground course. She says, “I really like the interdisciplinary aspect. They’ve always felt like a nice mix of theoretical and applied from the fact that they need to cut across fields.”
According to Okoroafor, Kaelbling and Hedden demonstrated an obvious synergy between fields, saying that it felt as if they were engaging and learning along with the class. Seeing how computer science and philosophy can inform each other allowed him to understand their common ground and the invaluable perspectives each brings to intersecting issues.
He adds, “Philosophy also has a way of surprising you.”
AIs Are Getting Better at Finding and Exploiting Security Vulnerabilities
From an Anthropic blog post:
In a recent evaluation of AI models’ cyber capabilities, current Claude models can now succeed at multistage attacks on networks with dozens of hosts using only standard, open-source tools, instead of the custom tools needed by previous generations. This illustrates how barriers to the use of AI in relatively autonomous cyber workflows are rapidly coming down, and highlights the importance of security fundamentals like promptly patching known vulnerabilities.
[…]
A notable development during the testing of Claude Sonnet 4.5 is that the model can now succeed on a minority of the networks without the custom cyber toolkit needed by previous generations. In particular, Sonnet 4.5 can now exfiltrate all of the (simulated) personal information in a high-fidelity simulation of the Equifax data breach—one of the costliest cyber attacks in history—using only a Bash shell on a widely-available Kali Linux host (standard, open-source tools for penetration testing; not a custom toolkit). Sonnet 4.5 accomplishes this by instantly recognizing a publicized CVE and writing code to exploit it without needing to look it up or iterate on it. Recalling that the original Equifax breach happened by exploiting a publicized CVE that had not yet been patched, the prospect of highly competent and fast AI agents leveraging this approach underscores the pressing need for security best practices like prompt updates and patches...
Designing the future of metabolic health through tissue-selective drug delivery
New treatments based on biological molecules like RNA give scientists unprecedented control over how cells function. But delivering those drugs to the right tissues remains one of the biggest obstacles to turning these promising yet fragile molecules into powerful new treatments.
Now Gensaic, founded by Lavi Erisson MBA ’19; Uyanga Tsedev SM ’15, PhD ’21; and Jonathan Hsu PhD ’22, is building an artificial intelligence-powered discovery engine to develop protein shuttles that can deliver therapeutic molecules like RNA to specific tissues and cells in the body. The company is using its platform to create advanced treatments for metabolic diseases and other conditions. It is also developing treatments in partnership with Novo Nordisk and exploring additional collaborations to amplify the speed and scale of its impact.
The founders believe their delivery technology — combined with advanced therapies that precisely control gene expression, like RNA interference (RNAi) and small activating RNA (saRNA) — will enable new ways of improving health and treating disease.
“I think the therapeutic space in general is going to explode with the possibilities our approach unlocks,” Erisson says. “RNA has become a clinical-grade commodity that we know is safe. It is easy to synthesize, and it has unparalleled specificity and reversibility. By taking that and combining it with our targeting and delivery, we can change the therapeutic landscape.”
Drinking from the firehose
Erisson worked on drug development at the large pharmaceutical company Teva Pharmaceuticals before coming to MIT for his Sloan Fellows MBA in 2018.
“I came to MIT in large part because I was looking to stretch the boundaries of how I apply critical thinking,” Erisson says. “At that point in my career, I had taken about 10 drug programs into clinical development, with products on the market now. But what I didn’t have were the intellectual and quantitative tools for interrogating finance strategy and other disciplines that aren’t purely scientific. I knew I’d be drinking from the firehose coming to MIT.”
Erisson met Hsu and Tsedev, then PhD students at MIT, in a class taught by professors Harvey Lodish and Andrew Lo. The group started holding weekly meetings to discuss their research and the prospect of starting a business.
After Erisson completed his MBA program in 2019, he became chief medical and business officer at the MIT spinout Iterative Health, a company using AI to improve screening for colorectal cancer and inflammatory bowel disease that has raised over $200 million to date. There, Erisson ran a 1,400-patient study and led the development and clearance of the company’s software product.
During that time, the eventual founders continued to meet at Erisson’s house to discuss promising research avenues, including Tsedev’s work in the lab of Angela Belcher, MIT’s James Mason Crafts Professor of Biological Engineering. Tsedev’s research involved using bacteriophages, fast-replicating viruses that infect bacteria, to deliver treatments into hard-to-drug places like the brain.
As Hsu and Tsedev neared completion of their PhDs, the team decided to commercialize the technology, founding Gensaic at the end of 2021. Gensaic’s approach uses a method called unbiased directed evolution to find the best protein scaffolding to reach target tissues in the body.
“Directed evolution means having a lot of different species of proteins competing together for a certain function,” Erisson says. “The proteins are competing for the ability to reach the right cell, and we are then able to look at the genetic code of the protein that has ‘won’ that competition. When we do that process repeatedly, we find extremely adaptable proteins that can achieve the function we’re looking for.”
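The competition Erisson describes can be sketched as a simple selection loop. This is not Gensaic's FORGE platform; the fitness function, the target motif, and all parameters below are invented solely to illustrate the general idea of directed evolution:

```python
import random

random.seed(1)

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # one-letter codes for the 20 amino acids

def fitness(seq, target="MKTWY"):
    """Hypothetical stand-in for 'reaches the right cell': count positions
    that match a target motif. In the real process, fitness is measured
    experimentally, not computed."""
    return sum(a == b for a, b in zip(seq, target))

def mutate(seq, rate=0.2):
    """Randomly change each position with some probability."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else a
                   for a in seq)

def directed_evolution(rounds=30, pop_size=200, length=5):
    # Start with a random population of "protein" sequences.
    pop = ["".join(random.choices(ALPHABET, k=length)) for _ in range(pop_size)]
    for _ in range(rounds):
        pop.sort(key=fitness, reverse=True)       # selection pressure
        survivors = pop[: pop_size // 10]         # top 10% "win" the round
        # Refill the population with mutated copies of the winners.
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)                  # read off the winner

winner = directed_evolution()
```

Repeating the select-and-mutate cycle concentrates the population around high-fitness sequences, which is the computational analogue of reading the genetic code of the protein that "won" the competition.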
Initially, the founders focused on developing protein scaffolds to deliver gene therapies. Gensaic has since pivoted to focus on delivering RNAi molecules like siRNA, which have been hard to deliver outside of the liver.
Today Gensaic has screened more than 500 billion different proteins using phage display and directed evolution. It calls its platform FORGE, for Functional Optimization by Recursive Genetic Evolution.
Erisson says Gensaic’s delivery vehicles can also carry multiple RNA molecules into cells at the same time, giving doctors a novel and powerful set of tools to treat and prevent diseases.
“Today FORGE is built into the idea of multifunctional medicines,” Erisson says. “We are moving into a future where we can extract multiple therapeutic mechanisms from a single molecule. We can combine proteins with multiple tissue selectivity and multiple molecules of siRNA or other therapeutic modalities, and affect complex disease system biology with a single molecule.”
A “universe of opportunity”
The founders believe their approach will enable new ways of improving health by delivering advanced therapies directly to new places in the body. Precise delivery of drugs to anywhere in the body could not only unlock new therapeutic targets but also boost the effectiveness of existing treatments and reduce side effects.
“We’ve found we can get to the brain, and we can get to specific tissues like skeletal and adipose tissue,” Erisson says. “We’re the only company, to my knowledge, that has a protein-based delivery mechanism to get to adipose tissue.”
Delivering drugs into fat and muscle cells could be used to help people lose weight, retain muscle, and prevent conditions like fatty liver disease or osteoporosis.
Erisson says combining RNA therapeutics is another differentiator for Gensaic.
“The idea of multiplexed medicines is just emerging,” Erisson says. “There are no clinically approved drugs using dual-targeted siRNAs, especially ones that have multi-tissue targeting. We are focused on metabolic indications that have two targets at the same time and can take on unique tissues or combinations of tissues.”
Gensaic’s collaboration with Novo Nordisk, announced last year, targets cardiometabolic diseases and includes up to $354 million in upfront and milestone payments per disease target.
“We already know we can deliver multiple types of payloads, and Novo Nordisk is not limited to siRNA, so we can go after diseases in ways that aren’t available to other companies,” Erisson says. “We are too small to try to swallow this universe of opportunity on our own, but the potential of this platform is incredibly large. Patients deserve safer medicines and better outcomes than what are available now.”
