Feed aggregator
Property insurance is becoming the latest climate weapon
Climate change trifecta fueled Georgia wildfires
Iran war is ‘supercharging’ the energy transition, UNFCCC says
Solar ranch aims to prove grazing cattle under panels is farm win-win
Mirova green fund exits Philippine debt after graft scandal
Alberta minister believes new pipeline will have Indigenous support
Beacon Biosignals is mapping the brain during sleep
The human brain remains one of the most fascinating and perplexing mysteries in medicine. Scientists still struggle to match neurological activity with brain function and detect problems early, slowing efforts to treat neurological disorders and other diseases.
Beacon Biosignals is working to make sense of the brain by monitoring its activity while people sleep. The company, which was founded by Jake Donoghue PhD ’19 and former MIT researcher Jarrett Revels, developed a lightweight headband that uses electroencephalogram (EEG) technology to measure brain activity while people enjoy their normal sleep routines at home. Those data are processed by machine-learning algorithms to monitor the effects of novel treatments, find new signs of disease progression, and create patient cohorts for clinical trials.
“There’s a step-change in what becomes possible when you remove the sleep lab and bring clinical-grade EEG into the home,” says Donoghue, who serves as Beacon’s CEO. “It turns sleep from a constrained, facility-based test into a scalable source of high-quality data for diagnostics, drug development, and longitudinal brain health.”
Beacon partners with pharmaceutical companies to accelerate its path to patients. The company’s FDA 510(k)-cleared medical device has already been used in over 40 clinical trials across the globe as part of studies aimed at treating conditions including major depressive disorder, schizophrenia, narcolepsy, idiopathic hypersomnia, Alzheimer’s disease, and Parkinson’s disease.
With each deployment, Beacon learns more about how the brain works — insights it is using to create a “foundation model” of the brain.
“It’s our belief that the dataset that’s going to transform brain health doesn’t exist yet — but we are rapidly creating it,” Donoghue says. “Our platform can characterize the heterogeneity of disease progression, generating dynamic insights that are impossible to fully capture through static modalities like sequencing or imaging. The brain is an electric organ and changes through synaptic plasticity, so tracking brain function across many diseases at scale will allow us to discover novel subgroups of diseases and map them over time.”
Illuminating the brain
Donoghue trained in the Harvard-MIT Program in Health Sciences and Technology, conducting clinical training for an MD while completing his PhD in neuroscience at MIT under the guidance of Earl Miller, MIT's Picower Professor in Brain and Cognitive Sciences and The Picower Institute for Learning and Memory. While in the program, Donoghue trained at Massachusetts General Hospital and Boston Children’s Hospital, where he helped care for patients, including in oncology, during the rise of genomic sequencing to guide precision cancer therapies. He later worked in neurology and psychiatry, where care often relied on more iterative approaches — highlighting an opportunity to bring similarly data-driven precision to brain health.
“What struck me most was the inability to measure brain function in the ways that cardiologists can longitudinally monitor cardiac function in patients from home,” Donoghue says. “At MIT, I built this conviction that processing a lot of brain data and working to correlate that with brain function would be transformative to how these neurological diseases are identified and treated.”
Toward the end of his training, Donoghue began developing his ideas further, engaging with mentors including HST and Harvard Medical School professors Sydney Cash and Brandon Westover. He had met Revels, who was working as a research software engineer in MIT’s Julia Lab, during his PhD, and convinced him to co-found Beacon in 2019.
“We decided building a business to understand the organ of interest — the brain — would be a great start to understanding heterogeneous neuropsychiatric diseases and building better treatments,” Donoghue recalls.
Beacon began as a computation and analytics company building wearable devices to expand clinical impact and reach. From its early days, Beacon has been partnering with large pharmaceutical companies running clinical trials, offering a less invasive way to watch brain activity and learn how their drugs are impacting the brain as well as how patients sleep.
“It was clear sleep was the right window to understand the brain,” Donoghue says. “Neural activity during sleep can be an order of magnitude higher and more structured, almost like a language. It’s a great surface area for understanding brain function and how different drugs affect the brain.”
Donoghue says Beacon’s devices can collect lab-grade data on each patient for multiple sequential nights, resulting in higher quality assessment. The company uses machine learning to extract insights, such as the time patients spend in different sleep stages and the number of small awakenings that occur throughout the night. It can also detect subtle sleep architecture changes that might lead to cognitive decline.
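To make those metrics concrete, the toy sketch below tallies minutes per sleep stage and counts brief returns to wake from a scored hypnogram, assuming the standard 30-second scoring epoch. The stage labels, thresholds, and counting logic are illustrative assumptions, not Beacon's actual algorithms.

```python
# Toy summary of a scored hypnogram (one label per 30-second epoch).
# Illustrative only -- not Beacon Biosignals' actual analysis pipeline.
from collections import Counter

EPOCH_SECONDS = 30  # standard scoring epoch length (assumption)

def summarize_hypnogram(stages):
    """Return minutes per stage and the number of brief awakenings.

    `stages` is a list of per-epoch labels such as
    "wake", "N1", "N2", "N3", or "REM".
    """
    counts = Counter(stages)
    minutes_per_stage = {
        stage: n * EPOCH_SECONDS / 60 for stage, n in counts.items()
    }

    # Count awakenings: transitions from any sleep stage back to "wake"
    # after sleep onset (the first non-wake epoch).
    awakenings = 0
    asleep_seen = False
    for prev, cur in zip(stages, stages[1:]):
        if prev != "wake":
            asleep_seen = True
        if asleep_seen and prev != "wake" and cur == "wake":
            awakenings += 1
    return minutes_per_stage, awakenings

if __name__ == "__main__":
    night = ["wake"] * 10 + ["N1"] * 4 + ["N2"] * 40 + ["N3"] * 30 \
        + ["wake"] * 2 + ["N2"] * 20 + ["REM"] * 24 + ["wake"] * 6
    per_stage, n_awake = summarize_hypnogram(night)
    print(per_stage)
    print("awakenings:", n_awake)
```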
“We’re starting to take features of sleep activity and link them to outcomes in a way that’s never been done with this level of precision,” Donoghue says.
To date, Beacon has taken part in clinical trials for sleep and psychiatric disorders as well as neurodegenerative diseases, where sleep changes can emerge years before the presentation of symptoms.
“We do a lot of work in areas like Alzheimer’s disease and Parkinson’s, which affected my grandfather,” Donoghue says. “We’re analyzing features of rapid-eye-movement and slow-wave sleep to detect early changes that precede clinical symptoms. It’s an opportunity to move these diseases from late recognition to much earlier, data-driven detection.”
Improving brain treatments for millions
Last year, Beacon acquired an at-home sleep apnea testing company that serves more than 100,000 patients each year across the U.S., accelerating access to high-quality, comprehensive testing in the home and expanding the reach of its platform. Then in November, the company raised $97 million to accelerate that expansion.
“The vision has always been to reach patients and help people at scale,” Donoghue says. “What’s powerful is that we’re building a longitudinal record of brain function over time,” he adds. “A patient might come in for sleep apnea screening, but if they develop Parkinson’s years later, that earlier data becomes a window into the disease before symptoms emerged. That turns routine testing into a foundation for entirely new prognostic biomarkers — and a path to detecting and intervening in brain disease earlier, potentially before symptoms ever begin.”
Inequity arises from multi-gas mitigation
Nature Climate Change, Published online: 01 May 2026; doi:10.1038/s41558-026-02628-7
Addressing non-CO2 greenhouse gases alongside CO2 is essential for climate mitigation, but distributional effects remain a major concern. Now a study shows that when climate policy extends beyond CO2, the resulting costs are unevenly distributed across households worldwide.
Distributional effects of expanding climate targets beyond CO2
Nature Climate Change, Published online: 01 May 2026; doi:10.1038/s41558-026-02622-z
In response to the large contribution of non-CO2 GHG to global warming, pricing of their emissions has been proposed as a cost-effective mitigation option. The authors find that such multi-GHG pricing can be more regressive than CO2-only pricing, with a relative increase in burden for low-income households.
Utah’s New Law Targeting VPNs Goes Into Effect Next Week
For the last couple of years, we’ve watched the same predictable cycle play out across the globe: a state (or country) passes a clunky age-verification mandate, and, without fail, Virtual Private Network (VPN) usage surges as residents scramble to maintain their privacy and anonymity. We've seen this everywhere—from states like Florida, Missouri, Texas, and Utah, to countries like the United Kingdom, Australia, and Indonesia.
Instead of realizing that mass surveillance and age gates aren't exactly crowd favorites, Utah lawmakers have decided that VPNs themselves are the real issue.
Next week, on May 6, 2026, Utah will become, to EFF’s knowledge, the first state in the nation to target the use of VPNs to avoid legally mandated age-verification gates. While advocates in states like Wisconsin successfully forced the removal of similar provisions due to constitutional and technical concerns, Utah is proceeding with a mandate that threatens to significantly undermine digital privacy rights.
What the Bill Does
Formally known as the “Online Age Verification Amendments,” Senate Bill 73 (SB 73) was signed by Governor Spencer Cox on March 19, 2026. While the majority of the bill consists of provisions related to a 2% tax on revenues from online adult content that is set to take effect in October, one of the more immediate concerns for EFF is the section regulating VPN access, which goes into effect this coming Wednesday.
The VPN Provisions
The new law explicitly addresses VPN use in Section 14, which amends Section 78B-3-1002 of existing Utah statutes in two primary ways:
- Regulation based on physical location: Under the law, an individual is considered to be accessing a website from Utah if they are physically located there, regardless of whether they use a VPN, proxy server, or other means to disguise their geographic location.
- Ban on sharing VPN instructions: Commercial entities that host "a substantial portion of material harmful to minors" are now prohibited from facilitating or encouraging the use of a VPN to bypass age checks. This includes providing instructions on how to use a VPN or providing the means to circumvent geofencing.
By holding companies liable for verifying the age of anyone physically in Utah, even those using a VPN, the law creates a massive "liability trap." Just like we argued in the case of the Wisconsin bill, if a website cannot reliably detect a VPN user's true location and the law requires it to do so for all users in a particular state, then the legal risk could push the site to either ban all known VPN IPs, or to mandate age verification for every visitor globally. This would subject millions of users to invasive identity checks or blocks to their VPN use, regardless of where they actually live.
"Don't Ask, Don't Tell"In practice, SB 73 is different from the Wisconsin proposal in that it stops short of a total VPN ban. Instead, it discourages using VPNs by imposing the liability described above and by muzzling the websites themselves from sharing information about VPNs. This raises significant First Amendment concerns, as it prevents platforms from providing basic, truthful information about a lawful privacy tool to their users.
Unlike previous drafts seen in other states, SB 73 doesn't explicitly ban the use of a VPN. Under a "don't ask, don't tell" style of enforcement, websites likely only have an obligation to ask for proof of age if they actually learn that a user is physically in Utah and using a VPN. If a site doesn’t know a user is in Utah, their broader obligation to police VPNs remains murky. So, while SB 73 isn’t as extreme as the discarded Wisconsin proposal, it remains a dangerous precedent.
Technical Feasibility
Then there is also the question of technical feasibility: Blocking all known VPN and proxy IP addresses is a technical whack-a-mole that likely no company can win. Providers add new IP addresses constantly, and no comprehensive blocklist exists. Complying with Utah’s requirements would require impossible technical feats.
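To see why, consider the naive approach a site might attempt: checking each visitor's IP address against a static list of "known" VPN ranges. The sketch below uses made-up documentation addresses; the point is that any such snapshot goes stale as soon as providers rotate ranges, producing both false negatives and false positives.

```python
# Naive VPN "detection" by static blocklist -- a sketch of why this fails.
# All addresses and ranges below are made up for illustration.
import ipaddress

# A snapshot of "known" VPN exit ranges. Providers add and rotate ranges
# constantly, so any static list like this is immediately out of date.
KNOWN_VPN_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),   # documentation range, stand-in
    ipaddress.ip_network("198.51.100.0/25"),  # documentation range, stand-in
]

def looks_like_vpn(ip_str: str) -> bool:
    ip = ipaddress.ip_address(ip_str)
    return any(ip in net for net in KNOWN_VPN_RANGES)

# A "new" exit address outside the snapshot slips through (false negative) ...
print(looks_like_vpn("198.51.100.200"))  # False
# ... while a user who happens to share a listed range gets flagged (false positive).
print(looks_like_vpn("203.0.113.7"))     # True
```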
The internet is built to, and will always, route around censorship. If Utah successfully hampers commercial VPN providers, motivated users will transition to non-commercial proxies, private tunnels through cloud services like AWS, or residential proxies that are virtually indistinguishable from standard home traffic. These workarounds will emerge within hours of the law taking effect. Meanwhile, the collateral damage will fall on businesses, journalists, and survivors of abuse who rely on commercial VPNs for essential data security.
These provisions won't stop a tech-savvy teenager, but they certainly will impact the privacy of every regular Utah resident who just wants to keep their data out of the hands of brokers or malicious actors.
Uncharted Territory
Lawmakers have watched age-verification mandates fail and, instead of reconsidering the approach, have decided to wage war on privacy itself. As the Cato Institute states:
“The point is that when an internet policy can be avoided by a relatively common technology that often provides significant privacy and security benefits, maybe the policy is the problem. Age verification regimes do plenty of damage to online speech and privacy, but attacking VPNs to try to keep them from being circumvented is doubling down on this damaging approach.”
Attacks on VPNs are, at their core, attacks on the tools that enable digital privacy. Utah is setting a precedent that prioritizes government control over the fundamental architecture of a private and secure internet, and it won’t stop at the state’s borders. Regulators in countries outside the U.S. are still eyeing VPN restrictions, with the UK Children’s Commissioner calling VPNs a “loophole that needs closing” and the French Minister Delegate for Artificial Intelligence and Digital Affairs saying VPNs are “the next topic on my list” after the country enacted a ban on social media for kids under 15.
As this law goes into effect next week, we are entering uncharted territory. Lawmakers who can’t distinguish between a security tool and a "loophole" are now writing the rules for one of the most complex infrastructures on Earth. And we can assure you that the result won’t be a safer internet, only a less private one.
Unlocking mysteries of the universe through math
GPS navigation, cryptography, quantum computing — while some of humankind’s greatest advancements have been invented by pioneers from various cultures, they were founded upon one common grammar: mathematics.
“Mathematics is the language with which God wrote the universe,” said the famous Italian astronomer, physicist, and philosopher Galileo Galilei, who, among his various scientific contributions, helped provide evidence for the idea that the sun is at the center of the solar system.
Although mostly conveyed through combinations of numbers, letters, and signs that may seem enigmatic to many, math equations hold within them countless stories — playbooks that generations of wonderers and inventors have crafted, refined, and shared in an attempt to make sense of a world full of unknown variables.
“I have faith in mathematics that, when there seems to be something special happening, when there’s some coincidence, that it’s not just a coincidence,” says mathematician Amanda Burcroff, “but that there’s actually some really deep, interesting, and involved reason for why that should be true.”
Burcroff’s research is focused on algebraic combinatorics, an area that provides discrete frameworks for understanding algebraic and geometric spaces that ubiquitously arise across science. This year, she joins MIT’s Department of Mathematics as a postdoc through the School of Science Dean’s Fellowship. Working with Professor Alexander Postnikov, Burcroff is building upon her techniques with the goal of applying them to other areas such as theoretical physics — a field that seeks to uncover the fundamental laws governing everything from subatomic particles to the cosmos itself.
“I have trust that if you keep following the path, eventually you’ll find the treasure — that is, whatever theorem or proof — that you’re looking for,” she says.
Exploring possibilities and redefining rules
Like many children, Burcroff once saw math as a subject that entailed lots of memorizing. Although she felt that it came naturally to her, she didn’t always find math very interesting.
In high school, as she came to learn about areas like calculus and geometry, Burcroff started to see the discipline in a different light — a creative approach to exploring what’s possible.
“[In] most other fields, the rules are imposed on you by the world,” she says, “but in math, you get full freedom to lay down those rules and then figure out what the implications of those rules are by using logical consequence.”
In 2015, Burcroff began her bachelor’s degree at the University of Michigan with a major in math and a minor in computer science. There, she entered the world of combinatorics — a branch of math dealing with counting, arranging, and combining objects that forms a crucial basis for understanding the complexity of problems, as well as the limits of computer algorithms.
“When I was starting out, I was just happy to have any mystery that anyone gave me,” she says.
Math was, to Burcroff, like a fun game with levels to complete. But during a study abroad program in Budapest, Hungary — the hometown of Paul Erdős, who is considered to be one of the most prolific mathematicians of the 20th century — it became more exciting to play when she was handed puzzles no one had yet solved.
“It turns out that if you put down the right set of rules, there’s an infinite number of beautiful things that you can do with it,” she says.
A journey of endless mysteries to unlock
In 2019, Burcroff embarked on a journey to pursue further research in England, later completing a master’s degree in pure mathematics at the University of Cambridge, then a research master’s degree at Durham University. In 2021, she returned to the United States and began her PhD at Harvard University, with the guidance of Professor Lauren Williams.
Among the several riddles she has unraveled over the years, Burcroff helped unify different mathematical approaches to understanding why certain systems work so reliably. Think of it as finding out that two seemingly different sets of instructions actually lead to the same place. By demonstrating their connections, her work has revealed an underlying, overarching mathematical architecture — a finding that later helped Burcroff and her collaborators tackle one of the many enduring riddles in her field.
Generalized cluster algebras form the basis for describing geometries that appear throughout physics. For more than a decade, mathematicians suspected these building blocks were created only by adding up ingredients and never subtracting, although no one was able to prove it. In 2024, Burcroff and her collaborators published a paper demonstrating that these spaces have nice positivity properties by developing a new way to count and organize patterns — helping untangle a long-standing conjecture, whose potential implications span from predicting particle collision outcomes to describing the spaces appearing in string theory.
These findings have earned Burcroff numerous prestigious awards including a National Science Foundation Graduate Research Fellowship, a British Marshall Scholarship, and a Jack Kent Cooke Graduate Fellowship.
Despite the tremendous number of problems she has answered, new ones keep arising.
“Every time you unlock one of them, it gives you a bunch of paths to new connected mysteries,” Burcroff says.
At MIT, she is working with Postnikov, whose research on combinatorics and positivity-type problems has presented a radically different way to calculate fundamental quantities in quantum field theory.
“Burcroff is conducting research across disciplinary boundaries,” says Postnikov.
He adds: “I am sure that she will have a lot of fruitful interactions with researchers in other MIT departments.”
Burcroff’s goal is to apply combinatorial techniques to broader physical contexts and direct applications, especially those with implications to topics like mirror symmetry, a principle in string theory suggesting that very different-looking geometric spaces can be mathematically equivalent.
While “doing math is 99 percent trying something and failing,” Burcroff says it is this same challenge that keeps her motivated. To her, it is not about reaching a destination, but rather about the continuous “process of discovery,” one she hopes to share beyond the typical classroom.
To make math more accessible, especially among underrepresented groups, Burcroff has worked with mentorship programs including Harvard’s Real Representations and Math Includes, Cambridge Girls’ Angle, and MIT PRIMES. During her time as a postdoc, she hopes to continue this outreach and explore ways to get involved with other support groups at MIT’s Department of Mathematics.
Study: Gene circuits reshape DNA folding and affect how genes are expressed
When a gene is turned on in a cell, it creates a ripple effect along the DNA strand, changing the physical structure of the strand. A new study by MIT researchers shows that these ripples can stimulate or suppress neighboring genes.
These effects, which result from the winding or unwinding of neighboring DNA, are determined by the order of genes along a strand of DNA. Genes upstream of the active gene are usually turned up, while those downstream are inhibited.
The new findings offer guidance that could make it easier to control the output of synthetic gene circuits. By altering the relative ordering and arrangement of genes, or “gene syntax,” researchers could create circuits that synergize to maximize their output, or that alternate the output of two different genes.
“This is really exciting because we can coordinate gene expression in ways that just weren’t possible before,” says Katie Galloway, an assistant professor of chemical engineering at MIT. “Syntax will be really useful for dynamic circuits. Now we have the ability to select not only the biochemistry of circuits, but also the physical design to support dynamics.”
Galloway is the senior author of the study, which appears today in Science. MIT postdoc Christopher Johnstone PhD ’26 is the paper’s lead author. Other authors include MIT graduate student Kasey Love, members of the lab of Brandon DeKosky, an MIT associate professor of chemical engineering, and researchers from Peter Zandstra’s lab at the University of British Columbia and the labs of Christine Mummery and Richard Davis at Leiden University Medical Center in the Netherlands.
Gene syntax
When a gene is copied into messenger RNA, or “transcribed,” the double-stranded DNA helix must be unwound so that an enzyme called RNA polymerase can access the DNA and start copying it. That unwinding leads to physical changes in the structure of the DNA strand.
Upstream of the gene, DNA becomes looser, while downstream, it becomes more tightly wound. These changes affect RNA polymerase’s ability to access the DNA: Upstream of an active gene, it’s easier for the enzyme to attach; downstream, it’s more difficult.
In a study published in 2022, Galloway and Johnstone performed computational modeling that explored how these biophysical changes might influence gene expression. They studied three different arrangements, or types of syntax: tandem, divergent, and convergent.
Most synthetic gene circuits are designed in a tandem arrangement, with one gene followed by another downstream. In a divergent arrangement, neighboring genes are transcribed in opposite directions (away from each other), and in convergent syntax, they are transcribed toward each other.
The modeling suggested that the divergent arrangement was most likely to produce circuits where both genes are expressed at a high level. Tandem arrangements were predicted to result in the downstream gene being suppressed by the upstream gene.

In the new study, the researchers wanted to see if they could observe these predicted phenomena in human cells.
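That qualitative rule, in which an active gene boosts a neighbor sitting upstream of it and suppresses one sitting downstream, can be written out as a toy scoring exercise over the three syntaxes. This cartoon is only meant to illustrate the logic of the prediction; it is not the researchers' biophysical model.

```python
# Toy illustration of "gene syntax": how transcription of one gene is
# predicted to nudge its neighbor, given the qualitative rule that an
# active gene loosens DNA upstream of itself (+) and overwinds it
# downstream (-). Cartoon only -- not the study's biophysical model.

def coupling(neighbor_position: str) -> int:
    """Effect received by a neighbor that sits upstream or downstream
    of an active gene (relative to the active gene's direction)."""
    return {"upstream": +1, "downstream": -1}[neighbor_position]

# For two adjacent genes A and B, each syntax fixes where each gene sits
# relative to the other's direction of transcription.
SYNTAX = {
    # A and B both point right: A is upstream of B, B is downstream of A.
    "tandem": {"A": "upstream", "B": "downstream"},
    # A and B point away from each other: each sits upstream of the other.
    "divergent": {"A": "upstream", "B": "upstream"},
    # A and B point toward each other: each sits downstream of the other.
    "convergent": {"A": "downstream", "B": "downstream"},
}

for name, positions in SYNTAX.items():
    effects = {gene: coupling(pos) for gene, pos in positions.items()}
    print(f"{name:10s} A:{effects['A']:+d}  B:{effects['B']:+d}")
# divergent -> both boosted; tandem -> downstream gene suppressed;
# convergent -> both suppressed.
```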
“Normally, we think about gene circuits and pieces of DNA as these lines that we draw, but they’re polymers that have physical characteristics,” Galloway says. “The thing that we were trying to solve in this paper was: When you put two genes on the same piece of DNA, how does their physical interaction become coupled?”
The researchers engineered circuits that each contained two genes, in either a tandem, divergent, or convergent configuration, and introduced them into human cell lines and human induced pluripotent stem cells.
The results confirmed what their modeling had predicted: In divergent circuits, expression of both genes was amplified. In tandem circuits, turning on the upstream gene suppressed the expression of the downstream gene.
These effects produced as much as a 25-fold increase or decrease in gene expression, and they could be seen at distances of up to 2,000 base pairs between genes.
Using a high-resolution genome mapping technique called Region Capture Micro-C, the researchers were also able to analyze how the DNA structure changed when nearby genes were being transcribed.
As predicted, they found that the DNA regions downstream from an active gene formed tightly twisted structures known as plectonemes, similar to the tangles seen in a twisted telephone cord. These structures make it harder for RNA polymerase to bind to DNA.
To engineer these cells, the researchers used a new system they developed with the LUMC team called STRAIGHT-IN Dual, which allows them to efficiently insert two genes into the same DNA strand at both alleles. This system is being reported in a second paper published today, in Nature Biomedical Engineering.
Precise control
The new findings could help guide the design of synthetic gene circuits, which are usually designed to be controlled by biochemical interactions with activator or repressor molecules. Now, circuit designers can also perform biophysical manipulations to enhance or repress gene expression.
“Everyone thinks about the components they need, and the biochemical properties they need to build a circuit,” Galloway says. “Now, we have added the physical construction of those components, which is going to change how those biochemical units are interpreted.”
As a demonstration of one potential application, the researchers built synthetic circuits containing the genes for two segments of a novel antibody discovered by the DeKosky lab, used to treat yellow fever, and incorporated them into human cells. As they expected, the divergent syntax produced larger quantities of the yellow fever antibody.
Galloway’s lab has also used this approach to optimize the output of synthetic gene circuits they previously reported that could be used to deliver gene therapy or to reprogram adult cells into other cell types.
This strategy could also be used to build a variety of other types of dynamic synthetic circuits, such as toggle switches, oscillators, or pulse generators, for any application that requires precise control over gene expression.
“If you want coordinated expression, a divergent circuit is great. If you want something that’s either/or, you can imagine using a convergent or tandem circuit, so when one turns on, the other turns off, and you can alternate pulses,” Galloway says. “Now that we understand the syntax, I think this will pave the way for us to program dynamic behaviors.”
The research was funded, in part, by the National Institutes of Health, the National Institute for General Medical Sciences, a National Science Foundation CAREER Award, the Pershing Square Foundation, the Air Force Research Laboratory, and the Koch Institute Support (core) Grant from the National Cancer Institute.
The hidden structure behind a widely used class of materials
Materials called relaxor ferroelectrics have been used for decades in technologies like ultrasounds, microphones, and sonar systems. Their unique properties come from their atomic structure, but that structure has stubbornly eluded direct measurement.
Now a team of researchers from MIT and elsewhere has directly characterized the three-dimensional atomic structure of a relaxor ferroelectric for the first time. The findings, reported today in Science, provide a framework for refining models used to design next-generation computing, energy, and sensing devices.
“Now that we have a better understanding of exactly what’s going on, we can better predict and engineer the properties we want materials to achieve,” says corresponding author James LeBeau, MIT’s Kyocera Professor of Materials Science and Engineering. “The research community is still developing methods to engineer these materials, but in order to predict the properties those materials will have, you have to know if your model is right.”
In their paper, the researchers describe how they used an emerging technique to reveal the distribution of electric charges in the material, with a surprising result.
“We realized the chemical disorder we observed in our experiments was not fully considered previously,” say co-first authors Michael Xu PhD ’25 and Menglin Zhu, who are both postdocs at MIT. “Working with our collaborators, we were able to merge the experimental observations with simulations to refine the models and better predict what we see in experiments.”
Joining Zhu, Xu, and LeBeau on the paper are Colin Gilgenbach and Bridget R. Denzer, MIT PhD students in materials science and engineering; Yubo Qi, an assistant professor at the University of Alabama at Birmingham; Jieun Kim, an assistant professor at the Korea Advanced Institute of Science and Technology; Jiahao Zhang, a former PhD student at the University of Pennsylvania; Lane W. Martin, a professor at Rice University; and Andrew M. Rappe, a professor at the University of Pennsylvania.
Probing disordered materials
Leading simulations of relaxor ferroelectrics suggest that when an electric field is applied, the interactions of positively and negatively charged atoms in different nanoregions of the material help give rise to exceptional energy storage and sensing capabilities. The details of those nanoregions have been impossible to directly measure to date.
For their Science paper, the researchers studied a lead magnesium niobate-lead titanate alloy, a relaxor ferroelectric material used in sensors, actuators, and defense systems. They used an emerging measurement technique, called multi-slice electron ptychography (MEP), in which researchers move a nanoscale-sized probe of high-energy electrons over a material and measure the resulting electron diffraction patterns.
“We do this in a sequential way, and at each position, we acquire a diffraction pattern,” Zhu explains. “That creates regions of overlap, and that overlap has enough information to use an algorithm to iteratively reconstruct three-dimensional information about the object and the electron wave function.”
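As a rough illustration of the "overlap plus iteration" idea Zhu describes, the sketch below runs a heavily simplified, single-slice toy reconstruction in the spirit of the ptychographic iterative engine (PIE) family: it simulates overlapping diffraction amplitudes from a known probe and a synthetic phase object, then repeatedly enforces those measured amplitudes to recover the object. The actual study uses multi-slice electron ptychography, which is far more sophisticated and also recovers depth information; every number and design choice below is an assumption made purely for illustration.

```python
# Toy single-slice ptychographic reconstruction with a PIE-style update.
# Drastically simplified illustration of "overlapping diffraction patterns
# + iterative algorithm"; NOT the multi-slice method used in the study.
import numpy as np

rng = np.random.default_rng(0)
N, W, STEP = 64, 24, 6                       # object size, probe size, scan step

# Ground-truth complex (phase-only) object -- unknown in a real experiment.
obj_true = np.exp(1j * 0.5 * rng.random((N, N)))
# Known probe: a soft Gaussian spot.
y, x = np.mgrid[:W, :W] - W / 2
probe = np.exp(-(x**2 + y**2) / (2 * (W / 5) ** 2)).astype(complex)

positions = [(r, c) for r in range(0, N - W, STEP) for c in range(0, N - W, STEP)]

def patch(a, r, c):
    return a[r:r + W, c:c + W]

# "Measured" diffraction amplitudes at each overlapping probe position.
measured = [np.abs(np.fft.fft2(probe * patch(obj_true, r, c))) for r, c in positions]

def data_misfit(o):
    errs = [np.mean(np.abs(np.abs(np.fft.fft2(probe * patch(o, r, c))) - amp))
            for (r, c), amp in zip(positions, measured)]
    return float(np.mean(errs))

# Iterative reconstruction: start from a flat guess and repeatedly enforce
# the measured amplitudes, feeding corrections back into the object estimate.
obj = np.ones((N, N), complex)
print(f"data misfit before: {data_misfit(obj):.4f}")

alpha = 0.9
for _ in range(30):
    for (r, c), amp in zip(positions, measured):
        o_patch = patch(obj, r, c).copy()
        exit_wave = probe * o_patch
        f = np.fft.fft2(exit_wave)
        f = amp * np.exp(1j * np.angle(f))   # keep phase, enforce measured amplitude
        update = np.fft.ifft2(f) - exit_wave
        obj[r:r + W, c:c + W] = o_patch + alpha * np.conj(probe) * update / (np.abs(probe) ** 2).max()

print(f"data misfit after:  {data_misfit(obj):.4f}")
```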
The technique revealed a hierarchy of chemical and polar structures that spanned from atomic to mesoscopic scales. The researchers also found that many regions of differing polarization in the material were much smaller than predicted by the leading simulations. The researchers then fed their new data back into those computer simulations and refined the models to better reflect their findings under different conditions.
“Previously, these models basically had random regions of polarization, but they didn’t tell you how those regions correlate with each other,” Xu says. “Now we can tell you that information, and we can see how individual chemical species modulate polarization depending on the charge state of atoms.”
Toward better materials
Zhu says the paper demonstrates the potential of electron ptychography to study complex materials and opens up new avenues of research into complex, disordered materials.
“This study is the first time in the electron microscope that we’ve been able to directly connect the three-dimensional polar structure of relaxor ferroelectrics with molecular dynamics calculations,” Xu says. “It further proves you can get three-dimensional information out of the sample using this technique.”
The researchers also believe the approach could one day help engineer materials with advanced electronic behaviors for a range of improved memory storage, sensing, and energy technologies.
“Materials science is incorporating more complexity into the material design process — whether that’s for metal alloys or semiconductors — as AI has improved and our computational tools have become more advanced,” LeBeau says. “But if our models aren’t accurate enough and we have no way to validate them, it’s garbage in garbage out. This technique helps us understand why the material behaves the way it does and validate our models.”
The work was supported, in part, by the U.S. Army Research Laboratory, the U.S. Office of Naval Research, the U.S. Department of War, and a National Science Foundation Graduate Fellowship. The researchers also used MIT.nano facilities.
How neurons sense bacteria in the gut
Recent studies suggest animals and people alike have close and complex relationships with the bacteria around and within them. The human gut microbiome, for instance, has been associated with both depression and Parkinson’s disease. To go beyond association toward understanding of the actual mechanisms that enable the bacterial microbiome to influence brain function, a new study by neuroscientists in The Picower Institute for Learning and Memory at MIT examines the mechanisms at work in a model “bacterial specialist,” the nematode Caenorhabditis elegans.
In the new study in Current Biology, the team, led by Picower Fellow Cassi Estrem in the lab of Associate Professor Steven Flavell at The Picower Institute for Learning and Memory, identifies the specific chemicals that a key neuron in C. elegans senses, both in the bacteria that it eats and in the bacteria that it needs to avoid ingesting.
“In our bodies, our own cells are outnumbered by the bacterial cells living in and on us. There’s an increasing recognition that this has a profound impact on human health,” says Flavell, an investigator of the Howard Hughes Medical Institute and faculty member of MIT’s Department of Brain and Cognitive Sciences. “It’s been clear that there are links for some time. Our study aimed to identify the hard mechanisms of how a host nervous system is affected by bacteria in the alimentary canal.”
Achieving a fundamental mechanistic understanding of how neurons interact with bacteria could help improve attempts to intervene in or manipulate those interactions with therapeutic drugs or supplements, Flavell says.
Mmm … sugar
Flavell calls C. elegans a “bacterial specialist” because the tiny, transparent worm has evolved to eat bacteria as its diet, while also needing to avoid pathogenic bacteria that can prove to be its undoing. This has led it to develop a nervous system especially well-attuned to sorting out what is food and what is foe. In 2019, the lab discovered that the neuron NSM, which projects into the worm’s alimentary canal, employs two “acid sensing ion channels” (ASICs) to detect when certain bacteria have been ingested. Notably, those ion channels are analogous to ones found in neurons in humans. When NSM detects yummy bacteria, it releases serotonin that causes the worm to increase its feeding rate and slow its slithering so that it can stay to dine on the surrounding meal.
To really understand how this works, Flavell and Estrem realized they needed to know exactly what the ion channels are detecting in the bacteria. To get started, they exposed worms to 20 different kinds of bacteria the worms are known to encounter and found that they all activated NSM activity to varying extents. Then they broke the bacteria down into more and more specific chemical components to see which one or ones triggered NSM. The experiments ruled out many components, including DNA, lipids, proteins, and simple sugars, and instead found that it’s specifically the polysaccharide sugars that coat many bacteria that drive NSM activation. In particular, in gram-positive bacteria, a chemical called peptidoglycan activated NSM. In gram-negative bacteria, a different polysaccharide was apparently in play.
Estrem and Flavell’s team also ran experiments showing that polysaccharides from bacteria in general, and peptidoglycan in particular, not only trigger NSM electrical activity, but actually promote the feeding and slowing behaviors. They also showed that genetically knocking out the ASICs abolished these responses. In all, they demonstrated that polysaccharide and peptidoglycan detection is sufficient to trigger the worm’s behaviors and requires the ASICs.
Better not eat this
Having shown what exactly triggers the worms to recognize their bacterial food, the researchers wondered whether they could also pinpoint a danger sign the worm finds in harmful bacteria. For these experiments, they carefully used Serratia marcescens, a bacterium that’s also infectious for humans. Some strains of the bacteria have a red color, while others do not. The red ones, which have a pigment called prodigiosin, tend to be much more lethal for worms. In their testing, the researchers found that when NSM detected the non-pigmented bacteria, the neuron still activated and the worms still ingested the bacteria, but when prodigiosin was present, NSM did not activate and the worm did not pump it in or slow down to eat.
Adding prodigiosin to normally yummy bacteria also suppressed NSM’s usual response. In other words, the worms have evolved their digestive behavior (and the detectors within NSM) to avoid ingesting a chemical specifically associated with danger.
Flavell says it’s likely that some of the fundamental mechanisms highlighted in the new paper will inform studies of similar mechanisms in other animals.
“We developed a way of identifying these pathways by studying this organism that specializes in bacterial detection and displays robust responses,” Flavell explains. “But there’s no reason these pathways should be limited to C. elegans. The molecular players we identified are found in many species, including mammals.”
In addition to Estrem and Flavell, the paper’s other authors are Malvika Dua, Colby Fees, Greg Hoeprich, Matthew Au, Bruce Goode, and Lingyi Deng.
The National Institutes of Health, the McKnight Foundation, the Alfred P. Sloan Foundation, the Howard Hughes Medical Institute, and The Freedom Together Foundation provided support for the study.
A materials scientist’s playground
Scientists and engineers around the world are working to improve quantum bits, or qubits, the minuscule building blocks of the quantum computer. Qubits are incredibly sensitive, making it easy for errors to be introduced, lowering device yield. But a new cluster tool at MIT.nano introduces capabilities that will allow researchers to continue advancements in qubit performance.
Passersby outside MIT.nano may have recently noticed a complex-looking piece of equipment being installed in the first-floor cleanroom. What looks like a sci-fi movie prop is actually a state-of-the-art, custom-built molecular beam epitaxy (MBE) system: a physical vapor deposition tool that operates under ultra-high vacuum to produce high-quality thin films. With the ability to grow different crystalline materials on a wafer, the tool will support quantum researchers and materials scientists by allowing them to study how film growth affects the properties of the materials used in making qubits.
“To realize the full promise of quantum computing, we need to build qubits that are robust, reproducible, and extensible,” says William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics at MIT. “To date, most of the improvements to superconducting qubit performance are traceable to circuit design — essentially, designing qubit circuits that are less sensitive to their environmental noise. However, those improvements have largely run their course. Going forward, we need to address the fundamental materials science and fabrication engineering required to reduce the sources of environmental noise. This multi-chamber, cassette-loaded, 200-millimeter wafer MBE system is exactly the right tool at the right time. And there’s no place better to do this research than at MIT.nano.”
That is because MIT.nano is preconditioned to receive this type of system with physical space, climate controls, policies and procedures for researchers, and expert staff to manage the lab. Through an equipment support plan, Oliver’s Engineering Quantum Systems (EQuS) group is able to install and run the tool inside MIT.nano, a high-performance, safe, and reliable environment.
A controlled environment is essential for the MBE. “Think of this system like an inverted International Space Station (ISS),” explains Patrick Strohbeen, research scientist in the EQuS group. “The ISS is a small chamber of atmosphere surrounded by the vacuum of space. This MBE system is a chamber of space-level vacuum surrounded by atmosphere.” That vacuum of space is kept at a steady negative 90 degrees Celsius, which enables precise growth of thin films on an atomic scale. It is the largest single deposition chamber (1-meter diameter) the manufacturer, DCA, has sold in the United States.
The journey of a wafer
The system, which in total takes up 600 square feet, is made up of six chambers. First is the load lock, where the wafer is placed into the system and brought down from atmospheric pressure to near the vacuum level of space. Then, the wafer enters the distribution center. This space acts like a central hub, transferring the wafers to other chambers. Next is the deposition, or “growth,” chamber. This is where the system’s primary function takes place — depositing materials, specifically atoms of superconducting metal, onto a substrate, typically silicon. From there, it moves to the oxidation chamber, which facilitates the growth of key ceramic materials for qubits. A fifth storage chamber can hold an additional 10 wafers within the vacuum.
A unique aspect of this system is its sixth chamber, designed for X-ray photoelectron spectroscopy (XPS). Using this chamber, researchers direct X-ray photons at the surface; when a photon strikes, it excites an electron inside the material, which is ejected and picked up by a sensor that tells the researcher about the chemical environment the electron came from. As individual layers of atoms are put down in the growth chamber, scientists can move the wafer to the XPS chamber to measure changes in the material structure of the film and back again, all while keeping it inside the vacuum space.
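The physics behind that sensor readout reduces to a simple energy balance: the electron's binding energy, which fingerprints its chemical environment, is roughly the X-ray photon energy minus the measured kinetic energy minus the spectrometer's work function. A minimal sketch, with a placeholder work-function value rather than any real calibration for this system:

```python
# XPS in one line of arithmetic: binding energy = photon energy
# - measured kinetic energy - spectrometer work function.
# The work-function value below is a placeholder, not a calibration
# for the MIT.nano system.

def binding_energy_ev(photon_ev: float, kinetic_ev: float, work_fn_ev: float = 4.5) -> float:
    return photon_ev - kinetic_ev - work_fn_ev

# Example: Al K-alpha photons (1486.6 eV) and an electron detected at
# ~1381.5 eV kinetic energy imply a binding energy near 100.6 eV, in the
# range associated with silicon 2p core levels.
print(binding_energy_ev(1486.6, 1381.5))
```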
Why is this important? “The quantum community has excellent device physicists and device engineers,” says Strohbeen. “The last piece of the puzzle is: We need to understand the materials platform that we’re using for these devices.” The buried interfaces, so far, have been understudied due to the difficulty in probing them, he explains.
For those of us who are not MBE experts, think of the snow that fell in Massachusetts this winter. How can you tell how much ice is on the pavement without removing all of the snow on top of it? And without changing the natural setting where the snow, ice, and pavement meet? With this system, specifically the XPS chamber, scientists can study the interfaces of buried materials without disturbing the physical or chemical environments. “It is a materials scientist’s playground,” jokes Strohbeen — a controlled space where researchers can learn about and explore materials’ interactions within layers of atoms.
Why MIT.nano?
When Oliver, who is also the director of the MIT Center for Quantum Engineering, secured the MBE Quantum, the next question was where to put it. Enter MIT.nano. Housing 45,000 square feet of cleanroom, this facility exists at MIT to support complex, sensitive equipment with both the infrastructure and the staff needed to maintain it.
“MIT.nano’s ultra-stable building utilities and lab environment are exactly what is needed to support a system that demands extreme repeatability and purity,” says Nick Menounos, MIT.nano associate director of infrastructure. “The success of this installation grew from the early collaboration. Professor Oliver engaged the MIT.nano team in the procurement process almost two years in advance. That foresight, combined with the infrastructure momentum we gained from the recent CHIPS Act project, meant that we could prepare the cleanroom perfectly. We compressed the installation process that normally takes several months and had this extraordinary machine running in under three weeks.”
“From the very beginning, the MIT.nano staff were helpful, knowledgeable, and willing to go above and beyond to make this happen,” says Oliver. “While the MIT.nano facility is certainly an infrastructural crown jewel at MIT, it’s the MIT.nano staff who make it the national treasure it is today.”
Positioning the MBE Quantum in the cleanroom helps the team focus on scalability and device yield. Humidity and particle count, two things carefully measured and maintained at MIT.nano, can affect the output of the device. Minimizing as many variables as possible is key to improving qubit performance. The cleanroom also allows for new device research because an array of fabrication and metrology tools are available without having to leave the clean environment.
“We’re really excited to see what we can do with it,” says Strohbeen. “We bought it as a materials science tool, and it will also be a device development tool due to the flexibility of having it in the cleanroom.”
The MBE system was purchased through a combination of grants from the Army Research Office (ARO) and from the Laboratory for Physical Sciences (LPS). The ARO grant, a Defense University Research Instrumentation Program grant, is the premier grant from ARO for funding large capital equipment purchases that should prove disruptive in technologically relevant areas. It arrives at an important time on campus, as one of MIT’s strategic initiatives — the MIT Quantum Initiative — aims to apply quantum breakthroughs to the most consequential challenges in science, technology, industry, and national security.
Open Records Laws Reveal ALPRs’ Sprawling Surveillance. Now States Want to Block What the Public Sees.
Reporters, community advocates, EFF, and others have used public records laws to reveal and counteract abuse, misuse, and fraudulent narratives around how law enforcement agencies across the country use and share data collected by automated license plate readers (ALPRs). EFF is alarmed by recent laws in several states that have blocked public access to data collected by ALPRs, including, in some cases, information derived from ALPR data. We do not support pending bills in Arizona and Connecticut that would block the public oversight capabilities that ALPR information offers.
Every state has laws granting members of the public the right to obtain records from state and local governments. These are often called “freedom of information acts” (FOIAs) or “public records acts” (PRAs). They are a powerful check by the people on their government, and EFF frequently advocates for robust public access and uses the laws to scrutinize government surveillance.
But lawmakers across the country, often in response to public scrutiny of police ALPRs, are introducing or enacting measures aimed at excluding broad swaths of ALPR information from disclosure under these public records laws. This could include whole categories of important information: general information about the extent of law enforcement use; details on ALPR sharing across policing agencies; data on the number of license plate scans conducted, where they happened, and how many “hits” for license plates of interest actually occur; analyses on how many false matches or other errors occur; and images taken of individuals’ own vehicles.
No thanks. Public records and public scrutiny of ALPR programs have shown that people are harmed by these systems and that retained ALPR data violates people’s privacy. In this moment, lawmakers should not be completely cutting off access to public records that document the abuses perpetrated by ALPRs.
Transparency with privacy
To be sure, there are legitimate concerns about wholesale public disclosure of raw ALPR data. After all, many of the harms people experience from these systems are based on the government’s collection, retention, and use of this information. Public transparency rights should not exacerbate the privacy harms suffered by people subjected to ALPR surveillance. But many current proposals do not address legitimate privacy concerns in a measured way, much less seek to harmonize people’s privacy with the public’s right to know.
There is a better path to balancing privacy and transparency rights than outright bans or total disclosure.
Any legislative proposal concerning public access to ALPR data must start with this reality: ALPR data is deeply revealing about where a person goes, and thus about what they are doing and who they are doing it with. That’s a reason why EFF opposes ALPRs. It is dangerous that the police have so much of our ALPR information. Even worse for our privacy would be for police to disclose our ALPR information to our bosses, political opponents, and ex-friends. Or to surveillance-oriented corporations that would use our ALPR information to send us targeted ads, or monetize it by selling it to the highest bidder.
On the other hand, EFF’s firsthand experience using public records from ALPR systems demonstrates the strong accountability value of public access to many kinds of ALPR data, including information like data-sharing reports and network audits. For example, in our “Data Driven” series, we used ALPR data-sharing and hit ratio reports to investigate the extent of ALPR data sharing between police departments and to analyze the number of ALPR scans that are ultimately associated with a crime-related vehicle. We have also identified racist uses of ALPR systems, ALPR surveillance of protestors, and ALPR tracking of a person who sought an abortion. Across the country, municipalities have been shutting down their contracts for ALPR use, often citing concerns with data sharing with federal and immigration agents.
These records are not just informational—they are leverage. Communities, journalists, and local officials have used ALPR disclosures to block new deployments, refuse contract renewals, and terminate existing agreements with surveillance vendors whose practices proved too dangerous to continue. Without this evidentiary record, it is far harder for cities to exercise their procurement power to say no.
It is not always easy to harmonize transparency and privacy when one person wishes to use a public records law to obtain government records that reveal people’s personal information. The best approach is for public records laws to contain a privacy exemption that requires balancing, on a case-by-case basis, of the transparency benefits versus the privacy costs of disclosure. Many do. These provisions of public records laws already accommodate similar concerns about disclosing personal information of private individuals whose information the government may have collected, government employees’ private data, and other personal information.
The balancing provisions in these laws are often flexible and allow for nuance. For example, if a government record contains a mix of information that does not reveal people’s private information and some that does, agencies and courts can disclose the non-private information while withholding the truly private information. This is often accomplished with blacking out, or redacting, the private information.
Applying this privacy-and-transparency balancing to ALPR records, it will often be appropriate for the government to disclose some information and withhold other information. Everybody should generally have access to records showing their own movements and other information captured by ALPRs, but the privacy protections in public records laws should foreclose a single person’s ability to get a copy of similar records about everyone else. And even with accessing your own data, there are complications with shared vehicles that should be considered when balancing privacy and transparency.
An example of where it may be appropriate to release unredacted data and images would be vehicles engaged in non-sensitive government business. For example, a member of the public might use ALPR scans of garbage trucks to identify gaps in service, which would not reveal private information. On the other hand, it would be inappropriate to release the scans of a government social worker visiting their clients.
Public records laws should allow a requester to obtain some ALPR information about government surveillance of everyone else, in a manner that accommodates the public transparency interest in disclosure and people’s privacy interests. For example, the best public records laws would disclose the times and places that plate data was collected, but not plate data itself. This can be done, for example, by an agency or court finding that disclosing aggregated and/or deidentified ALPR data protects the privacy or other interests of individuals captured within the data. The best laws recognize that aggregation or de-identification of databases are redactions in service of individual privacy (which responding agencies must do), and are not creating new public records (which responding agencies sometimes need not do).
Likewise, in a government audit log of police searches of stored ALPR data, it will often be appropriate to disclose an officer’s investigative purposes to conduct a search, and the officer’s search terms – but not the search term if it is a license plate number. Many people do not want the world to know that they are under police investigation, and many public records laws generally limit the disclosure of such sensitive facts because of the reputational and privacy harm inherent in that disclosure.
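As a concrete illustration of that kind of disclosure, the sketch below redacts plate-number search terms from a hypothetical audit log while keeping the date and investigative purpose, and releases only aggregate counts. The field names, records, and plate-format check are all invented for the example and do not correspond to any agency's actual records system.

```python
# Sketch: disclosing an ALPR search audit log with plate numbers redacted,
# plus aggregate counts. Field names and records are invented for illustration.
import re
from collections import Counter

PLATE_RE = re.compile(r"^[A-Z0-9]{2,8}$")  # crude stand-in for a plate format

audit_log = [
    {"date": "2025-03-02", "purpose": "stolen vehicle", "search_term": "7ABC123"},
    {"date": "2025-03-02", "purpose": "amber alert",    "search_term": "7ABC123"},
    {"date": "2025-03-05", "purpose": "stolen vehicle", "search_term": "KZT9810"},
]

def redact(entry):
    term = entry["search_term"]
    return {
        "date": entry["date"],
        "purpose": entry["purpose"],
        # Disclose the purpose and date, but never the plate itself.
        "search_term": "[REDACTED PLATE]" if PLATE_RE.match(term) else term,
    }

public_log = [redact(e) for e in audit_log]
searches_by_purpose = Counter(e["purpose"] for e in audit_log)

print(public_log)
print(dict(searches_by_purpose))   # e.g. {'stolen vehicle': 2, 'amber alert': 1}
```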
Aggregate ALPR information about, for example, the amount of data collected and error rates can have important transparency value and impact government policy. Requiring the public release of that kind of data contributes to informed public discussion of how our policing agencies do their jobs. This kind of information has been used to study, critique, and provide oversight of ALPR use.
Thus, the wholesale exemption of ALPR information from disclosure under state public records laws would stymie the public’s ability to monitor how their government is using powerful and controversial surveillance technology. EFF cannot support such laws.
Blocking transparency
In Connecticut, SB 4 is a pending bill that would exclude, from that state’s public records law, information “gathered by” an ALPR or “created through an analysis of the information gathered by” an ALPR. This could ultimately harm individual civilians, who would have less ability to protect themselves from law enforcement that indiscriminately collects vehicle information. Other provisions of this bill would limit government use of ALPRs, and regulate data brokers.
In Arizona, SB 1111 would restrict public access to ALPR data “collected by” an ALPR. The bill would even make it a felony to access or use data from an ALPR (or disseminate it) in violation of this article, which apparently might apply to a member of the public who obtained ALPR data with a public records request. The bill’s author claims it adds “guardrails” for ALPR use.
Earlier this year, Washington state enacted a law that will exempt data “collected by” ALPRs from the state’s public records law. While “bona fide research” will still be a way for some people to obtain ALPR data, this may not include journalists and activists who analyze aggregate data to identify policy flaws. Notably, Washington courts found last year that information generated by ALPR, including images of an individual’s own vehicle, are public records; this new legislation will override that decision, blocking the ability for people to see what photos police have taken of their own vehicles. Other provisions of this new law will limit government use of ALPRs.
A year ago, Illinois’ HB 3339 ended use of that state’s public records law to obtain ALPR information used and collected by the Illinois State Police (ISP), including both information “gathered by an ALPR” and information “created from the analysis of data generated by an ALPR.” This Illinois language for just the ISP is very similar to what is now being considered in Connecticut for all state and local agencies.
Sadly, the list goes on. Georgia exempted ALPR data (both “captured by or derived from” ALPRs) of any government agency from its open records law. Adding insult to injury, Georgia also made it a misdemeanor to knowingly request, use, or obtain law enforcement’s plate data for any purpose other than law enforcement. Maryland exempted “information gathered by” an ALPR from its public information act. Oklahoma exempted from its open records act the ALPR data “collected, retained or shared” by District Attorneys under that state’s Uninsured Vehicle Enforcement Program.
These laws and bills in seven states are an unwelcome national trend.
Next steps
We urge legislators to reject efforts to amend state public records laws to wholly exempt ALPR information. This would diminish meaningful oversight over these controversial technologies. Public disclosure of some ALPR information is important.
There is a better approach for states that want to harmonize privacy and transparency in the context of ALPR data:
- Open records laws should cover, and not exclude, information collected by ALPRs, and also any public records derived from that information.
- Open records laws should have a privacy exemption that applies to all records, including information collected or derived from ALPRs. That exemption should require a case-by-case balancing of the transparency benefits and privacy costs of disclosure. These provisions work best when agencies and courts can analyze the context of the particular records, the weight of the privacy interests and public interests at stake, and other specific facts to fashion the best balance between these competing values.
- When a document contains both exempt and non-exempt information, open records laws should require disclosure of the latter and withholding of the former. The best public records laws allow agencies to black out, or redact, specific private information while disclosing non-private information in the same records, threading the privacy and transparency needle.
- Finally, in the context of a law enforcement ALPR database (including both data collected by ALPRs and audit logs of police searches of stored ALPR data), the law should permit agencies to disclose aggregated and/or deidentified data, while withholding personally identifiable data. Importantly, the law should recognize that the steps an agency takes to protect individual privacy in ALPR databases should not be construed as creating a new public record.
FOIA balancing standards are one layer in a larger governance stack, and work best alongside strong guardrails on whether and how governments procure ALPR systems in the first place: public debate over vendor contracts, binding surveillance ordinances, strict data-retention limits, and clear pathways to end ALPR programs entirely where the risks prove too great.
