Feed aggregator

Study: Gene circuits reshape DNA folding and affect how genes are expressed

MIT Latest News - Thu, 04/30/2026 - 2:00pm

When a gene is turned on in a cell, it creates a ripple effect along the DNA strand, changing the physical structure of the strand. A new study by MIT researchers shows that these ripples can stimulate or suppress neighboring genes.

These effects, which result from the winding or unwinding of neighboring DNA, are determined by the order of genes along a strand of DNA. Genes upstream of the active gene are usually turned up, while those downstream are inhibited.

The new findings offer guidance that could make it easier to control the output of synthetic gene circuits. By altering the relative ordering and arrangement of genes, or “gene syntax,” researchers could create circuits that synergize to maximize their output, or that alternate the output of two different genes.

“This is really exciting because we can coordinate gene expression in ways that just weren’t possible before,” says Katie Galloway, an assistant professor of chemical engineering at MIT. “Syntax will be really useful for dynamic circuits. Now we have the ability to select not only the biochemistry of circuits, but also the physical design to support dynamics.”

Galloway is the senior author of the study, which appears today in Science. MIT postdoc Christopher Johnstone PhD ’26 is the paper’s lead author. Other authors include MIT graduate student Kasey Love, members of the lab of Brandon DeKosky, an MIT associate professor of chemical engineering, and researchers from Peter Zandstra’s lab at the University of British Columbia and the labs of Christine Mummery and Richard Davis at Leiden University Medical Center in the Netherlands.

Gene syntax

When a gene is copied into messenger RNA, or “transcribed,” the double-stranded DNA helix must be unwound so that an enzyme called RNA polymerase can access the DNA and start copying it. That unwinding leads to physical changes in the structure of the DNA strand.

Upstream of the gene, DNA becomes looser, while downstream, it becomes more tightly wound. These changes affect RNA polymerase’s ability to access the DNA: Upstream of an active gene, it’s easier for the enzyme to attach; downstream, it’s more difficult.

In a study published in 2022, Galloway and Johnstone performed computational modeling that explored how these biophysical changes might influence gene expression. They studied three different arrangements, or types of syntax: tandem, divergent, and convergent.

Most synthetic gene circuits are designed in a tandem arrangement, with one gene followed by another downstream. In a divergent arrangement, neighboring genes are transcribed in opposite directions (away from each other), and in convergent syntax, they are transcribed toward each other.

The modeling suggested that the divergent arrangement was most likely to produce circuits where both genes are expressed at a high level. Tandem arrangements were predicted to result in the downstream gene being suppressed by the upstream gene.

In the new study, the researchers wanted to see if they could observe these predicted phenomena in human cells.

“Normally, we think about gene circuits and pieces of DNA as these lines that we draw, but they’re polymers that have physical characteristics,” Galloway says. “The thing that we were trying to solve in this paper was: When you put two genes on the same piece of DNA, how does their physical interaction become coupled?”

The researchers engineered circuits that each contained two genes, in either a tandem, divergent, or convergent configuration, and inserted them into human cell lines and human induced pluripotent stem cells.

The results confirmed what their modeling had predicted: In divergent circuits, expression of both genes was amplified. In tandem circuits, turning on the upstream gene suppressed the expression of the downstream gene.

These effects produced as much as a 25-fold increase or decrease in gene expression, and they could be seen at distances of up to 2,000 base pairs between genes.

Using a high-resolution genome mapping technique called Region Capture Micro-C, the researchers were also able to analyze how the DNA structure changed when nearby genes were being transcribed.

As predicted, they found that the DNA regions downstream from an active gene formed tightly twisted structures known as plectonemes, similar to the tangles seen in a twisted telephone cord. These structures make it harder for RNA polymerase to bind to DNA.

To engineer these cells, the researchers used a new system they developed with the LUMC team called STRAIGHT-IN Dual, which allows them to efficiently insert two genes into the same DNA strand at both alleles. This system is being reported in a second paper published today, in Nature Biomedical Engineering.

Precise control

The new findings could help guide the design of synthetic gene circuits, which are usually controlled by biochemical interactions with activator or repressor molecules. Now, circuit designers can also perform biophysical manipulations to enhance or repress gene expression.

“Everyone thinks about the components they need, and the biochemical properties they need to build a circuit,” Galloway says. “Now, we have added the physical construction of those components, which is going to change how those biochemical units are interpreted.”

As a demonstration of one potential application, the researchers built synthetic circuits containing the genes for two segments of a novel antibody, discovered by the DeKosky lab, that is used to treat yellow fever, and incorporated them into human cells. As they expected, the divergent syntax produced larger quantities of the yellow fever antibody.

Galloway’s lab has also used this approach to optimize the output of synthetic gene circuits they previously reported that could be used to deliver gene therapy or to reprogram adult cells into other cell types.

This strategy could also be used to build a variety of other types of dynamic synthetic circuits, such as toggle switches, oscillators, or pulse generators, for any application that requires precise control over gene expression.

“If you want coordinated expression, a divergent circuit is great. If you want something that’s either/or, you can imagine using a convergent or tandem circuit, so when one turns on, the other turns off, and you can alternate pulses,” Galloway says. “Now that we understand the syntax, I think this will pave the way for us to program dynamic behaviors.”

The research was funded, in part, by the National Institutes of Health, the National Institute of General Medical Sciences, a National Science Foundation CAREER Award, the Pershing Square Foundation, the Air Force Research Laboratory, and the Koch Institute Support (core) Grant from the National Cancer Institute.

The hidden structure behind a widely used class of materials

MIT Latest News - Thu, 04/30/2026 - 2:00pm

Materials called relaxor ferroelectrics have been used for decades in technologies like ultrasounds, microphones, and sonar systems. Their unique properties come from their atomic structure, but that structure has stubbornly eluded direct measurement.

Now a team of researchers from MIT and elsewhere has directly characterized the three-dimensional atomic structure of a relaxor ferroelectric for the first time. The findings, reported today in Science, provide a framework for refining models used to design next-generation computing, energy, and sensing devices.

“Now that we have a better understanding of exactly what’s going on, we can better predict and engineer the properties we want materials to achieve,” says corresponding author James LeBeau, MIT’s Kyocera Professor of Materials Science and Engineering. “The research community is still developing methods to engineer these materials, but in order to predict the properties those materials will have, you have to know if your model is right.”

In their paper, the researchers describe how they used an emerging technique to reveal the distribution of electric charges in the material, with a surprising result.

“We realized the chemical disorder we observed in our experiments was not fully considered previously,” say co-first authors Michael Xu PhD ’25 and Menglin Zhu, who are both postdocs at MIT. “Working with our collaborators, we were able to merge the experimental observations with simulations to refine the models and better predict what we see in experiments.”

Joining Zhu, Xu, and LeBeau on the paper are Colin Gilgenbach and Bridget R. Denzer, MIT PhD students in materials science and engineering; Yubo Qi, an assistant professor at the University of Alabama at Birmingham; Jieun Kim, an assistant professor at the Korea Advanced Institute of Science and Technology; Jiahao Zhang, a former PhD student at the University of Pennsylvania; Lane W. Martin, a professor at Rice University; and Andrew M. Rappe, a professor at the University of Pennsylvania.

Probing disordered materials

Leading simulations of relaxor ferroelectrics suggest that when an electric field is applied, the interactions of positively and negatively charged atoms in different nanoregions of the material help give rise to exceptional energy storage and sensing capabilities. The details of those nanoregions have been impossible to directly measure to date.

For their Science paper, the researchers studied a lead magnesium niobate-lead titanate alloy, a relaxor ferroelectric material used in sensors, actuators, and defense systems. They used an emerging measurement technique, called multi-slice electron ptychography (MEP), in which researchers move a nanoscale-sized probe of high-energy electrons over a material and measure the resulting electron diffraction patterns.

“We do this in a sequential way, and at each position, we acquire a diffraction pattern,” Zhu explains. “That creates regions of overlap, and that overlap has enough information to use an algorithm to iteratively reconstruct three-dimensional information about the object and the electron wave function.”
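
The overlap-based iterative reconstruction Zhu describes can be sketched with a toy one-dimensional phase-retrieval loop in the spirit of published ptychography algorithms such as ePIE. Everything below (sizes, probe shape, update rule) is an illustrative assumption, not the actual MEP code used by the researchers:

```python
import numpy as np

# Toy 1-D setup: a complex "object" scanned by a known, overlapping probe.
n_obj, n_probe, step = 64, 16, 4                     # 75% positional overlap
x = np.arange(n_obj)
obj_true = np.exp(1j * 0.5 * np.sin(2 * np.pi * x / n_obj))   # phase object
probe = np.exp(-np.linspace(-2, 2, n_probe) ** 2)             # known probe
positions = range(0, n_obj - n_probe + 1, step)

# "Measured" data: only diffraction amplitudes survive -- the phase is lost.
amps = {p: np.abs(np.fft.fft(probe * obj_true[p:p + n_probe]))
        for p in positions}

obj = np.ones(n_obj, dtype=complex)                  # flat starting guess
for _ in range(200):
    for p in positions:
        psi = probe * obj[p:p + n_probe]
        Psi = np.fft.fft(psi)
        Psi_new = amps[p] * np.exp(1j * np.angle(Psi))  # enforce measured amplitude
        psi_new = np.fft.ifft(Psi_new)
        # Update only the illuminated patch, weighted by probe intensity.
        obj[p:p + n_probe] += (np.conj(probe) / np.max(np.abs(probe) ** 2)
                               * (psi_new - psi))

def residual(o):
    """Mismatch between modeled and measured diffraction amplitudes."""
    return sum(np.linalg.norm(np.abs(np.fft.fft(probe * o[p:p + n_probe])) - amps[p])
               for p in positions)

# A ratio far below 1 means the overlap constraint pinned down the object.
ratio = residual(obj) / residual(np.ones(n_obj, dtype=complex))
```

The essential idea matches the quote: each scan position alone cannot determine the object, but because neighboring patches overlap, repeatedly enforcing the measured amplitudes at every position forces a single object estimate consistent with all of them.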

The technique revealed a hierarchy of chemical and polar structures that spanned from atomic to mesoscopic scales. The researchers also found that many regions of differing polarization in the material were much smaller than predicted by the leading simulations. The researchers then fed their new data back into those computer simulations and refined the models to better reflect their findings under different conditions.

“Previously, these models basically had random regions of polarization, but they didn’t tell you how those regions correlate with each other,” Xu says. “Now we can tell you that information, and we can see how individual chemical species modulate polarization depending on the charge state of atoms.”

Toward better materials

Zhu says the paper demonstrates the potential of electron ptychography to study complex materials and opens up new avenues of research into complex, disordered materials.

“This study is the first time in the electron microscope that we’ve been able to directly connect the three-dimensional polar structure of relaxor ferroelectrics with molecular dynamics calculations,” Xu says. “It further proves you can get three-dimensional information out of the sample using this technique.”

The researchers also believe the approach could one day help engineer materials with advanced electronic behaviors for a range of improved memory storage, sensing, and energy technologies.

“Materials science is incorporating more complexity into the material design process — whether that’s for metal alloys or semiconductors — as AI has improved and our computational tools have become more advanced,” LeBeau says. “But if our models aren’t accurate enough and we have no way to validate them, it’s garbage in garbage out. This technique helps us understand why the material behaves the way it does and validate our models.”

The work was supported, in part, by the U.S. Army Research Laboratory, the U.S. Office of Naval Research, the U.S. Department of War, and a National Science Foundation Graduate Research Fellowship. The researchers also used MIT.nano facilities.

How neurons sense bacteria in the gut

MIT Latest News - Thu, 04/30/2026 - 1:30pm

Recent studies suggest animals and people alike have close and complex relationships with the bacteria around and within them. The human gut microbiome, for instance, has been associated with both depression and Parkinson’s disease. To move beyond association toward an understanding of the actual mechanisms by which the bacterial microbiome influences brain function, a new study by neuroscientists in The Picower Institute for Learning and Memory at MIT examines the mechanisms at work in a model “bacterial specialist,” the nematode Caenorhabditis elegans.

In the new study in Current Biology, the team, led by Picower Fellow Cassi Estrem in the Picower Institute for Learning and Memory lab of Associate Professor Steven Flavell, identifies the specific chemicals that a key neuron in C. elegans senses, both in the bacteria that it eats and in the bacteria that it needs to avoid ingesting.

“In our bodies, our own cells are outnumbered by the bacterial cells living in and on us. There’s an increasing recognition that this has a profound impact on human health,” says Flavell, an investigator of the Howard Hughes Medical Institute and faculty member of MIT’s Department of Brain and Cognitive Sciences. “It’s been clear for some time that there are links. Our study aimed to identify the hard mechanisms of how a host nervous system is affected by bacteria in the alimentary canal.”

Achieving a fundamental mechanistic understanding of how neurons interact with bacteria could help improve attempts to intervene in or manipulate those interactions with therapeutic drugs or supplements, Flavell says.

Mmm … sugar

Flavell calls C. elegans a “bacterial specialist” because the tiny, transparent worm has evolved to eat bacteria as its diet, while also needing to avoid pathogenic bacteria that can prove to be its undoing. This has led it to develop a nervous system especially well-attuned to sorting out what is food and what is foe. In 2019, the lab discovered that the neuron NSM, which projects into the worm’s alimentary canal, employs two “acid sensing ion channels” (ASICs) to detect when certain bacteria have been ingested. Notably, those ion channels are analogous to ones found in neurons in humans. When NSM detects yummy bacteria, it releases serotonin that causes the worm to increase its feeding rate and slow its slithering so that it can stay to dine on the surrounding meal.

To really understand how this works, Flavell and Estrem realized they needed to know exactly what the ion channels are detecting in the bacteria. To get started, they exposed worms to 20 different kinds of bacteria the worms are known to encounter and found that they all activated NSM activity to varying extents. Then they broke the bacteria down into more and more specific chemical components to see which one or ones triggered NSM. The experiments ruled out many components, including DNA, lipids, proteins, and simple sugars, and instead found that it’s specifically the polysaccharide sugars that coat many bacteria that drive NSM activation. In particular, in gram-positive bacteria, a chemical called peptidoglycan activated NSM. In gram-negative bacteria, a different polysaccharide was apparently in play.

Estrem and Flavell’s team also ran experiments showing that polysaccharides from bacteria in general, and peptidoglycan in particular, not only trigger NSM electrical activity, but actually promote the feeding and slowing behaviors. They also showed that genetically knocking out the ASICs abolished these responses. In all, they demonstrated that detection of these polysaccharides is sufficient to trigger the worm’s behaviors and requires the ASICs.

Better not eat this

Having shown what exactly triggers the worms to recognize their bacterial food, the researchers wondered whether they could also pinpoint a danger sign the worm finds in harmful bacteria. For these experiments, they carefully used Serratia marcescens, a bacterium that’s also infectious for humans. Some strains of the bacteria have a red color, while others do not. The red ones, which have a pigment called prodigiosin, tend to be much more lethal for worms. In their testing, the researchers found that when NSM detected the non-pigmented bacteria, the neuron still activated and the worms still ingested the bacteria, but when prodigiosin was present, NSM did not activate and the worm did not pump it in or slow down to eat.

Adding prodigiosin to normally yummy bacteria also suppressed NSM’s usual response. In other words, the worms have evolved their digestive behavior (and the detectors within NSM) to avoid ingesting a chemical specifically associated with danger.

Flavell says it’s likely that some of the fundamental mechanisms highlighted in the new paper will inform studies of similar mechanisms in other animals.

“We developed a way of identifying these pathways by studying this organism that specializes in bacterial detection and displays robust responses,” Flavell explains. “But there’s no reason these pathways should be limited to C. elegans. The molecular players we identified are found in many species, including mammals.”

In addition to Estrem and Flavell, the paper’s other authors are Malvika Dua, Colby Fees, Greg Hoeprich, Matthew Au, Bruce Goode, and Lingyi Deng.

The National Institutes of Health, the McKnight Foundation, the Alfred P. Sloan Foundation, the Howard Hughes Medical Institute, and The Freedom Together Foundation provided support for the study.

A materials scientist’s playground

MIT Latest News - Thu, 04/30/2026 - 1:20pm

Scientists and engineers around the world are working to improve quantum bits, or qubits, the minuscule building blocks of the quantum computer. Qubits are incredibly sensitive, which makes it easy for errors to be introduced and lowers device yield. But a new cluster tool at MIT.nano introduces capabilities that will allow researchers to continue advancing qubit performance.

Passersby outside MIT.nano may have recently noticed a complex-looking piece of equipment being installed on the first-floor cleanroom. What looks like a sci-fi movie prop is actually a state-of-the-art, custom-built molecular beam epitaxy (MBE) system: a physical vapor deposition tool that operates under ultra-high vacuum to produce high-quality thin films. With the ability to grow different crystalline materials on a wafer, the tool will support quantum researchers and materials scientists by allowing them to study how film growth affects the properties of the materials used in making qubits.

“To realize the full promise of quantum computing, we need to build qubits that are robust, reproducible, and extensible,” says William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics at MIT. “To date, most of the improvements to superconducting qubit performance are traceable to circuit design — essentially, designing qubit circuits that are less sensitive to their environmental noise. However, those improvements have largely run their course. Going forward, we need to address the fundamental materials science and fabrication engineering required to reduce the sources of environmental noise. This multi-chamber, cassette-loaded, 200-millimeter wafer MBE system is exactly the right tool at the right time. And there’s no place better to do this research than at MIT.nano.”

That is because MIT.nano is well-prepared to receive this type of system, with the physical space, climate controls, policies and procedures for researchers, and expert staff needed to manage the lab. Through an equipment support plan, Oliver’s Engineering Quantum Systems (EQuS) group is able to install and run the tool inside MIT.nano, a high-performance, safe, and reliable environment.

A controlled environment is essential for the MBE. “Think of this system like an inverted International Space Station (ISS),” explains Patrick Strohbeen, research scientist in the EQuS group. “The ISS is a small chamber of atmosphere surrounded by the vacuum of space. This MBE system is a chamber of space-level vacuum surrounded by atmosphere.” That vacuum of space is kept at a steady negative 90 degrees Celsius, which enables precise growth of thin films on an atomic scale. It is the largest single deposition chamber (1-meter diameter) the manufacturer, DCA, has sold in the United States.

The journey of a wafer

The system, which in total takes up 600 square feet, is made up of six chambers. First is the load lock, where the wafer is placed into the system and brought down from atmospheric pressure to near the vacuum level of space. Then, the wafer enters the distribution center. This space acts like a central hub, transferring the wafers to other chambers. Next is the deposition, or “growth,” chamber. This is where the system’s primary function takes place — depositing materials, specifically atoms of superconducting metal, onto a substrate, typically silicon. From there, it moves to the oxidation chamber, which facilitates the growth of key ceramic materials for qubits. A fifth storage chamber can hold an additional 10 wafers within the vacuum.

A unique aspect of this system is its sixth chamber, designed for X-ray photoelectron spectroscopy (XPS). In this chamber, researchers direct X-ray photons at the surface of the material; when a photon hits, it excites an electron so that the electron is ejected and picked up by a sensor, which tells the researcher about the chemical environment the electron came from. As individual layers of atoms are put down in the growth chamber, scientists can move the wafer to the XPS chamber to measure changes in the material structure of the film and back again, all while keeping it inside the vacuum space.

Why is this important? “The quantum community has excellent device physicists and device engineers,” says Strohbeen. “The last piece of the puzzle is: We need to understand the materials platform that we’re using for these devices.” The buried interfaces, so far, have been understudied due to the difficulty in probing them, he explains.

For those of us who are not MBE experts, think of the snow that fell in Massachusetts this winter. How can you tell how much ice is on the pavement without removing all of the snow on top of it? And without changing the natural setting where the snow, ice, and pavement meet? With this system, specifically the XPS chamber, scientists can study the interfaces of buried materials without disturbing the physical or chemical environments. “It is a materials scientist’s playground,” jokes Strohbeen — a controlled space where researchers can learn about and explore materials’ interactions within layers of atoms.

Why MIT.nano?

When Oliver, who is also the director of the MIT Center for Quantum Engineering, secured the MBE Quantum, the next question was where to put it. Enter MIT.nano. Housing 45,000 square feet of cleanroom, this facility exists at MIT to support complex, sensitive equipment with both the infrastructure and the staff needed to maintain it.

“MIT.nano’s ultra-stable building utilities and lab environment are exactly what is needed to support a system that demands extreme repeatability and purity,” says Nick Menounos, MIT.nano associate director of infrastructure. “The success of this installation grew from the early collaboration. Professor Oliver engaged the MIT.nano team in the procurement process almost two years in advance. That foresight, combined with the infrastructure momentum we gained from the recent CHIPS Act project, meant that we could prepare the cleanroom perfectly. We compressed the installation process that normally takes several months and had this extraordinary machine running in under three weeks.”

“From the very beginning, the MIT.nano staff were helpful, knowledgeable, and willing to go above and beyond to make this happen,” says Oliver. “While the MIT.nano facility is certainly an infrastructural crown jewel at MIT, it’s the MIT.nano staff who make it the national treasure it is today.”

Positioning the MBE Quantum in the cleanroom helps the team focus on scalability and device yield. Humidity and particle count, two things carefully measured and maintained at MIT.nano, can affect the output of the device. Minimizing as many variables as possible is key to improving qubit performance. The cleanroom also allows for new device research because an array of fabrication and metrology tools are available without having to leave the clean environment.

“We’re really excited to see what we can do with it,” says Strohbeen. “We bought it as a materials science tool, and it will also be a device development tool due to the flexibility of having it in the cleanroom.”

The MBE system was purchased through a combination of grants from the Army Research Office (ARO) and from the Laboratory for Physical Sciences (LPS). The ARO grant, a Defense University Research Instrumentation Program grant, is the premier grant from ARO for funding large capital equipment purchases that should prove disruptive in technologically relevant areas. It arrives at an important time on campus, as one of MIT’s strategic initiatives — the MIT Quantum Initiative — aims to apply quantum breakthroughs to the most consequential challenges in science, technology, industry, and national security.

Open Records Laws Reveal ALPRs’ Sprawling Surveillance. Now States Want to Block What the Public Sees.

EFF: Updates - Thu, 04/30/2026 - 12:54pm

Reporters, community advocates, EFF, and others have used public records laws to reveal and counteract abuse, misuse, and fraudulent narratives around how law enforcement agencies across the country use and share data collected by automated license plate readers (ALPRs). EFF is alarmed by recent laws in several states that have blocked public access to data collected by ALPRs, including, in some cases, information derived from ALPR data. We oppose pending bills in Arizona and Connecticut that would block the public oversight that ALPR information makes possible.

Every state has laws granting members of the public the right to obtain records from state and local governments. These are often called “freedom of information acts” (FOIAs) or “public records acts” (PRAs). They are a powerful check by the people on their government, and EFF frequently advocates for robust public access and uses the laws to scrutinize government surveillance.

But lawmakers across the country, often in response to public scrutiny of police ALPRs, are introducing or enacting measures aimed at excluding broad swaths of ALPR information from disclosure under these public records laws. This could include whole categories of important information: general information about the extent of law enforcement use; details on ALPR sharing across policing agencies; data on the number of license plate scans conducted, where they happened, and how many “hits” for license plates of interest actually occur; analyses on how many false matches or other errors occur; and images taken of individuals’ own vehicles. 

No thanks. Public records and public scrutiny of ALPR programs have shown that people are harmed by these systems and that retained ALPR data violates people’s privacy. In this moment, lawmakers should not be completely cutting off access to public records that document the abuses perpetrated by ALPRs.

Transparency with privacy

To be sure, there are legitimate concerns about wholesale public disclosure of raw ALPR data. After all, many of the harms people experience from these systems are based on the government’s collection, retention, and use of this information. Public transparency rights should not exacerbate the privacy harms suffered by people subjected to ALPR surveillance. But many current proposals do not address legitimate privacy concerns in a measured way, much less seek to harmonize people’s privacy with the public’s right to know.

There is a better path to balancing privacy and transparency rights than outright bans or total disclosure. 

Any legislative proposal concerning public access to ALPR data must start with this reality: ALPR data is deeply revealing about where a person goes, and thus about what they are doing and who they are doing it with. That’s a reason why EFF opposes ALPRs. It is dangerous that the police have so much of our ALPR information. Even worse for our privacy would be for police to disclose our ALPR information to our bosses, political opponents, and ex-friends. Or to surveillance-oriented corporations that would use our ALPR information to send us targeted ads, or monetize it by selling it to the highest bidder.

On the other hand, EFF’s firsthand experience using public records from ALPR systems demonstrates the strong accountability value of public access to many kinds of ALPR data, including information like data-sharing reports and network audits. For example, in our “Data Driven” series, we used ALPR data-sharing and hit ratio reports to investigate the extent of ALPR data sharing between police departments and to analyze the number of ALPR scans that are ultimately associated with a crime-related vehicle. We have also identified racist uses of ALPR systems, ALPR surveillance of protestors, and ALPR tracking of a person who sought an abortion. Across the country, municipalities have been shutting down their contracts for ALPR use, often citing concerns with data sharing with federal and immigration agents. 

These records are not just informational—they are leverage. Communities, journalists, and local officials have used ALPR disclosures to block new deployments, refuse contract renewals, and terminate existing agreements with surveillance vendors whose practices proved too dangerous to continue. Without this evidentiary record, it is far harder for cities to exercise their procurement power to say no.

It is not always easy to harmonize transparency and privacy when one person wishes to use a public records law to obtain government records that reveal people’s personal information. The best approach is for public records laws to contain a privacy exemption that requires balancing, on a case-by-case basis, of the transparency benefits versus the privacy costs of disclosure. Many do. These provisions of public records laws already accommodate similar concerns about disclosing personal information of private individuals whose information the government may have collected, government employees’ private data, and other personal information.

The balancing provisions in these laws are often flexible and allow for nuance. For example, if a government record contains a mix of information that does not reveal people’s private information and some that does, agencies and courts can disclose the non-private information while withholding the truly private information. This is often accomplished by blacking out, or redacting, the private information.
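
A minimal sketch of that kind of partial redaction, with an invented log line and a toy plate pattern (a real redaction pass would need every plate format the agency actually stores):

```python
import re

# Invented example of a record mixing releasable and private details.
record = "2026-03-02 09:14 Unit 12 queried plate 7ABC123 re: stolen-vehicle report"

# Toy pattern for one license plate format; illustrative only.
PLATE_RE = re.compile(r"\b\d[A-Z]{3}\d{3}\b")

# The releasable context (date, unit, stated purpose) survives;
# the identifying plate number does not.
redacted = PLATE_RE.sub("[REDACTED]", record)
```

The point is that redaction operates on fields within a record, so agencies can honor a request without choosing between withholding the whole document and exposing private data.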

Applying this privacy-and-transparency balancing to ALPR records, it will often be appropriate for the government to disclose some information and withhold other information. Everybody should generally have access to records showing their own movements and other information captured by ALPRs, but the privacy protections in public records laws should foreclose a single person’s ability to get a copy of similar records about everyone else. And even when accessing your own data, there are complications with shared vehicles that should be considered when balancing privacy and transparency.

An example of where it may be appropriate to release unredacted data and images would be vehicles engaged in non-sensitive government business. For example, a member of the public might use ALPR scans of garbage trucks to identify gaps in service, which would not reveal private information. On the other hand, it would be inappropriate to release the scans of a government social worker visiting their clients.

Public records laws should allow a requester to obtain some ALPR information about government surveillance of everyone else, in a manner that accommodates both the public’s transparency interest in disclosure and people’s privacy interests. For example, the best public records laws would disclose the times and places that plate data was collected, but not the plate data itself. This can be done, for example, by an agency or court finding that disclosing aggregated and/or deidentified ALPR data protects the privacy or other interests of individuals captured within the data. The best laws recognize that aggregating or de-identifying a database is a redaction in service of individual privacy (which responding agencies must do), not the creation of a new public record (which responding agencies sometimes need not do).

Likewise, in a government audit log of police searches of stored ALPR data, it will often be appropriate to disclose an officer’s investigative purpose for conducting a search, and the officer’s search terms – but not a search term that is a license plate number. Many people do not want the world to know that they are under police investigation, and many public records laws generally limit the disclosure of such sensitive facts because of the reputational and privacy harm inherent in that disclosure.

Aggregate ALPR information about, for example, the amount of data collected and error rates can have important transparency value and impact government policy. Requiring the public release of that kind of data contributes to informed public discussion of how our policing agencies do their jobs. This kind of information has been used to study, critique, and provide oversight of ALPR use.

Thus, the wholesale exemption of ALPR information from disclosure under state public records laws would stymie the public’s ability to monitor how their government is using powerful and controversial surveillance technology. EFF cannot support such laws.

Blocking transparency

In Connecticut, SB 4 is a pending bill that would exclude, from that state’s public records law, information “gathered by” an ALPR or “created through an analysis of the information gathered by” an ALPR. This could ultimately harm individual civilians, who would have less ability to protect themselves from law enforcement agencies that indiscriminately collect vehicle information. Other provisions of this bill would limit government use of ALPRs, and regulate data brokers.

In Arizona, SB 1111 would restrict public access to ALPR data “collected by” an ALPR. The bill would even make it a felony to access, use, or disseminate data from an ALPR in violation of this article, which might apply to a member of the public who obtained ALPR data through a public records request. The bill’s author claims it adds “guardrails” for ALPR use.

Earlier this year, Washington state enacted a law that will exempt data “collected by” ALPRs from the state’s public records law. While “bona fide research” will still be a way for some people to obtain ALPR data, this may not include journalists and activists who analyze aggregate data to identify policy flaws. Notably, Washington courts found last year that information generated by ALPRs, including images of an individual’s own vehicle, is a public record; this new legislation will override that decision, blocking people’s ability to see what photos police have taken of their own vehicles. Other provisions of this new law will limit government use of ALPRs.

A year ago, Illinois’ HB 3339 ended use of that state’s public records law to obtain ALPR information used and collected by the Illinois State Police (ISP), including both information “gathered by an ALPR” and information “created from the analysis of data generated by an ALPR.” This Illinois language for just the ISP is very similar to what is now being considered in Connecticut for all state and local agencies. 

Sadly, the list goes on. Georgia exempted ALPR data (both “captured by or derived from” ALPRs) of any government agency from its open records law. Adding insult to injury, Georgia also made it a misdemeanor to knowingly request, use, or obtain law enforcement’s plate data for any purpose other than law enforcement. Maryland exempted “information gathered by” an ALPR from its public information act. Oklahoma exempted from its open records act the ALPR data “collected, retained or shared” by District Attorneys under that state’s Uninsured Vehicle Enforcement Program.

These laws and bills in seven states are an unwelcome national trend.

Next steps

We urge legislators to reject efforts to amend state public records laws to wholly exempt ALPR information. This would diminish meaningful oversight over these controversial technologies. Public disclosure of some ALPR information is important. 

There is a better approach for states that want to harmonize privacy and transparency in the context of ALPR data: 

  1. Open records laws should cover, and not exclude, information collected by ALPRs, and also any public records derived from that information.
  2. Open records laws should have a privacy exemption that applies to all records, including information collected or derived from ALPRs. That exemption should require a case-by-case balancing of the transparency benefits and privacy costs of disclosure. These provisions work best when agencies and courts can analyze the context of the particular records, the weight of the privacy interests and public interests at stake, and other specific facts to fashion the best balance between these competing values. 
  3. When a document contains both exempt and non-exempt information, open records laws should require disclosure of the latter and withholding of the former. The best public records laws allow agencies to black out, or redact, specific private information while disclosing non-private information in the same records, threading the privacy and transparency needle.
  4. Finally, in the context of a law enforcement ALPR database (including both data collected by ALPRs and audit logs of police searches of stored ALPR data), the law should permit agencies to disclose aggregated and/or deidentified data, while withholding personally identifiable data. Importantly, the law should recognize that the steps an agency takes to protect individual privacy in ALPR databases should not be construed as creating a new public record. 
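A minimal sketch of the aggregation-and-redaction approach described above, in Python. The record layout, field names, and plate-matching heuristic here are all hypothetical illustrations; real ALPR exports and audit logs vary by vendor and agency:

```python
import re
from collections import Counter

# Hypothetical ALPR records; real exports vary by vendor.
records = [
    {"plate": "ABC1234", "time": "2026-04-30T08:15", "camera": "Main & 5th"},
    {"plate": "XYZ9876", "time": "2026-04-30T08:40", "camera": "Main & 5th"},
    {"plate": "ABC1234", "time": "2026-04-30T09:05", "camera": "Elm & 2nd"},
]

def aggregate_for_release(rows):
    """Disclose where and when scans occurred, never the plates themselves."""
    # Bucket scans by camera location and hour (time[:13] keeps "YYYY-MM-DDTHH").
    counts = Counter((r["camera"], r["time"][:13]) for r in rows)
    return [{"camera": cam, "hour": hour, "scans": n}
            for (cam, hour), n in sorted(counts.items())]

# Crude plate heuristic for illustration only; real formats vary by state.
PLATE_PATTERN = re.compile(r"\b[A-Z0-9]{5,8}\b")

def redact_audit_entry(entry):
    """Keep the investigative purpose; black out plate-number search terms."""
    return {"purpose": entry["purpose"],
            "search_term": PLATE_PATTERN.sub("[REDACTED]", entry["search_term"])}
```

The key design point matches recommendation 4: the aggregated output is derived by dropping fields from existing records, not by creating new information, so it can be treated as a redaction rather than a new public record.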

FOIA balancing standards are one layer in a larger governance stack, and work best alongside strong guardrails on whether and how governments procure ALPR systems in the first place: public debate over vendor contracts, binding surveillance ordinances, strict data-retention limits, and clear pathways to end ALPR programs entirely where the risks prove too great.

States are demanding property insurance records to study climate change

ClimateWire News - Thu, 04/30/2026 - 6:27am
An unprecedented nationwide data collection will show where storms and wildfires are causing large insurer losses and rate hikes.

Countries agree to second conference on ditching fossil fuels

ClimateWire News - Thu, 04/30/2026 - 6:26am
Colombia and the Netherlands will pass the baton to Ireland and Tuvalu to carry on what nearly 60 countries hope becomes a new form of multilateral cooperation.

US mounts new bid to block shipping carbon tax

ClimateWire News - Thu, 04/30/2026 - 6:24am
The Trump administration has been circulating flyers at this week’s gathering of the International Maritime Organization.

Data centers used to be a prize. States are having second thoughts.

ClimateWire News - Thu, 04/30/2026 - 6:24am
Legislators in at least 28 states this year introduced bills that would roll back tax incentives for the energy-hungry facilities.

Zeldin hints at reprieve from Biden flaring rules

ClimateWire News - Thu, 04/30/2026 - 6:23am
Oil producers are facing a May 7 deadline to stop gas from burning off at some newer wells.

Lawmakers want to strip Trump’s disaster powers

ClimateWire News - Thu, 04/30/2026 - 6:22am
A Democratic bill would give Congress the authority to override presidential rejections of recovery aid.

Takeaways from Lee Zeldin’s week on Capitol Hill

ClimateWire News - Thu, 04/30/2026 - 6:22am
The EPA administrator achieved a string of viral moments to play up online, which could serve his ambitions beyond the agency.

Fast16 Malware

Schneier on Security - Thu, 04/30/2026 - 6:22am

Researchers have reverse-engineered a piece of malware named Fast16. It’s almost certainly state-sponsored, probably US in origin, and was deployed against Iran years before Stuxnet:

“…the Fast16 malware was designed to carry out the most subtle form of sabotage ever seen in an in-the-wild malware tool: By automatically spreading across networks and then silently manipulating computation processes in certain software applications that perform high-precision mathematical calculations and simulate physical phenomena, Fast16 can alter the results of those programs to cause failures that range from faulty research results to catastrophic damage to real-world equipment.”...

Gavin Newsom wants to break up with Elon Musk. Tesla is making that hard.

ClimateWire News - Thu, 04/30/2026 - 6:21am
Trucking fleets are buying Tesla’s lower-cost, higher-range electric big rig — and boosting the state’s climate goals.

Brussels weighs letting fossil fuel companies break EU pollution limits

ClimateWire News - Thu, 04/30/2026 - 6:20am
The European Commission is considering temporary relief from methane penalties as energy firms warn of supply risks.

This year’s World Cup games could be sizzling from extreme heat

ClimateWire News - Thu, 04/30/2026 - 6:20am
Host cities, stadiums and FIFA are working to protect players and spectators by conducting heat risk assessments, enhancing shade and more.

Long-running weather observatory shows the science behind our climate

ClimateWire News - Thu, 04/30/2026 - 6:19am
The work at Blue Hill Observatory and Science Center serves not just to keep weather records, but also to connect ordinary people to climate science.

Digital Hopes, Real Power: From Connection to Collective Action

EFF: Updates - Thu, 04/30/2026 - 3:56am

This is the fifth and final installment of a blog series reflecting on the global digital legacy of the 2011 Arab uprisings. You can read the rest of the series here.

If the Arab Spring was defined by optimism about what the internet could do, the years since have been marked by a more sober understanding of what it takes to defend it. 

Back in 2011, the term “digital rights” was still fairly new. While in the decades prior, open source and hacker communities—as well as a handful of organizations including EFF—had advocated for digital freedoms, it was through the merging of disparate communities from around the world in the 2000s that digital rights came to be more clearly understood as an extension of fundamental human rights.

In 2011, we observed that there were only a few organizations focused on digital rights in the region. Groups like Nawaat, which emerged from the Tunisian diaspora under the Ben Ali regime; the Arab Digital Expression Foundation, formed to promote the creative use of technology; and SMEX, which was initially created to teach journalists and others about social media but has grown to become a powerful force in the region, led the way. Since that time, dozens of organizations have emerged throughout the region to promote freedom of expression, innovation, privacy, and digital security.

Understanding how the digital rights movement evolved in the Middle East and North Africa requires a closer look at the communities that shaped it, and the organizations that are carrying on the fight today. Perspectives from people and organizations that were key to these efforts offer critical insight into how the movement has grown and what challenges lie ahead.

Reem Almasri, a senior researcher and digital sovereignty consultant, says that:

‘Digital rights’ emerged as a term around the Arab Spring, when the internet was still a fairly unregulated space, we were still trying to figure out the tech companies’ policies, and force governments to look at the internet as a fundamental right like water and electricity.

But then the need to converge digital rights to everyday rights—economic, political, social rights—and to connect it to geopolitics has started to be thought about, and to be in discussion as well. And to not look at digital rights as a separate field from everything else that’s affecting it, from the geopolitical context.

Mohamad Najem, who co-founded SMEX in 2008 and has led it to become the largest organization in the region, told me that, at the time, “Nobody gave [social media] a lot of attention in our region.” Their work was “a positive approach to social media, how we can democratize sharing information, how we can share more from civil society, change people’s minds, et cetera.”

“After that phase,” he continues, “we can think about 2012-2013—after the Arab Spring, as an organization we started looking at the infrastructure of the internet, and how freedom of expression and privacy are affected. That’s when we started looking more at what we call digital rights.”

Towards Tech Accountability

In the aftermath of the Arab Spring, social media companies moved from a largely hands-off approach to governance toward more formalized—and often opaque—content moderation systems. Platforms expanded their trust and safety teams and began working more closely with civil society through trusted partnerships in the region and globally. But, Mohamad Najem says:

After the expansion of tech accountability itself and the adaptation of tech companies, we’ve noticed that it’s not taking us anywhere. Gradually we’ve come to a new phase where it feels like tech accountability is an economy by itself that is not leading to real results. So the next phase for us at least and maybe for others in global majority communities is how we can focus on digital public good, how we can push more governments, private and public institutions to adopt more open source software, to look at the ecosystem and understand the US threats happening now, et cetera.

Another group that has played a key role in the fight for digital rights and tech accountability in the region is 7amleh, a Palestinian organization that was founded in 2013. At the time, says Jalal Abukhater:

[I]t was unique and interesting in Palestinian society to have a human rights organization dedicated fully to the topic of digital rights, you know, human rights in a digital format. However, with the years, we saw various milestones, we saw progress of policy decisions and movements through the Israeli government to influence content moderation in Big Tech companies. We saw problems there as an organization.

7amleh took a leading stance in fighting to preserve the digital rights of Palestinians during a period where there was a very strong influence through the Israeli government. There was actually quite important reporting coming through 7amleh on the situation of online content moderation at a time when it wasn’t really a topic being discussed but it was very clearly a situation where there was major influence by government and political suppression happening as a result.

An Ever-Expanding Ecosystem

While in the early days, the digital rights movement attracted specialists, today, people from other fields have recognized how digital rights intersect with their work, and the digital rights community has embraced them.

Almasri says:

Because the digital rights movement has been decentralizing and has stopped being a speciality, it stopped being an exclusive thing for digital rights specialists, since of course the internet not only in the Arab region but all over the world has become a fundamental infrastructure for running any kind of sensitive operations, or operations in general…all types of organizations, and companies, and initiatives are thinking about their digital security, about how internet laws are affecting the use of the internet, or putting them at risk, and how surveillance technologies are affecting their operations.

Abukhater credits the collaborative work that emerged within the region over the years in building the movement’s strength:

[Today], civil society and digital civil society have many forums, many coalitions and networks, but it’s always important to remember that this is work that builds over many years of experience, and relationships, and networks—that it’s different parties coming to support each other at different phases to ensure that this kind of work succeeds and that this ecosystem is sustained globally with support from partner organizations which were very crucial in ensuring that this ecosystem is sustained, especially in Palestine.

Growing Collaborations

Conferences like Bread and Net, first held in Beirut in 2018, and the Palestine Digital Activism Forum (PDAF), first held in Ramallah in 2017, bring activists, academics, journalists, and other practitioners together to network and learn about each other’s work. The pandemic, conflict, and other barriers haven’t stopped either conference from carrying on: PDAF has become an annual virtual event that draws big-name speakers, while Bread and Net has spaced out its meetings but continues to draw bigger crowds each time.

Almasri credits these meetings with expanding the movement beyond the traditional techies and activists who first got involved. “You see a wide spectrum of different fields. You see artists, archivists, journalists joining these conversations, which is definitely on the brighter side of things when it comes to this field, or this scene.”

She also credits the emergence of alliances such as the Middle East Alliance for Digital Rights (MADR, of which EFF is a member), founded in 2020 by individuals and organizations who had been working together for many years to formalize those collaborations.

“Other than the collaborations at the advocacy level, [MADR] creates a sort of pressure point on Big Tech, on content moderation policies, allows for certain coordination at the level of the UN, et cetera, which I see as really positive because it brings some of the redundant efforts together and helps decide on priorities.”

Looking Forward

In thinking about the future of the movement, Almasri and Najem agree that digital rights are no longer a niche. In Najem’s words, “It’s about everything else…it’s about everything.” 

Almasri adds:

[W]hen it comes to priorities, things that this scene has been working on, I feel that October 7 [2023] was a big turning point in the way that digital rights activists, researchers, and academics—this field—is looking at digital rights in general. Of course, there is the major question of the need to revise tactics to fight Israel’s tech-enabled genocide that is also empowered by the global economy, big tech, and governments of the world?  What alliances should we start building on a regional and global level?

She sees ‘digital sovereignty,’ the ability of people and communities to choose, control, and use technology that serves their needs and values, as one of the next big topics for the movement to tackle, as debates over who owns and hosts our data have sharpened amid revelations that U.S. companies have played a role in regional conflicts.

There have been pockets of debates on how to achieve digital sovereignty, especially from human rights organizations documenting war crimes … There’s an awareness of how the dependence on US-based providers, cloud storage, even hosting infrastructure is a risk, especially after how using these services has been weaponized against the digital existence of certain organizations in the region that have been deplatformed or had their content removed on platforms like Meta and YouTube because their content doesn’t align with the foreign policy of the United States…so it raises a big question about how we look at digital independence, what is the spectrum of independence that civil society in the region can achieve, and in relation to what’s available as well.

Almasri also points to the role of researchers in the region:

There has been a lot more research on the political economy of surveillance technologies, so not only looking at how governments are using them, but their supply chain, who’s investing in these technologies, and how geopolitical networks empowered their proliferation in the hands of governments.

This is where studies looking at the political economy of AI and the military become important, trying to understand how this field of weapons, the military, and AI grew together as part of this global capitalist system rather than looking at these technologies in silos, that is. Looking at the proliferation of these technologies from a geopolitical point of view, looking at the bigger ecosystem rather than zooming in to the specifics of it. I think this has been a big development in the way that we look at digital rights, and the way that digital rights have been converged and integrated into the geopolitical scene.

As the global digital rights community continues to expand, it’s clear that the questions at its core are no longer just about access or expression, but about power—who holds it, how it is exercised, and who is left out of its protections. What began as a fight to keep the internet open has become a broader effort to reimagine it—an effort that is grappling with questions of infrastructure, ownership, and the global inequalities embedded in both.

And yet, despite the scale of these challenges, the movement’s strength lies in the solidarity, the ecosystems, and the networks it has spent more than a decade building. From the early days of the blogging and techie communities to the increasingly powerful digital rights community, advocates in the region have gone up against dictators, endured war and repression, yet remain determined to push forward.

Making the case for curiosity-driven science

MIT Latest News - Thu, 04/30/2026 - 12:00am

“The thing that really struck me when I came to MIT and strikes me every single day is the stuff that’s going on here is amazing. The science, the engineering… every day I hear something that makes my jaw drop,” remarked President Sally Kornbluth during a live discussion with Lizzie O’Leary of Slate’s “What Next: TBD” podcast.

Kornbluth spoke about everything from the importance of curiosity-driven science and why basic science is critical to our nation’s future, to AI and education, and even bravely joined O’Leary in a rendition of the Williams College song, “The Mountains,” in honor of their shared alma mater.

“We are in this time of incredible uncertainty,” said Kornbluth of the current state of higher education and funding for scientific research. “What we are trying to do is keep the science robust.”

Harking back to her time at Duke and her love of college basketball, she noted that addressing skepticism about higher education in Washington, D.C., takes a combination of zone coverage and man-to-man defense. She emphasized: “As one of the top institutions in the world it’s part of our responsibility to articulate the importance of science. Behind the scenes, I am – along with many other [university] presidents – I am in D.C. all the time now. I want to speak to Congressmen and women, Senators, people in the executive branch to explain the importance of what we are doing.”

Kornbluth emphasized that the pipeline of basic science that flows from U.S. universities is a critical asset for our country, cautioning that to keep straining this pipeline could have enormous negative ramifications for the U.S. down the line.

“If you think about research done in this country, it’s done in universities, it’s done in national labs, and it’s done in industry,” said Kornbluth. Universities are where most of the science with a long pathway to impact, requiring patience, starts. She pointed to immunotherapy for cancer, which began 30-40 years ago in basic immunotherapy research, as an example. With that pipeline being drained, what does the future hold for new cancer therapies or new AI and quantum technologies?

Kornbluth also underscored that uncertainty and lost funding are having a “huge impact on the talent pipeline,” delving into the unique role universities play in training graduate students, who are the next generation of scientific researchers. “We hear, ‘Oh it would be okay if research was more in industry.’ I say, ‘Would you fly on a plane with a pilot who had never flown?’ How do they think people learn how to do research? We are training the next generation… and we are losing funding for them.” She added: “I think we are going to see reverberations for many decades if we don’t rectify that issue.”

When asked how she and her colleagues are working to keep research moving forward, Kornbluth explained that at MIT, “we have tried to find alternative ways to elevate the science. We have a series of presidential initiatives that cut across the whole campus in things like health and life sciences, quantum, humanities and social sciences. The notion is that we are trying to create new opportunities.”

Still, she acknowledged that losses from the endowment tax and diminished federal funding are painful. “There are only four schools right now that are subject to the 8% endowment tax, which is a tax on our earnings. For us, that means $240 million a year plus other losses in grants. So, let’s say the whole thing is, we budgeted for a loss of $300 million a year on a $1.7 billion budget… That has definitely had an impact on us. No question about it.

“The other thing about it is again there’s all this uncertainty. Our investigators are writing a ton of grants. They don’t know if they’re going off into the void or they really have the sort of competitive opportunities they’ve always had in the past.”

Asked why universities did not see this moment coming, Kornbluth offered a few thoughts. “Look at MIT – 30,000 companies have come from MIT. When you look at something like that, why would you think any government that wants economic flourishing in their country would come after MIT?” she reflected. “It just never would have occurred to us.”

Turning towards the rapid advances in AI, and how the field is impacting education, Kornbluth noted that at MIT and other universities, “we have to focus on the human element, we have to educate our students, they need to know how to write and do mathematics…they have to view AI as a tool to augment their capabilities. That is how we are thinking about it.”

In the course of the conversation, Kornbluth also expressed her unwavering support for international students, noting that most want the opportunity to stay and contribute to research in the U.S. after graduation. “The talent brought to us through our international community is unbelievable. We can attract the very best in the world. You can bet when they talk about competitiveness with China, for example, in AI, quantum, etc., they are not sitting around in China saying, ‘Oh it’s great America is taking all our students.’ They’re thinking, ‘It’s great that America doesn’t want to take as many of our students anymore because we can train them.’ It’s a competitive issue that we really should lean into.”

Study: Immigrants help address the US eldercare shortage

MIT Latest News - Thu, 04/30/2026 - 12:00am

Good caregivers are often in short supply, but after the Covid-19 pandemic hit the U.S. in early 2020, staff levels at nursing homes dropped by 10 percent. What was a simple personnel shortage has moved closer to being a nursing-care crisis.

“We have an aging population, care for them is labor-intensive, and there are shortages everywhere in that supply chain,” says MIT economist Jonathan Gruber.

As it happens, about one-fifth of health care support workers in the U.S. are immigrants. And as a newly published study of the nation’s metro areas shows, changes in immigration levels can affect how much nursing care the elderly receive.

“When immigration rises in a city, it significantly increases the health care workforce,” says Gruber, co-author of the study and a paper detailing its findings.

Overall, Gruber and his colleagues determined that when there is more immigration, registered nurses and other aides work more hours at nursing homes, without displacing already-employed caregivers, while patient outcomes improve. Essentially, a 10 percent increase in female immigrants in a given metro area leads to a 1.1 percent increase in hours that registered nurses spend with elderly patients, while hospitalizations for those patients drop, among other things.

“Even if immigration actually increases labor supply to the medical sector, it was an open question if that would improve outcomes, and it does,” adds Gruber, the Ford Professor of Economics and head of the MIT Department of Economics.

The paper, “Immigration, the Long-Term Care Workforce, and Elder Outcomes in the U.S.,” appears in the American Journal of Health Economics. The authors are Gruber; David C. Grabowski, a professor in the Department of Health Care Policy at Harvard Medical School; and Brian E. McGarry, an assistant professor in the Department of Medicine and the Department of Public Health Sciences at the University of Rochester.

More care, fewer hospitalizations

To conduct the study, the researchers tapped into multiple data sources, including immigration information from 2000 to 2018 appearing in the U.S. Census Bureau’s American Community Survey. Extensive nursing home data came from different types of reports that facilities are required to file in order to maintain Medicare and Medicaid eligibility, allowing the scholars to examine care staffing levels and patient outcomes.

All told, the study encompasses 16 million Medicare beneficiaries in over 13,000 nursing homes in metropolitan statistical areas of the U.S., and evaluates immigration flows over two decades.

“One of the key groups that’s taking care of our nation’s elders is immigrants,” Gruber says. “So I thought it would be fascinating to understand how much does immigration actually matter for elder care.”

More specifically, the scholars find that for every 10 percent increase in immigration above the norm in metro areas, in addition to the 1.1 percent increase in registered nurse hours, there is a 0.7 percent increase in hours of care provided by certified nurse assistants. There is a 0.6 percent decline in hospitalizations for patients making short-term stays, of up to a month, in nursing homes.

Beyond that, the study yielded other markers showing that patient outcomes improve in these situations. The roughly 1 percent increase in hours of care was accompanied by a decline in the use of physical restraints, while patients also received fewer psychiatric medication prescriptions and had fewer urinary tract infections, among other things.

The fact that those outcomes improved in more immigrant-staffed situations is among the new insights provided by the research.

“There’s a lot of evidence that providing more labor supply to the elderly sector improves patient outcomes,” Gruber says. “But it wasn’t clear whether more immigrants would work the same way, because of language issues or other factors.”

A new lens

The study comes as immigration policy has become a major issue in the U.S., something that Gruber says helped spur his curiosity about its health care implications — although he did not know what the study would reveal, one way or another. In this case, he notes, the impact of immigration on eldercare may be another factor to be considered in the larger debates about the subject.

“I think it provides a new lens on the debate over immigration,” Gruber says. “The debate over immigration has been solely about what will it do to native workers, what will it do to the crime rate, what will it do to tax collection. This adds a new element, which is: What will it do to our citizens’ care? By having more immigration, we provide more care.”

Gruber, Grabowski, and McGarry are continuing to study this issue. In a new working paper, released in February, they found that increases in immigration are consistent with a reduction in the mortality rate, in part by allowing more elderly people the opportunity to receive care at home.

Gruber recognizes that there will continue to be sharp policy disagreements over immigration. Still, as the just-published paper states, to this point, when it comes to nursing care, the “results paint a consistent picture of improved quality of care resulting from increased immigration.”
