Feed aggregator

New vaccine platform promotes rare protective B cells

MIT Latest News - Thu, 02/05/2026 - 2:00pm

A longstanding goal of immunotherapies and vaccine research is to induce antibodies in humans that neutralize deadly viruses such as HIV and influenza. Of particular interest are antibodies that are “broadly neutralizing,” meaning they can in principle eliminate multiple strains of a virus such as HIV, which mutates rapidly to evade the human immune system.

Researchers at MIT and the Scripps Research Institute have now developed a vaccine that generates a significant population of rare precursor B cells that are capable of evolving to produce broadly neutralizing antibodies. Expanding these cells is the first step toward a successful HIV vaccine.

The researchers’ vaccine design uses DNA instead of protein as a scaffold to fabricate a virus-like particle (VLP) displaying numerous copies of an engineered HIV immunogen called eOD-GT8, which was developed at Scripps. This vaccine generated substantially more precursor B cells in a humanized mouse model compared to a protein-based virus-like particle that has shown significant success in human clinical trials.

Preclinical studies showed that the DNA-VLP generated eight times more of the desired, or “on-target,” B cells than the clinical product, which was already shown to be highly potent.

“We were all surprised that this already outstanding VLP from Scripps was significantly outperformed by the DNA-based VLP,” says Mark Bathe, an MIT professor of biological engineering and an associate member of the Broad Institute of MIT and Harvard. “These early preclinical results suggest a potential breakthrough as an entirely new, first-in-class VLP that could transform the way we think about active immunotherapies, and vaccine design, across a variety of indications.”

The researchers also showed that the DNA scaffold itself doesn’t induce an immune response when displaying the engineered HIV antigen. This means the DNA VLP might be used to deliver the multiple antigens needed in boosting strategies for challenging diseases such as HIV.

“The DNA-VLP allowed us for the first time to assess whether B cells targeting the VLP itself limit the development of ‘on target’ B cell responses — a longstanding question in vaccine immunology,” says Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute and a Howard Hughes Medical Institute Investigator.

Bathe and Irvine are the senior authors of the study, which appears today in Science. The paper’s lead author is Anna Romanov PhD ’25.

Priming B cells

The new study is part of a major ongoing global effort to develop active immunotherapies and vaccines that expand specific lineages of B cells. All humans have the necessary genes to produce the right B cells that can neutralize HIV, but they are exceptionally rare and require many mutations to become broadly neutralizing. If exposed to the right series of antigens, however, these cells can in principle evolve to eventually produce the requisite broadly neutralizing antibodies.

In the case of HIV, one such target antibody, called VRC01, was discovered by National Institutes of Health researchers in 2010 when they studied humans living with HIV who did not develop AIDS. This set off a major worldwide effort to develop an HIV vaccine that would induce this target antibody, but this remains an outstanding challenge.

Generating HIV-neutralizing antibodies is believed to require three stages of vaccination, each one initiated by a different antigen that helps guide B cell evolution toward the correct target, the native HIV envelope protein gp120.

In 2013, William Schief, a professor of immunology and microbiology at Scripps, reported an engineered antigen called eOD-GT6 that could be used for the first step in this process, known as priming. His team subsequently upgraded the antigen to eOD-GT8. Vaccination with eOD-GT8 arrayed on a protein VLP generated early antibody precursors to VRC01 both in mice and more recently in humans, a key first step toward an HIV vaccine.

However, the protein VLP also generated substantial “off-target” antibodies that bound the irrelevant, and potentially highly distracting, protein VLP itself. This could have unknown consequences on propagating target B cells of interest for HIV, as well as other challenging immunotherapy applications.

The Bathe and Irvine labs set out to test if they could use a particle made from DNA, instead of protein, to deliver the priming antigen. These nanoscale particles are made using DNA origami, a method that offers precise control over the structure of synthetic DNA and allows researchers to attach viral antigens at specific locations.

In 2024, Bathe and Daniel Lingwood, an associate professor at Harvard Medical School and a principal investigator at the Ragon Institute, showed this DNA VLP could be used to deliver a SARS-CoV-2 vaccine in mice to generate neutralizing antibodies. From that study, the researchers learned that the DNA scaffold does not induce antibodies to the VLP itself, unlike proteins. They wondered whether this might also enable a more focused antibody response.

Building on these results, Romanov, co-advised by Bathe and Irvine, set off to apply the DNA VLP to the Scripps HIV priming vaccine, based on eOD-GT8.

“Our earlier work with SARS-CoV-2 antigens on DNA-VLPs showed that DNA-VLPs can be used to focus the immune response on an antigen of interest. This property seemed especially useful for a case like HIV, where the B cells of interest are exceptionally rare. Thus, we hypothesized that reducing the competition among other irrelevant B cells (by delivering the vaccine on a silent DNA nanoparticle) may help these rare cells have a better chance to survive,”  Romanov says.

Initial studies in mice, however, showed the vaccine did not induce a sufficient early B cell response to the first, priming dose.

After redesigning the DNA VLPs, Romanov and colleagues found that a smaller-diameter version with 60 instead of 30 copies of the engineered antigen dramatically outperformed the clinical protein VLP construct, both in overall number of antigen-specific B cells and in the fraction of B cells that were on-target to the specific HIV domain of interest. This was a result of improved retention of the particles in B cell follicles in lymph nodes and better collaboration with helper T cells, which promote B cell survival.

Overall, these improvements enabled the particles to generate eightfold more on-target B cells than the vaccine consisting of eOD-GT8 carried by a protein scaffold. Another key finding, elucidated by the Lingwood lab, was that the DNA particles promoted VRC01 precursor B cells toward the VRC01 antibody more efficiently than the protein VLP.

“In the field of vaccine immunology, the question of whether B cell responses to a targeted protective epitope on a vaccine antigen might be hindered by responses to neighboring off-target epitopes on the same antigen has been under intense investigation,” says Schief, who is also vice president for protein design at Moderna. “There are some data from other studies suggesting that off-target responses might not have much impact, but this study shows quite convincingly that reducing off-target responses by using a DNA VLP can improve desired on-target responses.”

“While nanoparticle formulations have been great at boosting antibody responses to various antigens, there is always this nagging question of whether competition from B cells specific for the particle’s own structural antigens won’t get in the way of antibody responses to targeted epitopes,” says Gabriel Victora, a professor of immunology, virology, and microbiology at Rockefeller University, who was not involved in the study. “DNA-based particles that leverage B cells’ natural tolerance to nucleic acids are a clever idea to circumvent this problem, and the research team’s elegant experiments clearly show that this strategy can be used to make difficult epitopes easier to target.”

A “silent” scaffold

The fact that the DNA-VLP scaffold doesn’t induce scaffold-specific antibodies means that it could be used to carry second and potentially third antigens needed in the vaccine series, as the researchers are currently investigating. It also might offer significantly improved on-target antibodies for numerous antigens that are outcompeted and dominated by off-target, irrelevant protein VLP scaffolds in this or other applications.

“A breakthrough of this paper is the rigorous, mechanistic quantification of how DNA-VLPs can ‘focus’ antibody responses on target antigens of interest, which is a consequence of the silent nature of this DNA-based scaffold we’ve previously shown is stealth to the immune system,” Bathe says.

More broadly, this new type of VLP could be used to generate other kinds of protective antibody responses against pandemic threats such as flu, or potentially against chemical warfare agents, the researchers suggest. Alternatively, it might be used as an active immunotherapy to generate antibodies that target amyloid beta or tau protein to treat degenerative diseases such as Alzheimer’s, or to generate antibodies that target noxious chemicals such as opioids or nicotine to help people suffering from addiction.

The research was funded by the National Institutes of Health; the Ragon Institute of MGH, MIT, and Harvard; the Howard Hughes Medical Institute; the National Science Foundation; the Novo Nordisk Foundation; a Koch Institute Support (core) Grant from the National Cancer Institute; the National Institute of Environmental Health Sciences; the Gates Foundation Collaboration for AIDS Vaccine Discovery; the IAVI Neutralizing Antibody Center; the National Institute of Allergy and Infectious Diseases; and the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies.

“Essential” torch heralds the start of the 2026 Winter Olympics

MIT Latest News - Thu, 02/05/2026 - 8:00am

Before the thrill of victory; before the agony of defeat; before the gold medalist’s national anthem plays, there is the Olympic torch. A symbol of unity, friendship, and the spirit of competition, the torch links today’s Olympic Games to their heritage in ancient Greece.

The torch for the 2026 Milano Cortina Olympic Games and Paralympic Games was designed by Carlo Ratti, a professor of the practice for the MIT Department of Urban Studies and Planning and the director of the Senseable City Lab in the MIT School of Architecture and Planning.

A native of Turin, Italy, Ratti is a designer and architect respected worldwide. His work and that of his firm, Carlo Ratti Associati, have been featured at international expositions such as the French Pavilion at the Osaka Expo (World’s Fair) in 2025 and the Italian Pavilion at the Dubai Expo in 2020. Their design for The Cloud, a 400-foot-tall spherical structure that would have served as a unique observation deck, was a finalist for the 2012 Olympic Games in London but was ultimately not built.

Ratti relishes the opportunity to participate in these events.

“You can push the boundaries more at these [venues] because you are building something that is temporary,” says Ratti. “They allow for more creativity, so it’s a good moment to experiment.”

Based on his previous work, Ratti was invited to design the torch by the Olympic organizers. He approached the project much as he instructs his students working in his lab.

“It is about what the object or the design is to convey,” Ratti says. “How it can touch people, how it can relate to people, how it can transmit emotions. That’s the most important thing.”

To Ratti, the fundamental aspect of the torch is the flame. A few months before the games begin, the torch is lit in Olympia, Greece, using a parabolic mirror reflecting the sun’s rays. In ancient Greece, the flame was considered “sacred,” and was to remain lit throughout the competition. Ratti, familiar with the history of the Olympic torch, is less impressed with designs he deems overwrought. Many torches added superfluous ornamentation to their exteriors, he says. Instead, much as cars are designed around their engines, he decided to strip away everything that wasn’t essential to the flame itself.

What is “essential”

“Essential” — the official name for the 2026 Winter Olympic torch — was designed to perform regardless of the weather, wind, or altitude it would encounter on its journey from Olympia to Milan. The process took three years with many designs created, considered, and discussed with the local and global Olympic committees and Olympic sponsor Versalis. And, as with Ratti’s work at MIT, researchers and engineers collaborated in the effort.

“Each design pushed the boundaries in different directions, but all of them with the key principle to put the flame at the center,” says Ratti, who wanted the torch to embody “an ethos of frugality.”

At the core of Ratti’s torch is a high-performance burner powered by bio-LPG produced by energy company ENI from 100 percent renewable feedstocks. Furthermore, the torch can be recharged 10 times; in previous years, torches were used only once. This allows a 10-fold reduction in the number of torches produced.

Also unique to this torch is its internal mechanism, which is visible via a vertical opening along its side, allowing audiences to see the burner in action. This reinforces the desire to keep the emphasis on the flame instead of the object.

In keeping with the requisite for minimalism and sustainability, the torch is primarily composed of recycled aluminum. It is the lightest torch created for the Olympics, weighing just under 2.5 pounds. The body is finished with a PVD coating that is heat resistant, letting it shift colors by reflecting the environments — such as the mountains and the city lights — through which it is carried. The Olympic torch is a blue-green shade, while the Paralympic torch is gold.

The torch won an honorable mention in Italy’s most prestigious industrial design award, the Compasso d’Oro.

The Olympic Relay

The torch relay is considered an event itself, drawing thousands as it is carried to the host city by hundreds of volunteers. Its journey for the 2026 Olympics started in late November and, after visiting cities across Greece, will have covered all 110 Italian provinces before arriving in Milan for the opening ceremony on Feb. 6.

Ratti carried the torch for a portion of its journey through Turin in mid-January — another joyful invitation to this quadrennial event. He says winter sports are his favorite; he grew up skiing where these games are being held, and has since skied around the world — from Utah to the Himalayas.

In addition to a highly sustainable torch, there was another statement Ratti wanted to make: He wanted to showcase the Italy of today and of the future. It is the same issue he confronted as the curator of the 2025 Biennale Architettura in Venice, titled “Intelligens. Natural. Artificial. Collective”: an architecture exhibition, but one infused with technology for the future.

“When people think about Italy, they often think about the past, from ancient Romans to the Renaissance or Baroque period,” he says. “Italy does indeed have a significant past. But the reality is that it is also the second-largest industrial powerhouse in Europe and is leading in innovation and tech in many fields. So, the 2026 torch aims to combine both past and future. It draws on Italian design from the past, but also on future-forward technologies.”

“There should be some kind of architectural design always translating into form some kind of ethical principles or ideals. It’s not just about a physical thing. Ultimately, it’s about the human dimension. That applies to the work we do at MIT or the Olympic torch.”

Backdoor in Notepad++

Schneier on Security - Thu, 02/05/2026 - 7:00am

Hackers associated with the Chinese government used a Trojaned version of Notepad++ to deliver malware to selected users.

Notepad++ said that officials with the unnamed provider hosting the update infrastructure consulted with incident responders and found that it remained compromised until September 2. Even then, the attackers maintained credentials to the internal services until December 2, a capability that allowed them to continue redirecting selected update traffic to malicious servers. The threat actor “specifically targeted Notepad++ domain with the goal of exploiting insufficient update verification controls that existed in older versions of Notepad++.” Event logs indicate that the hackers tried to re-exploit one of the weaknesses after it was fixed but that the attempt failed...
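The weakness exploited here, insufficient update verification, is the kind of check a client can enforce on its own. A minimal sketch of the idea (hypothetical filenames and digests, not Notepad++'s actual update mechanism) is to pin a known-good checksum published out of band and refuse to apply any downloaded update that doesn't match it:

```python
import hashlib
import hmac

# Hypothetical known-good digest, published out of band (e.g. fetched over
# TLS from a server separate from the download mirror). Illustrative only.
TRUSTED_PAYLOAD = b"npp.installer.exe contents"
EXPECTED_SHA256 = hashlib.sha256(TRUSTED_PAYLOAD).hexdigest()

def verify_update(payload: bytes, expected_hex: str) -> bool:
    """Refuse to apply an update whose digest doesn't match the pinned value."""
    actual = hashlib.sha256(payload).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(actual, expected_hex)

good = verify_update(TRUSTED_PAYLOAD, EXPECTED_SHA256)          # genuine file
tampered = verify_update(b"trojaned installer", EXPECTED_SHA256)  # swapped file
```

With a check like this in place, redirecting update traffic to a malicious server is not enough; the attacker would also have to compromise the channel that publishes the digest (or, in a real deployment, the code-signing key).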

Trump cut science funding. Small businesses are paying the price.

ClimateWire News - Thu, 02/05/2026 - 6:09am
Some federal contractors are feeling the squeeze after the president slashed support for climate programs and other research efforts.

Hawaii cites Trump court loss to defend state’s climate lawsuit

ClimateWire News - Thu, 02/05/2026 - 6:08am
The administration has tried to stop states from suing the fossil fuel industry to pay up for climate impacts.

Power companies fight DOE order keeping coal plant open

ClimateWire News - Thu, 02/05/2026 - 6:08am
The owners of a Colorado facility that was forced to operate past its retirement date said the Trump administration saw an energy emergency where none existed.

Judge finds Texas anti-ESG law unconstitutional

ClimateWire News - Thu, 02/05/2026 - 6:07am
A federal court has stopped the state from refusing to do business with companies that "boycott" fossil fuels.

Illinois defies Trump by launching climate Superfund fight

ClimateWire News - Thu, 02/05/2026 - 6:06am
A top Democrat is pushing for her state to join New York and Vermont in making the fossil fuel sector pay for historical emissions.

Senate Republicans ask FEMA to halt current federal flood insurance risk pricing

ClimateWire News - Thu, 02/05/2026 - 6:04am
The lawmakers want to scrap the new pricing, citing rising premium costs and homeowners dropping their coverage.

Two-thirds of poorer Europeans can’t keep homes cool in ever-hotter summers

ClimateWire News - Thu, 02/05/2026 - 6:04am
A new survey underscores the unequal impacts of climate change.

Czech premier calls on the EU to slash carbon prices

ClimateWire News - Thu, 02/05/2026 - 6:03am
Andrej Babiš told European leaders the EU emissions trading scheme had become too costly.

Winter Games organizers open to earlier start dates as planet warms

ClimateWire News - Thu, 02/05/2026 - 6:03am
The International Olympic Committee has long acknowledged that the changing climate is a challenge for finding future hosts and organizing competitions.

Norwegian skier hands IOC a petition to ‘ski fossil free’

ClimateWire News - Thu, 02/05/2026 - 6:02am
The petition asks the International Olympic Committee and the International Ski and Snowboard Federation to publish a report evaluating the appropriateness of fossil fuel marketing before next season.

Careful land allocation for carbon dioxide removal is critical for safeguarding biodiversity

Nature Climate Change - Thu, 02/05/2026 - 12:00am

Nature Climate Change, Published online: 05 February 2026; doi:10.1038/s41558-026-02567-3

A spatial assessment of global decarbonization scenarios reveals that land allocated for carbon dioxide removal substantially overlaps with areas of high biodiversity importance. The implications of such overlap depend on location and mode of implementation and demonstrate that careful assessment will be required when implementing decarbonization pathways to safeguard biodiversity.

Protecting Our Right to Sue Federal Agents Who Violate the Constitution

EFF: Updates - Wed, 02/04/2026 - 7:50pm

Federal agencies like Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights. For example, we have a First Amendment right to record on-duty police, including ICE and CBP, but federal agents are violating this right. Indeed, Alex Pretti was exercising this right shortly before federal agents shot and killed him. So were the many people who filmed agents shooting and killing Pretti and Renee Good – thereby creating valuable evidence that contradicts false claims by government leaders.

To protect our digital rights, we need the rule of law. When an armed agent of the government breaks the law, the civilian they injure must be made whole. This includes a lawsuit by the civilian (or their survivor) against the agent, seeking money damages to compensate them for their injury. Such systems of accountability encourage agents to follow the law, whereas impunity encourages them to break it.

Unfortunately, there is a gaping hole in the rule of law: when a federal agent violates the U.S. Constitution, it is increasingly difficult to sue them for damages. For these reasons, EFF supports new statutes to fill this hole, including California S.B. 747.

The Problem

In 1871, at the height of Reconstruction following the Civil War, Congress enacted a landmark statute empowering people to sue state and local officials who violated their constitutional rights. This was a direct response to state-sanctioned violence against Black people that continued despite the formal end of slavery. The law is codified today at 42 U.S.C. § 1983.

However, there is no comparable statute empowering people to sue federal officials who violate the U.S. Constitution.

So in 1971, the U.S. Supreme Court stepped into this gap, in a watershed case called Bivens v. Six Unknown Named Agents of Federal Bureau of Narcotics. The plaintiff alleged that federal narcotics agents unlawfully searched his home and used excessive force against him. Justice Brennan, writing for a six-Justice majority of the Court, ruled that “damages may be obtained for injuries consequent upon a violation of the Fourth Amendment by federal officials.”  He explained: “Historically, damages have been regarded as the ordinary remedy for an invasion of personal interests in liberty.” Further: “The very essence of civil liberty certainly consists of the right of every individual to claim the protection of the laws, whenever he receives an injury.”

Subsequently, the Court expanded Bivens in cases where federal officials violated the U.S. Constitution by discriminating in a workplace, and by failing to provide medical care in a prison.

In more recent years, however, the Court has whittled Bivens down to increasing irrelevance. For example, the Court has rejected damages litigation against federal officials who allegedly violated the U.S. Constitution by strip searching a detained person, and by shooting a person located across the border.

In 2022, the Court by a six-to-three vote rejected a damages claim against a Border Patrol agent who used excessive force when investigating alleged smuggling.  In an opinion concurring in the judgment, Justice Gorsuch conceded that he “struggle[d] to see how this set of facts differs meaningfully from those in Bivens itself.” But then he argued that Bivens should be overruled because it supposedly “crossed the line” against courts “assuming legislative authority.”

Last year, the Court unanimously declined to extend Bivens to excessive force in a prison.

The Solution

At this juncture, legislatures must solve the problem. We join calls for Congress to enact a federal statute, parallel to the one it enacted during Reconstruction, to empower people to sue federal officials (and not just state and local officials) who violate the U.S. Constitution.

In the meantime, it is heartening to see state legislatures step forward to fill this hole. One such effort is California S.B. 747, which EFF is proud to endorse.

State laws like this one do not violate the Supremacy Clause of the U.S. Constitution, which provides that the Constitution is the supreme law of the land. In the words of one legal explainer, this kind of state law “furthers the ultimate supremacy of the federal Constitution by helping people vindicate their fundamental constitutional rights.” 

This kind of state law goes by many names. The author of S.B. 747, California Senator Scott Wiener, calls it the “No Kings Act.” Protect Democracy, which wrote a model bill, calls it the “Universal Constitutional Remedies Act.” The originator of this idea, Professor Akhil Amar, calls it a “converse 1983”: instead of Congress authorizing suit against state officials for violating the U.S. Constitution, states would authorize suit against federal officials for doing the same thing.

We call these laws a commonsense way to protect the rule of law, which is a necessary condition to preserve our digital rights. EFF has long supported effective judicial remedies, including support for nationwide injunctions and private rights of action, and opposition to qualified immunity.

We also support federal and state legislation to guarantee our right to sue federal agents for damages when they violate the U.S. Constitution.

Smart AI Policy Means Examining Its Real Harms and Benefits

EFF: Updates - Wed, 02/04/2026 - 5:40pm

The phrase "artificial intelligence" has been around for a long time, covering everything from computers with "brains"—think Data from Star Trek or HAL 9000 from 2001: A Space Odyssey—to the autocomplete function that too often has you sending emails to the wrong person. It's a term that sweeps a wide array of uses into it—some well-established, others still being developed.

Recent news shows us a rapidly expanding catalog of potential harms that may result from companies pushing AI into every new feature and aspect of public life—like the automation of bias that follows from relying on a backward-looking technology to make consequential decisions about people's housing, employment, education, and so on. Complicating matters, the computation needed for some AI services requires vast amounts of water and electricity, leading to sometimes difficult questions about whether the increased fossil fuel use or consumption of water is justified.

We are also inundated with advertisements and exhortations to use the latest AI-powered apps, and with hype insisting AI can solve any problem.

Obscured by this hype, there are some real examples of AI proving to be a helpful tool. For example, machine learning is especially useful for scientists looking at everything from the inner workings of our biology to cosmic bodies in outer space. AI tools can also improve accessibility for people with disabilities, facilitate police accountability initiatives, and more. There are reasons why these problems are amenable to machine learning and why excitement over these uses shouldn’t translate into a perception that just any language model or AI technology possesses expert knowledge or can solve whatever problem it’s marketed as solving.

EFF has long fought for sensible, balanced tech policies because we’ve seen how regulators can focus entirely on use cases they don’t like (such as the use of encryption to hide criminal behavior) and cause enormous collateral harm to other uses (such as using encryption to hide dissident resistance). Similarly, calls to completely preempt state regulation of AI would thwart important efforts to protect people from the real harms of AI technologies. Context matters. Large language models (LLMs) and the tools that rely on them are not magic wands—they are general-purpose technologies. And if we want to regulate those technologies in a way that doesn’t shut down beneficial innovations, we have to focus on the impact(s) of a given use or tool, by a given entity, in a specific context. Then, and only then, can we even hope to figure out what to do about it.

So let’s look at the real-world landscape.

AI’s Real and Potential Harms

Thinking ahead about potential negative uses of AI helps us spot risks. Too often, the corporations developing AI tools—as well as governments that use them—lose sight of the real risks, or don’t care. For example, companies and governments use AI to do all sorts of things that hurt people, from price collusion to mass surveillance. AI should never be part of a decision about whether a person will be arrested, deported, placed into foster care, or denied access to important government benefits like disability payments or medical care.

There is too much at stake, and governments have a duty to make responsible, fair, and explainable decisions, which AI can’t reliably do yet. Why? Because AI tools are designed to identify and reproduce patterns in data that they are “trained” on.  If you train AI on records of biased government decisions, such as records of past arrests, it will “learn” to replicate those discriminatory decisions.

And simply having a human in the decision chain will not fix this foundational problem. Studies have shown that having a human “in the loop” doesn’t adequately correct for AI bias, both because the human tends to defer to the AI and because the AI can provide cover for a biased human to ratify decisions that agree with their biases and override the AI at other times.

These biases don’t just arise in obvious contexts, like when a government agency is making decisions about people. They can also arise in equally life-affecting contexts like medical care: whenever AI is used for analysis in a context with systemic disparities, and whenever the costs of an incorrect decision fall on someone other than those deciding whether to use the tool. For example, dermatology has historically underserved people of color because of a focus on white skin, and the resulting bias affects AI tools trained on the existing, biased image data.

These kinds of errors are difficult to detect and correct because it’s hard or even impossible to understand how an AI tool arrives at individual decisions. These tools can sometimes find and apply patterns that a human being wouldn't even consider, such as basing diagnostic decisions on which hospital a scan was done at. Or determining that malignant tumors are the ones where there is a ruler next to them—something that a human would automatically exclude from their evaluation of an image. Unlike a human, AI does not know that the ruler is not part of the cancer.

Auditing and correcting for these kinds of mistakes is vital, but in some cases, might negate any sort of speed or efficiency arguments made in favor of the tool. We all understand that the more important a decision is, the more guardrails against disaster need to be in place. For many AI tools, those don't exist yet. Sometimes, the stakes will be too high to justify the use of AI. In general, the higher the stakes, the less this technology should be used.

We also need to acknowledge the risk of over-reliance on AI, at least as it is currently being released. We've seen shades of a similar problem before online (see: "Dr. Google"), but the speed and scale of AI use—and the increasing market incentive to shoe-horn “AI” into every business model—have compounded the issue.

Moreover, AI may reinforce a user’s pre-existing beliefs—even if they’re wrong or unhealthy. Many users may not understand how AI works, what it is programmed to do, and how to fact check it. Companies have chosen to release these tools widely without adequate information about how to use them properly and what their limitations are. Instead they market them as easy and reliable. Worse, some companies also resist transparency in the name of trade secrets and reducing liability, making it harder for anyone to evaluate AI-generated answers. 

Other considerations that may weigh against AI uses are its environmental impact and potential labor-market effects. Delving into these is beyond the scope of this post, but they are important factors in determining whether AI is doing good somewhere and whether any benefits from AI are equitably distributed.

Research into the extent of AI harms and means of avoiding them is ongoing, but it should be part of the analysis.

AI’s Real and Potential Benefits

However harmful AI technologies can sometimes be, in the right hands and circumstances, they can do things that humans simply can’t. Machine learning technology has powered search tools for over a decade. It’s undoubtedly useful for machines to help human experts pore through vast bodies of literature and data to find starting points for research—things that no number of research assistants could do in a single year. If an actual expert is involved and has a strong incentive to reach valid conclusions, the weaknesses of AI are less significant at the early stage of generating research leads. Many of the following examples fall into this category.

Machine learning differs from traditional statistics in that the analysis doesn’t make assumptions about what factors are significant to the outcome. Rather, the machine learning process computes which patterns in the data have the most predictive power and then relies upon them, often using complex formulae that are unintelligible to humans. These aren’t discoveries of laws of nature—AI is bad at generalizing that way and coming up with explanations. Rather, they’re descriptions of what the AI has already seen in its data set.
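
To make that concrete, here is a contrived, stdlib-only Python sketch of a naive "learner" that simply picks whichever feature best predicts the labels it is given. The feature names and data are invented for illustration; real systems are vastly more complex. On data where a spurious artifact, like the ruler mentioned earlier, happens to correlate with the outcome, the learner latches onto the artifact rather than anything medically meaningful:

```python
# Toy illustration only (not any real diagnostic system): a naive
# "learner" that picks whichever feature best predicts the label.

def best_predictor(rows, labels):
    """Return the feature whose value most often matches the label."""
    features = rows[0].keys()
    def accuracy(f):
        return sum(r[f] == y for r, y in zip(rows, labels)) / len(rows)
    return max(features, key=accuracy)

# Hypothetical training images: 'has_ruler' is a spurious artifact
# that happens to correlate perfectly with malignancy in this data.
rows = [
    {"irregular_border": 1, "has_ruler": 1},
    {"irregular_border": 0, "has_ruler": 1},
    {"irregular_border": 1, "has_ruler": 0},
    {"irregular_border": 0, "has_ruler": 0},
]
labels = [1, 1, 0, 0]  # 1 = malignant, 0 = benign

print(best_predictor(rows, labels))  # picks "has_ruler", the artifact
```

Nothing in the procedure distinguishes a real biological signal from a confound; it only measures predictive power within the data it was shown, which is exactly why such patterns are descriptions of the training set rather than laws of nature.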

To be clear, we don't endorse any products and recognize that initial results are not proof of ultimate success. But these cases show us the difference between what AI can actually do and what hype claims it can do.

AI Advancements in Scientific and Medical Research

Researchers are using AI to discover better alternatives to today’s lithium-ion batteries, which require large amounts of toxic, expensive, and highly combustible materials. AI is rapidly advancing battery development by allowing researchers to analyze millions of candidate materials and generate new ones. New battery technologies discovered with the help of AI have a long way to go before they can power our cars and computers, but this field has come further in the past few years than it had in a long time.

AI tools can also help facilitate weather prediction. AI forecasting models are less computationally intensive and often more reliable than traditional tools based on simulating the physical thermodynamics of the atmosphere. Questions remain, though, about how they will handle especially extreme events or systemic climate changes over time.

For example:

  • The National Oceanic and Atmospheric Administration has developed new machine learning models to improve weather prediction, including a first-of-its-kind hybrid system that uses an AI model in concert with a traditional physics-based model to deliver more accurate forecasts than either model does on its own.
  • Several models were used to forecast a recent hurricane. Google DeepMind’s AI system performed the best, even beating official forecasts from the U.S. National Hurricane Center (which now uses DeepMind’s AI model).

Researchers are using AI to help develop new medical treatments:

  • Deep learning tools, like the Nobel Prize-winning model AlphaFold, are helping researchers understand protein folding. Over 3 million researchers have used AlphaFold to analyze biological processes and design drugs that target disease-causing malfunctions in those processes.
  • Researchers used machine learning to simulate and computationally test a large range of new antibiotic candidates, hoping they will help treat drug-resistant bacteria, a growing threat that kills millions of people each year.
  • Researchers used AI to identify a new treatment for idiopathic pulmonary fibrosis, a progressive lung disease with few treatment options. The new treatment has successfully completed a Phase IIa clinical trial. Such drugs still need to be proven safe and effective in larger clinical trials and gain FDA approval before they can help patients, but this new treatment for pulmonary fibrosis could be the first to reach that milestone.
  • Machine learning has been used for years to aid in vaccine development—including the development of the first COVID-19 vaccines—accelerating the process by rapidly identifying potential vaccine targets for researchers to focus on.

AI Uses for Accessibility and Accountability

AI technologies can improve accessibility for people with disabilities. But, as with many uses of this technology, safeguards are essential: many tools lack adequate privacy protections, aren’t designed for disabled users, and can even harbor bias against people with disabilities, so inclusive design, privacy, and anti-bias protections are crucial. With those caveats, here are two very interesting examples:

  • AI voice generators are giving people who have lost the ability to speak their voices back. For example, while serving in Congress, Rep. Jennifer Wexton developed a debilitating neurological condition that left her unable to speak. She used her cloned voice to deliver a speech from the floor of the House of Representatives advocating for disability rights.
  • Those who are blind or low-vision, as well as those who are deaf or hard-of-hearing, have benefited from accessibility tools while also discussing their limitations and drawbacks. At present, AI tools often provide information in a more easily accessible format than traditional web search tools and many websites that are difficult to navigate for users who rely on a screen reader. Other tools can help blind and low-vision users navigate and understand the world around them by providing descriptions of their surroundings. While these visual descriptions may not always be as good as the ones a human might provide, they can still be useful in situations when users can’t or don’t want to ask another person to describe something. For more on this, check out our recent podcast episode on “Building the Tactile Internet.”

When there is a lot of data to comb through, as with police accountability, AI is very useful for researchers and policymakers:

  • The Human Rights Data Analysis Group used LLMs to analyze millions of pages of records regarding police misconduct. This is essentially the reverse of harmful use cases relating to surveillance: when the power to rapidly analyze large amounts of data is used by the public to scrutinize the state, there is the potential to reveal abuses of power and, given the power imbalance, very little risk that undeserved consequences will befall those being studied.
  • An EFF client, Project Recon, used an AI system to review massive volumes of transcripts of prison parole hearings to identify biased parole decisions. This innovative use of technology to identify systemic biases, including racial disparities, is the type of AI use we should support and encourage.

It is not a coincidence that the best examples of positive uses of AI come in places where experts are involved, with access to infrastructure to help them use the technology and the requisite experience to evaluate the results. Moreover, academic researchers are already accustomed to explaining what they have done and being transparent about it—and it is hard-won knowledge that ethical review is a vital step in work like this.

Nor is it a coincidence that other beneficial uses involve specific, discrete solutions to problems faced by those whose needs are often unmet by traditional channels or vendors. The ultimate outcome is beneficial, but it is moderated by human expertise and/or tailored to specific needs.

Context Matters

It can be very tempting—and easy—to make a blanket determination about something, especially when the stakes seem so high. But we urge everyone—users, policymakers, the companies themselves—to cut through the hype. In the meantime, EFF will continue to work against the harms caused by AI while also making sure that beneficial uses can advance.

Brian Hedden named co-associate dean of Social and Ethical Responsibilities of Computing

MIT Latest News - Wed, 02/04/2026 - 1:25pm

Brian Hedden PhD ’12 has been appointed co-associate dean of the Social and Ethical Responsibilities of Computing (SERC) at MIT, a cross-cutting initiative in the MIT Schwarzman College of Computing, effective Jan. 16.

Hedden is a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS). He joined the MIT faculty last fall from the Australian National University and the University of Sydney, where he previously served as a faculty member. He earned his BA from Princeton University and his PhD from MIT, both in philosophy.

“Brian is a natural and compelling choice for SERC, as a philosopher whose work speaks directly to the intellectual challenges facing education and research today, particularly in computing and AI. His expertise in epistemology, decision theory, and ethics addresses questions that have become increasingly urgent in an era defined by information abundance and artificial intelligence. His scholarship exemplifies the kind of interdisciplinary inquiry that SERC exists to advance,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

Hedden’s research focuses on how we ought to form beliefs and make decisions, and it explores how philosophical thinking about rationality can yield insights into contemporary ethical issues, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics such as collective action problems, legal standards of proof, algorithmic fairness, and political polarization.

Joining co-associate dean Nikos Trichakis, the J.C. Penney Professor of Management at the MIT Sloan School of Management, Hedden will help lead SERC and advance the initiative’s ongoing research, teaching, and engagement efforts. He succeeds professor of philosophy Caspar Hare, who stepped down at the conclusion of his three-year term on Sept. 1, 2025.

Since its inception in 2020, SERC has launched a range of programs and activities designed to cultivate responsible “habits of mind and action” among those who create and deploy computing technologies, while fostering the development of technologies in the public interest.

The SERC Scholars Program invites undergraduate and graduate students to work alongside postdoctoral mentors to explore interdisciplinary ethical challenges in computing. The initiative also hosts an annual prize competition that challenges MIT students to envision the future of computing, publishes a twice-yearly series of case studies, and collaborates on coordinated curricular materials, including active-learning projects, homework assignments, and in-class demonstrations. In 2024, SERC introduced a new seed grant program to support MIT researchers investigating ethical technology development; to date, two rounds of grants have been awarded to 24 projects.

Antonio Torralba, three MIT alumni named 2025 ACM fellows

MIT Latest News - Wed, 02/04/2026 - 1:15pm

Antonio Torralba, Delta Electronics Professor of Electrical Engineering and Computer Science and faculty head of artificial intelligence and decision-making at MIT, has been named to the 2025 cohort of Association for Computing Machinery (ACM) Fellows. He shares the honor of an ACM Fellowship with three MIT alumni: Eytan Adar ’97, MEng ’98; George Candea ’97, MEng ’98; and Gookwon Edward Suh SM ’01, PhD ’05.

A principal investigator within both the Computer Science and Artificial Intelligence Laboratory and the Center for Brains, Minds, and Machines, Torralba received his BS in telecommunications engineering from Telecom BCN, Spain, in 1994, and a PhD in signal, image, and speech processing from the Institut National Polytechnique de Grenoble, France, in 2000. At different points in his MIT career, he has been director of both the MIT Quest for Intelligence (now the MIT Siegel Family Quest for Intelligence) and the MIT-IBM Watson AI Lab. 

Torralba’s research focuses on computer vision, machine learning, and human visual perception; as he puts it, “I am interested in building systems that can perceive the world like humans do.” Alongside Phillip Isola and William Freeman, he recently co-authored “Foundations of Computer Vision,” an 800-plus page textbook exploring the foundations and core principles of the field. 

Among other awards and recognitions, he is the recipient of the 2008 National Science Foundation Career award; the 2010 J. K. Aggarwal Prize from the International Association for Pattern Recognition; the 2017 Frank Quick Faculty Research Innovation Fellowship; the Louis D. Smullin (’39) Award for Teaching Excellence; and the 2020 PAMI Mark Everingham Prize. In 2021, he was awarded the inaugural Thomas Huang Memorial Prize by the Pattern Analysis and Machine Intelligence Technical Committee and was named a fellow of the Association for the Advancement of Artificial Intelligence. In 2022, he received an honorary doctoral degree from the Universitat Politècnica de Catalunya — BarcelonaTech (UPC). 

ACM fellows, the highest honor bestowed by the professional organization, are registered members of the society selected by their peers for outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.

3 Questions: Using AI to accelerate the discovery and design of therapeutic drugs

MIT Latest News - Wed, 02/04/2026 - 1:00pm

In the pursuit of solutions to complex global challenges including disease, energy demands, and climate change, scientific researchers, including at MIT, have turned to artificial intelligence, and to quantitative analysis and modeling, to design and construct engineered cells with novel properties. The engineered cells can be programmed to become new therapeutics — battling, and perhaps eradicating, diseases.

James J. Collins is one of the founders of the field of synthetic biology, and is also a leading researcher in systems biology, the interdisciplinary approach that uses mathematical analysis and modeling of complex systems to better understand biological systems. His research has led to the development of new classes of diagnostics and therapeutics, including in the detection and treatment of pathogens like Ebola, Zika, SARS-CoV-2, and antibiotic-resistant bacteria. Collins, the Termeer Professor of Medical Engineering and Science and professor of biological engineering at MIT, is a core faculty member of the Institute for Medical Engineering and Science (IMES), the director of the MIT Abdul Latif Jameel Clinic for Machine Learning in Health, as well as an institute member of the Broad Institute of MIT and Harvard, and core founding faculty at the Wyss Institute for Biologically Inspired Engineering, Harvard.

In this Q&A, Collins speaks about his latest work and goals for this research.

Q. You’re known for collaborating with colleagues across MIT, and at other institutions. How have these collaborations and affiliations helped you with your research? 

A: Collaboration has been central to the work in my lab. At the MIT Jameel Clinic for Machine Learning in Health, I formed a collaboration with Regina Barzilay [the Delta Electronics Professor in the MIT Department of Electrical Engineering and Computer Science and affiliate faculty member at IMES] and Tommi Jaakkola [the Thomas Siebel Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society] to use deep learning to discover new antibiotics. This effort combined our expertise in artificial intelligence, network biology, and systems microbiology, leading to the discovery of halicin, a potent new antibiotic effective against a broad range of multidrug-resistant bacterial pathogens. Our results were published in Cell in 2020 and showcased the power of bringing together complementary skill sets to tackle a global health challenge.

At the Wyss Institute, I’ve worked closely with Donald Ingber [the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Hansjörg Wyss Professor of Biologically Inspired Engineering at Harvard], leveraging his organs-on-chips technology to test the efficacy of AI-discovered and AI-generated antibiotics. These platforms allow us to study how drugs behave in human tissue-like environments, complementing traditional animal experiments and providing a more nuanced view of their therapeutic potential.

The common thread across our many collaborations is the ability to combine computational predictions with cutting-edge experimental platforms, accelerating the path from ideas to validated new therapies.

Q. Your research has led to many advances in designing novel antibiotics, using generative AI and deep learning. Can you talk about some of the advances you’ve been a part of in the development of drugs that can battle multi-drug-resistant pathogens, and what you see on the horizon for breakthroughs in this arena?

A: In 2025, our lab published a study in Cell demonstrating how generative AI can be used to design completely new antibiotics from scratch. We used genetic algorithms and variational autoencoders to generate millions of candidate molecules, exploring both fragment-based designs and entirely unconstrained chemical space. After computational filtering, retrosynthetic modeling, and medicinal chemistry review, we synthesized 24 compounds and tested them experimentally. Seven showed selective antibacterial activity. One lead, NG1, was highly narrow-spectrum, eradicating multi-drug-resistant Neisseria gonorrhoeae, including strains resistant to first-line therapies, while sparing commensal species. Another, DN1, targeted methicillin-resistant Staphylococcus aureus (MRSA) and cleared infections in mice through broad membrane disruption. Both were non-toxic and showed low rates of resistance.

Looking ahead, we are using deep learning to design antibiotics with drug-like properties that make them stronger candidates for clinical development. By integrating AI with high-throughput biological testing, we aim to accelerate the discovery and design of antibiotics that are novel, safe, and effective, ready for real-world therapeutic use. This approach could transform how we respond to drug-resistant bacterial pathogens, moving from a reactive to a proactive strategy in antibiotic development.

Q. You’re a co-founder of Phare Bio, a nonprofit organization that uses AI to discover new antibiotics, and the Collins Lab has helped to launch the Antibiotics-AI Project in collaboration with Phare Bio. Can you tell us more about what you hope to accomplish with these collaborations, and how they tie back to your research goals?

A: We founded Phare Bio as a nonprofit to take the most promising antibiotic candidates emerging from the Antibiotics-AI Project at MIT and advance them toward the clinic. The idea is to bridge the gap between discovery and development by collaborating with biotech companies, pharmaceutical partners, AI companies, philanthropies, other nonprofits, and even nation states. Akhila Kosaraju has been doing a brilliant job leading Phare Bio, coordinating these efforts and moving candidates forward efficiently.

Recently, we received a grant from ARPA-H to use generative AI to design 15 new antibiotics and develop them as pre-clinical candidates. This project builds directly on our lab’s research, combining computational design with experimental testing to create novel antibiotics that are ready for further development. By integrating generative AI, biology, and translational partnerships, we hope to create a pipeline that can respond more rapidly to the global threat of antibiotic resistance, ultimately delivering new therapies to patients who need them most.

3D-printed metamaterials that stretch and fail by design

MIT Latest News - Wed, 02/04/2026 - 12:35pm

Metamaterials — materials whose properties are primarily dictated by their internal microstructure, and not their chemical makeup — have been redefining the engineering materials space for the last decade. To date, however, most metamaterials have been lightweight options designed for stiffness and strength.

New research from the MIT Department of Mechanical Engineering introduces a computational design framework to support the creation of a new class of soft, compliant, and deformable metamaterials. These metamaterials, termed 3D woven metamaterials, consist of building blocks that are composed of intertwined fibers that self-contact and entangle to endow the material with unique properties.

“Soft materials are required for emerging engineering challenges in areas such as soft robotics, biomedical devices, or even for wearable devices and functional textiles,” explains Carlos Portela, the Robert N. Noyce Career Development Professor and associate professor of mechanical engineering.

In an open-access paper published Jan. 26 in the journal Nature Communications, researchers from Portela’s lab provide a universal design framework that generates complex 3D woven metamaterials with a wide range of properties. The work also provides open-source code that allows users to create designs to fit specifications and generate a file for fabricating the material with a 3D printer or for simulating it.

“Normal knitting or weaving have been constrained by the hardware for hundreds of years — there’s only a few patterns that you can make clothes out of, for example — but that changes if hardware is no longer a limitation,” Portela says. “With this framework, you can come up with interesting patterns that completely change the way the textile is going to behave.”

Possible applications include wearable sensors that move with human skin, fabrics for aerospace or defense needs, flexible electronic devices, and a variety of other printable textiles.

The team developed general design rules — in the form of an algorithm — that first provide a graph representation of the metamaterial. The attributes of this graph eventually dictate how each fiber is placed and connected within the metamaterial. The fundamental building blocks are woven unit cells that can be functionally graded via control of various design parameters, such as the radius and pitch of the fibers that make up the woven struts.

“Because this framework allows these metamaterials to be tailored to be softer in one place and stiffer in another, or to change shape as they stretch, they can exhibit an exceptional range of behaviors that would be hard to design using conventional soft materials,” says Molly Carton, lead author of the study. Carton, a former postdoc in Portela’s lab, is now an assistant research professor in mechanical engineering at the University of Maryland.

Further, the simulation framework also allows users to predict the deformation response of these materials, capturing complex phenomena such as self-contact within fibers and entanglement, and design to predict and resist deformation or tearing patterns.

“The most exciting part was being able to tailor failure in these materials and design arbitrary combinations,” says Portela. “Based on the simulations, we were able to fabricate these spatially varying geometries and experiment on them at the microscale.”

This work is the first to provide a tool for users to design, print, and simulate an emerging class of metamaterials that are extensible and tough. It also demonstrates that through tuning of geometric parameters, users can control and predict how these materials will deform and fail, and presents several new design building blocks that substantially expand the property space of woven metamaterials.

“Until now, these complex 3D lattices have been designed manually, painstakingly, which limits the number of designs that anyone has tested,” says Carton. “We’ve been able to describe how these woven lattices work and use that to create a design tool for arbitrary woven lattices. With that design freedom, we’re able to design the way that a lattice changes shape as it stretches, how the fibers entangle and knot with each other, as well as how it tears when stretched to the limit.”

Carton says she believes the framework will be useful across many disciplines. “In releasing this framework as a software tool, our hope is that other researchers will explore what’s possible using woven lattices and find new ways to use this design flexibility,” she says. “I’m looking forward to seeing what doors our work can open.”

The paper, “Design framework for programmable three-dimensional woven metamaterials,” is available now in the journal Nature Communications. Its other MIT-affiliated authors are James Utama Surjadi, Bastien F. G. Aymon, and Ling Xu.

This work was performed, in part, through the use of MIT.nano’s fabrication and characterization facilities.
