Feed aggregator

Friday Squid Blogging: Pet Squid Simulation

Schneier on Security - Fri, 05/16/2025 - 5:05pm

From Hackaday.com, this is a neural network simulation of a pet squid.

Autonomous Behavior:

  • The squid moves autonomously, making decisions based on its current state (hunger, sleepiness, etc.).
  • Implements a vision cone for food detection, simulating realistic foraging behavior.
  • The neural network can make decisions and form associations.
  • Weights are analyzed, tweaked, and trained by a Hebbian learning algorithm.
  • Experiences from short-term and long-term memory can influence decision-making.
  • The squid can create new neurons in response to its environment (neurogenesis) ...
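The Hebbian rule behind the weight training ("neurons that fire together wire together") can be sketched in a few lines. This is a generic illustration of the rule, not the simulation's actual code; the learning rate, decay term, and network sizes here are arbitrary assumptions.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1, decay=0.01):
    """One Hebbian step: strengthen the weight between any pre- and
    post-synaptic pair that is active at the same time. A small decay
    term keeps the weights from growing without bound."""
    return weights + lr * np.outer(post, pre) - decay * weights

# Toy example: two input (pre) neurons feeding one output (post) neuron.
w = np.zeros((1, 2))
pre = np.array([1.0, 0.0])   # only the first input is active
post = np.array([1.0])       # the output neuron fires
for _ in range(10):
    w = hebbian_update(w, pre, post)
# The weight from the co-active input grows; the inactive input's
# weight stays at zero, so an association forms only where activity
# coincides.
```

Repeated co-activation is what lets the simulated squid associate, say, the sight of food with the act of eating; the decay term is one common way to keep such unsupervised updates stable.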

House Moves Forward With Dangerous Proposal Targeting Nonprofits

EFF: Updates - Fri, 05/16/2025 - 4:39pm

This week, the U.S. House Ways and Means Committee moved forward with a proposal that would allow the Secretary of the Treasury to strip any U.S. nonprofit of its tax-exempt status by unilaterally determining the organization is a “Terrorist Supporting Organization.” This proposal, which places nearly unlimited discretion in the hands of the executive branch to target organizations it disagrees with, poses an existential threat to nonprofits across the U.S. 

This proposal, added to the House’s budget reconciliation bill, is an exact copy of a House-passed bill that EFF and hundreds of nonprofits across the country strongly opposed last fall. Thankfully, the Senate rejected that bill, and we urge the House to do the same when the budget reconciliation bill comes up for a vote on the House floor. 

The goal of this proposal is not to stop the spread of or support for terrorism; the U.S. already has myriad other laws that do that, including existing tax code section 501(p), which allows the government to revoke the tax status of designated “Terrorist Organizations.” Instead, this proposal is designed to inhibit free speech by discouraging nonprofits from working with and advocating on behalf of disadvantaged individuals and groups, like Venezuelans or Palestinians, who may be associated, even completely incidentally, with any group the U.S. deems a terrorist organization. And depending on what future groups this administration decides to label as terrorist organizations, it could also threaten those advocating for racial justice, LGBTQ rights, immigrant communities, climate action, human rights, and other issues opposed by this administration. 

On top of its threats to free speech, the language lacks due process protections for targeted nonprofit organizations. In addition to placing sole authority in the hands of the Treasury Secretary, the bill does not require the Treasury Secretary to disclose the reasons for or evidence supporting a “Terrorist Supporting Organization” designation. This, combined with only providing an after-the-fact administrative or judicial appeals process, would place a nearly insurmountable burden on any nonprofit to prove a negative—that they are not a terrorist supporting organization—instead of placing the burden where it should be, on the government. 

As laid out in a letter led by the ACLU and signed by over 350 diverse nonprofits, this bill would provide the executive branch with: 

“the authority to target its political opponents and use the fear of crippling legal fees, the stigma of the designation, and donors fleeing controversy to stifle dissent and chill speech and advocacy. And while the broadest applications of this authority may not ultimately hold up in court, the potential reputational and financial cost of fending off an investigation and litigating a wrongful designation could functionally mean the end of a targeted nonprofit before it ever has its day in court.” 

Current tax law makes it a crime for the President and other high-level officials to order IRS investigations over policy disagreements. This proposal creates a loophole to this rule that could chill nonprofits for years to come. 

There is no question that nonprofits and educational institutions – along with many other groups and individuals – are under threat from this administration. If passed, future administrations, regardless of party affiliation, could weaponize the powers in this bill against nonprofits of all kinds. We urge the House to vote down this proposal. 

A day in the life of MIT MBA student David Brown

MIT Latest News - Fri, 05/16/2025 - 1:25pm

“MIT Sloan was my first and only choice,” says MIT graduate student David Brown. After receiving his BS in chemical engineering at the U.S. Military Academy at West Point, Brown spent eight years as a helicopter pilot in the U.S. Army, serving as a platoon leader and troop commander. 

Now in the final year of his MBA, Brown has co-founded a climate tech company — Helix Carbon — with Ariel Furst, an MIT assistant professor in the Department of Chemical Engineering, and Evan Haas MBA ’24, SM ’24. Their goal: erase the carbon footprint of tough-to-decarbonize industries like ironmaking, polyurethanes, and olefins by generating competitively priced, carbon-neutral fuels directly from waste carbon dioxide (CO2). It’s an ambitious project; they’re looking to scale the company large enough to have a gigaton-per-year impact on CO2 emissions. They have lab space off campus, and after graduation, Brown will be taking a full-time job as chief operating officer.

“What I loved about the Army was that I felt every day that the work I was doing was important or impactful in some way. I wanted that to continue, and felt the best way to have the greatest possible positive impact was to use my operational skills learned from the military to help close the gap between the lab and impact in the market.”

The following photo essay provides a snapshot of what a typical day for Brown has been like as an MIT student.

Usha Lee McFarling named director of the Knight Science Journalism Program

MIT Latest News - Fri, 05/16/2025 - 12:30pm

The Knight Science Journalism Program (KSJ) at MIT has announced that Usha Lee McFarling, national science correspondent for STAT and former KSJ Fellow, will be joining the team in August as its next director.

As director, McFarling will play a central role in helping to manage KSJ — an elite mid-career fellowship program that brings prominent science journalists from around the world for 10 months of study and intellectual exploration at MIT, Harvard University, and other institutions in the Boston area.

“I’m eager to take the helm during this critical time for science journalism, a time when journalism is under attack both politically and economically and misinformation — especially in areas of science and health — is rife,” says McFarling. “My goal is for the program to find even more ways to support our field and its practitioners as they carry on their important work.”

McFarling is a veteran science writer, most recently working for STAT News. She previously reported for the Los Angeles Times, The Boston Globe, Knight Ridder Washington Bureau, and the San Antonio Light, and was a Knight Science Journalism Fellow in 1992-93. McFarling graduated from Brown University with a degree in biology in 1988 and later earned a master’s degree in biological psychology from the University of California at Berkeley.

Her work on the diseased state of the world’s oceans earned the 2007 Pulitzer Prize for explanatory journalism and a Polk Award, among others. Her coverage of health disparities at STAT has earned an Edward R. Murrow Award, as well as awards from the Association of Health Care Journalists and the Asian American Journalists Association. In 2024, she was awarded the Victor Cohn Prize for excellence in medical science reporting and the Bernard Lo, MD Award in bioethics.

McFarling will succeed Deborah Blum, who served as director for 10 years. Blum, also a Pulitzer Prize-winning journalist and the bestselling author of six books, is retiring to return to a full-time writing career. She will join the board of Undark, a magazine she helped found while at KSJ, and continue as a board member of the Council for the Advancement of Science Writing and the Burroughs Wellcome Fund, among others.

“It’s been an honor to serve as director of the Knight Science Journalism program for the past 10 years and a pleasure to be able to support the important work that science journalists do,” Blum says. “And I know that under the direction of Usha McFarling — who brings such talent and intelligence to the job — that KSJ will continue to grow and thrive in all the best ways.”

Communications Backdoor in Chinese Power Inverters

Schneier on Security - Fri, 05/16/2025 - 9:55am

This is a weird story:

U.S. energy officials are reassessing the risk posed by Chinese-made devices that play a critical role in renewable energy infrastructure after unexplained communication equipment was found inside some of them, two people familiar with the matter said.

[…]

Over the past nine months, undocumented communication devices, including cellular radios, have also been found in some batteries from multiple Chinese suppliers, one of them said.

Reuters was unable to determine how many solar power inverters and batteries they have looked at...

Zeldin could target a single word to undo endangerment finding

ClimateWire News - Fri, 05/16/2025 - 6:18am
The EPA administrator might have provided a road map to revoking the critical scientific finding in a rule that repeals the power plant climate regulation.

FEMA in ‘transition phase’ this disaster season, agency chief says

ClimateWire News - Fri, 05/16/2025 - 6:18am
David Richardson told staff in a private meeting that FEMA’s role won’t change much in 2025, even as the administration considers reducing aid to states.

Brazil taps US climate diplomat for COP30 support

ClimateWire News - Fri, 05/16/2025 - 6:16am
Jonathan Pershing previously served as a climate envoy for the State Department.

House Republicans blast OSHA heat rule

ClimateWire News - Fri, 05/16/2025 - 6:15am
The proposed regulation would force employers to provide workers with water and a cool place to rest when temperatures climb.

House Democrats slam banks over climate commitments

ClimateWire News - Fri, 05/16/2025 - 6:13am
“Ignoring climate change’s destabilizing effects on the economy is not an option,” the lawmakers wrote.

Microsoft signs major carbon-removal deal with Rubicon Carbon

ClimateWire News - Fri, 05/16/2025 - 6:12am
The deal entails delivery of carbon credits generated from projects that sequester carbon dioxide by planting trees or restoring degraded land.

EU will work on setting water use caps for thirsty data centers

ClimateWire News - Fri, 05/16/2025 - 6:11am
The European Commission will propose the measure by the end of 2026 as part of a scheme to make data centers more sustainable.

European Central Bank official warns against undermining ESG rules

ClimateWire News - Fri, 05/16/2025 - 6:08am
The European Commission has proposed amendments to environmental, social and governance legislation amid complaints that the rules pose too great a regulatory burden on business.

In India, Indigenous women seek to protect lands from climate change

ClimateWire News - Fri, 05/16/2025 - 6:06am
The women have created what are known as dream maps, showing their villages in their ideal states.

The U.S. Copyright Office’s Draft Report on AI Training Errs on Fair Use

EFF: Updates - Fri, 05/16/2025 - 12:53am

Within the next decade, generative AI could join computers and electricity as one of the most transformational technologies in history, with all of the promise and peril that implies. Governments’ responses to GenAI—including new legal precedents—need to thoughtfully address real-world harms without destroying the public benefits GenAI can offer. Unfortunately, the U.S. Copyright Office’s rushed draft report on AI training misses the mark.

The Report Bungles Fair Use

Released amidst a set of controversial job terminations, the Copyright Office’s report covers a wide range of issues with varying degrees of nuance. But on the core legal question—whether using copyrighted works to train GenAI is a fair use—it stumbles badly. The report misapplies long-settled fair use principles and ultimately puts a thumb on the scale in favor of copyright owners at the expense of creativity and innovation.

To work effectively, today’s GenAI systems need to be trained on very large collections of human-created works—probably millions of them. At this scale, locating copyright holders and getting their permission is daunting for even the biggest and wealthiest AI companies, and impossible for smaller competitors. If training makes fair use of copyrighted works, however, then no permission is needed.

Right now, courts are considering dozens of lawsuits that raise the question of fair use for GenAI training. Federal District Judge Vince Chhabria is poised to rule on this question, after hearing oral arguments in Kadrey v. Meta Platforms. The Third Circuit Court of Appeals is expected to consider a similar fair use issue in Thomson Reuters v. Ross Intelligence. Courts are well-equipped to resolve this pivotal issue by applying existing law to specific uses and AI technologies.

Courts Should Reject the Copyright Office’s Fair Use Analysis

The report’s fair use discussion contains some fundamental errors that place a thumb on the scale in favor of rightsholders. Though the report is non-binding, it could influence courts, including in cases like Kadrey, where plaintiffs have already filed a copy of the report and urged the court to defer to its analysis.   

Courts need to accept the Copyright Office’s draft conclusions, however, only if they are persuasive. They are not.

The Office’s fair use analysis is not one the courts should follow. It repeatedly conflates the use of works for training models—a necessary step in the process of building a GenAI model—with the use of the model to create substantially similar works. It also misapplies basic fair use principles and embraces a novel theory of market harm that has never been endorsed by any court.

The first problem is the Copyright Office’s transformative use analysis. Highly transformative uses—those that serve a different purpose than that of the original work—are very likely to be fair. Courts routinely hold that using copyrighted works to build new software and technology—including search engines, video games, and mobile apps—is a highly transformative use because it serves a new and distinct purpose. Here, the original works were created for various purposes and using them to train large language models is surely very different.

The report attempts to sidestep that conclusion by repeatedly ignoring the actual use in question—training—and focusing instead on how the model may be ultimately used. If the model is ultimately used primarily to create a class of works that are similar to the original works on which it was trained, the Office argues, then the intermediate copying can’t be considered transformative. This fundamentally misunderstands transformative use, which should turn on whether a model itself is a new creation with its own distinct purpose, not whether any of its potential uses might affect demand for a work on which it was trained—a dubious standard that runs contrary to decades of precedent.

The Copyright Office’s transformative use analysis also suggests that the fair use analysis should consider whether works were obtained in “bad faith,” and whether developers respected the right “to control” the use of copyrighted works.  But the Supreme Court is skeptical that bad faith has any role to play in the fair use analysis and has made clear that fair use is not a privilege reserved for the well-behaved. And rightsholders don’t have the right to control fair uses—that’s kind of the point.

Finally, the Office adopts a novel and badly misguided theory of “market harm.” Traditionally, the fair use analysis requires courts to consider the effects of the use on the market for the work in question. The Copyright Office suggests instead that courts should consider overall effects of the use of the models to produce generally similar works. By this logic, if a model was trained on a Bridgerton novel—among millions of other works—and was later used by a third party to produce romance novels, that might harm series author Julia Quinn’s bottom line.

This market dilution theory has four fundamental problems. First, like the transformative use analysis, it conflates training with outputs. Second, it’s not supported by any relevant precedent. Third, it’s based entirely on speculation that Bridgerton fans will buy random “romance novels” instead of works produced by a bestselling author they know and love.  This relies on breathtaking assumptions that lack evidence, including that all works in the same genre are good substitutes for each other—regardless of their quality, originality, or acclaim. Lastly, even if competition from other, unique works might reduce sales, it isn’t the type of market harm that weighs against fair use.

Nor is lost revenue from licenses for fair uses a type of market harm that the law should recognize. Prioritizing private licensing market “solutions” over user rights would dramatically expand the market power of major media companies and chill the creativity and innovation that copyright is intended to promote. Indeed, the fair use doctrine exists in part to create breathing room for technological innovation, from the phonograph record to the videocassette recorder to the internet itself. Without fair use, crushing copyright liability could stunt the development of AI technology.

We’re still digesting this report, but our initial review suggests that, on balance, the Copyright Office’s approach to fair use for GenAI training isn’t a dispassionate report on how existing copyright law applies to this new and revolutionary technology. It’s a policy judgment about the value of GenAI technology for future creativity, by an office that has no business making new, free-floating policy decisions.

The courts should not follow the Copyright Office’s speculations about GenAI. They should follow precedent.

In Memoriam: John L. Young, Cryptome Co-Founder

EFF: Updates - Thu, 05/15/2025 - 3:57pm

John L. Young, who died March 28 at age 89 in New York City, was among the first people to see the need for an online library of official secrets, a place where the public could find out things that governments and corporations didn’t want them to know. He made real the idea – revolutionary in its time – that the internet could make more information available to more people than ever before.

John and architect Deborah Natsios, his wife, in 1996 founded Cryptome, an online library which collects and publishes data about freedom of expression, privacy, cryptography, dual-use technologies, national security, intelligence, and government secrecy. Its slogan: “The greatest threat to democracy is official secrecy which favors a few over the many.” And its invitation: “We welcome documents for publication that are prohibited by governments worldwide.”

Cryptome soon became known for publishing an encyclopedic array of government, court, and corporate documents. Cryptome assembled an indispensable, almost daily chronicle of the ‘crypto wars’ of the 1990s – when the first generation of internet lawyers and activists recognized the need to free up encryption from government control and undertook litigation, public activism and legislative steps to do so.  Cryptome became required reading for anyone looking for information about that early fight, as well as many others.    

John and Cryptome were also among the early organizers and sponsors of WikiLeaks, though like many others, he later broke with that organization over what he saw as its monetization. Cryptome later published WikiLeaks’ alleged internal emails. Transparency was the core of everything John stood for.

John was one of the early, under-recognized heroes of the digital age.

John was a West Texan by birth and an architect by training and trade. Even before he launched the website, his lifelong pursuit of not-for-profit, public-good ideals led him to seek access to documents about shadowy public development entities that seemed to ignore public safety, health, and welfare. As the digital age dawned, this expertise in and passion for exposing secrets evolved into Cryptome with John its chief information architect, designing and building a real-time archive of seminal debates shaping cyberspace’s evolving information infrastructures.

The FBI and Secret Service tried to chill his activities. Big Tech companies like Microsoft tried to bully him into pulling documents off the internet. But through it all, John remained a steadfast if iconoclastic librarian without fear or favor.

John served in the United States Army Corps of Engineers in Germany (1953–1956) and earned degrees in philosophy and architecture from Rice University (1957–1963) and his graduate degree in architecture from Columbia University in 1969. A self-identified radical, he became an activist and helped create the community service group Urban Deadline, where his fellow student-activists initially suspected him of being a police spy. Urban Deadline went on to receive citations from the Citizens Union of the City of New York and the New York City Council.

John was one of the early, under-recognized heroes of the digital age. He not only saw the promise of digital technology to help democratize access to information, he brought that idea into being and nurtured it for many years.  We will miss him and his unswerving commitment to the public’s right to know.

The Kids Online Safety Act Will Make the Internet Worse for Everyone

EFF: Updates - Thu, 05/15/2025 - 2:00pm

The Kids Online Safety Act (KOSA) is back in the Senate. Sponsors are claiming—again—that the latest version won’t censor online content. It isn’t true. This bill still sets up a censorship regime disguised as a “duty of care,” and it will do what previous versions threatened: suppress lawful, important speech online, especially for young people.

TAKE ACTION

KOSA will silence kids and adults

KOSA Still Forces Platforms to Police Legal Speech

At the center of the bill is a requirement that platforms “exercise reasonable care” to prevent and mitigate a sweeping list of harms to minors, including depression, anxiety, eating disorders, substance use, bullying, and “compulsive usage.” The bill claims to bar lawsuits over “the viewpoint of users,” but that’s a smokescreen. Its core function is to let government agencies sue platforms, big or small, that don’t block or restrict content someone later claims contributed to one of these harms. 

When the safest legal option is to delete a forum, platforms will delete the forum.

This bill won’t bother big tech. Large companies will be able to manage this regulation, which is why Apple and X have agreed to support it. In fact, X helped negotiate the text of the last version of this bill we saw. Meanwhile, those companies’ smaller competitors will be left scrambling to comply. Under KOSA, a small platform hosting mental health discussion boards will be just as vulnerable as Meta or TikTok—but much less able to defend itself. 

To avoid liability, platforms will over-censor. It’s not merely hypothetical. It’s what happens when speech becomes a legal risk. The list of harms in KOSA’s “duty of care” provision is so broad and vague that no platform will know what to do regarding any given piece of content. Forums won’t be able to host posts with messages like “love your body,” “please don’t do drugs,” or “here’s how I got through depression” without fearing that an attorney general or FTC lawyer might later decide the content was harmful. Support groups and anti-harm communities, which can’t do their work without talking about difficult subjects like eating disorders, mental health, and drug abuse, will get caught in the dragnet. 

There’s Still No Science Behind KOSA’s Core Claims

KOSA relies heavily on vague, subjective harms like “compulsive usage.” The bill defines it as repetitive online behavior that disrupts life activities like eating, sleeping, or socializing. But here’s the problem: there is no accepted clinical definition of “compulsive usage” of online services.

There’s no scientific consensus that online platforms cause mental health disorders, nor agreement on how to measure so-called “addictive” behavior online. The term sounds like settled medical science, but it’s legislative sleight-of-hand: an undefined concept given legal teeth, with major consequences for speech and access to information.

Carveouts Don’t Fix the First Amendment Problem

The bill says it can’t be enforced based on a user’s “viewpoint.” But the text of the bill itself privileges certain viewpoints over others. Plus, liability in KOSA attaches to the platform, not the user. The only way for platforms to reduce risk in the world of KOSA is to monitor, filter, and restrict what users say.

If the FTC can sue a platform because minors saw a medical forum discussing anorexia, or posts about LGBTQ identity, or posts discussing how to help a friend who’s depressed, then that’s censorship. The bill’s stock language that “viewpoints are protected” won’t matter. The legal incentives guarantee that platforms will silence even remotely controversial speech to stay safe.

Lawmakers who support KOSA today are choosing to trust the current administration, and future administrations, to define what youth—and to some degree, all of us—should be allowed to read online. 

KOSA will not make kids safer. It will make the internet more dangerous for anyone who relies on it to learn, connect, or speak freely. Lawmakers should reject it, and fast. 

EFF to California Lawmakers: There’s a Better Way to Help Young People Online

EFF: Updates - Thu, 05/15/2025 - 11:46am

We’ve covered a lot of federal and state proposals that badly miss the mark when attempting to grapple with protecting young people’s safety online. These include bills that threaten to cut young people off from vital information, infringe on their First Amendment rights to speak for themselves, subject them (and adults) to invasive and insecure age verification technology, and expose them to danger by sharing personal information with people they may not want to see it.

Several such bills are moving through the California legislature this year, continuing a troubling years-long trend of lawmakers pushing similarly problematic proposals. This week, EFF sent a letter to the California legislature expressing grave concerns with lawmakers’ approach to regulating young people’s ability to speak online.

We’re far from the only ones who have issues with this approach. Many of the laws California has passed attempting to address young people’s online safety have been subsequently challenged in court and stopped from going into effect.

Our letter outlines the legal, technical, and policy problems with proposed “solutions” including age verification mandates, age gating, mandatory parental controls, and proposals that will encourage platforms to take down speech that’s even remotely controversial.

We also note that the current approach completely ignores what we’ve heard from thousands of young people: the online platforms and communities they frequent can be among the safest spaces for them in the physical or digital world. These responses show the relationship between social media and young people’s mental health is far more nuanced than many lawmakers are willing to believe.

While our letter is addressed to California’s Assembly and Senate, they are not the only state lawmakers taking this approach. All lawmakers should listen to the people they’re trying to protect and find ways to help young people without hurting the spaces that are so important to them.

There are better paths that don’t hurt young people’s First Amendment rights and still help protect them against many of the harms that lawmakers have raised. In fact, elements of such approaches, such as data minimization, are already included in some of these otherwise problematic bills. A well-crafted privacy law that empowers everyone—children and adults—to control how their data is collected and used would be a crucial step in curbing many of these problems.

We recognize that many young people face real harms online, that families are grappling with how to deal with them, and that tech companies are not offering much help.

However, many of the California legislature’s proposals—this year, and for several years—miss the root of the problem. We call on lawmakers to work with us to enact better solutions.

With AI, researchers predict the location of virtually any protein within a human cell

MIT Latest News - Thu, 05/15/2025 - 10:30am

A protein located in the wrong part of a cell can contribute to several diseases, such as Alzheimer’s, cystic fibrosis, and cancer. But there are about 70,000 different proteins and protein variants in a single human cell, and since scientists can typically only test for a handful in one experiment, it is extremely costly and time-consuming to identify proteins’ locations manually.

A new generation of computational techniques seeks to streamline the process using machine-learning models that often leverage datasets containing thousands of proteins and their locations, measured across multiple cell lines. One of the largest such datasets is the Human Protein Atlas, which catalogs the subcellular behavior of over 13,000 proteins in more than 40 cell lines. But as enormous as it is, the Human Protein Atlas has only explored about 0.25 percent of all possible pairings of all proteins and cell lines within the database.

Now, researchers from MIT, Harvard University, and the Broad Institute of MIT and Harvard have developed a new computational approach that can efficiently explore the remaining uncharted space. Their method can predict the location of any protein in any human cell line, even when both protein and cell have never been tested before.

Their technique goes one step further than many AI-based methods by localizing a protein at the single-cell level, rather than as an averaged estimate across all the cells of a specific type. This single-cell localization could pinpoint a protein’s location in a specific cancer cell after treatment, for instance.

The researchers combined a protein language model with a special type of computer vision model to capture rich details about a protein and cell. In the end, the user receives an image of a cell with a highlighted portion indicating the model’s prediction of where the protein is located. Since a protein’s localization is indicative of its functional status, this technique could help researchers and clinicians more efficiently diagnose diseases or identify drug targets, while also enabling biologists to better understand how complex biological processes are related to protein localization.

“You could do these protein-localization experiments on a computer without having to touch any lab bench, hopefully saving yourself months of effort. While you would still need to verify the prediction, this technique could act like an initial screening of what to test for experimentally,” says Yitong Tseo, a graduate student in MIT’s Computational and Systems Biology program and co-lead author of a paper on this research.

Tseo is joined on the paper by co-lead author Xinyi Zhang, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and the Eric and Wendy Schmidt Center at the Broad Institute; Yunhao Bai of the Broad Institute; and senior authors Fei Chen, an assistant professor at Harvard and a member of the Broad Institute, and Caroline Uhler, the Andrew and Erna Viterbi Professor of Engineering in EECS and the MIT Institute for Data, Systems, and Society (IDSS), who is also director of the Eric and Wendy Schmidt Center and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS). The research appears today in Nature Methods.

Collaborating models

Many existing protein prediction models can only make predictions based on the protein and cell data on which they were trained or are unable to pinpoint a protein’s location within a single cell.

To overcome these limitations, the researchers created a two-part method, called PUPS, for predicting the subcellular location of proteins the model has never seen.

The first part uses a protein sequence model to capture the localization-determining properties of a protein and its 3D structure, based on the chain of amino acids that forms it.

The second part incorporates an image inpainting model, which is designed to fill in missing parts of an image. This computer vision model looks at three stained images of a cell to gather information about the state of that cell, such as its type, individual features, and whether it is under stress.

PUPS joins the representations created by each model to predict where the protein is located within a single cell, using an image decoder to output a highlighted image that shows the predicted location.

“Different cells within a cell line exhibit different characteristics, and our model is able to understand that nuance,” Tseo says.

A user inputs the sequence of amino acids that form the protein and three cell stain images — one for the nucleus, one for the microtubules, and one for the endoplasmic reticulum. Then PUPS does the rest.
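The two-branch design described above can be illustrated with a toy sketch. Everything here is an assumption for illustration, not the authors' code: the function names, dimensions, and the fixed random projections standing in for the learned protein language model, image encoder, and decoder. One branch embeds the amino-acid sequence, the other encodes the three stain images, and a fused "decoder" emits a per-pixel localization map.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_sequence(seq, dim=16):
    """Toy stand-in for a protein language model: project averaged
    one-hot residue codes into a fixed-size embedding."""
    alphabet = "ACDEFGHIKLMNPQRSTVWY"
    onehot = np.zeros((len(seq), len(alphabet)))
    for i, aa in enumerate(seq):
        onehot[i, alphabet.index(aa)] = 1.0
    W = rng.standard_normal((len(alphabet), dim))  # fixed random projection
    return onehot.mean(axis=0) @ W                 # shape (dim,)

def encode_cell(stains):
    """Toy stand-in for the image encoder: stack the three stain
    channels into per-pixel cell-state features."""
    return np.stack(stains, axis=-1)               # shape (H, W, 3)

def predict_localization(seq, stains, dim=16):
    z_seq = embed_sequence(seq, dim)               # protein representation
    z_img = encode_cell(stains)                    # cell representation
    h, w, _ = z_img.shape
    # "Decoder": tile the protein embedding over every pixel, mix it with
    # the stain features through a random projection, then squash to [0, 1].
    feats = np.concatenate(
        [z_img, np.broadcast_to(z_seq, (h, w, dim))], axis=-1)
    W = rng.standard_normal((feats.shape[-1],))
    return 1.0 / (1.0 + np.exp(-(feats @ W)))      # per-pixel heatmap

# Nucleus, microtubule, and ER stains of one cell (random placeholders)
stains = [rng.random((8, 8)) for _ in range(3)]
heatmap = predict_localization("MKTAYIAKQR", stains)
print(heatmap.shape)  # (8, 8)
```

The key design point the sketch preserves is the fusion step: because the protein embedding is combined with per-pixel cell features before decoding, the same protein can yield different predicted locations in different cells.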

A deeper understanding

The researchers employed a few tricks during the training process to teach PUPS how to combine information from each model in such a way that it can make an educated guess on the protein’s location, even if it hasn’t seen that protein before.

For instance, they assign the model a secondary task during training: to explicitly name the compartment of localization, like the cell nucleus. This is done alongside the primary inpainting task to help the model learn more effectively.

A good analogy might be a teacher who asks their students to draw all the parts of a flower in addition to writing their names. This extra step was found to help the model improve its general understanding of the possible cell compartments.
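The auxiliary-task idea amounts to training on a weighted sum of two objectives: the primary inpainting loss plus a secondary compartment-classification loss. A minimal sketch follows; the loss choices, compartment names, and the 0.1 weighting are illustrative assumptions, not details from the paper.

```python
import math

def inpainting_loss(pred_pixels, true_pixels):
    """Primary task: mean squared error on the reconstructed protein stain."""
    return sum((p - t) ** 2 for p, t in zip(pred_pixels, true_pixels)) / len(true_pixels)

def compartment_loss(logits, true_class):
    """Secondary task: cross-entropy over named compartments
    (e.g., nucleus, ER, cytosol), computed with a stable log-sum-exp."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[true_class]

def total_loss(pred_pixels, true_pixels, logits, true_class, aux_weight=0.1):
    """Both tasks backpropagate through the shared representation."""
    return (inpainting_loss(pred_pixels, true_pixels)
            + aux_weight * compartment_loss(logits, true_class))

pred = [0.5, 0.5]          # reconstructed pixel intensities (flattened)
true = [0.0, 1.0]
logits = [2.0, 0.1, 0.1]   # scores over three hypothetical compartments
loss = total_loss(pred, true, logits, true_class=0)  # ≈ 0.2762
```

Because the classification head shares the model's internal representation, gradients from the naming task push that representation to encode which compartment the protein sits in, which is exactly the "draw the flower and write the names" effect described above.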

In addition, the fact that PUPS is trained on proteins and cell lines at the same time helps it develop a deeper understanding of where in a cell image proteins tend to localize.

PUPS can even understand, on its own, how different parts of a protein’s sequence contribute separately to its overall localization.

“Most other methods usually require you to have a stain of the protein first, so you’ve already seen it in your training data. Our approach is unique in that it can generalize across proteins and cell lines at the same time,” Zhang says.

Because PUPS can generalize to unseen proteins, it can capture changes in localization driven by unique protein mutations that aren’t included in the Human Protein Atlas.

The researchers verified that PUPS could predict the subcellular location of new proteins in unseen cell lines by conducting lab experiments and comparing the results. In addition, PUPS exhibited lower prediction error, on average, than a baseline AI method across the proteins they tested.

In the future, the researchers want to enhance PUPS so the model can understand protein-protein interactions and make localization predictions for multiple proteins within a cell. In the longer term, they want to enable PUPS to make predictions for living human tissue, rather than cultured cells.

This research is funded by the Eric and Wendy Schmidt Center at the Broad Institute, the National Institutes of Health, the National Science Foundation, the Burroughs Wellcome Fund, the Searle Scholars Foundation, the Harvard Stem Cell Institute, the Merkin Institute, the Office of Naval Research, and the Department of Energy.

Particles carrying multiple vaccine doses could reduce the need for follow-up shots

MIT Latest News - Thu, 05/15/2025 - 10:00am

Around the world, 20 percent of children are not fully immunized, leading to 1.5 million child deaths each year from diseases that are preventable by vaccination. About half of those underimmunized children received at least one vaccine dose but did not complete the vaccination series, while the rest received no vaccines at all.

To make it easier for children to receive all of their vaccines, MIT researchers are working to develop microparticles that can release their payload weeks or months after being injected. This could lead to vaccines that can be given just once, with several doses that would be released at different time points.

In a study appearing today in the journal Advanced Materials, the researchers showed that they could use these particles to deliver two doses of diphtheria vaccine — one released immediately, and the second two weeks later. Mice that received this vaccine generated as many antibodies as mice that received two separate doses two weeks apart.

The researchers now hope to extend those intervals, which could make the particles useful for delivering childhood vaccines that are given as several doses over a few months, such as the polio vaccine.

“The long-term goal of this work is to develop vaccines that make immunization more accessible — especially for children living in areas where it’s difficult to reach health care facilities. This includes rural regions of the United States as well as parts of the developing world where infrastructure and medical clinics are limited,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research.

Jaklenec and Robert Langer, the David H. Koch Institute Professor at MIT, are the senior authors of the study. Linzixuan (Rhoda) Zhang, an MIT graduate student who recently completed her PhD in chemical engineering, is the paper’s lead author.

Self-boosting vaccines

In recent years, Jaklenec, Langer, and their colleagues have been working on vaccine delivery particles made from a polymer called PLGA. In 2018, they showed they could use these types of particles to deliver two doses of the polio vaccine, which were released about 25 days apart.

One drawback to PLGA is that as the particles slowly break down in the body, the immediate environment can become acidic, which may damage the vaccine contained within the particles.

The MIT team is now working on ways to overcome that issue in PLGA particles and is also exploring alternative materials that would create a less acidic environment. In the new study, led by Zhang, the researchers decided to focus on another type of polymer, known as polyanhydride.

“The goal of this work was to advance the field by exploring new strategies to address key challenges, particularly those related to pH sensitivity and antigen degradation,” Jaklenec says.

Polyanhydrides, biodegradable polymers that Langer developed for drug delivery more than 40 years ago, are very hydrophobic. This means that as the polymers gradually erode inside the body, the breakdown products hardly dissolve in water and generate a much less acidic environment.

Polyanhydrides usually consist of chains of two different monomers that can be assembled in a huge number of possible combinations. For this study, the researchers created a library of 23 polymers, which differed from each other based on the chemical structures of the monomer building blocks and the ratio of the two monomers that went into the final product.

The researchers evaluated these polymers based on their ability to withstand temperatures of at least 104 degrees Fahrenheit (40 degrees Celsius, or slightly above body temperature) and whether they could remain stable throughout the process required to form them into microparticles.

To make the particles, the researchers developed a process called stamped assembly of polymer layers, or SEAL. First, they use silicon molds to form cup-shaped particles that can be filled with the vaccine antigen. Then, a cap made from the same polymer is applied and sealed using heat. Polymers that proved too brittle or didn’t seal completely were eliminated from the pool, leaving six top candidates.

The researchers used those polymers to design particles that would deliver diphtheria vaccine two weeks after injection, and gave them to mice along with vaccine that was released immediately. Four weeks after the initial injection, those mice showed comparable levels of antibodies to mice that received two doses two weeks apart.

Extended release

As part of their study, the researchers also developed a machine-learning model to help them explore the factors that determine how long it takes the particles to degrade once in the body. These factors include the type of monomers that go into the material, the ratio of the monomers, the molecular weight of the polymer, and the loading capacity, or how much vaccine can go into the particle.

Using this model, the researchers were able to rapidly evaluate nearly 500 possible particles and predict their release time. They tested several of these particles in controlled buffers and showed that the model’s predictions were accurate.
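The article does not say which model family the team used, so the following is a generic sketch of the screening idea: fit a simple regressor on measured particles, then predict release times for new candidates from the descriptors named above. All feature values, release times, and the nearest-neighbor model choice are hypothetical.

```python
def normalize(rows):
    """Scale each feature column to [0, 1] so distances are comparable."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    span = [(max(c) - min(c)) or 1.0 for c in cols]
    return [[(v - l) / s for v, l, s in zip(r, lo, span)] for r in rows], lo, span

def knn_predict(train_x, train_y, query, k=2):
    """Predict release time (days) as the mean of the k nearest neighbors."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train_x, train_y))
    return sum(y for _, y in dists[:k]) / k

# Hypothetical measured particles:
# features = (monomer ratio, molecular weight in kDa, loading in %)
measured = [
    ((0.2, 10.0, 5.0), 7.0),
    ((0.5, 20.0, 5.0), 14.0),
    ((0.8, 30.0, 5.0), 28.0),
    ((0.5, 40.0, 8.0), 35.0),
]
X = [list(f) for f, _ in measured]
y = [t for _, t in measured]
Xn, lo, span = normalize(X)

# Screen one untested candidate formulation
candidate = [(v - l) / s for v, l, s in zip([0.55, 22.0, 5.5], lo, span)]
prediction = knn_predict(Xn, y, candidate)
print(round(prediction, 1))  # → 21.0
```

The practical value, as in the study, is throughput: once trained on a modest set of measured particles, such a model can rank hundreds of candidate formulations in seconds, reserving wet-lab validation for the most promising ones.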

In future work, this model could also help researchers to develop materials that would release their payload after longer intervals — months or even years. This could make them useful for delivering many childhood vaccines, which require multiple doses over several years.

“If we want to extend this to longer time points, let’s say over a month or even further, we definitely have some ways to do this, such as increasing the molecular weight or the hydrophobicity of the polymer. We can also potentially do some cross-linking. Those are further changes to the chemistry of the polymer to slow down the release kinetics or to extend the retention time of the particle,” Zhang says.

The researchers now hope to explore using these delivery particles for other types of vaccines. The particles could also prove useful for delivering other types of drugs that are sensitive to acidity and need to be given in multiple doses, they say.

“This technology has broad potential for single-injection vaccines, but it could also be adapted to deliver small molecules or other biologics that require durability or multiple doses. Additionally, it can accommodate drugs with pH sensitivities,” Jaklenec says.

The research was funded, in part, by the Koch Institute Support (core) Grant from the National Cancer Institute.
