Feed aggregator

The Right to Repair Is Law in Washington State

EFF: Updates - Tue, 06/03/2025 - 12:49pm

Thanks in part to your support, the right to repair is now law in Washington.

Gov. Bob Ferguson signed two bills guaranteeing Washingtonians' right to access tools, parts, and information so they can fix personal electronics, appliances, and wheelchairs. This is the epitome of common-sense legislation. When you own something, you should have the final say about who fixes, adapts, or modifies it—and how.

Advocates in Washington have worked for years to pass a strong right-to-repair law in the state. The consumer electronics bill moved forward with support from Washington’s Public Interest Research Group and a growing group of organizations, including environmental advocates, consumer advocates, and manufacturers such as Google and Microsoft. Meanwhile, advocacy from groups including Disability Rights Washington and the Here and Now Project made the case for including wheelchairs in the right-to-repair bill, bringing their personal stories to Olympia to show why this bill was so important.

And it’s not just states that recognize the need for people to be able to fix their own stuff. Earlier this month, U.S. Army Secretary Dan Driscoll issued a memo stating that the Army should “[identify] and propose contract modifications for right to repair provisions where intellectual property constraints limit the Army's ability to conduct maintenance and access the appropriate maintenance tools, software, and technical data – while preserving the intellectual capital of American industry.” The memo said that the Army should seek this in future procurement contracts and also amend existing contracts to include the right to repair.

This is a bedrock of sound procurement with a long history in America. President Lincoln only bought rifles with standardized tooling to outfit the Union Army, for the obvious reason that it would be a little embarrassing for the Commander in Chief to have to pull his troops off the field because the Army’s sole supplier had decided not to ship this week’s delivery of ammo and parts. Somehow, the Department of Defense forgot this lesson over the ensuing centuries, so that today, billions of dollars in public money are spent on materiel and systems that the US military can only maintain by buying service from a “beltway bandit.”

This recognizes what millions of people have said repeatedly: limiting people’s ability to fix their own stuff stands in the way of needed repairs and maintenance. That’s true whether you’re a farmer with a broken tractor during harvest, a homeowner with a misbehaving washing machine or a cracked smartphone screen, a hospital med-tech trying to fix a ventilator, or a soldier struggling with a broken generator.

The right to repair is gaining serious momentum. All 50 states have now considered some form of right-to-repair legislation. Washington is the eighth state to pass one of these bills into law—let’s keep it up.

The Federal Government Demands Data from SNAP—But Says Nothing About Protecting It

EFF: Updates - Tue, 06/03/2025 - 12:42pm

Last month, the U.S. Department of Agriculture issued a troubling order to all state agency directors of Supplemental Nutrition Assistance Programs (SNAP): hand over your data.

This is part of a larger effort by the Trump administration to gain “unfettered access to comprehensive data from all state programs that receive federal funding,” through Executive Order 14243. While the order says this data sharing is intended to cut down on fraud, it is written so broadly that it could authorize almost any data sharing. Such an effort flies in the face of well-established data privacy practices and places people at considerable risk. 

A group of SNAP recipients and organizations has thankfully sued to try to block the data sharing granted through the Executive Order. And the state of New Mexico has even refused to comply with the order, “due to questions and concerns regarding the legality of USDA’s demand for the information,” according to Source NM.

The federal government has said very little about how it will use this information. Several populations targeted by the Trump Administration are eligible for SNAP, including asylum seekers, refugees, and victims of trafficking. Additionally, although undocumented immigrants are not eligible for SNAP benefits, their household members who are U.S. citizens or have other eligible immigration statuses may be—raising the distinct concern that SNAP information could be shared with immigration or other enforcement authorities.

EFF has long advocated for privacy policies that ensure that information provided in one context is not used for other reasons. People who hand over their personal information should do so freely and with full information about how their information will be used. Whether you're seeking services from the government or a company, we all deserve privacy rights. Accessing public benefits to feed yourself shouldn't require you to give those up.

It's particularly important to respect privacy in government programs, such as SNAP, that provide essential support services to vulnerable populations. SNAP supports people who need assistance buying food—arguably the most basic need. Often, fear of reprisal and of inappropriate government data sharing (such as disclosure of the immigration status of household members not receiving benefits) prevents eligible people from enrolling in food assistance despite need. Discouraging eligible people from enrolling in SNAP runs counter to the goals of the program, which aims to reduce food insecurity, improve health outcomes, and benefit local economies.

This is just the latest government data-sharing effort that raises alarm bells for digital rights. No one should worry that asking their government for help with hunger will get them in trouble. The USDA must promise it will not weaponize programs that put food on the table during times of need. 

The PERA and PREVAIL Acts Would Make Bad Patents Easier to Get—and Harder to Fight

EFF: Updates - Tue, 06/03/2025 - 11:23am

Two dangerous bills have been reintroduced in Congress that would reverse over a decade of progress in fighting patent trolls and making the patent system more balanced. The Patent Eligibility Restoration Act (PERA) and the PREVAIL Act would each cause significant harm on their own. Together, they form a one-two punch—making it easier to obtain vague and overly broad patents, while making it harder for the public to challenge them.

These bills don’t just share bad ideas—they share sponsors, a coordinated rollout, and backing from many of the same lobbying groups. Congress should reject both.

TAKE ACTION

Tell Congress: Don't Bring Back The Worst Patents

PERA Would Legalize Patents on Basic Software—and Human Genes

PERA would overturn long-standing court decisions that have helped keep some of the worst patents out of the system. This includes the Supreme Court’s Alice v. CLS Bank decision, which bars patents on abstract ideas, and Myriad v. AMP, which correctly ruled that naturally occurring human genes cannot be patented.

Thanks to the Alice decision, courts have invalidated a rogue’s gallery of terrible software patents—such as patents on online photo contests, online bingo, upselling, matchmaking, and scavenger hunts. These patents didn’t describe real inventions—they merely applied old ideas to general-purpose computers.

PERA would wipe out the Alice framework and replace it with vague, hollow exceptions. For example: it would ban patents on “dance moves” and “marriage proposals,” but would allow nearly anything involving a computer or machine—even if it only mentions the use of a computer. This is the same language used in many bad software patents that patent trolls have wielded for years. If PERA passes, patent claims that are currently seen as weak will become much harder to challenge.

Adding to that, PERA would bring back patents on human genes—exactly what was at stake in the Myriad case. EFF joined that fight, alongside scientists and patients, to prevent patents that interfered with essential diagnostic testing. Congress should not undo that victory. Some things just shouldn’t be patented. 

PERA’s provision that human genes can constitute an invention if they are “isolated” is meaningless; every gene used in science is “isolated” from the human body. This legal wordplay was used to justify human gene patents for decades, and it’s deeply troubling that some U.S. Senators are on board with bringing them back.

PREVAIL Weakens the Public’s Best Defense Against Patent Abuse

While PERA makes it easier to obtain a bad patent, the PREVAIL Act makes it harder to get rid of one.

PREVAIL would severely limit inter partes review (IPR), the most effective process for challenging wrongly granted patents. This faster, more affordable process—administered by the U.S. Patent and Trademark Office—has knocked out thousands of invalid patents that should never have been issued.

EFF has used IPR to protect the public. In 2013, we challenged and invalidated a patent on podcasting, which was being used to threaten creators across the internet. Thousands of our supporters chipped in to help us bring that case. Under PREVAIL, that challenge wouldn’t have been allowed. The bill would significantly limit IPR petitions unless you’ve been directly sued or threatened—a major blow to nonprofits, open source advocates, and membership-based defense groups that act in the public interest. 

PREVAIL doesn’t stop at limiting who can file an IPR. It also undermines the fairness of the IPR process itself. It raises the burden of proof, requiring challengers to overcome a presumption that the patent is valid—even when the Patent Office is the one reviewing it. The bill forces an unfair choice: anyone who challenges a patent at the Patent Office would have to give up the right to fight the same patent in court, even though key legal arguments (such as those involving abstract subject matter) can only be made in court.

It gets worse. PREVAIL makes it easier for patent owners to rewrite their claims during review, taking advantage of hindsight about what’s likely to hold up. And if multiple parties want to challenge the same patent, only the first to file may get heard. This means that patents used to threaten dozens or even hundreds of targets could get extra protection, just because one early challenger didn’t bring the best arguments.

These changes aren’t about improving the system. They’re about making it easier for a small number of patent owners to extract settlements, and harder for the public to push back.

A Step Backward, Not Forward

Supporters of these bills claim they’re trying to restore balance to the patent system. But that’s not what PERA and PREVAIL do. They don’t fix what’s broken—they break what’s working.

Patent trolling is still a severe problem. In 2024, patent trolls filed a stunning 88% of all patent lawsuits in the tech sector.

At the same time, patent law has come a long way over the past decade. Courts can now reject abstract software patents earlier and more easily. The IPR process has become a vital tool for holding the Patent Office accountable and protecting real innovators. And the Myriad decision has helped keep essential parts of human biology in the public domain.

PERA and PREVAIL would undo all of that.

These bills have support from a variety of industry groups, including those representing biotech firms, university tech transfer offices, and some tech companies that rely on aggressive patent licensing. While those voices deserve to be heard, the public deserves better than legislation that makes it easier to secure a 20-year monopoly on an idea, and harder for anyone else to challenge it.

Instead of PERA and PREVAIL, Congress should focus on helping developers, creators, and small businesses that rely on technology—not those who exploit it through bad patents.

Some of that legislation is already written. Congress should consider making end-users immune from patent threats, closing loopholes that allow certain patent-holders to avoid having their patents reviewed, and adding transparency requirements so that people accused of patent infringement can at least figure out who’s making the allegations. 

But right now, EFF is fighting back, and we need your help. These bills may be dressed up as reform, but we’ve seen them before—and we know the damage they’d do.

TAKE ACTION

Tell Congress: Reject PERA and PREVAIL

Study shows making hydrogen with soda cans and seawater is scalable and sustainable

MIT Latest News - Tue, 06/03/2025 - 11:00am

Hydrogen has the potential to be a climate-friendly fuel since it doesn’t release carbon dioxide when used as an energy source. Currently, however, most methods for producing hydrogen involve fossil fuels, making hydrogen less of a “green” fuel over its entire life cycle.

A new process developed by MIT engineers could significantly shrink the carbon footprint associated with making hydrogen.

Last year, the team reported that they could produce hydrogen gas by combining seawater, recycled soda cans, and caffeine. The question then was whether the benchtop process could be applied at an industrial scale, and at what environmental cost.

Now, the researchers have carried out a “cradle-to-grave” life cycle assessment, taking into account every step in the process at an industrial scale. For instance, the team calculated the carbon emissions associated with acquiring and processing aluminum, reacting it with seawater to produce hydrogen, and transporting the fuel to gas stations, where drivers could tap into hydrogen tanks to power engines or fuel cell cars. They found that, from end to end, the new process could generate a fraction of the carbon emissions associated with conventional hydrogen production.

In a study appearing today in Cell Reports Sustainability, the team reports that for every kilogram of hydrogen produced, the process would generate 1.45 kilograms of carbon dioxide over its entire life cycle. In comparison, fossil-fuel-based processes emit 11 kilograms of carbon dioxide per kilogram of hydrogen generated.

The low-carbon footprint is on par with other proposed “green hydrogen” technologies, such as those powered by solar and wind energy.

“We’re in the ballpark of green hydrogen,” says lead author Aly Kombargi PhD ’25, who graduated this spring from MIT with a doctorate in mechanical engineering. “This work highlights aluminum’s potential as a clean energy source and offers a scalable pathway for low-emission hydrogen deployment in transportation and remote energy systems.”

The study’s MIT co-authors are Brooke Bao, Enoch Ellis, and professor of mechanical engineering Douglas Hart.

Gas bubble

Dropping an aluminum can in water won’t normally cause much of a chemical reaction. That’s because when aluminum is exposed to oxygen, it instantly forms a shield-like oxide layer. Without this layer, aluminum exists in its pure form and readily reacts when mixed with water: aluminum atoms break up water molecules, producing aluminum oxide and pure hydrogen. And it doesn’t take much of the metal to bubble up a significant amount of the gas.
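As a back-of-the-envelope illustration (not taken from the paper), the stoichiometry of that reaction, 2 Al + 3 H2O → Al2O3 + 3 H2, fixes how much hydrogen a given mass of aluminum can yield:

```python
# Hydrogen yield from the aluminum-water reaction:
#   2 Al + 3 H2O -> Al2O3 + 3 H2
M_AL = 26.98   # g/mol, aluminum (standard atomic weight)
M_H2 = 2.016   # g/mol, hydrogen gas

def h2_yield_kg(al_kg: float) -> float:
    """Mass of H2 (kg) produced from `al_kg` kilograms of aluminum."""
    mol_al = al_kg * 1000 / M_AL
    mol_h2 = mol_al * 3 / 2          # 3 mol H2 per 2 mol Al
    return mol_h2 * M_H2 / 1000

print(f"{h2_yield_kg(1.0):.3f} kg H2 per kg Al")  # ~0.112
```

At roughly 0.11 kilograms of hydrogen per kilogram of aluminum, producing one kilogram of hydrogen takes on the order of nine kilograms of aluminum pellets.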

“One of the main benefits of using aluminum is the energy density per unit volume,” Kombargi says. “With a very small amount of aluminum fuel, you can conceivably supply much of the power for a hydrogen-fueled vehicle.”

Last year, he and Hart developed a recipe for aluminum-based hydrogen production. They found they could puncture aluminum’s natural shield by treating it with a small amount of gallium-indium, which is a rare-metal alloy that effectively scrubs aluminum into its pure form. The researchers then mixed pellets of pure aluminum with seawater and observed that the reaction produced pure hydrogen. What’s more, the salt in the water helped to precipitate gallium-indium, which the team could subsequently recover and reuse to generate more hydrogen, in a cost-saving, sustainable cycle.

“We were explaining the science of this process in conferences, and the questions we would get were, ‘How much does this cost?’ and, ‘What’s its carbon footprint?’” Kombargi says. “So we wanted to look at the process in a comprehensive way.”

A sustainable cycle

For their new study, Kombargi and his colleagues carried out a life cycle assessment to estimate the environmental impact of aluminum-based hydrogen production, at every step of the process, from sourcing the aluminum to transporting the hydrogen after production. They set out to calculate the amount of carbon associated with generating 1 kilogram of hydrogen — an amount that they chose as a practical, consumer-level illustration.

“With a hydrogen fuel cell car using 1 kilogram of hydrogen, you can go between 60 to 100 kilometers, depending on the efficiency of the fuel cell,” Kombargi notes.

They performed the analysis using Earthster — an online life cycle assessment tool that draws data from a large repository of products and processes and their associated carbon emissions. The team considered a number of scenarios for producing hydrogen with aluminum, comparing “primary” aluminum mined from the Earth against “secondary” aluminum recycled from soda cans and other products, and weighing various methods of transporting the aluminum and hydrogen.

After running life cycle assessments for about a dozen scenarios, the team identified the one with the lowest carbon footprint. This scenario centers on recycled aluminum, which avoids a significant share of the emissions associated with mining aluminum, and on seawater, a natural resource that also saves money by enabling recovery of the gallium-indium. They found that this scenario, from start to finish, would generate about 1.45 kilograms of carbon dioxide for every kilogram of hydrogen produced. The cost of the fuel, they calculated, would be about $9 per kilogram, comparable to the price of hydrogen generated with other green technologies such as wind and solar energy.
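Putting the study's headline figures side by side (a quick illustration using only the numbers reported above, not an independent analysis):

```python
# Figures reported in the study
CO2_AL_ROUTE = 1.45   # kg CO2 per kg H2, aluminum-seawater route
CO2_FOSSIL   = 11.0   # kg CO2 per kg H2, conventional fossil route
KM_PER_KG_H2 = (60, 100)  # fuel-cell car range per kg of hydrogen

# Relative emissions reduction versus the fossil-fuel baseline
reduction = 1 - CO2_AL_ROUTE / CO2_FOSSIL
print(f"Emissions reduction: {reduction:.0%}")  # ~87%

# Implied tailpipe-equivalent emissions per kilometer driven
for km in KM_PER_KG_H2:
    print(f"{CO2_AL_ROUTE / km * 1000:.1f} g CO2/km at {km} km per kg H2")
```

By these numbers, the aluminum route cuts life-cycle emissions by roughly 87 percent relative to fossil-based hydrogen.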

The researchers envision that if the low-carbon process were ramped up to a commercial scale, it would look something like this: The production chain would start with scrap aluminum sourced from a recycling center. The aluminum would be shredded into pellets and treated with gallium-indium. Then, drivers could transport the pretreated pellets as aluminum “fuel,” rather than directly transporting hydrogen, which is potentially volatile. The pellets would be transported to a fuel station that ideally would be situated near a source of seawater, which could then be mixed with the aluminum, on demand, to produce hydrogen. A consumer could then directly pump the gas into a car with either an internal combustion engine or a fuel cell.

The entire process does produce an aluminum-based byproduct, boehmite, which is a mineral that is commonly used in fabricating semiconductors, electronic elements, and a number of industrial products. Kombargi says that if this byproduct were recovered after hydrogen production, it could be sold to manufacturers, further bringing down the cost of the process as a whole.

“There are a lot of things to consider,” Kombargi says. “But the process works, which is the most exciting part. And we show that it can be environmentally sustainable.”

The group is continuing to develop the process. They recently designed a small reactor, about the size of a water bottle, that takes in aluminum pellets and seawater to generate hydrogen, enough to power an electric bike for several hours. They previously demonstrated that the process can produce enough hydrogen to fuel a small car. The team is also exploring underwater applications and is designing a hydrogen reactor that would take in surrounding seawater to power a small boat or underwater vehicle.

This research was supported, in part, by the MIT Portugal Program.

New Linux Vulnerabilities

Schneier on Security - Tue, 06/03/2025 - 7:07am

They’re interesting:

Tracked as CVE-2025-5054 and CVE-2025-4598, both vulnerabilities are race condition bugs that could enable a local attacker to gain access to sensitive information. Tools like Apport and systemd-coredump are designed to handle crash reporting and core dumps in Linux systems.

[…]

“This means that if a local attacker manages to induce a crash in a privileged process and quickly replaces it with another one with the same process ID that resides inside a mount and pid namespace, apport will attempt to forward the core dump (which might contain sensitive information belonging to the original, privileged process) into the namespace.”...

Trump fired the heat experts. Now he might kill their heat rule.

ClimateWire News - Tue, 06/03/2025 - 6:18am
Government layoffs threaten to make it easier for the Trump administration to ditch draft regulations for heat safety.

Trump seeks record-high FEMA funding after vowing to cut agency

ClimateWire News - Tue, 06/03/2025 - 6:17am
The president’s request for an additional $4 billion in disaster aid indicates that he might not carry through with his threats to dismantle the Federal Emergency Management Agency.

Labor Department ready to roll back climate investing rule

ClimateWire News - Tue, 06/03/2025 - 6:16am
The administration intends to issue new guidelines after a Trump-appointed judge twice upheld a Biden-era rule that lets investors consider climate costs.

Relaxing tailpipe rules would hurt climate and consumers, critics say

ClimateWire News - Tue, 06/03/2025 - 6:16am
The Trump team is looking to roll back fuel economy standards put in place by the Biden administration.

EU science advisers slam Brussels’ weakened 2040 climate plans

ClimateWire News - Tue, 06/03/2025 - 6:15am
Using international carbon credits in place of domestic action undermines climate efforts, the scientific advisory board says.

EU climate chief lobbied Germany to back weakened 2040 goal

ClimateWire News - Tue, 06/03/2025 - 6:13am
Wopke Hoekstra successfully pushed the incoming coalition to back foreign carbon credits, helping shift the EU-level 2040 talks.

States roll out red carpets for data centers. But some lawmakers push back.

ClimateWire News - Tue, 06/03/2025 - 6:12am
The fights revolve around the things that tech companies and data center developers seem to most want: large tracts of land, tax breaks and huge volumes of electricity and water.

River dammed by huge Swiss landslide flows once again

ClimateWire News - Tue, 06/03/2025 - 6:12am
Authorities are still leaving open the possibility of evacuations farther downstream if required, though the risk to other villages appears very low.

Flood-induced selective migration patterns examined

Nature Climate Change - Tue, 06/03/2025 - 12:00am

Nature Climate Change, Published online: 03 June 2025; doi:10.1038/s41558-025-02346-6

Selective migration patterns emerge in flood-prone regions in the USA. The sociodemographic profiles of individuals who were more inclined to move in or out of flood-prone areas were strikingly different. Media sentiment aggravates population replacement in these regions, leading to short-term structural changes in the housing market and long-term socioeconomic decline.

New 3D printing method enables complex designs and creates less waste

MIT Latest News - Tue, 06/03/2025 - 12:00am

Hearing aids, mouth guards, dental implants, and other highly tailored structures are often products of 3D printing. These structures are typically made via vat photopolymerization — a form of 3D printing that uses patterns of light to shape and solidify a resin, one layer at a time.

The process also involves printing structural supports from the same material to hold the product in place as it’s printed. Once a product is fully formed, the supports are removed manually and typically thrown out as unusable waste.

MIT engineers have found a way to bypass this last finishing step, in a way that could significantly speed up the 3D-printing process. They developed a resin that turns into two different kinds of solids, depending on the type of light that shines on it: Ultraviolet light cures the resin into a highly resilient solid, while visible light turns the same resin into a solid that easily dissolves in certain solvents.

The team exposed the new resin simultaneously to patterns of UV light, to form a sturdy structure, and to patterns of visible light, to form the structure’s supports. Instead of having to carefully break away the supports, they simply dipped the printed material into a solution that dissolved the supports away, revealing the sturdy, UV-printed part.

The supports can dissolve in a variety of food-safe solutions, including baby oil. Interestingly, the supports could even dissolve in the main liquid ingredient of the original resin, like a cube of ice in water. This means that the material used to print structural supports could be continuously recycled: Once a printed structure’s supporting material dissolves, that mixture can be blended directly back into fresh resin and used to print the next set of parts — along with their dissolvable supports.

The researchers applied the new method to print complex structures, including functional gear trains and intricate lattices.

“You can now print — in a single print — multipart, functional assemblies with moving or interlocking parts, and you can basically wash away the supports,” says graduate student Nicholas Diaco. “Instead of throwing out this material, you can recycle it on site and generate a lot less waste. That’s the ultimate hope.”

He and his colleagues report the details of the new method in a paper appearing today in Advanced Materials Technologies. The MIT study’s co-authors include Carl Thrasher, Max Hughes, Kevin Zhou, Michael Durso, Saechow Yap, Professor Robert Macfarlane, and Professor A. John Hart, head of MIT’s Department of Mechanical Engineering.

Waste removal

Conventional vat photopolymerization (VP) begins with a 3D computer model of a structure to be printed — for instance, of two interlocking gears. Along with the gears themselves, the model includes small support structures around, under, and between the gears to keep every feature in place as the part is printed. This computer model is then sliced into many digital layers that are sent to a VP printer for printing.

A standard VP printer includes a small vat of liquid resin that sits over a light source. Each slice of the model is translated into a matching pattern of light that is projected onto the liquid resin, which solidifies into the same pattern. Layer by layer, a solid, light-printed version of the model’s gears and supports forms on the build platform. When printing is finished, the platform lifts the completed part above the resin bath. Once excess resin is washed away, a person can go in by hand to remove the intermediary supports, usually by clipping and filing, and the support material is ultimately thrown away.

“For the most part, these supports end up generating a lot of waste,” Diaco says.

Print and dip

Diaco and the team looked for a way to simplify and speed up the removal of printed supports and, ideally, recycle them in the process. They came up with a general concept for a resin that, depending on the type of light that it is exposed to, can take on one of two phases: a resilient phase that would form the desired 3D structure and a secondary phase that would function as a supporting material but also be easily dissolved away.

After working out some chemistry, the team found they could make such a two-phase resin by mixing two commercially available monomers, the chemical building blocks that are found in many types of plastic. When ultraviolet light shines on the mixture, the monomers link together into a tightly interconnected network, forming a tough solid that resists dissolution. When the same mixture is exposed to visible light, the same monomers still cure, but at the molecular scale the resulting monomer strands remain separate from one another. This solid can quickly dissolve when placed in certain solutions.

In benchtop tests with small vials of the new resin, the researchers found the material did transform into both the insoluble and soluble forms in response to ultraviolet and visible light, respectively. But when they moved to a 3D printer with LEDs dimmer than the benchtop setup, the UV-cured material fell apart in solution. The weaker light only partially linked the monomer strands, leaving them too loosely tangled to hold the structure together.

Diaco and his colleagues found that adding a small amount of a third “bridging” monomer could link the two original monomers together under UV light, knitting them into a much sturdier framework. This fix enabled the researchers to simultaneously print resilient 3D structures and dissolvable supports using timed pulses of UV and visible light in one run.

The team applied the new method to print a variety of intricate structures, including interlocking gears, intricate lattices, a ball within a square frame, and, for fun, a small dinosaur encased in an egg-shaped support that dissolved away when dipped in solution.

“With all these structures, you need a lattice of supports inside and out while printing,” Diaco says. “Removing those supports normally requires careful, manual removal. This shows we can print multipart assemblies with a lot of moving parts, and detailed, personalized products like hearing aids and dental implants, in a way that’s fast and sustainable.”

“We’ll continue studying the limits of this process, and we want to develop additional resins with this wavelength-selective behavior and mechanical properties necessary for durable products,” says professor of mechanical engineering John Hart. “Along with automated part handling and closed-loop reuse of the dissolved resin, this is an exciting path to resource-efficient and cost-effective polymer 3D printing at scale.”

This research was supported, in part, by the Center for Perceptual and Interactive Intelligence (InnoHK) in Hong Kong, the U.S. National Science Foundation, the U.S. Office of Naval Research, and the U.S. Army Research Office.

Teaching AI models what they don’t know

MIT Latest News - Tue, 06/03/2025 - 12:00am

Artificial intelligence systems like ChatGPT provide plausible-sounding answers to any question you might ask. But they don’t always reveal the gaps in their knowledge or areas where they’re uncertain. That problem can have huge consequences as AI systems are increasingly used to do things like develop drugs, synthesize information, and drive autonomous cars.

Now, the MIT spinout Themis AI is helping quantify model uncertainty and correct outputs before they cause bigger problems. The company’s Capsa platform can work with any machine-learning model to detect and correct unreliable outputs in seconds. It works by modifying AI models to enable them to detect patterns in their data processing that indicate ambiguity, incompleteness, or bias.

“The idea is to take a model, wrap it in Capsa, identify the uncertainties and failure modes of the model, and then enhance the model,” says Themis AI co-founder and MIT Professor Daniela Rus, who is also the director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We’re excited about offering a solution that can improve models and offer guarantees that the model is working correctly.”

Rus founded Themis AI in 2021 with Alexander Amini ’17, SM ’18, PhD ’22 and Elaheh Ahmadi ’20, MEng ’21, two former research affiliates in her lab. Since then, they’ve helped telecom companies with network planning and automation, helped oil and gas companies use AI to understand seismic imagery, and published papers on developing more reliable and trustworthy chatbots.

“We want to enable AI in the highest-stakes applications of every industry,” Amini says. “We’ve all seen examples of AI hallucinating or making mistakes. As AI is deployed more broadly, those mistakes could lead to devastating consequences. Our software can make these systems more transparent.”

Helping models know what they don’t know

Rus’ lab has been researching model uncertainty for years. In 2018, she received funding from Toyota to study the reliability of a machine learning-based autonomous driving solution.

“That is a safety-critical context where understanding model reliability is very important,” Rus says.

In separate work, Rus, Amini, and their collaborators built an algorithm that could detect racial and gender bias in facial recognition systems and automatically reweight the model’s training data, showing it eliminated bias. The algorithm worked by identifying the unrepresentative parts of the underlying training data and generating new, similar data samples to rebalance it.
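The rebalancing idea — find underrepresented groups in the training data, then add data for them — can be sketched with naive oversampling. The researchers’ algorithm generated new, similar samples rather than duplicating existing ones; this stand-in, with its invented `rebalance` helper and toy dataset, only shows the reweighting logic.

```python
import random
from collections import Counter

def rebalance(samples, group_key, rng=random.Random(0)):
    """Oversample underrepresented groups until every group matches
    the largest one -- a crude stand-in for learned resampling."""
    counts = Counter(group_key(s) for s in samples)
    target = max(counts.values())
    out = list(samples)
    for group, n in counts.items():
        pool = [s for s in samples if group_key(s) == group]
        out.extend(rng.choice(pool) for _ in range(target - n))
    return out

# Hypothetical skewed dataset: 8 samples of group A, 2 of group B.
data = [("face", "A")] * 8 + [("face", "B")] * 2
balanced = rebalance(data, group_key=lambda s: s[1])
print(Counter(g for _, g in balanced))  # both groups now equal
```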

In 2021, the eventual co-founders showed a similar approach could be used to help pharmaceutical companies use AI models to predict the properties of drug candidates. They founded Themis AI later that year.

“Guiding drug discovery could potentially save a lot of money,” Rus says. “That was the use case that made us realize how powerful this tool could be.”

Today Themis is working with companies in a wide variety of industries, and many of those companies are building large language models. By using Capsa, the models are able to quantify their own uncertainty for each output.

“Many companies are interested in using LLMs that are based on their data, but they’re concerned about reliability,” observes Stewart Jamieson SM ’20, PhD ’24, Themis AI's head of technology. “We help LLMs self-report their confidence and uncertainty, which enables more reliable question answering and flagging unreliable outputs.”

Themis AI is also in discussions with semiconductor companies building AI solutions on their chips that can work outside of cloud environments.

“Normally these smaller models that work on phones or embedded systems aren’t very accurate compared to what you could run on a server, but we can get the best of both worlds: low latency, efficient edge computing without sacrificing quality,” Jamieson explains. “We see a future where edge devices do most of the work, but whenever they’re unsure of their output, they can forward those tasks to a central server.”
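The edge-to-server handoff Jamieson describes can be sketched as a confidence-threshold router: answer locally when the small model is sure, escalate when it is not. The softmax-confidence measure, the 0.8 threshold, and the `route` function are illustrative assumptions, not Themis AI’s actual mechanism.

```python
import math

def softmax_confidence(logits):
    """Top-class probability from raw logits, used as a confidence score."""
    exps = [math.exp(l - max(logits)) for l in logits]
    return max(exps) / sum(exps)

def route(logits_edge, server_fn, threshold=0.8):
    """Answer on-device when the edge model is confident;
    otherwise forward the task to the larger server model."""
    conf = softmax_confidence(logits_edge)
    if conf >= threshold:
        return {"source": "edge", "label": logits_edge.index(max(logits_edge))}
    return {"source": "server", "label": server_fn()}

# Hypothetical stand-in: the "server model" here just returns a label.
print(route([4.0, 0.1, 0.2], server_fn=lambda: 2))  # confident -> edge
print(route([1.0, 0.9, 1.1], server_fn=lambda: 2))  # ambiguous -> server
```

The design choice is the one Jamieson points to: most queries stay on-device for low latency, and only uncertain ones pay the round-trip cost.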

Pharmaceutical companies can also use Capsa to improve AI models being used to identify drug candidates and predict their performance in clinical trials.

“The predictions and outputs of these models are very complex and hard to interpret — experts spend a lot of time and effort trying to make sense of them,” Amini remarks. “Capsa can give insights right out of the gate to understand if the predictions are backed by evidence in the training set or are just speculation without a lot of grounding. That can accelerate the identification of the strongest predictions, and we think that has a huge potential for societal good.”

Research for impact

Themis AI’s team believes the company is well-positioned to improve the cutting edge of constantly evolving AI technology. For instance, the company is exploring Capsa’s ability to improve accuracy in an AI technique known as chain-of-thought reasoning, in which LLMs explain the steps they take to get to an answer.

“We’ve seen signs Capsa could help guide those reasoning processes to identify the highest-confidence chains of reasoning,” Amini says. “We think that has huge implications in terms of improving the LLM experience, reducing latencies, and reducing computation requirements. It’s an extremely high-impact opportunity for us.”

For Rus, who has co-founded several companies since coming to MIT, Themis AI is an opportunity to ensure her MIT research has impact.

“My students and I have become increasingly passionate about going the extra step to make our work relevant for the world,” Rus says. “AI has tremendous potential to transform industries, but AI also raises concerns. What excites me is the opportunity to help develop technical solutions that address these challenges and also build trust and understanding between people and the technologies that are becoming part of their daily lives.”

At MIT, Lindsay Caplan reflects on artistic crossroads where humans and machines meet

MIT Latest News - Mon, 06/02/2025 - 4:35pm

The intersection of art, science, and technology presents a unique, sometimes challenging, viewpoint for both scientists and artists. It is in this nexus that art historian Lindsay Caplan positions herself: “My work as an art historian focuses on the ways that artists across the 20th century engage with new technologies like computers, video, and television, not merely as new materials for making art as they already understand it, but as conceptual platforms for reorienting and reimagining the foundational assumptions of their practice.”

With this introduction, Caplan, an assistant professor at Brown University, opened the inaugural Resonances Lecture — a new series by STUDIO.nano to explore the generative edge where art, science, and technology meet. Delivered on April 28 to an interdisciplinary crowd at MIT.nano, Caplan’s lecture, titled “Analogical Engines — Collaborations across Art and Technology in the 1960s,” traced how artists across Europe and the Americas in the 1960s engaged with and responded to the emerging technological advances of computer science, cybernetics, and early AI. “By the time we reached the 1960s,” she said, “analogies between humans and machines, drawn from computer science and fields like information theory and cybernetics, abound among art historians and artists alike.”

Caplan’s talk centered on two artistic networks, with a particular emphasis on American artist Liliane Lijn: the New Tendencies exhibitions (1961-79) and the Signals gallery in London (1964-66). She deftly analyzed the artist’s material experimentation with contemporary advances in emergent technologies — quantum physics and mathematical formalism, particularly Heisenberg's uncertainty principle. She argued that both art historical formalism and mathematical formalism share struggles with representation, indeterminacy, and the tension between constructed and essential truths.

Following her talk, Caplan was joined by MIT faculty Mark Jarzombek, professor of the history and theory of architecture, and Gediminas Urbonas, associate professor of art, culture, and technology (ACT), for a panel discussion moderated by Ardalan SadeghiKivi SM ’22, lecturer of comparative media studies. The conversation expanded on Caplan’s themes with discussions of artists’ attraction to newly developed materials and technology, and the critical dimension of reimagining and repurposing technologies that were originally designed with an entirely different purpose.

Urbonas echoed the urgency of these conversations. “It is exceptionally exciting to witness artists working in dialectical tension with scientists — a tradition that traces back to the founding of the Center for Advanced Visual Studies at MIT and continues at ACT today,” reflected Urbonas. “The dual ontology of science and art enables us to grasp the world as a web of becoming, where new materials, social imaginaries, and aesthetic values are co-constituted through interdisciplinary inquiry. Such collaborations are urgent today, offering tools to reimagine agency, subjectivity, and the role of culture in shaping the future.”

The event concluded with a reception in MIT.nano’s East Lobby, where attendees could view MIT ACT student projects currently on exhibition in MIT.nano’s gallery spaces. The reception was, itself, an intersection of art and technology. “The first lecture of the Resonances Lecture Series lived up to the title,” reflects Jarzombek. “A brilliant talk by Lindsay Caplan proved that the historical and aesthetical dimensions in the sciences have just as much relevance to a critical posture as the technical.”

The Resonances lecture and panel series seeks to gather artists, designers, scientists, engineers, and historians who examine how scientific endeavors shape artistic production, and vice versa. Their insights illuminate the historical context in which art and science are made and distributed in society, and hint at the possible futures of such work.

“When we were considering who to invite to launch this lecture series, Lindsay Caplan immediately came to mind,” says Tobias Putrih, ACT lecturer and academic advisor for STUDIO.nano. “She is one of the most exciting thinkers and historians writing about the intersection between art, technology, and science today. We hope her insights and ideas will encourage further collaborative projects.”

The Resonances series is one of several new activities organized by STUDIO.nano, a program within MIT.nano, to connect the arts with cutting-edge research environments. “MIT.nano generates extraordinary scientific work,” says Samantha Farrell, manager of STUDIO.nano, “but it’s just as vital to create space for cultural reflection. STUDIO.nano invites artists to engage directly with new technologies — and with the questions they raise.”

In addition to the Resonances lectures, STUDIO.nano organizes exhibitions in the public spaces at MIT.nano, and an Encounters series, launched last fall, to bring artists to MIT.nano. To learn about current installations and ongoing collaborations, visit the STUDIO.nano web page.

The Defense Attorney’s Arsenal In Challenging Electronic Monitoring

EFF: Updates - Mon, 06/02/2025 - 4:32pm

In criminal prosecutions, electronic monitoring (EM) is pitched as a “humane alternative” to incarceration – but it is not. The latest generation of “e-carceration” tools is burdensome, harsh, and often just as punitive as imprisonment. Fortunately, criminal defense attorneys have options when shielding their clients from this overused and harmful tech.

Framed as a tool that enhances public safety while reducing jail populations, EM is increasingly used as a condition of pretrial release, probation, parole, or even civil detention. However, this technology imposes serious infringements on liberty, privacy, and due process for not only those placed on it but also for people they come into contact with. It can transform homes into digital jails, inadvertently surveil others, impose financial burdens, and punish every misstep—no matter how minor or understandable.

Even though EM may appear less severe than incarceration, research and litigation reveal that these devices often function as a form of detention in all but name. Monitored individuals must often remain at home for long periods, request permission to leave for basic needs, and comply with curfews or “exclusion zones.” Violations, even technical ones—such as a battery running low or a dropped GPS signal—can result in arrest and incarceration. Being able to take care of oneself and reintegrate into the world becomes a minefield of compliance and red tape. The psychological burden, social stigma, and physical discomfort associated with EM are significant, particularly for vulnerable populations.   

For many, EM still evokes bulky wrist or ankle “shackles” that can monitor a subject’s location, and sometimes even their blood alcohol levels. These devices have matured with digital technology, however, and EM is increasingly imposed through more sophisticated platforms like smartwatches or mobile phone applications. Newer iterations of EM have also followed a trajectory of collecting far more data, including biometrics and more precise location information.

This issue is more pressing than ever, as the 2020 COVID pandemic led to an explosion in EM adoption. As incarceration and detention facilities became superspreader zones, judges kept some offenders out of these facilities by expanding the use of EM; so much so that some jurisdictions ran out of classic EM devices like ankle bracelets.

Today the number of people placed on EM in the criminal system continues to skyrocket. Fighting the spread of EM requires many tactics, but on the front lines are the criminal defense attorneys challenging EM impositions. This post will focus on the main issues for defense attorneys to consider while arguing against the imposition of this technology.

PRETRIAL ELECTRONIC MONITORING

We’ve seen challenges to EM programs in a variety of ways, including attacking the constitutionality of the program as a whole and arguing against pretrial and/or post-conviction imposition. However, it is likely that the most successful challenges will come from individualized challenges to pretrial EM.

First, courts have not been receptive to arguments that entire EM programs are unconstitutional. For example, in Simon v. San Francisco et al., 135 F.4th 784 (9th Cir. 2025), the Ninth Circuit held that although San Francisco’s EM program constituted a Fourth Amendment search, a warrant was not required. The court reasoned that the program was a condition of pretrial release, included the sharing of location data, and was consented to by the individual (with counsel present) by signing a form that essentially operated as a contract. This decision exemplifies the court’s failure to grasp the coercive nature of this type of “consent,” which is pervasive in the criminal legal system.

Second, pretrial defendants have more robust rights than they do after conviction. While a person’s expectation of privacy may be slightly diminished following arrest but before trial, the Fourth Amendment is not entirely out of the picture. Their “privacy and liberty interests” are, for instance, “far greater” than a person who has been convicted and is on probation or parole. United States v. Scott, 450 F.3d 863, 873 (9th Cir. 2006). Although individuals continue to retain Fourth Amendment rights after conviction, the reasonableness analysis will be heavily weighted towards the state as the defendant is no longer presumed innocent. However, even people on probation have a “substantial” privacy interest. United States v. Lara, 815 F.3d 605, 610 (9th Cir. 2016). 

THE FOURTH AMENDMENT

The first foundational constitutional rights threatened by the sheer invasiveness of EM are those protected by the Fourth Amendment. This concern is only heightened as the technology improves and collects increasingly detailed information. Unlike traditional probation or parole supervision, EM often tracks individuals with no geographic limitations or oversight, and can automatically record more than just approximate location information.

Courts have increasingly recognized that this new technology poses greater and more novel threats to our privacy than earlier generations. In Grady v. North Carolina, 575 U.S. 306 (2015), the Supreme Court, relying on United States v. Jones, 565 U.S. 400 (2012), held that attaching a GPS tracking device to a person—even a convicted sex offender—constitutes a Fourth Amendment search and is thus subject to the inquiry of reasonableness. A few years later, the monumental decision in Carpenter v. United States, 138 S. Ct. 2206 (2018), firmly established that Fourth Amendment analysis is affected by the advancement of technology, holding that long-term cell-site location tracking by law enforcement constituted a search requiring a warrant.

As criminal defense attorneys are well aware, the Fourth Amendment’s ostensibly powerful protections are often less effective in practice. Nevertheless, this line of cases still forms a strong foundation for arguing that EM should be subjected to exacting Fourth Amendment scrutiny.

DUE PROCESS

Three key procedural due process challenges that defense attorneys can raise under the Fifth and Fourteenth Amendments are: inadequate hearing, lack of individualized assessment, and failure to consider ability to pay.

First, many courts impose EM without adequate consideration of individual circumstances or less restrictive alternatives. Defense attorneys should demand evidentiary hearings where the government must prove that monitoring is necessary and narrowly tailored. If the defendant is not given notice, a hearing, or the opportunity to object, that could arguably constitute a violation of due process. For example, in the previously mentioned case, Simon v. San Francisco, the Ninth Circuit found that individuals who were not informed of the details regarding the city’s pretrial EM program in the presence of counsel had their rights violated.

Second, imposition of EM should be based on an individualized assessment rather than a blanket rule. For pretrial defendants, EM is frequently used as a condition of bail. Although under both federal and state bail frameworks, courts are generally required to impose the least restrictive conditions necessary to ensure the defendant’s court appearance and protect the community, many jurisdictions have included EM as a default condition rather than individually assessing whether EM is appropriate. The Bail Reform Act of 1984, for instance, mandates that release conditions be tailored to the individual’s circumstances. Yet in practice, many jurisdictions impose EM categorically, without specific findings or consideration of alternatives. Defense counsel should challenge this practice by insisting that judges articulate on the record why EM is necessary, supported by evidence related to flight risk or danger. Where clients have stable housing, employment, and no history of noncompliance, EM may be more restrictive than justified.

Lastly, financial burdens associated with EM may also implicate due process where a failure to pay can result in violations and incarceration. In Bearden v. Georgia, 461 U.S. 660 (1983), the Supreme Court held that courts cannot revoke probation for failure to pay fines or restitution without first determining whether the failure was willful. Relying on Bearden, defense attorneys can argue that EM fees imposed on indigent clients amount to unconstitutional punishment for poverty. Similarly, a growing number of lower courts have agreed, particularly where clients were not given the opportunity to contest their ability to pay. Defense attorneys should request fee waivers, present evidence of indigence, and challenge any EM orders that functionally condition liberty on wealth.

STATE LAW PROTECTIONS

State constitutions and statutes often provide stronger protections than federal constitutional minimums. In addition to state corollaries to the Fourth and Fifth Amendment, some states have also enacted statutes to govern pretrial release and conditions. A number of states have established a presumption in favor of release on recognizance or personal recognizance bonds. In those jurisdictions, the state has to overcome this presumption before the court can impose restrictive conditions like EM. Some states require courts to impose the least restrictive conditions necessary to achieve legitimate purposes, making EM appropriate only when less restrictive alternatives are inadequate.

Most pretrial statutes list specific factors courts must consider, such as community ties, employment history, family responsibilities, nature of the offense, criminal history, and risk of flight or danger to community. Courts that fail to adequately consider these factors or impose generic monitoring conditions may violate statutory requirements.

For example, Illinois's SAFE-T Act includes specific protections against overly restrictive EM conditions, but implementation has been inconsistent. Defense attorneys in Illinois and states with similar laws should challenge monitoring conditions that violate specific statutory requirements.

TECHNOLOGICAL ISSUES

Attorneys should also consider the reliability of EM technology. Devices frequently produce false violations and alerts, particularly in urban areas or buildings where GPS signals are weak. Misleading data can lead to violation hearings and even incarceration. Attorneys should demand access to raw location data, vendor records, and maintenance logs. Expert testimony can help demonstrate technological flaws, human error, or system limitations that cast doubt on the validity of alleged violations.

In some jurisdictions, EM programs are operated by private companies under contracts with probation departments, courts, or sheriffs. These companies profit from fees paid by clients and have minimal oversight. Attorneys should request copies of contracts, training manuals, and policies governing EM use. Discovery may reveal financial incentives, lack of accountability, or systemic issues such as racial or geographic disparities in monitoring. These findings can support broader litigation or class actions, particularly where indigent individuals are jailed for failing to pay private vendors.

Recent research provides compelling evidence that EM fails to achieve its stated purposes while creating significant harms. Studies have not found significant relationships between EM of individuals on pretrial release and their court appearance rates or likelihood of arrest. Nor do they show that law enforcement is employing EM on individuals they would otherwise put in jail.

To the contrary, studies indicate that law enforcement is using EM to surveil and constrain the liberty of those who wouldn't otherwise be detained, as the rise in the number of people placed on EM has not coincided with a decrease in detention. This research demonstrates that EM represents an expansion of government control rather than a true alternative to detention.

Additionally, EM devices may be rife with technical issues as described above, including communication system failures that prevent proper monitoring and device malfunctions that cause electric shocks. Users commonly cut off ankle bracelets, especially when the technology is malfunctioning or hurting them. Defense attorneys should document all technical issues and argue that unreliable technology cannot form the basis for liberty restrictions or additional criminal charges.

CREATING A RECORD FOR APPEAL

Attorneys should always make sure they are creating a record on which the EM imposition can be appealed, should the initial hearing be unsuccessful. This requires including the factual basis for the challenge and preserving the appropriate legal arguments. The modern generation of EM has yet to undergo the extensive judicial review that ankle shackles have received, so it is essential to build a detailed record of the ways in which it is more invasive and harmful. That record lets counsel argue to an appellate court that the newest EM demands more than perfunctory application of decades-old precedent. As we saw with Carpenter, the rapid advancement of technology may push the courts to reconsider older paradigms for constitutional analysis and find them wanting. Thus, a comprehensive record is critical to show EM as it is—an extension of incarceration—rather than a benevolent alternative to detention.

Defeating electronic monitoring will require a multidimensional approach that includes litigating constitutional claims, contesting factual assumptions, exposing technological failures, and advocating for systemic reforms. As the carceral state evolves, attorneys must remain vigilant and proactive in defending the rights of their clients.

The EU’s “Encryption Roadmap” Makes Everyone Less Safe

EFF: Updates - Mon, 06/02/2025 - 4:15pm

EFF has joined more than 80 civil society organizations, companies, and cybersecurity experts in signing a letter urging the European Commission to change course on its recently announced “Technology Roadmap on Encryption.” The roadmap, part of the EU’s ProtectEU strategy, discusses new ways for law enforcement to access encrypted data. That framing is dangerously flawed. 

Let’s be clear: there is no technical “lawful access” to end-to-end encrypted messages that preserves security and privacy. Any attempt to circumvent encryption—like client-side scanning—creates new vulnerabilities, threatening the very people governments claim to protect.

This letter is significant in not just its content, but in who signed it. The breadth of the coalition makes one thing clear: civil society and the global technical community overwhelmingly reject the idea that weakening encryption can coexist with respect for fundamental rights.

Strong encryption is a pillar of cybersecurity, protecting everyone: activists, journalists, everyday web users, and critical infrastructure. Undermining it doesn’t just hurt privacy. It makes everyone’s data more vulnerable and weakens the EU’s ability to defend against cybersecurity threats.

EU officials should scrap any roadmap focused on circumvention and instead invest in stronger, more widespread use of end-to-end encryption. Security and human rights aren’t in conflict. They depend on each other.

You can read the full letter here.

AI stirs up the recipe for concrete in MIT study

MIT Latest News - Mon, 06/02/2025 - 3:45pm

For weeks, the whiteboard in the lab was crowded with scribbles, diagrams, and chemical formulas. A research team across the Olivetti Group and the MIT Concrete Sustainability Hub (CSHub) was working intensely on a key problem: How can we reduce the amount of cement in concrete to save on costs and emissions? 

The question was certainly not new; materials like fly ash, a byproduct of coal combustion, and slag, a byproduct of steelmaking, have long been used to replace some of the cement in concrete mixes. However, the demand for these products is outpacing supply as industry looks to reduce its climate impacts by expanding their use, making the search for alternatives urgent. The challenge that the team discovered wasn’t a lack of candidates; the problem was that there were too many to sort through.

On May 17, the team, led by postdoc Soroush Mahjoubi, published an open-access paper in Nature’s Communications Materials outlining their solution. “We realized that AI was the key to moving forward,” notes Mahjoubi. “There is so much data out there on potential materials — hundreds of thousands of pages of scientific literature. Sorting through them would have taken many lifetimes of work, by which time more materials would have been discovered!”

With large language models, like the chatbots many of us use daily, the team built a machine-learning framework that evaluates and sorts candidate materials based on their physical and chemical properties. 

“First, there is hydraulic reactivity. The reason that concrete is strong is that cement — the ‘glue’ that holds it together — hardens when exposed to water. So, if we replace this glue, we need to make sure the substitute reacts similarly,” explains Mahjoubi. “Second, there is pozzolanicity. This is when a material reacts with calcium hydroxide, a byproduct created when cement meets water, to make the concrete harder and stronger over time.  We need to balance the hydraulic and pozzolanic materials in the mix so the concrete performs at its best.”
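The two-property screen Mahjoubi describes can be sketched as a simple scoring filter over candidate materials. The material names, the 0-1 reactivity scores, and the threshold below are invented for illustration; the actual framework derives its rankings from the literature and rock-sample data.

```python
# Hypothetical candidates with illustrative 0-1 reactivity scores.
candidates = [
    {"name": "ground ceramic tile", "hydraulic": 0.2, "pozzolanic": 0.8},
    {"name": "steel slag",          "hydraulic": 0.7, "pozzolanic": 0.2},
    {"name": "inert quartz sand",   "hydraulic": 0.0, "pozzolanic": 0.1},
]

def screen(materials, min_total=0.5):
    """Keep materials whose combined hydraulic + pozzolanic reactivity
    clears a threshold, ranked by total score."""
    viable = [m for m in materials
              if m["hydraulic"] + m["pozzolanic"] >= min_total]
    return sorted(viable,
                  key=lambda m: m["hydraulic"] + m["pozzolanic"],
                  reverse=True)

for m in screen(candidates):
    print(m["name"])  # ceramics rank first; inert filler is screened out
```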

Analyzing scientific literature and over 1 million rock samples, the team used the framework to sort candidate materials into 19 types, ranging from biomass to mining byproducts to demolished construction materials. Mahjoubi and his team found that suitable materials were available globally — and, more impressively, many could be incorporated into concrete mixes just by grinding them. This means it’s possible to realize emissions and cost savings without much additional processing.

“Some of the most interesting materials that could replace a portion of cement are ceramics,” notes Mahjoubi. “Old tiles, bricks, pottery — all these materials may have high reactivity. That’s something we’ve observed in ancient Roman concrete, where ceramics were added to help waterproof structures. I’ve had many interesting conversations on this with Professor Admir Masic, who leads a lot of the ancient concrete studies here at MIT.”

The potential of everyday materials like ceramics and industrial materials like mine tailings is an example of how materials like concrete can help enable a circular economy. By identifying and repurposing materials that would otherwise end up in landfills, researchers and industry can help to give these materials a second life as part of our buildings and infrastructure.

Looking ahead, the research team is planning to upgrade the framework to be capable of assessing even more materials, while experimentally validating some of the best candidates. “AI tools have gotten this research far in a short time, and we are excited to see how the latest developments in large language models enable the next steps,” says Professor Elsa Olivetti, senior author on the work and member of the MIT Department of Materials Science and Engineering. She serves as an MIT Climate Project mission director, a CSHub principal investigator, and the leader of the Olivetti Group.

“Concrete is the backbone of the built environment,” says Randolph Kirchain, co-author and CSHub director. “By applying data science and AI tools to material design, we hope to support industry efforts to build more sustainably, without compromising on strength, safety, or durability.”

In addition to Mahjoubi, Olivetti, and Kirchain, co-authors on the work include MIT postdoc Vineeth Venugopal; Ipek Bensu Manav SM ’21, PhD ’24; and CSHub Deputy Director Hessam AzariJafari.
