Feed aggregator

MIT engineers develop a magnetic transistor for more energy-efficient electronics

MIT Latest News - Wed, 09/23/2025 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Yikes, Encryption’s Y2K Moment is Coming Years Early

EFF: Updates - Thu, 04/09/2026 - 5:32pm

Google moved up its estimated deadline for quantum preparedness in cryptography to 2029—only 33 months from now. That’s earlier than previous deadlines, and Google proposed the new post-quantum migration deadline because of two new papers that represent a big jump in the state of the technology. It’s ahead of schedule, but not altogether unexpected. Cryptographers and engineers have been working on this for years, and as the deadline gets closer, it’s not surprising to see more precise timeline estimates come up.

The preparation for the Y2K bug is not a perfect analogy, but it is instructive. Like Y2K, if systems are not updated in time, anyone with a powerful enough quantum computer will be able to insert malware into the core systems of a computer, and to fake authentication and impersonate others merely by observing network traffic. These are the threats whose mitigation timelines have been moved up.

But unlike Y2K, there’s a second sort of attack that we already need to be prepared for: quantum computers will be able to decrypt years of captured traffic, reading any messages that were sent over encrypted messaging platforms before those platforms upgraded to quantum-resistant encryption. That type of attack has been the main focus of engineering efforts so far, and mitigation is well on its way, since anything sent before the upgrade might eventually be compromised.

Fortunately, not all cryptography is broken by quantum computers. Notably, symmetric encryption is quantum resistant. That means that if you have disk encryption turned on, you shouldn’t have to worry about quantum computers breaking into your phone, as long as your system’s keys are long enough. The problem is how you get the keys to do that encryption, and how you authenticate software on your device and in the cloud.
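To make the key-length point concrete: Grover’s algorithm lets a quantum attacker search a symmetric key space in roughly the square root of the classical number of steps, which as a rule of thumb halves a key’s effective strength. A minimal illustrative sketch (the function name is ours, and the halving rule ignores Grover’s substantial practical overheads):

```python
def effective_bits(key_bits: int, quantum: bool = False) -> int:
    """Rough effective security of an ideal symmetric cipher.

    Classically, brute-forcing a k-bit key takes ~2^k tries.
    Grover's algorithm cuts that to ~2^(k/2) quantum steps, so
    the effective strength is halved. (Rule of thumb only; it
    ignores the practical costs of running Grover at scale.)
    """
    return key_bits // 2 if quantum else key_bits

# AES-128 drops to roughly 64-bit effective strength against a
# quantum attacker, while AES-256 retains a comfortable ~128 bits.
for k in (128, 256):
    print(f"AES-{k}: classical {effective_bits(k)} bits, "
          f"quantum ~{effective_bits(k, quantum=True)} bits")
```

This is why guidance on quantum resistance for symmetric encryption usually comes down to "use 256-bit keys."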

Engineers: Time to Lock In

For those whose work touches on any sort of cryptographic deployment, you’re hopefully already working on the post-quantum transition. If not, you really should be; there are quite a few relevant posts and updates with more information about what this news means for you. Your key agreement systems should be upgraded soon, if they’re not already, because of store-now-decrypt-later attacks. Now it’s time to prepare for authentication attacks based on forged signatures as well.

In some cases, you may need to wait on others to finish their work first. If you’re using NGINX to host websites on Ubuntu, for example, the security settings you need to upgrade key agreement were just released in version 26.04. Updates are rolling out, so keep checking in and upgrade your systems as soon as you’re able to.
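On the server side, the upgrade is often a small configuration change once your TLS library supports it. A hypothetical NGINX sketch, assuming a build linked against an OpenSSL version that implements the X25519MLKEM768 hybrid group (directive support and group names vary by version, so check your own stack before relying on this):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    # Prefer the post-quantum hybrid key-exchange group, falling
    # back to classical X25519 for clients that can't speak it yet.
    ssl_ecdh_curve X25519MLKEM768:X25519;
}
```

Because clients that don’t support the hybrid group simply negotiate the classical fallback, enabling it early is a low-risk change.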

Users: Stay Updated, Check on Your Chats

But if you’re not in any position to be updating software or hardware, there may be some additional steps you can take to make sure you're as protected as possible. You’ll want to get the latest post-quantum protections as soon as they're available, so if you don't already have a habit of applying software updates in a timely manner, now’s a good time to start.

If you want to know if the website you’re using or the encrypted messaging app you’re chatting over will leak its data in a few years to anyone storing traffic now, you can search for its name with the word "quantum." The engineers are usually pretty proud of their work and have announced their post-quantum support (like what we’ve seen from Signal and iMessage). If you can’t find that information, you may want to be more careful about what you say over the internet, or switch the tools you’re using. Those are the big areas to worry about now, before quantum computers are actually here, because they could result in the mass leakage of old messages.

The new deadline means that some technologies are simply not going to make it in time and will have to be left by the wayside, like trusted execution environments (TEEs), due to the slower speed of hardware deployments. TEEs are how companies do private processing on user data in the cloud, and they’re particularly relevant to AI offerings. 

Even now, though they offer more protection than processing data in the clear, TEEs are not as secure as homomorphic encryption or doing the processing on device. Post-quantum, the security level gets much closer to computation on cleartext, and even with strong user controls, that makes it way too easy to accidentally backdoor your own encrypted chats. If you’re worried about the contents of messages in an encrypted chat being exposed, you’ll probably want to completely avoid using AI features that might leak that content, such as summarization of recent chat history and notifications, and reply composition assistance. 

How’s the Transition Going So Far?

The work to update the world to post-quantum is well on its way. NIST finalized the standards for post-quantum cryptographic algorithms back in 2024. The larger platforms, websites, and hosting providers have already updated their algorithms, so even now, you’re probably already using post-quantum algorithms to access some of the internet. Measurements vary pretty widely, but up to about 4 in 10 websites currently support a post-quantum key exchange.

There’s still some work to be done in figuring out how to make the needed changes—for example, the way you find out a website’s public key to make HTTPS possible is being reworked to make room for larger signatures. Some technologies are just coming to market, like the post-quantum root of trust available now in some Chromebooks. In practice, this means that as you think about replacing your current devices in the next few years, you may want to check if you’re picking up hardware that has post-quantum support, if those specific protections are required for your threat model.

For the areas that still need updating, how much can we expect to actually get ready by the new deadline? It’s likely that not every cryptographically-capable device and deployment will be ready in time, and hardware with hard-coded certificates will probably be the last to update. We saw that happen when SHA-1 was deprecated; Point of Sale systems in particular were late adopters. While governments and large companies with quantum computers may not be interested in stealing money from cash registers, they will be interested in accessing secrets about people’s private lives. That’s why it’s so important that everyone does their part to upgrade, to protect the details of private communications and browsing. 

And there’s a good chance that older devices that won’t receive quantum-resistant updates were probably vulnerable to some other attack already. Quantum computation is just one type of attack on cryptography that’s notable for the scale of migration required, and how every public-key cryptosystem and authentication scheme has to do the work to prepare. That’s not a difference in kind but a difference in scale, and some systems will inevitably be left behind.

Quantum preparedness hits different industries and services in different ways, but services that handle communications and financial information are particularly susceptible to risk, and need to act quickly to protect the privacy and security of billions of people.

Learning with audiobooks

MIT Latest News - Thu, 04/09/2026 - 2:00pm

Millions of students nationwide use text-supplemented audiobooks, learning tools that are thought to help those who struggle with reading keep up in the classroom. A new study from scientists at MIT’s McGovern Institute for Brain Research finds that many students do benefit from the audiobooks, gaining new vocabulary through the stories they hear. But study participants learned significantly more when audiobooks were paired with explicit one-on-one instruction — and this was especially true for students who were poor readers. The group’s findings were reported on March 17 in the journal Developmental Science.

“It is an exciting moment in this ed-tech space,” says Grover Hermann Professor of Health Sciences and Technology John Gabrieli, noting a rapid expansion of online resources meant to support students and educators. “The admirable goal in all this is: Can we use technology to help kids progress, especially kids who are behind for one reason or another?” His team’s study — one of few randomized, controlled trials to evaluate educational technology — suggests a nuanced approach is needed as these tools are deployed in the classroom. “What you can get out of a software package will be great for some people, but not so great for other people,” Gabrieli says. “Different people need different levels of support.” Gabrieli is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute. 

Ola Ozernov-Palchik and Halie Olson, scientists in Gabrieli’s lab, launched the audiobook study in 2020, when most schools in the United States had closed to slow the spread of Covid-19. The pandemic meant the researchers would not be able to ask families to visit an MIT lab to participate in the study — but it also underscored the urgency of understanding which educational technologies are effective, and for whom.

“What we were really concerned about as the pandemic hit is that the types of gaps that we see widen through the summers — the summer slide that affects poor readers and disadvantaged children to a greater extent — would be amplified by the pandemic,” says Ozernov-Palchik. Many educational technologies purport to ameliorate these gaps. But, Ozernov-Palchik says, “fewer than 10 percent of educational technology tools have undergone any type of research. And we know that when we use unproven methods in education, the students who are most vulnerable are the ones who are left further and further behind.”

So the team designed a study that could be done remotely, involving hundreds of third- and fourth-graders around the country. They focused on evaluating the impact of audiobooks on children’s vocabularies, because vocabulary knowledge is so important for educational success. Ozernov-Palchik explains that books are important for exposing children to new words, and when children miss out on that experience because they struggle to read, they can fall further behind in school.

Audiobooks allow students to access similar content in a different way. For their study, the researchers partnered with Learning Ally, an organization that produces audiobooks synchronized with highlighted text on a computer screen, so students can follow along as they listen.

“The idea is, they’re going to learn vocabulary implicitly through accessing those linguistically rich materials,” Ozernov-Palchik says. But that idea was untested. In contrast, she says, “we know that really what works in education, especially for the most vulnerable students, is explicit instruction.”

Before beginning their study, Ozernov-Palchik and Olson trained a team of online tutors to provide that explicit instruction. The tutors — college students with no educational expertise — learned how to apply proven educational methods to support students’ learning and understanding of challenging new words they encountered in their audiobooks.

Students in the study were randomly assigned to an eight-week intervention. Some were asked to listen to Learning Ally audiobooks for about 90 minutes a week. Another group received one-on-one tutoring twice a week, in addition to listening to audiobooks. A third group, in which students participated in mindfulness practice without using audiobooks or receiving tutoring, served as a control.

A diverse group of students participated, spanning different reading abilities and socioeconomic backgrounds. The study’s remote design — with flexibly scheduled testing and tutoring sessions conducted over Zoom — helped make that possible. “I think the pandemic pushed researchers to rethink how we might use these technologies to make our research more accessible and better represent the people that we’re actually trying to learn about,” says Olson, a postdoc who was a graduate student in Gabrieli’s lab.

Testing before and after the intervention showed that overall, students in the audiobooks-only group gained vocabulary. But on their own, the books did not benefit everyone. Children who were poor readers showed no improvement from audiobooks alone, but did make significant gains in vocabulary when the audiobooks were paired with one-on-one instruction. Even good readers learned more vocabulary when they received tutoring, although the differences for this group were less dramatic.

Individualized, one-on-one instruction can be time-consuming, and may not be routinely paired with audiobooks in the classroom. But the researchers say their study shows that effective instruction can be provided remotely, and you don’t need highly trained professionals to do it.

For students from households with lower socioeconomic status, the researchers found no evidence of significant gains, even when audiobooks were paired with explicit instruction — further emphasizing that different students have different needs. “I think this carefully done study is a note of caution about who benefits from what,” Gabrieli says.

The researchers say their study highlights the value and feasibility of objectively evaluating educational technologies — and that effort will continue. At Boston University, where she is a research assistant professor, Ozernov-Palchik has launched a new initiative to evaluate artificial intelligence-based educational tools’ impacts on student learning. 

A philosophy of work

MIT Latest News - Thu, 04/09/2026 - 2:00pm

What makes work valuable? Michal Masny, the NC Ethics of Technology Postdoctoral Fellow in the MIT Department of Philosophy, investigates the role work plays in our lives and its impact on our well-being. 

Masny sees numerous benefits to work, beyond a paycheck. It’s a space for people to develop excellence at something, make a social contribution, gain social recognition, and create and sustain community. 

“Consider a future in which we shorten the work week, or one in which we eliminate work altogether,” Masny says. “I don’t believe either of these scenarios would be unambiguously good for everyone.”

“Work is both necessary and positively valuable,” he argues, further suggesting that our lives might be worsened if we were to eliminate work completely. “There can be optimal combinations of work and leisure time.”

Masny is completing his two-year term in the NC Ethics of Technology Fellowship at the end of the spring semester. In addition to advancing his research, Masny has been working to foster dialogue and educate students on issues at the intersection of philosophy and computing. This semester, Masny is teaching an undergraduate course, 24.131 (Ethics of Technology).

Masny advocates for an updated approach to educating complete, socially aware students. “I want to create scientists who think about their projects and potential outcomes as lawyers and philosophers might, and vice versa,” he says. Masny argues for the importance of eliminating the “wisdom gap” between these groups, citing scientist Carl Sagan’s warning about the dangers of becoming “powerful without becoming commensurately wise” as scientific and technological advances continue.

“The traditional division of labor is that scientists and engineers invent new technologies, and then philosophers and lawyers evaluate and regulate them,” he continues. “But the pace at which new technologies are invented and deployed has made this division of labor untenable.” 

Established in 2021 with support from the NC Cultural Foundation, the fellowship was created with the goal of advancing critical discourse and research in the ethics of technology and AI at MIT, and of making important research and information available to the global community. 

Venture capitalist Songyee Yoon, founder and managing partner of AI-focused investment firm Principal Venture Partners and a supporter of the NC Ethics of Technology Fellowship, believes technology and scientific discovery are among humanity’s most valuable public goods, and artificial intelligence represents the most consequential technology of our time. 

“If we want the fabric of our society to be built responsibly, we must train our builders upstream, at the very moment they begin learning to design and scale technology. There is no better place to begin this work than MIT,” she says. “Supporting the Ethics of Technology Fellows Program was born from that conviction, and I am deeply encouraged to see it embraced at MIT.”

“In philosophy, you’re supposed to question everything”

Masny arrived at MIT in fall 2024, following a year as a postdoc at the Kavli Center for Ethics, Science, and the Public at the University of California at Berkeley. Originally from Poland, Masny received his PhD in philosophy from Princeton University after completing studies at Oxford University and the University of Warwick in the United Kingdom. 

He works mainly in value theory, ethics of technology, and social and political philosophy. His current research interests include the nature of human and animal well-being, our obligations to future generations, the risk of human extinction, the future of work, and anti-aging technology. 

During his tenure in the fellowship, Masny has published several research articles on ethical issues concerning the future of humanity — a topic closely relevant to thinking about the existential risks of AI development and deployment. 

“In philosophy, you’re supposed to question everything,” he says.  

Masny’s work in the fellowship continues a tradition of collaborative investigation and exploration that MIT encourages and celebrates. In fall 2024, Masny co-taught an introductory undergraduate course, STS.006J/24.06J (Bioethics), with Robin Scheffler, an associate professor in the Program in Science, Technology, and Society.

During the 2024-25 academic year, Masny led a student research group, “Deepfakes: Ethical, Political, and Epistemological Issues,” as a part of the Social and Ethical Responsibilities of Computing (SERC) Scholars Program. The group explored the ethical, political, and epistemological dimensions of concerns over misleading deepfakes, and how they can be mitigated.

Students in Masny’s cohort spent spring 2025 working in small groups on a number of projects and presented their findings in a poster session during the MIT Ethics of Computing Research Symposium at the MIT Schwarzman College of Computing.

In summer 2025, Masny assisted with a summer course in philosophy, 24.133/134 (Experiential Ethics), in which students subject their computer science and engineering projects to ethical scrutiny with the help of trained philosophers. 

He’s encouraged by the opportunities to test his ideas and share them with people who can help refine and improve them. 

Communities of practice and engagement

When considering the value of his experience at MIT, Masny lauds the philosophy department and the opportunities to collaborate with so many different kinds of scholars. To answer the kinds of questions his research uncovers, he says, you must range further afield. He values the space MIT creates for broad inquiry while also seeking connections between his findings on work, its value, and the human impact of technology on our social lives. 

“Typically, undergraduate philosophy courses include two hour-long lectures followed by discussion; a lecture is like an audiobook,” he says. Instead, he believes, they should be more like listening to a podcast or watching a talk show. 

“I want the class to be an event in a student’s schedule,” he continues. 

Masny is also considering how to integrate valuable philosophical tools into life outside the classroom. Philosophy and research can support other kinds of inquiry. Developing philosophers’ mindsets is a net positive, by his reckoning. Designing better questions, for example, can lead to better, more insightful, more accurate answers. It can also improve students’ abilities to identify challenges.

Masny will begin teaching at the University of Colorado at Boulder in fall 2026, and wants to test new ideas while continuing his research into the value of work. 

Kieran Setiya, the Peter de Florez Professor in Philosophy and head of the Department of Linguistics and Philosophy, says the NC Ethics of Technology Postdoctoral Fellowship has allowed MIT to bring in a series of exceptional young philosophers working at the intersection of ethics and AI, studying the systemic effects of new computing technologies and the moral, social, and political challenges they pose.

“This is just the kind of applied interdisciplinary thinking we need to support and sustain at MIT,” he adds.

Slice and dice

MIT Latest News - Thu, 04/09/2026 - 2:00pm

What if the Trojan horse had been pulled to pieces, revealing the ruse and fending off the invasion, just as it entered the gates of Troy?

That’s an apt description of a newly characterized bacterial defense system that chops up foreign DNA.

Bacteria and the viruses that infect them, bacteriophages — phages for short — are ceaselessly at odds, with bacteria developing methods to protect themselves against phages that are constantly striving to overcome those safeguards.

New research from the Department of Biology at MIT, recently published in Nature, describes a defense system that is integrated into the protective membrane that encapsulates bacteria. SNIPE, which stands for surface-associated nuclease inhibiting phage entry, contains a nuclease domain that cleaves genetic material, chopping the invading phage genome into harmless fragments before it can appropriate the host’s molecular machinery to make more phages. 

Daniel Saxton, a postdoc in the Laub Lab and the paper’s first author, was initially drawn to studying this bacterial defense system in E. coli, in part because it is highly unusual to have a nuclease that localizes to the membrane, as most nucleases are free-floating in the cytoplasm, the gelatinous fluid that fills the space inside cells.

“The other thing that caught my attention is that this is something we call a direct defense system, meaning that when a phage infects a cell, that cell will actually survive the attack,” Saxton says. “It’s hard to fend off a phage directly in a cell and survive — but this defense system can do it.” 

Light it up

For Saxton, the project came into focus during a fluorescence-based experiment in which viral genetic material would light up if it successfully penetrated the bacteria. 

“SNIPE was obliterating the phage DNA so fast that we couldn’t even see a fluorescent spot,” Saxton recalls. “I don’t think I’ve ever seen such an effective defense system before — you can barrage the bacteria with hundreds of phage per cell, but SNIPE is like god-tier protection.”

When the nuclease domain of SNIPE was mutated so it couldn’t chop up DNA, fluorescent spots appeared as usual, and the bacteria succumbed to the phage infection. 

Bacteria maintain tight control over all their defense systems, lest they be turned against their host. Some systems remain dormant until they flare up, for example, to halt all protein translation in the cell, while others can distinguish between bacterial DNA and foreign, invading phage DNA. Only two mechanisms in the latter category had been characterized before researchers uncovered SNIPE. 

“Right now, the phage field is at a really interesting spot where people are discovering phage defense systems at a breakneck pace,” Saxton says. 

Problems at the periphery

Saxton says they had to approach the work in a somewhat roundabout way because there are currently no published structures depicting all the steps of phage genome injection. Studying processes at the membrane is challenging: Membranes are dense and chaotic, and phage genome injection is a highly transient process, lasting only a few minutes. 

SNIPE seems to discern viral DNA by interacting with proteins the phage uses to tunnel through the bacteria’s protective membrane. This “subcellular localization,” according to Saxton, may also prevent SNIPE from inadvertently chopping up the bacteria’s own genetic material.

The model outlined in the paper is that one region of SNIPE binds to a bacterial membrane protein called ManYZ, while another region likely binds to the tape measure protein from the phage. 

The tape measure protein got its name because it determines the length of the phage tail — the part of the phage between the small, leglike protrusions and the bulbous head, which contains the phage’s genetic material. The researchers revealed that the phage’s tape measure protein enters the cytoplasm during injection, a phenomenon that had not been physically demonstrated before. 

There may also be other proteins or interactions involved. 

“If you shunt the phage genome injection through an alternate pathway that isn’t ManYZ, suddenly SNIPE doesn’t defend against the phage nearly as well,” Saxton says. “It’s unclear exactly how these proteins interact, but we do know that these two proteins are involved in this genome injection process.” 

Future directions

Saxton hopes that future work will expand our understanding of what occurs during phage genome injection and uncover the structures of the proteins involved, especially the tunnel complex in the membrane through which phages insert their genome.

Members of the Laub Lab are already collaborating with another lab to determine the structure of SNIPE. In the meantime, Saxton has been working on a new defense system in which molecular mimicry — bacterial proteins imitating phage proteins — may play a role. 

Michael T. Laub, the Salvador E. Luria Professor of Biology and a Howard Hughes Medical Institute investigator, notes that one of the breakthrough experiments for demonstrating how SNIPE works came from a brainstorming session at a lab retreat.

“Daniel and I were kind of stuck with how to directly measure the effect of SNIPE during infection, but another postdoc in the lab, Ian Roney, who is a co-author on the paper, came up with a very clever idea that ultimately worked perfectly,” Laub recalls. “It’s a great example of how powerful internal collaborations can be in pushing our science forward.”

Comparison Shopping Is Not a (Computer) Crime

EFF: Updates - Thu, 04/09/2026 - 1:20pm

As long as people have had more than one purchasing option, they’ve been comparing those options and looking for bargains. Online shoppers are no exception; in fact, one of the potential benefits of the internet is that it expands our options for everything from car rentals to airline tickets to dish soap. New AI tools can make the process even easier. These tools could provide some welcome relief for consumers facing sky-high prices that many cannot afford.

Unfortunately, Amazon is trying to block these helpful new tools, which can steer shoppers towards competitors. Taking a page from Facebook and Ryanair, it is trying to use computer crime laws to do so.

Amazon’s target is Perplexity, which makes an AI-enabled web browser, called Comet, that allows users to browse the web as they normally would, but can also perform certain actions on the user’s behalf. For example, a user could ask Comet to find the best price on a 24-pack of toilet paper, and if satisfied with the results, have the browser order it. Amazon claims that Perplexity violated the Computer Fraud and Abuse Act (CFAA) by building a tool that helps users access information on Amazon and engage with the site.

Unfortunately, a federal district court agreed. The court’s fundamental mistake: relying on the Ninth Circuit’s misguided decision in Facebook v. Power Ventures, rather than that court’s much better and more applicable reasoning in hiQ Labs.

Perplexity has appealed to the Ninth Circuit. As we explain in an amicus brief filed in support, the district court’s mistake, if affirmed, could lead to myriad unintended consequences. Overbroad readings of the CFAA have undermined research, security, competition, and innovation. For years, we’ve worked to limit its scope to Congress’s original intention: actual hacking that bypasses computer security. It should have nothing to do with Amazon’s claims here, not least because most of Amazon’s website is publicly available.

The court’s approach would be especially dangerous for journalists and academic researchers. Researchers often create a variety of testing accounts. For example, if they’re researching how a service displays housing offers, they may create separate accounts associated with different race, gender, or language settings. These techniques may be adversarial to the company, but they shouldn’t be illegal. Yet under the court’s opinion, a company that disagrees with this sort of research can do more than ban the researchers from using the site; it can render that research criminal merely by sending a letter notifying the researchers that they’re not authorized to use the service in this way.

A broad reading of CFAA in this case would also undermine competition by enabling companies to limit data scraping, effectively cutting off one of the ways websites offer tools to compare prices and features.

The Ninth Circuit should follow Van Buren’s lead and interpret the CFAA narrowly, as Congress intended. Website owners do not need new shields against independent accountability.

Related Cases: Facebook v. Power Ventures

EFF is Leaving X

EFF: Updates - Thu, 04/09/2026 - 12:25pm

After almost twenty years on the platform, EFF is logging off of X. This isn’t a decision we made lightly, but it might be overdue. The math hasn’t worked out for a while now.

The Numbers Aren’t Working Out

We posted to Twitter (now known as X) five to ten times a day in 2018. Those tweets garnered somewhere between 50 and 100 million impressions per month. By 2024, our 2,500 X posts generated around 2 million impressions each month. Last year, our 1,500 posts earned roughly 13 million impressions for the entire year. To put it bluntly, an X post today receives less than 3% of the views a single tweet delivered seven years ago. 

We Expected More

When Elon Musk acquired Twitter in October 2022, EFF was clear about what needed fixing.

We called for: 

  • Transparent content moderation: Publicly shared policies, clear appeals processes, and renewed commitment to the Santa Clara Principles
  • Real security improvements: Including genuine end-to-end encryption for direct messages
  • Greater user control: Giving users and third-party developers the means to control the user experience through filters and interoperability

Twitter was never a utopia. We've criticized the platform for about as long as it’s been around. Still, Twitter did deserve recognition from time to time for vociferously fighting for its users’ rights. That changed. Musk fired the entire human rights team and laid off staffers in countries where the company previously fought off censorship demands from repressive regimes. Many users left. Today we're joining them. 

"But You're Still on Facebook and TikTok?" 

Yes. And we understand why that looks contradictory. Let us explain. 

EFF exists to protect people’s digital rights. Not just the people who already value our work, have opted out of surveillance, or have already migrated to the fediverse. The people who need us most are often the ones most embedded in the walled gardens of the mainstream platforms and subjected to their corporate surveillance. 

Young people, people of color, queer folks, activists, and organizers use Instagram, TikTok, and Facebook every day. These platforms host mutual aid networks and serve as hubs for political organizing, cultural expression, and community care. Just deleting the apps isn't always a realistic or accessible option, and neither is pushing every user to the fediverse when there are circumstances like:

  • You own a small business that depends on Instagram for customers.
  • Your abortion fund uses TikTok to spread crucial information.
  • You're isolated and rely on online spaces to connect with your community.

Our presence on Facebook, Instagram, YouTube, and TikTok is not an endorsement. We've spent years exposing how these platforms suppress marginalized voices, enable invasive behavioral advertising, and flag posts about abortion as dangerous. We’ve also taken action in court, in legislatures, and through direct engagement with their staff to push them to change poor policies and practices.

We stay because the people on those platforms deserve access to information, too. We stay because some of our most-read posts are the ones criticizing the very platform we're posting on. We stay because the fewer steps between you and the resources you need to protect yourself, the better. 

We'll Keep Fighting. Just Not on X

When you go online, your rights should go with you. X is no longer where the fight is happening. The platform Musk took over was imperfect but impactful. What exists today is something else: diminished, and increasingly de minimis.

EFF takes on big fights, and we win. We do that by putting our time, skills, and our members’ support where they will effect the most change. Right now, that means Bluesky, Mastodon, LinkedIn, Instagram, TikTok, Facebook, YouTube, and eff.org. We hope you follow us there and keep supporting the work we do. Our work protecting digital rights is needed more than ever before, and we’re here to help you take back control.

A new type of electrically driven artificial muscle fiber

MIT Latest News - Thu, 04/09/2026 - 11:00am

Muscles are remarkably effective systems for generating controlled force, and engineers developing hardware for robots or prosthetics have long struggled to create analogs that can approach their unique combination of strength, rapid response, scalability, and control. But now, researchers at the MIT Media Lab and Politecnico di Bari in Italy have developed artificial muscle fibers that come closer to matching many of these qualities.

Like the fibers that bundle together to form biological muscles, these fibers can be arranged in different configurations to meet the demands of a given task. Unlike conventional robotic actuation systems, they are compliant enough to interface comfortably with the human body and operate silently without motors, external pumps, or other bulky supporting hardware.

The new electrofluidic fiber muscles — electrically driven actuators built in fiber format — are described in a recent paper published in Science Robotics. The work is led by Media Lab PhD candidate Ozgun Kilic Afsar; Vito Cacucciolo, a professor at the Politecnico di Bari; and four co-authors.

The new system brings together two technologies, Afsar explains. One is a fluidically driven artificial muscle known as a thin McKibben actuator, and the other is a miniaturized solid-state pump based on electrohydrodynamics (EHD), which can generate pressure inside a sealed fluid compartment without moving parts or an external fluid supply.

Until now, most fluid-driven soft actuators have relied on external “heavy, bulky, oftentimes noisy hydraulic infrastructure,” Afsar says, “which makes them difficult to integrate into systems where mobility or compact, lightweight design is important.” This has created a fundamental bottleneck in the practical use of fluidic actuators in real-world applications.

The key to breaking through that bottleneck was the use of integrated pumps based on electrohydrodynamic principles. These millimeter-scale, electrically driven pumps generate pressure and flow by injecting charge into a dielectric fluid, creating ions that drag the fluid along with them. Weighing just a few grams each and not much thicker than a toothpick, they can be fabricated continuously and scaled easily. “We integrated these fiber pumps into a closed fluidic circuit with the thin McKibben actuators,” Afsar says, noting that this was not a simple task given the different dynamics of the two components.

A key design strategy was to pair these fibers in what are known as antagonistic configurations. Cacucciolo explains that this is where “one muscle contracts while another elongates,” as when you bend your arm and your biceps contract while your triceps stretch. In their system, a millimeter-scale fiber pump sits between two similarly scaled McKibben actuators, driving fluid into one actuator to contract it while simultaneously relaxing the other.

“This is very much reminiscent of how biological muscles are configured and organized,” Afsar says. “We didn’t choose this configuration simply for the sake of biomimicry, but because we needed a way to store the fluid within the muscle design.” The need for an external reservoir open to the atmosphere has been one of the main factors limiting the practical use of EHD pumps in robotic systems outside the lab. By pairing two McKibben fibers in line, with a fiber pump between them to form a closed circuit, the team eliminated that need entirely.

Another key finding was that the muscle fibers needed to be pre-pressurized, rather than simply filled. “There is a minimum internal system pressure that the system can tolerate,” Afsar says, “below which the pump can degrade or temporarily stop working.” This happens because of cavitation, in which vapor bubbles form when the pressure at the pump inlet drops below the vapor pressure of the liquid, eventually leading to dielectric breakdown.

To prevent cavitation, they applied a “bias” pressure from the outset so that the pressure at the fiber pump inlet never falls below the liquid’s vapor pressure. The magnitude of this bias pressure can be adjusted depending on the application. “To achieve the maximum contraction the muscle can generate, we found there is a specific bias pressure range that is optimal,” she says. “If you want to configure the system for faster response, you might increase that bias pressure, though with some reduction in maximum contraction.”
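The bias-pressure logic above reduces to a simple inequality: the pump-inlet pressure, the bias pressure minus the worst-case suction drop during actuation, must stay above the liquid’s vapor pressure. A minimal sketch of that sizing check, with all numerical values purely illustrative (not taken from the paper):

```python
def min_bias_pressure(p_vapor_kpa, max_suction_drop_kpa, margin_kpa=5.0):
    """Smallest bias pressure (kPa) that keeps the pump inlet above the
    liquid's vapor pressure even at the worst-case suction drop."""
    return p_vapor_kpa + max_suction_drop_kpa + margin_kpa

# Hypothetical dielectric fluid and operating point (illustrative only)
p_vapor = 0.5        # kPa, vapor pressure of the working liquid
suction_drop = 20.0  # kPa, worst-case inlet pressure drop during contraction
p_bias = min_bias_pressure(p_vapor, suction_drop)

# Inlet pressure never falls below vapor pressure, so no cavitation
assert p_bias - suction_drop > p_vapor
```

Raising the margin (or the bias itself) trades toward faster response, as the article notes, at some cost in maximum contraction.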

Cacucciolo adds that most of today’s robotic limbs and hands are built around electric servo motors, whose configuration differs fundamentally from that of natural muscles. Servo motors generate rotational motion on a shaft that must be converted into linear movement, whereas muscle fibers naturally contract and extend linearly, as do these electrofluidic fibers. 

“Most robotic arms and humanoid robots are designed around the servo motors that drive them,” he says. “That creates integration constraints, because servo motors are hard to package densely and tend to concentrate mass near the joints they drive. By contrast, artificial muscles in fiber form can be packed tightly inside a robot or exoskeleton and distributed throughout the structure, rather than concentrated near a joint.”

These electrofluidic muscles may be especially useful for wearable applications, such as exoskeletons that help a person lift heavier loads or assistive devices that restore or augment dexterity. But the underlying principles could also apply more broadly. “Our findings extend to fluid-driven robotic systems in general,” Cacucciolo says. “Wherever fluidic actuators are used, or where engineers want to replace external pumps with internal ones, these design principles could apply across a wide range of fluid-driven robotic systems.”

This work “presents a major advancement in fiber-format soft actuation,” which “addresses several long-standing hurdles in the field, particularly regarding portability and power density,” says Herbert Shea, a professor in the Soft Transducers Laboratory at Ecole Polytechnique Federale de Lausanne in Switzerland, who was not associated with this research. “The lack of moving parts in the pump makes these muscles silent, a major advantage for prosthetic devices and assistive clothing,” he says.

Shea adds that “this high-quality and rigorous work bridges the gap between fundamental fluid dynamics and practical robotic applications. The authors provide a complete system-level solution — characterizing the individual components, developing a predictive physical model, and validating it through a range of demonstrators.”

In addition to Afsar and Cacucciolo, the team also included Gabriele Pupillo and Gennaro Vitucci at Politecnico di Bari and Wedyan Babatain and Professor Hiroshi Ishii at the MIT Media Lab. The work was supported by the European Research Council and the Media Lab’s multi-sponsored consortium.

Bridging space research and policy

MIT Latest News - Thu, 04/09/2026 - 11:00am

While earning her dual master’s degrees in aeronautics and astronautics and public policy, Carissma McGee SM ’25 learned to navigate between two seemingly distinct worlds, bridging rigorous technical analysis and policy decisions.

As an undergraduate congressional intern and researcher, she saw a persistent gap in space policymaking. Policymakers often lacked technical expertise, while researchers were rarely involved in increasingly complex questions surrounding intellectual property and international collaboration in space.

Her work on intellectual property frameworks for space collaborations directly addresses that gap, combining expertise in gravitational microlensing and space telescope operations with policy analysis to tackle emerging governance challenges.

“I want to bring an expert level of science into the rooms where policy decisions are made,” says McGee, now a doctoral student in aeronautics and astronautics. “That perspective is critical for shaping the future of research and exploration.”

Likewise, she wants to bring her expertise in public policy into the lab.

“I enjoy being able to ask questions about intellectual property, territorial claims, knowledge transfer, or allocation of resources early on in a research project,” adds McGee.

McGee’s fascination with space started during her high school years in Delaware, when she first volunteered at a local observatory and then interned at the NASA Goddard Space Flight Center in Maryland.

Following high school, McGee attended Howard University. She was selected to participate in the Karsh STEM Scholars Program, a full-ride scholarship track for students committed to working continuously toward earning doctoral degrees. Howard, which holds an R1 research classification from the Carnegie Foundation, is in close proximity to the Goddard Space Flight Center, as well as the American Astronomical Society and the D.C. Space Grant Consortium.

In 2020, after her first year at Howard, the Covid-19 pandemic sent McGee back to her hometown in Delaware. As it turned out, that gave her an opportunity to work with her local congresswoman, Lisa Blunt Rochester, then a U.S. representative. In addition to supporting the congresswoman’s constituents, she drafted dozens of letters related to STEM education and energy reform.

Working in government gave McGee an opportunity to use her voice to “advocate for astronomy and astrophysics with the American Astronomical Society, advocate for space sciences, and for science representation.”

As an undergraduate, McGee also conducted research linking computational physics and astronomy, working with both NASA’s Jet Propulsion Laboratory and Yale University’s Department of Astronomy. She also continued research begun in 2021 with the Harvard and Smithsonian Center for Astrophysics’ Black Hole Initiative, contributing to work associated with the Event Horizon Telescope.

When she visited MIT in 2023, McGee was struck by the Institute’s openness to interdisciplinary work and support of her interest in combining aeronautics and astronautics with policy.

Once at MIT, she started working in the Space, Telecommunications, Astronomy, and Radiation Laboratory (STAR Lab) with advisor Kerri Cahoy, professor of aeronautics and astronautics. McGee says she experienced a great deal of freedom to craft her own program.

“I was drawn to the lab’s work on satellite missions and CubeSats, and excited to discover that I could pursue exoplanet astrophysics research within this framework and that submitting a dual thesis or focusing on astrophysics applications was possible,” says McGee. “When I expressed interest in participating in the Technology [and] Policy Program for a dual thesis in a framework for space policy, my advisors encouraged me to explore how we could integrate these diverse interests into a path forward.”

In 2024, McGee was awarded a MathWorks Fellowship to pursue research associated with the Nancy Grace Roman Space Telescope and join a NASA mission.

“It was just amazing to join the exoplanet group at NASA,” she says. “I had a front-row seat to see how real researchers and workers navigate complex problems.”

McGee credits MathWorks with helping fellows to “be at the forefront of knowledge and shaping innovation.”

One of her proudest academic accomplishments is PyLIMASS, a software system she developed with collaborators at Louisiana State University, the Ohio State University, and NASA’s Goddard Space Flight Center. The tool enables more accurate mass and distance estimates in gravitational microlensing events, helping the Roman Space Telescope project meet its precision goals for studying exoplanets.

“To build software that didn’t previously exist — and to know it will be used for the Roman mission — is incredibly exciting,” McGee says.

In May 2025, McGee graduated with dual master’s degrees in aeronautics and astronautics and technology and policy. That same month, she presented her research at the American Astronomical Society meeting in Anchorage, Alaska, and at the Technology Management and Policy Conference in Portugal.

McGee remained at MIT to pursue her doctoral degree. Last fall, as an MIT BAMIT Community Advancement Program and Fund Fellow, she hosted a daylong conference for STEM students focused on how intellectual property frameworks shape technical fields.

McGee’s accomplishments and contributions have been celebrated with a number of honors recently. In 2026, she was named Miss Black Massachusetts United States, was recognized among MIT’s Graduate Students of Excellence, and received the MIT MLK Leadership Award in recognition of her service, integrity, and community impact.

Beyond her academic work, McGee is active across campus. She teaches Pilates with MIT Recreation, participates in the Graduate Women in Aerospace Engineering group, and serves as a graduate resident assistant in an undergraduate dorm on East Campus.

She credits the AeroAstro graduate community with keeping her momentum going.

“Even if we’re tired, there’s this powerful camaraderie among AeroAstro graduate students working together. Seeing my peers push through similar research milestones and solve daunting problems motivates you to advance beyond the finish line to further developments in the field.”

New technique makes AI models leaner and faster while they’re still learning

MIT Latest News - Thu, 04/09/2026 - 9:00am

Training a large artificial intelligence model is expensive, not just in dollars, but in time, energy, and computational resources. Traditionally, obtaining a smaller, faster model either requires training a massive one first and then trimming it down, or training a small one from scratch and accepting weaker performance. 

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Max Planck Institute for Intelligent Systems, European Laboratory for Learning and Intelligent Systems, ETH, and Liquid AI have now developed a new method that sidesteps this trade-off entirely, compressing models during training, rather than after.

The technique, called CompreSSM, targets a family of AI architectures known as state-space models, which power applications ranging from language processing to audio generation and robotics. By borrowing mathematical tools from control theory, the researchers can identify which parts of a model are pulling their weight and which are dead weight, before surgically removing the unnecessary components early in the training process.

"It's essentially a technique to make models grow smaller and faster as they are training," says Makram Chahine, a PhD student in electrical engineering and computer science, CSAIL affiliate, and lead author of the paper. "During learning, they're also getting rid of parts that are not useful to their development."

The key insight is that the relative importance of different components within these models stabilizes surprisingly early during training. Using a mathematical quantity called Hankel singular values, which measure how much each internal state contributes to the model's overall behavior, the team showed they can reliably rank which dimensions matter and which don't after only about 10 percent of the training process. Once those rankings are established, the less-important components can be safely discarded, and the remaining 90 percent of training proceeds at the speed of a much smaller model.
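Hankel singular values are a classical quantity from control theory: for a linear state-space system they are computed from the controllability and observability Gramians, and states with tiny values contribute little to input-output behavior. A minimal sketch of that ranking for a small discrete-time LTI system — this illustrates the criterion CompreSSM uses, not the paper’s actual implementation:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def hankel_singular_values(A, B, C):
    # Controllability Gramian P: A P A^T - P + B B^T = 0
    P = solve_discrete_lyapunov(A, B @ B.T)
    # Observability Gramian Q: A^T Q A - Q + C^T C = 0
    Q = solve_discrete_lyapunov(A.T, C.T @ C)
    # Hankel singular values are the square roots of eig(P Q)
    sv = np.sqrt(np.maximum(np.linalg.eigvals(P @ Q).real, 0.0))
    return np.sort(sv)[::-1]  # largest (most important state) first

# Toy stable system (|eigenvalues of A| < 1); values are illustrative
rng = np.random.default_rng(0)
n = 8
A = 0.5 * np.diag(rng.uniform(0.1, 0.9, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

sv = hankel_singular_values(A, B, C)
# Keep only dimensions whose contribution isn't negligible
keep = sv > 1e-3 * sv[0]
```

In this framing, CompreSSM’s early-training decision amounts to computing such a ranking once the ordering has stabilized and truncating the low-energy states before the bulk of training.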

"What's exciting about this work is that it turns compression from an afterthought into part of the learning process itself,” says senior author Daniela Rus, MIT professor and director of CSAIL. “Instead of training a large model and then figuring out how to make it smaller, CompreSSM lets the model discover its own efficient structure as it learns. That's a fundamentally different way to think about building AI systems.”

The results are striking. On image classification benchmarks, compressed models maintained nearly the same accuracy as their full-sized counterparts while training up to 1.5 times faster. A compressed model reduced to roughly a quarter of its original state dimension achieved 85.7 percent accuracy on the CIFAR-10 benchmark, compared to just 81.8 percent for a model trained at that smaller size from scratch. On Mamba, one of the most widely used state-space architectures, the method achieved approximately 4x training speedups, compressing a 128-dimensional model down to around 12 dimensions while maintaining competitive performance.

"You get the performance of the larger model, because you capture most of the complex dynamics during the warm-up phase, then only keep the most-useful states," Chahine says. "The model is still able to perform at a higher level than training a small model from the start."

What makes CompreSSM distinct from existing approaches is its theoretical grounding. Conventional pruning methods train a full model and then strip away parameters after the fact, meaning you still pay the full computational cost of training the big model. Knowledge distillation, another popular technique, requires training a large "teacher" model to completion and then training a second, smaller "student" model on top of it, essentially doubling the training effort. CompreSSM avoids both of these costs by making informed compression decisions mid-stream.

The team benchmarked CompreSSM head-to-head against both alternatives. Compared to Hankel nuclear norm regularization, a recently proposed spectral technique for encouraging compact state-space models, CompreSSM was more than 40 times faster, while also achieving higher accuracy. The regularization approach slowed training by roughly 16 times because it required expensive eigenvalue computations at every single gradient step, and even then, the resulting models underperformed. Against knowledge distillation on CIFAR-10, CompreSSM held a clear advantage for heavily compressed models: At smaller state dimensions, distilled models saw significant accuracy drops, while CompreSSM-compressed models maintained near-full performance. And because distillation requires a forward pass through both the teacher and student at every training step, even its smaller student models trained slower than the full-sized baseline.

The researchers proved mathematically that the importance of individual model states changes smoothly during training, thanks to an application of Weyl's theorem, and showed empirically that the relative rankings of those states remain stable. Together, these findings give practitioners confidence that dimensions identified as negligible early on won't suddenly become critical later.
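The smoothness claim is the standard consequence of Weyl's inequality for singular values, which bounds how far each singular value of a matrix can move under a perturbation. In its textbook form (general statement, not the paper's notation):

```latex
% Weyl's inequality: perturbing M by E shifts each singular value
% by at most the spectral norm of E
\left| \sigma_i(M + E) - \sigma_i(M) \right| \le \lVert E \rVert_2
\qquad \text{for all } i.
```

Applied to the Hankel matrix of the system, each gradient step is a small perturbation E, so every Hankel singular value, and hence each state's measured importance, can only drift gradually over training.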

The method also comes with a pragmatic safety net. If a compression step causes an unexpected performance drop, practitioners can revert to a previously saved checkpoint. "It gives people control over how much they're willing to pay in terms of performance, rather than having to define a less-intuitive energy threshold," Chahine explains.

There are some practical boundaries to the technique. CompreSSM works best on models that exhibit a strong correlation between the internal state dimension and overall performance, a property that varies across tasks and architectures. The method is particularly effective on multi-input, multi-output (MIMO) models, where the relationship between state size and expressivity is strongest. For per-channel, single-input, single-output architectures, the gains are more modest, since those models are less sensitive to state dimension changes in the first place.

The theory applies most cleanly to linear time-invariant systems, although the team has developed extensions for the increasingly popular input-dependent, time-varying architectures. And because the family of state-space models extends to architectures like linear attention, a growing area of interest as an alternative to traditional transformers, the potential scope of application is broad.

Chahine and his collaborators see the work as a stepping stone. The team has already demonstrated an extension to linear time-varying systems like Mamba, and future directions include pushing CompreSSM further into matrix-valued dynamical systems used in linear attention mechanisms, which would bring the technique closer to the transformer architectures that underpin most of today's largest AI systems.

"This had to be the first step, because this is where the theory is neat and the approach can stay principled," Chahine says. "It's the stepping stone to then extend to other architectures that people are using in industry today."

"The work of Chahine and his colleagues provides an intriguing, theoretically grounded perspective on compression for modern state-space models (SSMs)," says Antonio Orvieto, ELLIS Institute Tübingen principal investigator and MPI for Intelligent Systems independent group leader, who wasn't involved in the research. "The method provides evidence that the state dimension of these models can be effectively reduced during training and that a control-theoretic perspective can successfully guide this procedure. The work opens new avenues for future research, and the proposed algorithm has the potential to become a standard approach when pre-training large SSM-based models."

The work, which was accepted as a conference paper at the International Conference on Learning Representations 2026, will be presented later this month. It was supported, in part, by the Max Planck ETH Center for Learning Systems, the Hector Foundation, Boeing, and the U.S. Office of Naval Research.

On Microsoft’s Lousy Cloud Security

Schneier on Security - Thu, 04/09/2026 - 6:51am

ProPublica has a scoop:

In late 2024, the federal government’s cybersecurity evaluators rendered a troubling verdict on one of Microsoft’s biggest cloud computing offerings.

The tech giant’s “lack of proper detailed security documentation” left reviewers with a “lack of confidence in assessing the system’s overall security posture,” according to an internal government report reviewed by ProPublica.

Or, as one member of the team put it: “The package is a pile of shit.”

For years, reviewers said, Microsoft had tried and failed to fully explain how it protects sensitive information in the cloud as it hops from server to server across the digital terrain. Given that and other unknowns, government experts couldn’t vouch for the technology’s security...

At climate contrarian gathering, allies urge Trump to keep Zeldin at EPA

ClimateWire News - Thu, 04/09/2026 - 6:27am
At a Heartland Institute conference, climate contrarians celebrated the rollback of regulations under EPA chief Lee Zeldin while urging President Donald Trump not to elevate him to attorney general, fearing it would stall their agenda.

Alaskan tribes, enviros sue over endangerment finding repeal

ClimateWire News - Thu, 04/09/2026 - 6:26am
The administration's refusal to allow EPA to regulate climate pollution is "akin to a fire department refusing to fight fires," one group said.

Montana drag ban defeat fuels youth fight against Trump energy orders

ClimateWire News - Thu, 04/09/2026 - 6:25am
Climate activists say there is a path for a federal court to hand them even a partial victory against the government's promotion of fossil fuels.

Maine takes half-step toward climate superfund

ClimateWire News - Thu, 04/09/2026 - 6:24am
The statehouse is expected to pass legislation that would assess how much climate impacts are costing Maine. It comes as Vermont and New York defend their own climate superfund laws in court.

Trump admin to renew Biden heat safety program

ClimateWire News - Thu, 04/09/2026 - 6:23am
The move comes as Democratic lawmakers urged OSHA to extend the initiative that led to Biden-era heat-related inspections at workplaces.

March smashes US record as most abnormally hot month, NOAA says

ClimateWire News - Thu, 04/09/2026 - 6:22am
Not only was it the hottest March on record for the U.S., but the amount it was above normal beat any other month in history for the Lower 48 states.

Wildfire report ups pressure on California to overhaul insurance, utilities

ClimateWire News - Thu, 04/09/2026 - 6:21am
The state-commissioned study lays out options from liability overhaul to state-backed insurance.

Emissions reduction bill draws opposition from California homebuilders

ClimateWire News - Thu, 04/09/2026 - 6:21am
SB 1075 would ban local land-use decisions that contribute to poor air quality in disadvantaged communities.

Energy giant Drax pulls out of UK climate plan

ClimateWire News - Thu, 04/09/2026 - 6:20am
Ministers were told to lock the biomass giant into an agreement on carbon capture. The government ignored them — and now Drax has walked away.
