Feed aggregator

The tech revolution that wasn’t

MIT Latest News - Tue, 05/05/2026 - 12:00am

In 1960, engineers at India’s Tata Institute of Fundamental Research (TIFR) built what they called an “Automatic Calculator,” the country’s first working computer. It had the same type of ferrite-core memory as IBM’s world-leading machines, and at a glance, appeared to herald a new age of tech advances in India.

Constructed with a fraction of the resources Western computer engineers had, the TIFRAC, as they called it, was a remarkable feat.

“The people working on it had never really seen an actual functioning computer,” says Dwai Banerjee, an associate professor of science, technology, and society, and the author of a new book about computing in India. “You had this ambitious group of engineers building a state-of-the-art machine with very, very limited resources. The fact they could build this is staggering.”

However, the TIFRAC was never even replicated, let alone produced at scale. The visionaries behind it wanted to turn India into an independent computing nation: a place that would produce its own equipment and become an industry power. Instead, the TIFRAC became a technological cul-de-sac, and India’s tech industry took on a very different shape. Instead of exporting equipment, it exports talent, sending skilled engineers and executives around the globe.

Now Banerjee explores those issues in the book, “Computing in the Age of Decolonization: India’s Lost Technological Revolution,” published by Princeton University Press. In it, he examines the country’s pursuit of technological self-sufficiency, and the global forces that prevailed against this vision. As a result, the country is “the world’s leading provider of inexpensive outsourcing and offshoring services, yet enjoys minimal benefits from more profitable advances in research, manufacturing, and development,” Banerjee writes.

“This book is about understanding how the current landscape of technological power came to be and the unequal way in which power is distributed across the world when it comes to anything to do with computing,” Banerjee says. “Basically, the historical conditions of the mid-20th century period are essential to understanding why the world of computing looks the way it does today.”

Computing and the geopolitics of knowledge

When India became a sovereign nation in 1947, many of its leaders believed “rapid technology-driven industrialization was the only way out of centuries of colonial underdevelopment,” as Banerjee writes. Some leapt into action, such as the remarkable nuclear physicist Homi J. Bhabha, who helped establish the TIFR.

Initially, Indian leaders hoped to gain cooperation from the U.S. and international organizations in making technological advances, but quickly ran into Cold War politics. Computing was heavily bound up with defense matters; India was not always fully aligned with U.S. political interests, so the flow of knowledge from the U.S. to India was distinctly limited.

“This is very much an external constraint story,” Banerjee says. “You need blueprints and not just working papers, and that’s what was guarded by the U.S. for a very long time.”

Still, the TIFR research team toiled away at its computing projects until the TIFRAC was up and running — making national headlines.

“The achievement it represents is mind-boggling,” Banerjee emphasizes. “A computer in the U.S. would have cost more to run than this entire institute in India.”

As Banerjee details in the book, the TIFRAC machine was built to grow. Its engineers matched the speed of IBM machines and planned to import larger ferrite-core memory stacks as their workload expanded. But when IBM released the FORTRAN programming language in 1957, it required four times the memory the TIFRAC machine was equipped with. India’s 1958 foreign exchange crisis then shaped the machine’s fate: The World Bank convened a U.S.-led creditor consortium that conditioned rescue loans on the opening of Indian markets to Western capital. Importing larger memory stacks became unaffordable, rendering the TIFRAC obsolete almost as soon as it was completed.

“It’s a geopolitics-of-knowledge question, not that they made a mistake,” Banerjee says of the Indian engineers. “They didn’t know IBM was about to reshape software.”

Exit IBM, enter services

Though IBM’s jump forward after the release of FORTRAN left the TIFRAC project stalled out, Indian advocates for computer manufacturing did not give up their dream. For one thing, they looked around for partnerships and other ways of moving their domestic tech industry forward. And then in 1978, India, uniquely, banned IBM from the country, on account of its business practices.

That might have set the stage for India’s computer manufacturing industry to flourish. But at the same moment, countervailing forces took hold, including a widespread turn toward the private sector, rather than public-private enterprises, as the engine of economic activity.

“For a moment you have this imagination come to a sort of fruition,” Banerjee observes. “But by the late 1970s and 1980s, there is a new group of people arguing for quick profits through software services, saying that this route feels less painful than setting up manufacturing, R&D, and firms for a decade or more.”

This turn toward private-sector services rather than government-involved manufacturing ultimately became a decisive factor in shaping India’s tech-sector trajectory. Rather than seeking to make machines domestically, the country became part of the global tech-services sector, while many of its engineers migrated to Silicon Valley and other tech hotspots. Global tech firms used their reach to entrench this arrangement, leaving little room for countries to develop independent industries. This is not the outcome India’s leaders and technologists once envisioned.

“It still surprises me because of the one thing India did that no other country in the world managed to do, and that’s kick out IBM,” Banerjee says. “The fact that this vision fades is part of changing government ambition.”

Beyond the mavericks

In writing the book, Banerjee has multiple goals. One is simply shedding more light on the rich details of India’s initial computing efforts. Another is contesting the idea that India somehow naturally found a role providing services and exporting talent; that is not what many people once hoped.

Still another motif in Banerjee’s work is that the history of computing too often centers on innovators who are cast as mavericks, shrugging off conventions to upend business and society — whereas the large-scale forces of global capital and geopolitics matter greatly in technological development.

“This book suggests we often overplay those stories of individual genius, because you can be a genius with all the right ideas, but if you don’t have all the institutions supporting you, it means nothing,” Banerjee says.

Other scholars have praised “Computing in the Age of Decolonization.” Matthew L. Jones, a professor of history at Princeton University, has stated that Banerjee’s book is a “scrupulous accounting of ultimately failed Indian efforts to secure technological sovereignty in the wake of independence,” which “joins the best recent accounts of computing worldwide and transforms how we think through diverse national trajectories through the Cold War and beyond.”

For his part, Banerjee hopes a wide variety of readers will be interested in the book — and recognize that the specific case of India and computing can tell us a lot about the challenges of new types of economic growth in many places.

“India stands in for a lot of countries in the mid-20th century that had recently gained formal political independence and were thinking of ways to catch up with the rest of the advanced industrialized world,” Banerjee says. “But the power structures tied to technological and scientific advancement did not disappear. They were replaced by newer structures, including foreign policy with very specific ideas about what different countries should be doing with regard to technology. That’s where the story starts.”

Biologist Joey Davis explores how cells build complex structures

MIT Latest News - Tue, 05/05/2026 - 12:00am

Ribosomes, the cellular machines that assemble proteins, are made from dozens of proteins and RNA molecules. Putting all of those pieces together is a complex puzzle — one that MIT Associate Professor Joey Davis PhD ’10 revels in trying to solve.

Understanding how these structures form and later break down could help researchers learn more about how disruptions of these fundamental processes can lead to disease. But, as Davis points out, it’s also an interesting biological question.

“Our long-term goal is to really understand how the natural world assembles these huge complexes rapidly and efficiently. It’s a fundamentally interesting question to think about how these things get put together,” he says.

His work has helped reveal that unlike building a house, which happens in a prescribed sequence of steps — pouring the foundation, building the frame, putting on the roof, then doing electrical and plumbing work — ribosomes can be assembled in a more flexible way. Cells can even skip an assembly step and then come back to it later.

“In these natural systems, it seems like the assembly pathways are much more dynamic and flexible,” he says. “It appears that evolution has selected pathways that aren’t strictly ordered in the way we would think about an assembly line, where you always put in one component, then the next, and then the next. We’re excited to understand the selective advantages of such approaches.”

A love of discovery

Davis’ interest in how things are put together developed early in life, inspired by his father, a carpenter who framed houses. During the mid-1980s, the family moved from Colorado to Southern California, where his father worked in construction during a housing boom there.

“I was always interested in building things, which I think probably came from being around my dad and other builders,” Davis says.

As an undergraduate at the University of California at Berkeley, where he majored in computer science and biological engineering, Davis’ interests turned toward smaller scales, in the realm of cells and molecules. During his junior year, he started working in the lab of chemistry professor Michael Marletta, who studies molecular-level biological interactions.

In the lab, Davis investigated how enzymes that contain heme are able to preferentially bind to either oxygen or nitric oxide, two gases that are very similar in structure. That work kindled a love of studying the natural world and pursuing discoveries in fundamental science.

“Being in the Marletta lab and seeing students and postdocs that were really passionate about these problems had a big impact on me,” Davis says. “The goal was to understand the fundamentals of how molecular discrimination works, and the idea of discovery for the sake of discovery was thrilling.”

After graduating from Berkeley, Davis spent another year working in Marletta’s lab, and then a year working odd jobs, before heading to MIT to pursue a PhD in biology. There, he worked with Professor Bob Sauer, now emeritus, who studied the relationship between protein structure and function, with a particular focus on the molecular machines that degrade or remodel proteins.

Davis’ thesis research centered on enzymes called AAA proteases, which remove damaged proteins from cellular membranes and send them to cell organelles that break them down. In addition to studying the structure and function of the proteases, Davis worked on ways to engineer them to tag specific proteins for destruction.

That work led him into synthetic biology, which he used to develop genetic parts that drive production of proteins of interest. Some of those parts ended up being used by the biotech startup Ginkgo Bioworks, where Davis took a job as a senior scientist after graduating.

Working at Ginkgo Bioworks allowed Davis to stay in Boston while his partner finished her PhD. The couple then moved back to California, where Davis worked as a postdoc at Scripps Research, which was home to one of the first direct electron detection cameras for cryo-electron microscopy (cryo-EM). These detectors allow researchers to generate structures with near atomic resolution. At Scripps, Davis began using them to study ribosomes as they were being assembled.

Peering into the ribosome

After joining the MIT faculty in 2017, Davis continued his work on ribosomes and assembled a lab group that includes students from a variety of backgrounds who work together to develop new ways to explore biological phenomena.

“I have a mix of method developers and biologists in the group, and the work from each of them informs each other,” Davis says. “My lab goes back and forth between building sets of tools to answer biological questions, and then as we’re answering those questions, it motivates the next generation of tool development.”

During ribosome assembly, RNA molecules fold themselves into the correct shapes, creating docking sites for proteins to attach. Then, more RNA molecules come in and fold themselves into the structure.

“It’s a beautifully coupled process by which the cell folds hundreds of RNA helices and binds on the order of 50 proteins, and it does it in two minutes from start to finish. E. coli does this 100,000 times per hour, and it’s amazing how rapid and efficient the process is,” Davis says.

Cryo-EM allows scientists to capture this process in minute detail. It can be used to take hundreds of thousands of two-dimensional images of ribosome samples frozen in a thin layer of ice, from different angles. Computer algorithms then piece together these images into a three-dimensional representation of the ribosome.
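The reconstruction idea can be illustrated with a toy analogue. The sketch below is a deliberately simplified, hypothetical example in Python with NumPy — not cryo-EM software or the lab’s pipeline. It sums 1D “shadows” of a 2D disk taken from many angles, then smears each shadow back across the image; where the smears overlap, a blurry density map emerges whose peak sits at the disk. Real reconstructions work in 3D and add filtering and noise handling this sketch omits.

```python
import numpy as np

# Toy unfiltered back-projection: recover a 2D "density" from its 1D projections.
N = 64
yy, xx = np.mgrid[0:N, 0:N]
phantom = ((xx - 40) ** 2 + (yy - 24) ** 2 < 8 ** 2).astype(float)  # disk at row 24, col 40

angles = np.linspace(0, np.pi, 60, endpoint=False)
recon = np.zeros_like(phantom)
center = (N - 1) / 2.0
for theta in angles:
    c, s = np.cos(theta), np.sin(theta)
    # Project: each pixel's signed coordinate along the viewing axis
    t = (xx - center) * c + (yy - center) * s
    bins = np.clip(np.round(t + center).astype(int), 0, N - 1)
    proj = np.bincount(bins.ravel(), weights=phantom.ravel(), minlength=N)
    # Back-project: smear the 1D projection across the whole image
    recon += proj[bins]
recon /= len(angles)

peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)  # should land near the disk center (24, 40)
```

The peak of the summed smears lands where all the projections agree, which is why combining many viewing angles recovers structure that no single 2D image contains.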

To gain insight into how ribosomes are assembled, researchers can stall the process at different points and then analyze the resulting structures. In 2021, Davis’s lab developed a new method called CryoDRGN, which uses neural networks to analyze cryo-EM data and generate the full ensemble of structures that were present in the sample.

This work has shown that when certain steps of ribosome assembly are blocked, many different structures result, suggesting that the assembly can occur in a variety of ways.

In future work, Davis aims to dramatically increase the throughput of cryo-EM to generate datasets of protein structures that could help improve the AI-based models that are now used to predict protein structures.

“There are still huge swaths of sequence space that these models are very poor at predicting, but if we could collect data on those sequences en masse, that could potentially serve as key training data for a next-generation protein structure prediction method that could fill out that space,” he says.

EFF Submission to UK Consultation on Digital ID

EFF: Updates - Mon, 05/04/2026 - 2:35pm

Last September, the United Kingdom’s Prime Minister Keir Starmer announced plans to introduce a new digital ID scheme in the country. The scheme aims to make it easier for people to prove their identities by creating a virtual ID on personal devices with information like names, date of birth, nationality or residency status, and a photo to verify their right to live and work in the country. 

Since then, EFF has joined UK-based civil society organizations in urging the government to reconsider this proposal. In one joint letter from December, sent ahead of Parliament’s debate on a petition signed by 2.9 million people calling for an end to the government’s plans to roll out a national digital ID, EFF and 12 other civil society organizations urged MPs to reject the Labour government’s proposal.

Nevertheless, politicians have continued to explore ways to build out a digital ID system in the country, often fluctuating between different ideas and conceptualisations for such a scheme. In their search for clarity, the government launched a consultation, ‘Making public services work for you with your digital identity,’ seeking views on a proposed national digital ID system in the UK. 

EFF submitted comments to this consultation, focusing on six interconnected issues:

  1. Mission creep
  2. Infringements on privacy rights 
  3. Serious security risks
  4. Reliance on inaccurate and unproven technologies
  5. Discrimination and exclusion
  6. The deepening of entrenched power imbalances between the state and the public.

Even the strongest recommended safeguards cannot resolve these issues, nor the core problem: a mandatory digital ID scheme shifts power dramatically away from individuals and toward the state. Such schemes are pursued as a technological solution to offline problems, but instead allow the state to determine what you can access, not just verify who you are, by functioning as a key that opens or closes doors to essential services and experiences. 

No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. It is essential that the UK government listen to people in the country and say no to digital ID. 

Read our submission in full here.

Rett syndrome study highlights potential for personalized treatments

MIT Latest News - Mon, 05/04/2026 - 2:00pm

Although many studies approach the developmental disorder Rett syndrome as a single condition arising from general loss of function in the gene MECP2, a new study by neuroscientists in The Picower Institute for Learning and Memory at MIT shows that two different mutations of the gene caused many distinct abnormalities in lab cultures. Moreover, correcting key differences made by each mutation required different treatments.

“Individual mutations matter,” says Mriganka Sur, senior author of the new open-access study in Nature Communications and the Newton Professor in the Picower Institute and the Department of Brain and Cognitive Sciences. “This is an approach to personalizing treatment, even for a single-gene disorder.”

The study employed advanced 3D human brain tissue cultures called “organoids” or “minibrains” derived from skin cells or blood cells donated by Rett syndrome patients with each mutation. Lead author Tatsuya Osaki, a Picower Institute research scientist, says that the organoids’ ability to model the specific consequences of each mutation enabled him to gain mutation-specific insights that haven’t emerged in prior studies, where scientists just knocked out MECP2 overall. The organoids also provided a novel opportunity to understand how each mutation affected different cell types and their interactions.

Distinct effects

More than 800 mutations in MECP2 can cause Rett syndrome, but just eight account for more than 60 percent of cases. Sur and Osaki chose one of these, R306C, which involves a difference of just one DNA base pair (916C>T), because it represents 7-8 percent of Rett syndrome cases. The other mutation they chose, V247X, is much rarer and more severe: a single DNA base deletion (705Gdel) cuts off production of the gene’s protein product, leaving the protein not just errant, but incomplete.

In organoids cultured for three months, each mutation produced some common but also sometimes distinct consequences compared to control organoids with non-mutated MECP2. For many of their experiments, the team used “three-photon” microscopes capable of cellular-level resolution all the way through the organoids’ approximate 1 millimeter thickness, resolving both their structure (via “third-harmonic generation” imaging), and the live activity patterns of their neurons (via calcium fluorescence).

For instance, the scientists observed that the V247X organoids exhibited several structural differences from their controls — they were larger and had different thicknesses of various layers — but the R306C ones were much more like their controls. Organoids harboring either mutation exhibited less-developed axon projections from their neurons, compared to their control comparators.

Looking at properties of neural activity and connectivity in the organoids, the scientists found some similar deficits across both mutations. Both showed reduced spiking activity and synchronicity between neurons compared to their controls.

But when the scientists looked at other properties, the organoids started to diverge from each other. In particular, an indication of the efficiency of their network structure called “small-world propensity” (SWP) was decreased in R306C organoids, and increased in V247X ones, compared to controls. This means that both mutations altered the development of typical network structures for information processing, but in different directions.
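Small-world propensity itself is a specific published metric, but the intuition behind “small-world” structure can be sketched with two simpler quantities: clustering (how cliquish local neighborhoods are) and average path length (how few hops separate nodes). The toy Python below is illustrative only — not the study’s analysis pipeline, and the graph size and shortcut edges are arbitrary. It builds a ring lattice and shows that a few long-range shortcuts cut path length while clustering stays high, the hallmark of small-world networks.

```python
from collections import deque
from itertools import combinations

def clustering(adj):
    """Average local clustering coefficient of an undirected graph (dict of sets)."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # count edges among v's neighbors
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all ordered node pairs, via BFS."""
    n, total = len(adj), 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors on each side."""
    return {i: {(i + d) % n for d in range(-k, k + 1) if d} for i in range(n)}

adj = ring_lattice(20, 2)
# a few long-range shortcuts: path length drops, local clustering barely changes
for a, b in [(0, 10), (5, 15), (3, 12)]:
    adj[a].add(b)
    adj[b].add(a)

print(clustering(adj), avg_path_length(adj))
```

In the study’s terms, a shift in network measures like these relative to controls — in either direction — signals that the mutation has altered how the developing circuit organizes itself for information processing.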

To ensure that their results were meaningful for Rett syndrome patients, the team collaborated with Charles Nelson at Boston Children’s Hospital, whose team measured EEG in several children with different Rett mutations. Although the sample was small, the researchers found indications that SWP was altered in the volunteers’ EEG readings, much as in the organoids.

Finally, by labeling excitatory neurons to flash in one color and inhibitory neurons to flash in a different color, the scientists were able to see that connectivity between the different neural types differed significantly from controls in the V247X organoids.

Treatment tests

All the testing showed that each mutation caused several changes in organoid structure, activity, and connectivity, and that the deviations were often particular to the specific mutation.

To understand how these differences emerged, and how they might be corrected, Sur and Osaki’s team turned to examining how the cells in each kind of organoid might be expressing their genes differently than controls. Differences in gene expression often lead to alterations of key molecular pathways in cells that can disrupt their activity and function. Analysis with a technique called single cell RNA sequencing indeed yielded hundreds of differences in each organoid type, where some genes were expressed more than in controls while others were underexpressed.

For instance, the analyses revealed that in R306C organoids a gene called HDAC2 was overexpressed. That protein is known for repressing expression of other genes. Meanwhile, in the V247X organoids, the scientists found reduced expression of genes for some receptors of the inhibitory neurotransmitter GABA. These organoids also showed defects in the function of astrocyte cells, which support many aspects of neural function.

Organoids with either mutation also exhibited aberrations in molecular pathways that enable the development of circuit connections between neurons, called synapses.

Given the specific defects they observed, the scientists decided to treat the organoids with a drug that can inhibit HDAC2 activity and another that increases GABA’s efficacy. The HDAC2 inhibitor restored neuronal activity and SWP to normal levels in the R306C organoids, and the GABA “agonist” baclofen restored SWP to control levels in the V247X organoids.

Osaki notes each of the treatment drugs has already been studied in other disease contexts, meaning they are well-understood drugs that could be repurposed.

Now that the researchers have developed an organoid platform for dissecting individual mutations’ consequences, identifying both their roots and testing treatments, they plan to apply it to studying four more mutations, Sur says, comparing all of them against a standardized control organoid.

In addition to Sur, Osaki, and Nelson, the paper’s other authors are Chloe Delepine, Yuma Osako, Devorah Kranz, April Levin, and Michela Fagiolini.

The National Institutes of Health, a MURI grant, The Freedom Together Foundation, and the Simons Foundation provided support for the research.

Powering 160,000 hours of discovery at MIT.nano

MIT Latest News - Mon, 05/04/2026 - 1:50pm

Each year, more than 1,500 researchers rely on over 200 tools and instruments at MIT.nano to pursue experiments that span MIT’s disciplines, collectively generating 160,000 hours of work across 88,000 instances of tool use. Behind this activity is an operational framework that must discreetly coordinate access, maintain fairness, and keep research moving without friction.

Managing such a dynamic environment requires more than a scheduling calendar. An automated reservation system serves as the connective tissue of the facility, balancing demand across diverse user needs while supporting the practical realities of a shared lab space. Researchers arrive at MIT.nano with different workflows, safety requirements, and administrative needs, yet the system must present a seamless experience. Integration with MIT’s broader digital infrastructure, from onboarding and authentication to safety training and billing, ensures that access is both efficient and compliant, reducing barriers so researchers can focus on their work.

A system for the modern era

Over the past three years, during a period of rapid growth in both equipment and facility usage, MIT.nano undertook a transition to a new platform designed to scale with demand while maintaining operational continuity. The effort reflects an ongoing commitment to evolving infrastructure that supports the pace, complexity, and collaborative spirit of modern research.

The importance of robust laboratory management systems has long been recognized at MIT. For decades, researchers in the Microsystems Technology Laboratories (MTL) and the Materials Research Laboratory relied on the CORAL lab management platform to reserve and manage shared instrumentation. Jointly developed by MIT and Stanford University and introduced in 2003, CORAL represented a significant advance over the text-based system it replaced. But by the time MIT.nano adopted CORAL in 2018, active development had slowed, and the platform was beginning to show its age, most visibly through the absence of modern web and mobile interfaces expected by today’s users.

To address these limitations, MIT.nano has transitioned to NEMO, an open-source laboratory management system originally developed at the National Institute of Standards and Technology. NEMO centralizes scheduling, communication, and operational logistics into a single platform that manages tool reservations and user access while supporting facility growth. Its modular architecture and plugin framework allow for extensive customization, enabling the system to evolve alongside the needs of a large, shared research environment.

“Over time, NEMO was replicating core functionalities of CORAL while introducing new features that CORAL simply could not support,” explains Thomas Lohman, senior software and systems manager at MTL and a long-time contributor to CORAL’s development. “The question became whether to continue patching the old system or adopt this new platform that already had a lot of the features we use daily, as well as an active community continually improving it.”

For MIT.nano leadership, modernization was about more than replacing an aging tool. “We needed a system that centralizes everything a facility user depends on — policies, tool documentation, training workflows, and communications — within a user-friendly, mobile-accessible environment,” says Anna Osherov, associate director for Characterization.nano, who led the evaluation and transition effort. “Just as important was making sure the platform enhances the experience for both users and staff.”

Collaborating at MIT and with shared access facilities

MIT.nano collaborated closely with Mathieu Rampant, NEMO project lead and CEO of Atlantis Labs, to adopt the community edition of NEMO, an extended version enriched by contributions from a growing global user base. The open-source model ensures that improvements developed at MIT.nano benefit the broader research community, reinforcing a shared ecosystem of innovation. “The NEMO community is expanding rapidly, and many new features originate directly from facility users and administrators,” says Rampant. “That collaborative model allows improvements to propagate quickly while giving institutions a sense of ownership in the platform’s evolution.”

NEMO introduces modern features long requested by MIT.nano researchers, including mobile access, improved transparency, and streamlined workflows. Facility users can now monitor their own tool usage and consumables, customize notifications, register for training, join real-time equipment waitlists, report issues, and communicate with staff, all through a unified dashboard. What was once distributed across multiple systems is now centralized, reducing friction in day-to-day lab operations.

Launching a new platform at the scale of MIT.nano required careful planning and sustained collaboration. The system needed to support multiple facility types, integrate with existing MIT infrastructure, and accommodate a diverse set of instrumentation workflows. “Features that work well in a typical characterization lab can quickly become a burden in a more chemically active environment like the cleanroom,” explains Jorg Scholvin, associate director of Fab.nano. “Relying on researchers to log in using personal devices and Duo authentication, for example, would be impractical in that setting.”

To address these challenges, MIT.nano collaborated with MIT Information Systems and Technology Associate Vice President Olu Brown and Senior Director for Infrastructure Operations Marco Gomes and their teams to streamline integration between MIT systems and NEMO for cleanroom users. “The availability of modern APIs allowed us to connect very different systems efficiently and deliver a convenient, seamless, and productive experience in the lab,” says Scholvin.

The result is a platform that now processes thousands of reservations, communications, and operational actions daily. “We truly value the partnership with MIT.nano and appreciate the collaboration throughout this effort,” says Gomes. “It’s been a great example of teams working together to deliver something meaningful for the research community.”

As one of the largest shared-access facilities deploying NEMO, MIT.nano has played a central role in advancing the platform’s capabilities, both by helping shape its development and by demonstrating a model that is scalable and effective for other facilities and research centers nationwide. Enhancements first created to meet MIT.nano’s needs are now leveraged by other facilities adopting NEMO across the globe. 

It took 40 years for technology to catch up to this zipper design

MIT Latest News - Mon, 05/04/2026 - 1:45pm

In 1985, the Innovative Design Fund placed an ad in Scientific American offering up to $10,000 to support clever prototypes for clothing, home decor, and textiles. William Freeman PhD ’92, then an electrical engineer at Polaroid and now an MIT professor, saw it and submitted a novel idea: a three-sided zipper. Instead of fastening pants, it’d be like a switch that seamlessly flips chairs, tents, and purses between soft and rigid states, making them easier to pack and put together.

Freeman’s blueprint was much like a regular zipper, except triangular. On each side, he nailed a belt to connect narrow wooden “teeth” together. A slider wrapping around the device could be moved up to fasten the three strips into place, straightening them into a triangular tube. His proposal was rejected, but Freeman patented his prototype and stored it in his garage in the hopes it might come in handy one day.

Nearly 40 years later, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers wanted to revive the project to create items with “tunable stiffness.” Prior attempts at tunable stiffness weren’t easily reversible or required manual assembly, so CSAIL built an automated design tool and adaptable fastener called the “Y-zipper.” The scientists’ software program helps users customize three-sided zippers, which it then builds on its own in a 3D printer using plastics. These devices can be attached or embedded into camping equipment, medical gear, robots, and art installations for more convenient assembly.

“A regular zipper is great for closing up flat objects, like a jacket, but Freeman ideated something more dynamic. Using current fabrication technology, his mechanism can transform more complex items,” says MIT postdoc and CSAIL researcher Jiaji Li, who is a lead author on an open-access paper presenting the project. “We’ve developed a process that builds objects you can rapidly shift from flexible to rigid, and you can be confident they’ll work in the real world.”

Why zippers?

In CSAIL’s software program, users can customize how the fasteners look when zipped up: they can select the length of each strip, as well as the direction and angle at which it will bend. They can also choose from one of four motion “primitives” that determine the zipped-up shape: straight, bent (similar to an arch), coiled (resembling a spring), or twisted (like a screw).
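The customization options described above amount to a small design specification. The sketch below is a minimal illustration of such a spec as a data structure; the class and field names (`ZipperSpec`, `StripSpec`, and so on) are ours, not CSAIL’s actual tool:

```python
from dataclasses import dataclass
from typing import Literal

# The four motion primitives described in the article.
Primitive = Literal["straight", "bent", "coiled", "twisted"]

@dataclass
class StripSpec:
    """One of the three strips of a Y-zipper (hypothetical fields)."""
    length_mm: float
    bend_direction_deg: float  # direction in which the strip bends
    bend_angle_deg: float      # how sharply it bends when zipped

@dataclass
class ZipperSpec:
    """A full three-sided zipper design."""
    primitive: Primitive
    strips: tuple[StripSpec, StripSpec, StripSpec]

    def validate(self) -> None:
        if len(self.strips) != 3:
            raise ValueError("a Y-zipper has exactly three strips")
        for s in self.strips:
            if s.length_mm <= 0:
                raise ValueError("strip length must be positive")

# Example: a tent-pole-style zipper that straightens into a rigid rod.
tent_pole = ZipperSpec(
    primitive="straight",
    strips=tuple(
        StripSpec(length_mm=400.0, bend_direction_deg=0.0, bend_angle_deg=0.0)
        for _ in range(3)
    ),
)
tent_pole.validate()
```

A real design tool would translate such a spec into printable tooth geometry; this sketch only captures the parameters the article names.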

The Y-zipper that results will appear to “shape-shift” in the real world. When unzipped, it can look like a squid with three sprawling tentacles; when you close it up, it becomes a more compact structure, such as a rod. This flexibility could be useful when you’re traveling. Take pitching a tent, for example: doing it alone can take up to six minutes, but with the Y-zipper’s help, it can be done in one minute and 20 seconds. You simply attach each arm to a side of the tent, supporting the structure from the top so that the zipper seemingly pops the canopy into place.

This seamless transition could also unlock more flexible wearables, often useful in medical scenarios. The team wrapped the Y-zipper around a wrist cast, so that a user could loosen it during the day, and zip it up at night to prevent further injuries. In turn, a seemingly stiff device can be made more comfortable, adjusting to a patient’s needs.

The system can also aid users in crafting technology that moves at the push of a button. One can attach a motor to the Y-zipper after fabrication to automate the zipping process, which helps build things like an adaptive robotic quadruped. The robot could potentially change the size of its legs, tightening up into taller limbs and unzipping when it needs to be lower to the ground. Eventually, such rapid adjustments could help the robot explore the uneven terrain of places like canyons or forests. Actuated Y-zippers can also build dynamic art installations — for example, the team created a long, winding flower that “bloomed” thanks to a static motor zipping up the device.

Mastering the material

While Li and his colleagues saw the creative potential of the Y-zipper, it wasn’t yet clear how durable it would be. Could it sustain daily use?

The team ran a series of stress tests to find out. First, they evaluated the strength and flexibility of polylactic acid (PLA) and thermoplastic polyurethane (TPU), two plastics commonly used in 3D printing. Using a machine that bent the Y-zippers down, they found that PLA could handle heavier loads, while TPU was more pliable.

In another experiment, CSAIL researchers used an actuator to continuously open and close a Y-zipper to see how long it would take to snap. It finally broke after some 18,000 cycles of zipping and unzipping. The Y-zipper’s secret to durability, according to 3D simulations, is its elastic structure, which helps distribute the stress of heavy loads.

Despite these findings, Li envisions an even more durable three-sided zipper using stronger materials, like metal. They may also make the zippers bigger for larger-scale projects, but that’s not yet possible with their current 3D printing platform.

Li also notes that some applications remain unexplored, like space exploration, where the Y-zipper’s tentacles could be built into a spacecraft to grab nearby rock samples. Likewise, the zippers could be embedded into structures that can be assembled rapidly, helping relief workers quickly set up shelters or medical tents during natural disasters and rescues.

“Reimagining an everyday zipper to tackle 3D morphological transitions is a brilliant approach to dynamic assembly,” says Zhejiang University assistant professor Guanyun Wang, who wasn’t involved in the paper. “More importantly, it effectively bridges the gap between soft and rigid states, offering a highly scalable and innovative fabrication approach that will greatly benefit the future design of embodied intelligence.”

Li and Freeman wrote the paper with Tianjin University PhD student Xiang Chang and MIT CSAIL colleagues: PhD student Maxine Perroni-Scharf; undergraduate Dingning Cao; recent visiting researchers Mingming Li (Zhejiang University), Jeremy Mrzyglocki (Technical University of Munich), and Takumi Yamamoto (Keio University); and MIT Associate Professor Stefanie Mueller, who is a CSAIL principal investigator and senior author on the work. Their research was supported, in part, by a postdoctoral research fellowship from Zhejiang University and the MIT-GIST Program.

The researchers’ work was presented at the ACM CHI Conference on Human Factors in Computing Systems in April.

Getting Digital Fairness Right: EFF's Recommendations for the EU's Digital Fairness Act

EFF: Updates - Mon, 05/04/2026 - 11:33am
Digital Fairness in the EU

The next few years will be decisive for EU digital policymaking. With major laws like the Digital Services Act, the Digital Markets Act, and the AI Act now in place, the EU is entering an enforcement era that will show whether these rules are rights-respecting or drift toward overreach and corporate control. With the EU’s proposed Digital Fairness Act (DFA), the Commission is now turning to increasingly visible risks for users, such as dark patterns and exploitative personalization. Its “Digital Fairness Fitness Check” makes clear that existing consumer rules need updating to reflect how digital markets operate today.

But not all proposed solutions point in the right direction. Regulators are already flirting with measures that rely on expanded surveillance, such as age verification mandates—surface-level fixes that risk undermining fundamental rights while offering little more than a false sense of protection. 

For EFF, digital fairness means addressing the root causes of harm, not requiring platforms to exert more control over their users. It means safeguarding privacy, freedom of expression, and the rights of users and developers.

If the DFA is to make a real difference, it must tackle structural imbalances. Lawmakers should focus on two interlocking principles. First, prioritize privacy. Reforms should address harms driven by surveillance-based business models, alongside deceptive design practices that impair informed choices. Second, strengthen user sovereignty, which is also a necessary precondition for European digital sovereignty more broadly. Strengthening user sovereignty means taking measures that address user lock-in, coercive contract terms, and manipulative defaults that limit users’ ability to freely choose how they use digital products and services.

Together, these principles would support the EU’s objectives of consistent consumer protection, fair markets, and a more coherent legal framework. If implemented properly, the EU could address power imbalances and build trust in Europe’s digital economy. 

Ban Dark Patterns  

Dark patterns are practices that impair users’ ability to make informed and autonomous decisions. Many companies deploy these tactics through interface design to steer choices and influence behavior. Their impact goes beyond poor consumer decisions. Dark patterns push users to share personal data they would not otherwise disclose and undermine autonomy by making alternatives harder to access. 

The DFA should address this by clearly prohibiting misleading interfaces that distort user choice in commercial contexts. While the Digital Services Act introduced a definition, it only partially bans such practices and leaves gaps across existing consumer law rules. The DFA should close these gaps by, at the very least, introducing explicit prohibitions and clearer enforcement rules, without resorting to design mandates. 

Tackle Commercial Surveillance 

At the core of digital unfairness lies the pervasive collection and use of personal data. Surveillance and profiling drive many of the harms regulators are trying to address, from dark patterns to exploitative personalization. The DFA should tackle these incentives directly by reducing reliance on surveillance-based business models. These practices are fundamentally incompatible with privacy and fairness, and they distort digital markets by rewarding data exploitation rather than quality of service. At a minimum, the DFA should address unfair profiling and surveillance advertising by strengthening privacy rights and banning pay-for-privacy schemes. Users should not have to trade their data or pay extra to avoid being tracked. Accordingly, the DFA should support the recognition of automated privacy signals by web browsers and mobile operating systems, which give users a better way to reject tracking and exercise their rights. Practices that override such signals through banners or interface design should be considered unfair. 
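One concrete example of such an automated privacy signal is Global Privacy Control (GPC), which participating browsers send as the `Sec-GPC: 1` request header (and expose to scripts as `navigator.globalPrivacyControl`). A server honoring the signal might do something like the following minimal sketch; it is an illustration, not a complete compliance implementation:

```python
def gpc_opt_out(headers: dict[str, str]) -> bool:
    """Return True if the request carries a Global Privacy Control signal.

    Under the GPC proposal, a user agent signals an opt-out of tracking
    by sending the request header `Sec-GPC: 1`.
    """
    # HTTP header names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

# A request from a browser with GPC enabled:
assert gpc_opt_out({"Sec-GPC": "1", "User-Agent": "ExampleBrowser"})
# A request without the signal:
assert not gpc_opt_out({"User-Agent": "ExampleBrowser"})
```

The point EFF makes above is that a site which detects this signal and then overrides it with a consent banner should be treated as acting unfairly.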

Addressing surveillance and profiling also protects children, since many online harms are tied to the collection and exploitation of their data. Systems that serve ads or curate content often rely on intrusive profiling practices, raising concerns about privacy and fairness, particularly when applied to minors. Rather than turning to invasive age verification, the focus should be on limiting data use by default.

Strengthen User Sovereignty  

There is a major gap in how EU law addresses user autonomy in digital markets: Many digital products and services still restrict what people can do with what they pay for through opaque or one-sided licensing terms, technical protection measures, and remote controls. These mechanisms increasingly limit lawful use, modification, or access after purchase, allowing providers to revoke access, disable functionalities, or degrade performance over time. In practice, this turns ownership into a conditional rental.  

Consumers must be able to use and resell digital goods without hidden limitations and with clear licensing terms. Too often, technical and contractual lock-ins, including remote lockouts and unilateral restrictions on functionality, erode that control. Recent legal reforms show that progress is possible. Rules such as those under the Digital Markets Act have begun to curb technical and contractual barriers and promote user choice. However, many restrictions persist.

The DFA must address these practices by targeting unfair post-sale restrictions and strengthening users’ ability to control and switch services. This means setting clear limits on unfair terms and misleading practices, alongside robust transparency on how digital services function over time. It should also strengthen interoperability and support user control, allowing people to access third-party applications and to let trusted applications act on their behalf, reducing lock-in and expanding meaningful choice in how users interact with digital services. 

Adaptation was supposed to be safe under Zeldin. Wrong.

ClimateWire News - Mon, 05/04/2026 - 6:19am
The EPA administrator once touted the importance of climate adaptation programs. Now that he's in charge, he's taking a different approach.

Dominion to open nation’s biggest offshore wind farm next year

ClimateWire News - Mon, 05/04/2026 - 6:18am
The Virginia utility also expressed optimism about battery storage and nuclear power.

Berkshire CEO threatens to exit states with costly clean energy mandates

ClimateWire News - Mon, 05/04/2026 - 6:16am
Chief Executive Greg Abel said "we'll find a better path" if states impose expensive climate requirements on utilities owned by Berkshire.

A state renowned for climate funding hesitates over affordability

ClimateWire News - Mon, 05/04/2026 - 6:16am
A New Jersey lawmaker wants voters to approve $5 billion in borrowing for climate resilience. But he worries whether there’s political support.

Nations preserve a plan to adopt a global fee on shipping emissions

ClimateWire News - Mon, 05/04/2026 - 6:12am
The countries also agreed to continue discussing alternative proposals and entertain new ones.

India’s heat exposes a fragile grid as energy crunch deepens

ClimateWire News - Mon, 05/04/2026 - 6:12am
Heat waves are forecast to persist for longer than usual in densely populated states of western and eastern India, the country’s weather forecaster said.

Mexico City is sinking so quickly, it can be seen from space

ClimateWire News - Mon, 05/04/2026 - 6:09am
Extensive groundwater pumping and urban development have dramatically shrunk the aquifer, meaning that the Mexican capital has been sinking for more than a century.

Africa’s cellphone towers turn to solar as diesel costs surge

ClimateWire News - Mon, 05/04/2026 - 6:08am
The transition to cleaner power for the towers that provide cellphone service is driven by cost pressures and climate goals.

Hacking Polymarket

Schneier on Security - Mon, 05/04/2026 - 5:46am

Polymarket is a platform where people can bet on real-world events, political and otherwise. Leaving the ethical considerations of this aside (for one, it facilitates assassination), one of the issues with making this work is the verification of these real-world events. Polymarket gamblers have threatened a journalist because his story was being used to verify an event. And now, gamblers are taking hair dryers to weather sensors to rig weather bets.

There’s also insider trading: a lot of it.

How chromatin movement helps control gene expression

MIT Latest News - Mon, 05/04/2026 - 5:00am

Gene expression is controlled, in part, by the interactions between genes and regulatory elements located along the genome. Those interactions depend on the ability of chromatin — a mix of DNA and proteins — to move around within a crowded space.

In a new study, MIT researchers have measured chromatin movement at timescales ranging from hundreds of microseconds to hours, allowing them to rigorously quantify those dynamics for the first time.

Their analysis revealed that chromatin can exist in two different categories: In one, chromatin moves in a constrained way that allows it to primarily contact only neighboring regions of the genome; in the other, chromatin moves more freely and contacts regions that are farther away, but only over longer timescales.

The findings offer insight into how gene expression is regulated, as well as how chromatin segments come together for other processes such as DNA repair, the researchers say.

“Because we were able to look at chromatin dynamics for the first time at these very fast timescales, and also for the first time across the full dynamic range, we were able to observe chromatin motion over a range that just wasn’t possible before,” says Anders Sejr Hansen, an associate professor of biological engineering at MIT and the senior author of the new study, which appears today in Nature Structural & Molecular Biology.

The paper’s lead authors are MIT postdoc Matteo Mazzocca, Domenic Narducci PhD ’25, and Simon Grosse-Holz PhD ’23. Jessica Matthias, chief commercial officer of Abberior Instruments, and Tatiana Karpova, manager of the National Cancer Institute Optical Microscopy Core, are also authors of the paper.

Constrained movement

In textbooks, chromatin is often depicted as a static structure within the cell nucleus, but in reality, it is constantly moving. Those movements are necessary for genes to interact with DNA regulatory sequences such as enhancers, which can be as far as 1 million base pairs away. They also ensure that when DNA breaks occur, the two ends of DNA can encounter each other to be repaired.

“Chromatin dynamics are foundational to all processes in the nucleus, and especially processes that involve two things finding each other. That’s important in DNA repair, gene regulation, recombination, or moving a particular gene to the right compartment of the nucleus,” Hansen says.

The movement of any particular location on the genome, or locus, is constrained by the fact that DNA is a polymer. After moving in any direction, a locus will be pulled back by the DNA on either side of it.

“Chromosomes are polymers. They’re held together by many nucleotides of DNA. Being part of DNA is a little bit like running while holding hands with other people. If a hundred people are holding hands and you, in the middle of the chain, try to run in one direction, you’ll get pulled back,” Hansen says.

This type of behavior is known as subdiffusive movement. Previous studies have yielded conflicting reports on how subdiffusive chromatin is, mainly because the studies were not able to track the movement over a long enough period of time to obtain statistically robust measurements. Because the movements are so small, on the order of nanometers, data needs to be obtained over long dynamic ranges — from milliseconds to hours.
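Subdiffusion of this kind can be illustrated with a toy polymer model. The sketch below simulates an overdamped Rouse chain, beads joined by springs, which is a standard idealization rather than the study’s own analysis, and fits the scaling exponent alpha of the middle bead’s mean-squared displacement, MSD(t) ~ t^alpha. Rouse theory predicts alpha ≈ 0.5 at intermediate times, compared with alpha = 1 for free diffusion; all parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdamped Rouse chain: N beads in 3D joined by harmonic springs.
N, dim = 64, 3
k_over_gamma = 1.0   # spring constant / friction coefficient
D = 1.0              # single-bead diffusion coefficient
dt = 0.05
steps = 60_000

x = np.zeros((N, dim))
mid_traj = np.empty((steps, dim))
noise_scale = np.sqrt(2 * D * dt)
for t in range(steps):
    # Spring forces from nearest neighbors (free chain ends).
    force = np.zeros_like(x)
    force[1:-1] = x[2:] - 2 * x[1:-1] + x[:-2]
    force[0] = x[1] - x[0]
    force[-1] = x[-2] - x[-1]
    x += k_over_gamma * dt * force + noise_scale * rng.standard_normal(x.shape)
    mid_traj[t] = x[N // 2]

# Time-averaged MSD of the middle bead over a range of lag times,
# chosen between the single-bead time and the chain relaxation time.
lags = np.unique(np.logspace(np.log10(100), np.log10(5000), 20).astype(int))
msd = np.array([np.mean(np.sum((mid_traj[lag:] - mid_traj[:-lag]) ** 2, axis=1))
                for lag in lags])
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]
print(f"fitted MSD exponent alpha = {alpha:.2f}")  # well below 1: subdiffusive
```

As the article notes later, the MIT team found that even such standard polymer models underestimate the strength of the pull they measured.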

In those earlier studies, researchers used imaging techniques that can track the position of a single molecule over time by comparing images frame by frame. These are useful but can only be used over a small dynamic range because of the limitations of conventional microscopy.

To generate more statistically robust data, the MIT team used MINFLUX — a super-resolution light microscopy technique that can track the movement of tiny objects such as proteins over longer periods of time. This technique was recently developed by Stefan Hell of the Max Planck Institute, a Nobel laureate for his work in super-resolution microscopy. In this study, the MIT team became the first to apply this technique to chromatin in living cells.

“MINFLUX allowed us to get around the limitations of conventional microscopy, letting us measure chromatin movement faster and for a longer period of time than ever before,” Narducci says. “To our knowledge, it’s the first time this technique has been used this way.”

Using MINFLUX, the researchers were able to study cells over timescales that covered four orders of magnitude — from 200 microseconds to 10 seconds. And by combining MINFLUX with two traditional imaging techniques, they could track chromatin movement over seven orders of magnitude across time, from hundreds of microseconds to several hours.

“Region of influence”

These studies, performed across several different mouse and human cell types, allowed the researchers to identify two distinct classes of chromatin dynamics. In both classes, over short and intermediate timescales (up to 200 seconds), any given locus tends to move only within about 200 nanometers. This suggests that the subdiffusive pull is stronger than had been previously thought.

“One of the main takeaways is that you have this region of influence where a genomic locus has access to other genomic loci, and this is roughly a couple hundred nanometers large,” Grosse-Holz says. “If loci are much closer together than a couple hundred nanometers, they’re effectively in contact all the time. You get a cutoff at a couple hundred nanometers where everything within that region around a given locus can see that locus, and everything outside cannot.”

This constant contact is likely beneficial for DNA repair, as the broken strands remain in close proximity to each other. The findings also suggest that genes and regulatory elements within about 100,000 base pairs of each other don’t need any extra help to find each other; they will do so routinely through their normal movement.

“If they are closer than 100,000 bases, and most regulatory elements are, then those elements are going to find their target gene within a few milliseconds or a few minutes,” Mazzocca says. “These are timescales that are completely consistent with transcription.”

In the other class of chromatin dynamics that the researchers identified, chromatin is able to move over a wider range, but only at longer timescales (a few minutes to hours). This class of chromatin appeared in some types of cells but not others, for reasons that are not yet understood.

“It would be reasonable to assume that the behavior would be more or less the same in all cell types, but that’s not at all what we found,” Hansen says. “It’s very different in different cell types, with no obvious way of categorizing things.”

He adds that the strength of the subdiffusive pull that the researchers found in this study can’t be explained with existing models that have been developed to study chromatin dynamics — the Rouse model and the fractal globule model. This suggests that the models may need to incorporate factors that were previously left out, such as the interactions between chromatin and the crowded nucleoplasm it sits within.

“These findings are significant for two key reasons,” says Luca Giorgetti, a group leader at the Friedrich Miescher Institute for Biomedical Research in Switzerland, who was not involved in the study. “First, they rigorously confirm longstanding but anecdotal observations that chromatin motion is strongly subdiffusive. Second, they demonstrate that this behavior is consistent across multiple cell types and persists across all measured timescales.”

The research was funded, in part, by the National Institutes of Health, a National Science Foundation CAREER Award, a Pew-Stewart Scholar for Cancer Research Award, and the Bridge Project, a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center.

Small plastics with large warming potential

Nature Climate Change - Mon, 05/04/2026 - 12:00am

Nature Climate Change, Published online: 04 May 2026; doi:10.1038/s41558-026-02616-x

Microplastics and nanoplastics are moving in the atmosphere worldwide. Now, research shows that they can interact with sunlight and influence the climate system.

Atmospheric warming contributions from airborne microplastics and nanoplastics

Nature Climate Change - Mon, 05/04/2026 - 12:00am

Nature Climate Change, Published online: 04 May 2026; doi:10.1038/s41558-026-02620-1

The radiative impact of microplastic and nanoplastic particles in the atmosphere is not well understood. Here the authors quantify their radiative forcing, finding that they can exceed that of black carbon regionally.

Found Industries aims to strengthen America’s industrial supply chains

MIT Latest News - Sun, 05/03/2026 - 12:00am

Found Industries has gone through several distinct phases in the four years since it was originally formed as Found Energy. There was the scrappy startup stage, in which the company was primarily housed in the basement of founder Peter Godart ’15, SM ’19, PhD ’21. Then there was the demonstration phase, in which the company worked to productize its technology for transforming aluminum into high-density fuel for industrial operations.

Now, after confronting supply chain vulnerabilities related to critical metals in its aluminum fuel business, the company is launching a new division, Found Metals, to extract the critical metal gallium from mineral refineries — a move that builds on its original technology while addressing a major national security need.

Gallium is a critical material in the defense, semiconductor, and energy sectors. In 2024, China produced 99 percent of the world’s primary supply — market dominance the country takes advantage of through export controls.

Godart’s company developed an electrochemical gallium extraction technology for internal use after realizing how dependent it would be on China for the catalyst material at the center of its aluminum fuel reactors. Now, with support from the U.S. Department of Energy, Found is hoping to use that technology to create a new domestic supply chain for gallium and a host of other important metals.

Found Industries is still committed to its aluminum fuel operations, now under its Found Energy division. It is already running a 100-kilowatt-class demonstration plant and is preparing for industrial pilot deployments next year. But with its expansion, which was announced April 21, the company is also working to meet the moment for critical metals production.

“Gallium is the world’s most critical metal, as it’s 99 percent controlled by China,” Godart says. “When you produce 99 percent of something, you also produce 99 percent of the tools required to extract it. We couldn’t get our hands on some of those tools, so we were forced to come up with a new technology. Now we believe we can deploy this at scale to become one of the first major Western suppliers of these metals.”

From fuel to metals

Godart focused on robotics as an undergraduate in MIT’s Department of Mechanical Engineering and Department of Electrical Engineering and Computer Science. Following graduation, he worked at NASA’s Jet Propulsion Laboratory, where he explored systems for tapping into high-density fuels like aluminum on other planets.

“I had this crazy idea that you could use aluminum, which is already a common construction material for aerospace, as a fuel on other planets,” Godart says. “You don’t need most of the aluminum on a spacecraft once you land on another planet. Aluminum is around 40 times more energy-dense than lithium-ion batteries, and if you have an oxidizer, like water on an icy moon for example, then you can react that aluminum with water and extract energy as heat and hydrogen.”
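The energy comparison in that quote can be checked with textbook thermochemistry. The back-of-envelope sketch below uses rounded standard enthalpies of formation that we supply for illustration; the “roughly 40x” result also depends on the lithium-ion figure assumed (here, a ~200 Wh/kg pack):

```python
# Back-of-envelope: energy released per kilogram of aluminum reacted
# with water: 2 Al + 3 H2O -> Al2O3 + 3 H2.
# Rounded standard enthalpies of formation (kJ/mol):
dHf_Al2O3 = -1675.7
dHf_H2O_liq = -285.8
M_Al = 0.02698           # kg/mol

dH_rxn = dHf_Al2O3 - 3 * dHf_H2O_liq   # kJ per 2 mol Al (negative = exothermic)
heat_per_mol_Al = -dH_rxn / 2          # heat released per mol Al, ~409 kJ

# The 3/2 mol of H2 per mol Al also carry chemical energy
# (lower heating value of H2 ~241.8 kJ/mol).
h2_energy_per_mol_Al = 1.5 * 241.8

total_MJ_per_kg = (heat_per_mol_Al + h2_energy_per_mol_Al) / M_Al / 1000

li_ion_MJ_per_kg = 0.72                # assumed ~200 Wh/kg Li-ion pack
ratio = total_MJ_per_kg / li_ion_MJ_per_kg

print(f"Al + water releases ~{total_MJ_per_kg:.0f} MJ per kg of aluminum, "
      f"about {ratio:.0f}x the assumed Li-ion pack figure")
```

With these assumed inputs the ratio lands near 40, consistent with Godart’s figure.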

Luckily for people who might spill water on aluminum while cooking, the metal is normally very stable when exposed to air. In order to tap into aluminum’s stored energy, it needs to undergo a chemical reaction. Godart began exploring catalyst materials to create that reaction at NASA. He continued that work with professor of mechanical engineering Douglas Hart when he returned to MIT in 2017, this time for applications a little closer to home.

“If we want to think about moving humanity to other planets, we have some problems to solve here first,” Godart says. “That was the impetus for me to go back to MIT to study using aluminum as a fuel for energy distribution on Earth.”

Around 70 million tons of aluminum are already transported around the globe every year. Godart says that gives aluminum an easier path to scale. During his PhD, he created a process for coating aluminum with a gallium-containing alloy to help tap into aluminum’s embodied energy.

“We found a catalyst that, when mixed with aluminum scraps, enabled aluminum to react with water very rapidly and at orders of magnitude higher power density than what had been possible before,” Godart says. “That meant you could use aluminum as a fuel and get megawatt-scale power from compact reactor systems.”

By the time he finished his PhD in 2021, Godart and his collaborators had developed a system that mixes aluminum fuel with those catalysts to continuously produce electricity at the kilowatt scale through a hydrogen fuel cell.

Godart launched Found Energy in 2022, licensing part of his research from MIT’s Technology Licensing Office and receiving support from MIT’s Venture Mentoring Service. The company received an Activate fellowship, and after quickly outgrowing Godart’s basement, moved into its current 20,000-square-foot facility in Charlestown, Massachusetts.

Today, Found Energy is working with industrial companies that have abundant aluminum scrap.

“When you invent a fuel, you then have to invent the engine,” Godart says. “Our engine is called a catalyzed aluminum water reactor. You feed in aluminum that’s been treated with the catalyst and water, and you get a steam-hydrogen gas mixture. We call that our power stream. We use it to cogenerate industrial heat and electricity. The reaction byproduct is a hydrated aluminum oxide that can be sold into various industries or recycled back into aluminum, which is the long-term vision.”

As Godart worked to build more of the systems, he became concerned about Found’s reliance on Chinese supply chains for its catalyst material. So, in 2024, he developed a new way to extract gallium from Bayer liquor, the process stream used in industrially refining bauxite into alumina. Traditional methods for extracting gallium rely on foreign-controlled organic chemicals or resins to bind and concentrate the gallium.

Found uses a continuous electrochemical process to recover the gallium directly from Bayer liquor and other industrial feedstocks, even at low concentrations.

“We thought of it as a way to future-proof what we were doing,” Godart says. “Necessity was the mother of invention.”

Then, toward the end of 2024, China began restricting the export of critical metals including gallium.

“We realized we had already developed a technique for producing these restricted metals that could be very quickly adapted,” Godart recalls.

Scaling for national security

On April 14, the Department of Energy’s Office of Critical Minerals and Energy Innovation selected Found as part of its $5.4 million program to recover gallium from domestic feedstocks. The company plans to start extracting gallium, along with other critical metals like indium and germanium, by the end of 2027.

Meanwhile, Found is already running a 100-kilowatt-class aluminum fuel demonstration system in Charlestown and is working through orders of several megawatts from large public companies.

“For our fuel technology, the vision is to go as big as possible,” Godart says. “We envision major power plants. Aluminum refineries today, for example, consume hundreds of megawatts of continuous thermal power. That’s what we aim to deliver.”

Godart says he spends most of his time now on gallium extraction, but both branches of the business could make supply chains more secure in the future.

“The big focus now is critical metals, because the government needs this,” Godart says. “We’re also making these metals for ourselves, so we’re vertically integrating our own supply chain, which is table stakes now for companies that deal in physical goods. You need to be able to control your inputs. Focusing on metals improves the likelihood of success for our aluminum fuel business.”

Pages