Feed aggregator
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors (at room temperature, switching a silicon transistor's current tenfold requires at least roughly 60 millivolts, the so-called Boltzmann limit) prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Games people — and machines — play: Untangling strategic reasoning to advance AI
Gabriele Farina grew up in a small town in a hilly winemaking region of northern Italy. Neither of his parents had college degrees, and although both were convinced they “didn’t understand math,” Farina says, they bought him the technical books he wanted and didn’t discourage him from attending the science-oriented, rather than the classical, high school.
By around age 14, Farina had focused on an idea that would prove foundational to his career.
“I was fascinated very early by the idea that a machine could make predictions or decisions so much better than humans,” he says. “The fact that human-made mathematics and algorithms could create systems that, in some sense, outperform their creators, all while building on simple building blocks, has always been a major source of awe for me.”
At age 16, Farina wrote code to solve a board game he played with his 13-year-old sister.
“I used it game after game to compute the optimal move and prove to my sister that she had already lost long before either of us could see it ourselves,” Farina says, adding that his sister was less enthralled with his new system.
Now an assistant professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS), Farina combines concepts from game theory with such tools as machine learning, optimization, and statistics to advance theoretical and algorithmic foundations for decision-making.
Enrolling at Politecnico di Milano for college, Farina studied automation and control engineering. Over time, however, he realized that what truly engaged his interest was not “just applying known techniques, but understanding and extending their foundations,” he says. “I gradually shifted more and more toward theory, while still caring deeply about demonstrating concrete applications of that theory.”
Farina’s advisor at Politecnico di Milano, Nicola Gatti, a professor and researcher in computer science and engineering, introduced Farina to research questions in computational game theory and encouraged him to apply for a PhD. At the time, Farina says, as the first in his immediate family to earn a college degree, and living in Italy, where doctoral degrees are handled differently, he didn’t even know what a PhD was.
Nevertheless, one month after graduating with his undergraduate degree, Farina began a doctoral degree in computer science at Carnegie Mellon University. There, he won distinctions for his research and dissertation, as well as a Facebook Fellowship in Economics and Computation.
As he was finishing his doctorate, Farina worked for a year as a research scientist in Meta’s Fundamental AI Research Labs. One of his major projects was helping to develop Cicero, an AI that was able to beat human players in a game that involves forming alliances, negotiating, and detecting when other players are bluffing.
Farina says, “When we built Cicero, we designed it so that it would not agree to form an alliance if it was not in its interest, and it likewise understood whether a player was likely lying, because for them to do as they proposed would be against their own incentives.”
A 2022 article in MIT Technology Review said Cicero could represent an advance toward AIs that can solve complex problems requiring compromise.
After his year at Meta, Farina joined the MIT faculty. In 2025, he received a National Science Foundation CAREER Award. His work builds on game theory, whose mathematical language describes what happens when different parties have different objectives, and quantifies the “equilibrium” where no one has a reason to change their strategy. He aims to tame massive, complex real-world scenarios where naively calculating such an equilibrium could take a billion years.
“I research how we can use optimization and algorithms to actually find these stable points efficiently,” he says. “Our work tries to shed new light on the mathematical underpinnings of the theory, better control and predict these complex dynamical systems, and uses these ideas to compute good solutions to large multi-agent interactions.”
Farina is especially interested in settings with “imperfect information,” which means that some agents have information that is unknown to other participants. In such scenarios, information has value, and participants must be strategic about acting on the information they possess so as not to reveal it and reduce its value. An everyday example occurs in the game of poker, where players bluff in order to conceal information about their cards.
According to Farina, “we now live in a world in which machines are far better at bluffing than humans.”
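To make the algorithmic idea concrete, here is a minimal sketch, not Farina's own code and far simpler than the counterfactual-regret methods used in poker and Stratego research, of regret matching: a self-play learning rule whose average play converges to an equilibrium in two-player zero-sum matrix games such as rock-paper-scissors.

```python
import numpy as np

# Row player's payoffs for rock-paper-scissors; the game is zero-sum,
# so the column player's payoffs are the negation of this matrix.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def mixed_strategy(regrets):
    """Regret matching: play in proportion to positive accumulated regret."""
    pos = np.maximum(regrets, 0.0)
    total = pos.sum()
    return pos / total if total > 0 else np.full(regrets.size, 1.0 / regrets.size)

def self_play(A, iters=50_000):
    n, m = A.shape
    r1, r2 = np.zeros(n), np.zeros(m)      # accumulated regrets
    avg1, avg2 = np.zeros(n), np.zeros(m)  # running sums of strategies
    for _ in range(iters):
        s1, s2 = mixed_strategy(r1), mixed_strategy(r2)
        avg1 += s1
        avg2 += s2
        u1 = A @ s2          # row player's payoff for each pure action
        u2 = -(s1 @ A)       # column player's payoff for each pure action
        r1 += u1 - s1 @ u1   # regret: gain from having played each action instead
        r2 += u2 - s2 @ u2
    # In zero-sum games, the *average* strategies converge to equilibrium.
    return avg1 / iters, avg2 / iters

p1, p2 = self_play(A)
print(p1, p2)  # both approach the uniform equilibrium [1/3, 1/3, 1/3]
```

Research systems for poker and Stratego rely on counterfactual variants of this loop that reason over hidden-information sets rather than a single payoff matrix, but the core cycle of tracking regrets, playing in proportion to them, and averaging over time is the same.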
A situation with “massive amounts of imperfect information” has brought Farina back to his board-game beginnings. Stratego is a military strategy game that has inspired research efforts costing millions of dollars to produce systems capable of beating human players. Requiring complex risk calculation and misdirection, or bluffing, it was possibly the only classical game for which major efforts had failed to produce superhuman performance, Farina says.
With new algorithms and training costing less than $10,000, rather than millions, Farina and his research team were able to beat the best player of all time — with 15 wins, four draws, and one loss. Farina says he is thrilled to have produced such results so economically, and he hopes “these new techniques will be incorporated into future pipelines.”
“We have seen constant progress towards constructing algorithms that can reason strategically and make sound decisions despite large action spaces or imperfect information. I am excited about seeing these algorithms incorporated into the broader AI revolution that’s happening around us.”
MIT marks first Robert R. Taylor Day with Tuskegee University
On April 10, MIT marked its first official Robert R. Taylor Day with a program centered on the life and work of Robert Robinson Taylor (Class of 1892), the Institute’s first Black graduate and the first academically trained Black architect in the United States.
After graduating from MIT, Taylor joined Tuskegee Institute (now Tuskegee University), where he designed campus buildings, developed a curriculum, and helped establish an approach to architectural education grounded in making and community life — an orientation that continues to shape the relationship between MIT and Tuskegee today.
Taylor returned to MIT on April 10, 1911, to speak at the 50th anniversary of the Institute’s founding — the date now observed as Robert R. Taylor Day. Reflecting on his education, he credited MIT with the “methods and plans” he carried to Tuskegee Institute. “Certainly the spirit,” he said, was found “in the love of doing things correctly, of putting logical ways of thinking into the humblest task … to build up the immediate community in which the persons live.”
One hundred fourteen years later, at the MIT Museum, students and faculty gathered around Taylor’s original thesis, “A Soldiers’ Home.” The work was presented alongside archival materials from Taylor’s time at MIT by Jonathan Duval, assistant curator of architecture and design. Rather than framing Taylor as a distant historical figure, the encounter with the work itself — its drawings, assumptions, and ambitions — set the terms for the day, bringing forward not only his accomplishments but the ideas and methods that continue to inform teaching and collaboration today. Attendees then gathered for a lunch-and-learn session featuring a hybrid panel of MIT and Tuskegee University faculty.
“It is so important to continue to develop the MIT-Tuskegee relationship begun by Robert R. Taylor,” says Kwesi Daniels, associate professor and head of the architecture department at Tuskegee University. “MIT students are provided an opportunity to experience the campus Taylor designed and his ethos of social architecture. For the Tuskegee students, they are able to appreciate the foundation Taylor received at MIT. The engagement epitomizes the ‘mind and hand’ philosophy of MIT and the head, hand, heart philosophy of Tuskegee.”
An ongoing exchange
Student and faculty exchanges, launched by the architecture departments at both institutions, have extended these connections in recent years. MIT students travel to Tuskegee for work in historic preservation and community engagement, sampling Daniels’ scanning and drone equipment, while Tuskegee students come to MIT to engage with digital fabrication and entrepreneurship.
For Nicholas de Monchaux, professor and head of the Department of Architecture at MIT, the relationship reflects continuity. “We are not uniting. We’re reuniting,” he says. “This year’s celebration should really be seen as the kickoff of a year of reflecting on Robert Taylor’s legacy and imagining what the day, and his legacy, can become over time.”
The day’s program — the vision for which originally emerged from a suggestion made by MIT literature professor Joshua Bennett during a meeting at Tuskegee with de Monchaux, Daniels, and Tuskegee President Mark Brown — grew into a broader effort among faculty and collaborators across architecture, history, and the humanities. As Bennett put it, “The primary aim of Robert R. Taylor Day is to lift up not only Taylor’s accomplishments, but his ideas — and the fact that his ideas live on in those of us who have inherited his legacy.”
That emphasis is also visible in the dedicated coursework and research that have accompanied the exchange since 2022. In class 4.S12 (Brick x Brick: Drawing a Particular Survey), taught by Carrie Norman, assistant professor in architecture at MIT, students document buildings on the Tuskegee campus through measured drawings and archival interpretation. Working from limited historical material, they reconstruct both form and intent.
“My role has been to structure this work pedagogically,” Norman says, “guiding students in methods of close looking, measured drawing, and archival interpretation.” She describes Taylor’s work as “an ongoing research agenda,” adding that “the broader aim is not only to deepen engagement with Taylor’s legacy, but to build on it through new forms of design research.”
Related work has contributed to a recent exhibition on the Tuskegee Chapel at the National Building Museum, curated by Helen Bechtel of the Yale School of Architecture. Building on research conducted in Norman’s course, students developed large-scale models that form part of the exhibition. New 3D fabrications use a limited set of archival materials to reconstruct the chapel, originally designed by Taylor as the first electrified building in Alabama’s Macon County and destroyed by fire in 1957.
Looking ahead
Timothy Hyde, professor in the MIT Department of Architecture, has also been involved in the ongoing MIT–Tuskegee collaboration and in efforts to situate Taylor’s work within a broader historical context. He notes that Taylor’s training at MIT helped shape the curriculum he later developed at Tuskegee. “The other influence I would like to mention is the city of Boston itself,” Hyde adds. “Boston was a prosperous city with a wealth of civic architecture that Taylor would have seen and studied.”
A documentary project on Taylor’s life, supported by the MIT Human Insight Collaborative and led by Hyde and historian Christopher Capozzola, senior associate dean for MIT Open Learning, is currently in development.
For some students, these encounters shape longer trajectories. As an undergraduate at Tuskegee, Myles Sampson participated in the MIT Summer Research Program (MSRP), where he began to connect architecture with a growing interest in computation. He later enrolled in MIT’s Master of Science in Architecture Studies (SMArchS) computation program, working with Professor Larry Sass, who introduced him to robotic fabrication.
“I never looked back,” Sampson says. “Without that hands-on research experience, I would never have looked past contemporary architectural practice.” He is now pursuing a doctorate in computational design at Carnegie Mellon University, focused on the role of automation in architecture and construction.
Sampson contributed significant work to the National Building Museum’s exhibition. His installation, Brick Parable, brings together historical reference and robotic construction. As de Monchaux notes, the project reflects the long arc of Taylor’s legacy: “bricks were fired by students as part of Taylor’s training program … Myles [Sampson]’s piece, made with a robotic assembly of bricks, explores the architectural idea of the chapel in contemporary form.”
For Daniels, the continued circulation of students between the two institutions remains central. Viewing Taylor’s thesis in particular offers a shared point of reference. “Whether the student is from Tuskegee or MIT, they are able to appreciate the quality of work Taylor completed as a student,” he says, “and how he built on that work by creating a college campus, beginning at age 25.”
Across these activities, Taylor’s work is approached not as a fixed legacy, but as a set of methods and commitments that continue to be tested. As Catherine Armwood, dean of Tuskegee University’s Robert R. Taylor School of Architecture and Construction Science, describes it: “While our students leverage [the design and entrepreneurship program] MITdesignX to turn architectural concepts into social enterprises through advanced fabrication and venture mentorship, MIT students come to Tuskegee for an immersion in historic preservation. By surveying buildings handcrafted by our founding students, they learn a legacy of self-reliance and community impact that can’t be found anywhere else. Together, we are bridging technical innovation with deep-rooted heritage to train a new generation of visionary leaders.”
DarkSword Malware
DarkSword is a sophisticated piece of malware—probably government-designed—that targets iOS.
Google Threat Intelligence Group (GTIG) has identified a new iOS full-chain exploit that leveraged multiple zero-day vulnerabilities to fully compromise devices. Based on toolmarks in recovered payloads, we believe the exploit chain to be called DarkSword. Since at least November 2025, GTIG has observed multiple commercial surveillance vendors and suspected state-sponsored actors utilizing DarkSword in distinct campaigns. These threat actors have deployed the exploit chain against targets in Saudi Arabia, Turkey, Malaysia, and Ukraine...
EFF and 18 Organizations Urge UK Policymakers to Prioritize Addressing the Roots of Online Harm
EFF joins 18 organizations in writing a letter to UK policymakers urging them to address the root causes of online harm—rather than undermining the open web through blunt restrictions.
The coalition, which includes Mozilla, Tor Project, and Open Rights Group, warns that proposed measures following the passage of the Children’s Wellbeing and Schools Bill risk fundamentally reshaping the internet in harmful ways. Chief among these proposals are sweeping age-gating requirements and access restrictions that would apply not only to young people, but effectively to all users.
While framed as efforts to protect children online, these policies rely heavily on age assurance technologies that are inaccurate, privacy-invasive, or both. As the letter notes, mandating such systems across a wide range of services—from social media and video games to VPNs and even basic websites—would force users to verify their identity simply to access the web. This creates serious risks, including expanded surveillance, data breaches, and the erosion of anonymity.
Beyond privacy concerns, the signatories argue that these measures threaten the core architecture of the open internet. Age-gating at scale could fragment the web into a patchwork of restricted jurisdictions, limit access to information, and entrench the dominance of powerful gatekeepers like app stores and platform ecosystems. In doing so, policymakers risk weakening the very qualities—interoperability, accessibility, and openness—that have made the internet a global public resource.
The letter also emphasizes what’s missing from the current policy approach: meaningful efforts to address the underlying drivers of online harm. Many digital platforms are designed to maximize engagement and profit through pervasive data collection and targeted advertising, often at the expense of user safety and autonomy. Rather than imposing access bans, the coalition calls on UK policymakers to hold companies accountable for these systemic practices and to prioritize user rights by design.
Importantly, the signatories highlight that the internet remains a vital space for young people, offering access to information, support networks, and opportunities for expression that may not exist offline. Policies that restrict access risk cutting off these lifelines without meaningfully reducing harm.
The message is clear: protecting users online requires more than heavy-handed restrictions. It demands thoughtful, rights-respecting policies that tackle the business models and design choices driving harm, while preserving the open, global nature of the web.
The Trump admin is trying to stop state climate lawsuits. It isn’t working.
Iowa joins movement of states blocking climate lawsuits
High electricity prices and heat could combine for deadly summer
Pentagon blocking 160 wind farms, industry group says
Gallego’s energy plan: Pull the Democrats to the center
California investigates Trump admin deal to cancel offshore wind project
States across wildfire-prone western US are using AI for early detection
Heat-trapping microplastics found to play role in climate change
Brazil corn ethanol clears IMO regulatory step for maritime use
Shut Down Turnkey Totalitarianism
William Binney, the NSA surveillance architect-turned-whistleblower, called it the "turnkey totalitarian state." Whoever sits in power gains access to a boundless surveillance empire that scorns privacy and crushes dissent. Politicians will come and go, but you can help us claw the tools of oppression out of government hands.
Become a Monthly Sustaining Donor
We must stand strong to uphold your privacy and free expression as democratic principles. With members around the world, EFF is empowered to use its trusted voice and formidable advocacy to protect your rights online. Whether giving monthly or one-time donations, members have helped EFF:
- Sue to stop warrantless searches of Automated License Plate Reader (ALPR) records, which reveal millions of drivers’ private habits, movements, and associations.
- Launch Rayhunter, an open source tool that empowers you to help search out cell-site simulators capable of tracking the movements of protestors, journalists, and more.
- Help journalists see through the spin of “copaganda” with our Selling Safety report, which breaks down how policing technology companies often market their tools with misleading claims.
Right now, the U.S. Congress is on the verge of renewing the international mass spying program known as Section 702, affecting millions. EFF is rallying to cut through the politics and give ordinary people a chance to stop this oppressive surveillance. It’s only possible with help from supporters like you, so join EFF today.
The New EFF Member Gear
Get this year’s new member t-shirt when you join EFF. Aptly titled “Claw Back,” the design features an orange boy swatting at the street-level surveillance equipment multiplying in our communities. You might empathize with him, but there’s a better way. Let’s end the law enforcement contracts, harmful practices, and twisted logic that enable mass spying in the first place.
You can also get a brand-new set of eleven soft and supple polyglot puffy stickers as a token of thanks. Whether you're a kid or a kid at heart, these nostalgic stickers are perfect for digital devices, lunchboxes, and notebooks alike. Our little Ghostie protects privacy in six languages: Arabic, English, Japanese, Persian, Russian, and Spanish.
And for a limited time, get a Privacy Badger Crewneck sweater to help you browse the web with confidence. The embroidered Privacy Badger mascot appears above characters that say “privacy” because human rights are universal. Millions of people around the world use Privacy Badger, EFF's free tool that devours devious scripts and cookies that twist your web browsing into a commodity for Big Tech, advertisers, and scammers.
Privacy is a human right because it gives you a fundamental measure of security and freedom. We owe it to ourselves to fight the mass surveillance used to control and intimidate people. Let’s do this. Join EFF today with a monthly donation or one-time donation and help claw back your privacy.
____________________
EFF is a member-supported U.S. 501(c)(3) organization. We've received top ratings from the nonprofit watchdog Charity Navigator since 2013! Your donation is tax-deductible as allowed by law.
Astronomers pin down the origins of a planetary odd couple
Across the Milky Way galaxy, a planetary odd couple is circling a star some 190 light years from Earth. A normally “lonely” hot Jupiter is sharing space with a mini-Neptune, in a rare and unlikely pairing that’s had astronomers puzzled since the system’s discovery in 2020.
Now MIT scientists have caught a glimpse into the atmosphere of the mini-Neptune, which is circling inside the orbit of its Jupiter-sized companion, and discovered clues to explain the origins of this unusual planetary system.
In a study appearing today in Astrophysical Journal Letters, the scientists report on new measurements of the mini-Neptune’s atmosphere, made using NASA’s James Webb Space Telescope (JWST). It is the first time astronomers have measured the composition of a mini-Neptune that resides inside the orbit of a hot Jupiter.
Their measurements reveal that the smaller planet has a “heavy” atmosphere, rich in water vapor, carbon dioxide, sulfur dioxide, and hints of methane. The planet could not have acquired such a heavy atmosphere if it had formed in its current location, very close to its star.
Instead, the scientists say their findings point to an alternate origin story: Both the mini-Neptune and the hot Jupiter may have formed much farther away, in the colder region of the protoplanetary disk. There, the planets could slowly build up atmospheres of ice and other volatiles. Over time, the planets were likely drawn in toward the star in a gradual process that kept them close, with their atmospheres intact.
The team’s results are the first to show that mini-Neptunes can form beyond a star’s “frost line.” This boundary is the minimum distance from a star at which the temperature is low enough for water to condense into ice.
“This is the first time we’ve observed the atmosphere of a planet that is inside the orbit of a hot Jupiter,” says Saugata Barat, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research and the lead author of the study. “This measurement tells us this mini-Neptune indeed formed beyond the frost line, giving confirmation that this formation channel does exist.”
The team consists of astronomers around the world, including Andrew Vanderburg, a visiting assistant professor at MIT, and co-authors from multiple other institutions, including the Center for Astrophysics | Harvard & Smithsonian, the University of Southern Queensland, the University of Texas at Austin, and Lund University.
A “one-of-a-kind” system
As their name implies, mini-Neptunes are planets less massive than Neptune. They are considered gas dwarfs: worlds made mostly of gas surrounding an inner, rocky core. Mini-Neptunes are the most common type of planet in the Milky Way, though, interestingly, no such world exists in our own solar system. Because astronomers have observed them circling a wide variety of stars in a range of planetary systems, mini-Neptunes are generally considered garden-variety planets.
But in 2020, Chelsea X. Huang, then a Torres Postdoctoral Fellow at MIT (now on the faculty at the University of Southern Queensland), discovered a mini-Neptune in a rare and puzzling circumstance: The planet appeared to be circling its star with an unlikely companion — a hot Jupiter.
The astronomers made their discovery using NASA’s Transiting Exoplanet Survey Satellite (TESS). They analyzed TESS’ measurements of TOI-1130, a star located 190 light years from Earth, and detected signs of a mini-Neptune and a hot Jupiter orbiting the star every four and eight days, respectively.
“This was a one-of-a-kind system,” says Huang. “Hot Jupiters are ‘lonely,’ meaning they don’t have companion planets inside their orbits. They are so massive, and their gravity is so strong, that whatever is inside their orbit just gets scattered away. But somehow, with this hot Jupiter, an inner companion has survived. And that raises questions about how such a system could form.”
A spot-on snapshot
The 2020 discovery of TOI-1130 and its odd planetary pair inspired Huang, Vanderburg, and their colleagues to take a closer look at the planets, and specifically, their atmospheres, with JWST. In its new study, the team reports its analysis of TOI-1130b — the inner-orbiting mini-Neptune.
Catching the planet at just the right time was their first challenge. Most planets circle their star with a regular, predictable period, like the tick of a clock. But the mini-Neptune and the hot Jupiter were found to be in “mean motion resonance”: with orbital periods of roughly four and eight days, the pair sits near a 2:1 period ratio, so each planet’s gravity pulls and tugs on the other, slightly varying the time each takes to orbit the star. This made it tricky to predict when JWST could get a clear view.
The team, led by Judith Korth of Lund University, assembled as many past observations of the system as they could, and developed a model to predict when each planet would pass by the star at an angle that JWST could observe.
“It was a challenging prediction, and we had to be spot-on,” Barat says.
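As a rough illustration of the kind of prediction involved, here is a toy model only, with made-up parameter values rather than the team's published ephemeris: a resonant companion adds a slow, quasi-periodic wobble (a transit-timing variation, or TTV) on top of an otherwise clock-like transit schedule.

```python
import numpy as np

# Toy transit-time model: a linear ephemeris (epoch t0, period P) plus a
# sinusoidal transit-timing variation induced by a resonant companion.
# All numbers below are illustrative, not fitted values for TOI-1130.
def transit_times(t0, period, n_transits, ttv_amp, ttv_period, ttv_phase=0.0):
    n = np.arange(n_transits)
    linear = t0 + n * period                                      # days
    ttv = ttv_amp * np.sin(2 * np.pi * (n * period) / ttv_period + ttv_phase)
    return linear + ttv                                           # midpoints

# A mini-Neptune-like planet: ~4-day period, TTVs a few minutes in amplitude.
times = transit_times(t0=0.0, period=4.07, n_transits=10,
                      ttv_amp=5.0 / (24 * 60),   # 5 minutes, in days
                      ttv_period=85.0)           # resonance "super-period"
print(np.round(times, 4))
```

Even a few minutes of drift per orbit, accumulated over months of unobserved transits, can push a transit outside a telescope's scheduled window, which is why the prediction had to be spot-on.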
In the end, the team was able to catch a direct and detailed snapshot of both planets.
“The beauty of JWST is that it does not observe just in one color, but at different colors, or wavelengths,” Barat explains. “And the specific wavelengths that a planet absorbs can tell you a lot about the composition of its atmosphere.”
From JWST’s measurements, the team found that the planet absorbed light at wavelengths characteristic of water, carbon dioxide, sulfur dioxide, and, to a lesser degree, methane. These molecules are heavier than the hydrogen and helium that constitute lighter atmospheres. Astronomers had assumed that, if mini-Neptunes formed very close to their star, they should have light atmospheres.
But the team’s new results counter that assumption and offer a new way that mini-Neptunes could form. Since heavier molecules were found in the atmosphere of TOI-1130b, which resides very close to its star, the scientists say the only possible explanation for its composition is that the planet formed much farther out than its current location.
The planet likely accumulated its heavy atmosphere of water and other volatiles, such as carbon dioxide and sulfur dioxide, in the icy region beyond the star’s frost line. In this much colder environment, water condenses onto bits of dust to form icy pebbles, which an infant planet can draw into its atmosphere. The ice then evaporates as the planet slowly migrates in closer to its star.
Barat says the team’s detection of heavy molecules in the atmosphere of TOI-1130b confirms that the planet — and likely its hot Jupiter companion — formed in the outskirts of the system. Through gradual migration, the two planets would be able to stay close together and keep their atmospheres intact.
“This system represents one of the rarest architectures that astronomers have ever found,” Barat says. “The observations of TOI-1130b provide the first hint that such mini-Neptunes that form beyond the water/ice line are indeed present in nature.”
This work was supported, in part, by NASA.
The tech revolution that wasn’t
In 1960, engineers at India’s Tata Institute of Fundamental Research (TIFR) built what they called an “Automatic Calculator,” the country’s first working computer. It had the same type of ferrite-core memory as IBM’s world-leading machines, and at a glance, appeared to herald a new age of tech advances in India.
Constructed with a fraction of the resources Western computer engineers had, the TIFRAC, as they called it, was a remarkable feat.
“The people working on it had never really seen an actual functioning computer,” says Dwai Banerjee, an associate professor of science, technology, and society, and the author of a new book about computing in India. “You had this ambitious group of engineers building a state-of-the-art machine with very, very limited resources. The fact they could build this is staggering.”
However, the TIFRAC was never even replicated, let alone produced at scale. The visionaries behind it wanted to turn India into an independent computing nation: a place that would produce its own equipment and become an industry power. Instead, the TIFRAC became a technological cul-de-sac, and India’s tech industry took on a very different shape. Instead of exporting equipment, it exports talent, sending skilled engineers and executives around the globe.
Now Banerjee explores those issues in the book, “Computing in the Age of Decolonization: India’s Lost Technological Revolution,” published by Princeton University Press. In it, he examines the country’s pursuit of technological self-sufficiency, and the global forces that prevailed against this vision. As a result, the country is “the world’s leading provider of inexpensive outsourcing and offshoring services, yet enjoys minimal benefits from more profitable advances in research, manufacturing, and development,” Banerjee writes.
“This book is about understanding how the current landscape of technological power came to be and the unequal way in which power is distributed across the world when it comes to anything to do with computing,” Banerjee says. “Basically, the historical conditions of the mid-20th century period are essential to understanding why the world of computing looks the way it does today.”
Computing and the geopolitics of knowledge
When India became a sovereign nation in 1947, many of its leaders believed “rapid technology-driven industrialization was the only way out of centuries of colonial underdevelopment,” as Banerjee writes. Some leapt into action, such as the remarkable nuclear physicist Homi J. Bhabha, who helped establish the TIFR.
Initially, Indian leaders hoped to gain cooperation from the U.S. and international organizations in making technological advances, but they quickly ran into Cold War politics. Computing was heavily bound up with defense matters, and India was not always fully aligned with U.S. political interests, so the flow of knowledge from the U.S. to India was distinctly limited.
“This is very much an external constraint story,” Banerjee says. “You need blueprints and not just working papers, and that’s what was guarded by the U.S. for a very long time.”
Still, the TIFR research team toiled away at its computing projects until the TIFRAC was up and running — making national headlines.
“The achievement it represents is mind-boggling,” Banerjee emphasizes. “A computer in the U.S. would have cost more to run than this entire institute in India.”
As Banerjee details in the book, the TIFRAC machine was built to grow. Its engineers matched the speed of IBM machines and planned to import larger ferrite-core memory stacks as their workload expanded. But when IBM released the FORTRAN programming language in 1957, it required four times the memory the TIFRAC machine was equipped with. India’s 1958 foreign exchange crisis then shaped the machine’s fate: The World Bank convened a U.S.-led creditor consortium that conditioned rescue loans on the opening of Indian markets to Western capital. Importing larger memory stacks became unaffordable, rendering the TIFRAC obsolete almost as soon as it was completed.
“It’s a geopolitics-of-knowledge question, not that they made a mistake,” Banerjee says of the Indian engineers. “They didn’t know IBM was about to reshape software.”
Exit IBM, enter services
Though IBM’s jump forward after the release of FORTRAN left the TIFRAC project stalled out, Indian advocates for computer manufacturing did not give up their dream. For one thing, they looked around for partnerships and other ways of moving their domestic tech industry forward. And then in 1978, India, uniquely, banned IBM from the country on account of its business practices.
That might have set the stage for India’s computer manufacturing industry to flourish. But at the same moment, countervailing forces took hold, including a widespread turn toward the private sector, rather than public-private enterprises, as the main engine of activity.
“For a moment you have this imagination come to a sort of fruition,” Banerjee observes. “But by the late 1970s and 1980s, there is a new group of people arguing for quick profits through software services, saying that this route feels less painful than setting up manufacturing, R&D, and firms for a decade or more.”
This turn toward private-sector services rather than government-involved manufacturing ultimately became a decisive factor in shaping India’s tech-sector trajectory. Rather than seeking to make machines domestically, the country became part of the global tech-services sector, while many of its engineers migrated to Silicon Valley and other tech hotspots. Global tech firms used their reach to consolidate their own position, and the idea that many countries would develop independent industries faded. This is not the outcome India’s leaders and technologists once envisioned.
“It still surprises me because of the one thing India did that no other country in the world managed to do, and that’s kick out IBM,” Banerjee says. “The fact that this vision fades is part of changing government ambition.”
Beyond the mavericks
In writing the book, Banerjee has multiple goals. One is simply shedding more light on the rich details of India’s initial computing efforts. Another is contesting the idea that India somehow naturally found a role providing services and exporting talent; that is not what many people once hoped.
Still another motif in Banerjee’s work is that the history of computing too often centers on innovators who are cast as mavericks, shrugging off conventions to upend business and society — whereas the large-scale forces of global capital and geopolitics matter greatly in technological development.
“This book suggests we often overplay those stories of individual genius, because you can be a genius with all the right ideas, but if you don’t have all the institutions supporting you, it means nothing,” Banerjee says.
Other scholars have praised “Computing in the Age of Decolonization.” Matthew L. Jones, a professor of history at Princeton University, has stated that Banerjee’s book is a “scrupulous accounting of ultimately failed Indian efforts to secure technological sovereignty in the wake of independence,” which “joins the best recent accounts of computing worldwide and transforms how we think through diverse national trajectories through the Cold War and beyond.”
For his part, Banerjee hopes a wide variety of readers will be interested in the book — and recognize that the specific case of India and computing can tell us a lot about the challenges of new types of economic growth in many places.
“India stands in for a lot of countries in the mid-20th century that had recently gained formal political independence and were thinking of ways to catch up with the rest of the advanced industrialized world,” Banerjee says. “But the power structures tied to technological and scientific advancement did not disappear. They were replaced by newer structures, including foreign policy with very specific ideas about what different countries should be doing with regard to technology. That’s where the story starts.”
Biologist Joey Davis explores how cells build complex structures
Ribosomes, the cellular machines that assemble proteins, are made from dozens of proteins and RNA molecules. Putting all of those pieces together is a complex puzzle — one that MIT Associate Professor Joey Davis PhD ’10 revels in trying to solve.
Understanding how these structures form and later break down could help researchers learn more about how disruptions of these fundamental processes can lead to disease. But, as Davis points out, it’s also an interesting biological question.
“Our long-term goal is to really understand how the natural world assembles these huge complexes rapidly and efficiently. It’s a fundamentally interesting question to think about how these things get put together,” he says.
His work has helped reveal that unlike building a house, which happens in a prescribed sequence of steps — pouring the foundation, building the frame, putting on the roof, then doing electrical and plumbing work — ribosomes can be assembled in a more flexible way. Cells can even skip an assembly step and then come back to it later.
“In these natural systems, it seems like the assembly pathways are much more dynamic and flexible,” he says. “It appears that evolution has selected pathways that aren’t strictly ordered in the way we would think about an assembly line, where you always put in one component, then the next, and then the next. We’re excited to understand the selective advantages of such approaches.”
A love of discovery
Davis’ interest in how things are put together developed early in life, inspired by his father, a carpenter who framed houses. During the mid-1980s, the family moved from Colorado to Southern California, where his father worked in construction during a housing boom there.
“I was always interested in building things, which I think probably came from being around my dad and other builders,” Davis says.
As an undergraduate at the University of California at Berkeley, where he majored in computer science and biological engineering, Davis’ interests turned toward smaller scales, in the realm of cells and molecules. During his junior year, he started working in the lab of chemistry professor Michael Marletta, who studies molecular-level biological interactions.
In the lab, Davis investigated how enzymes that contain heme are able to preferentially bind to either oxygen or nitric oxide, two gases that are very similar in structure. That work kindled a love of studying the natural world and pursuing discoveries in fundamental science.
“Being in the Marletta lab and seeing students and postdocs that were really passionate about these problems had a big impact on me,” Davis says. “The goal was to understand the fundamentals of how molecular discrimination works, and the idea of discovery for the sake of discovery was thrilling.”
After graduating from Berkeley, Davis spent another year working in Marletta’s lab, and then a year working odd jobs, before heading to MIT to pursue a PhD in biology. There, he worked with Professor Bob Sauer, now emeritus, who studied the relationship between protein structure and function, with a particular focus on the molecular machines that degrade or remodel proteins.
Davis’ thesis research centered on enzymes called AAA proteases, which remove damaged proteins from cellular membranes and send them to cell organelles that break them down. In addition to studying the structure and function of the proteases, Davis worked on ways to engineer them to tag specific proteins for destruction.
That work led him into synthetic biology, which he used to develop genetic parts that drive production of proteins of interest. Some of those parts ended up being used by the biotech startup Ginkgo Bioworks, where Davis took a job as a senior scientist after graduating.
Working at Ginkgo Bioworks allowed Davis to stay in Boston while his partner finished her PhD. The couple then moved back to California, where Davis worked as a postdoc at Scripps Research, which was home to one of the first direct electron detection cameras for cryo-electron microscopy (cryo-EM). These detectors allow researchers to generate structures with near atomic resolution. At Scripps, Davis began using them to study ribosomes as they were being assembled.
Peering into the ribosome
After joining the MIT faculty in 2017, Davis continued his work on ribosomes and assembled a lab group that includes students from a variety of backgrounds who work together to develop new ways to explore biological phenomena.
“I have a mix of method developers and biologists in the group, and the work from each of them informs each other,” Davis says. “My lab goes back and forth between building sets of tools to answer biological questions, and then as we’re answering those questions, it motivates the next generation of tool development.”
During ribosome assembly, RNA molecules fold themselves into the correct shapes, creating docking sites for proteins to attach. Then, more RNA molecules come in and fold themselves into the structure.
“It’s a beautifully coupled process by which the cell folds hundreds of RNA helices and binds on the order of 50 proteins, and it does it in two minutes from start to finish. E. coli does this 100,000 times per hour, and it’s amazing how rapid and efficient the process is,” Davis says.
Cryo-EM allows scientists to capture this process in minute detail. It can be used to take hundreds of thousands of two-dimensional images, from different angles, of ribosome samples frozen in a thin layer of ice. Computer algorithms then piece together these images into a three-dimensional representation of the ribosome.
To gain insight into how ribosomes are assembled, researchers can stall the process at different points and then analyze the resulting structures. In 2021, Davis’ lab developed a new method called CryoDRGN, which uses neural networks to analyze cryo-EM data and generate the full ensemble of structures present in the sample.
This work has shown that when certain steps of ribosome assembly are blocked, many different structures result, suggesting that the assembly can occur in a variety of ways.
In future work, Davis aims to dramatically increase the throughput of cryo-EM to generate datasets of protein structures that could help improve the AI-based models that are now used to predict protein structures.
“There are still huge swaths of sequence space that these models are very poor at predicting, but if we could collect data on those sequences en masse, that could potentially serve as key training data for a next-generation protein structure prediction method that could fill out that space,” he says.
EFF Submission to UK Consultation on Digital ID
Last September, the United Kingdom’s Prime Minister Keir Starmer announced plans to introduce a new digital ID scheme in the country. The scheme aims to make it easier for people to prove their identities by creating a virtual ID on personal devices with information like names, date of birth, nationality or residency status, and a photo to verify their right to live and work in the country.
Since then, EFF has joined UK-based civil society organizations in urging the government to reconsider this proposal. In one joint letter from December, ahead of Parliament’s debate on a petition signed by 2.9 million people calling for an end to the government’s plans to roll out a national digital ID, EFF and 12 other civil society organizations urged MPs to reject the Labour government’s proposal.
Nevertheless, politicians have continued to explore ways to build out a digital ID system in the country, often fluctuating between different ideas and conceptualisations for such a scheme. In their search for clarity, the government launched a consultation, ‘Making public services work for you with your digital identity,’ seeking views on a proposed national digital ID system in the UK.
EFF submitted comments to this consultation, focusing on six interconnected issues:
- Mission creep
- Infringements on privacy rights
- Serious security risks
- Reliance on inaccurate and unproven technologies
- Discrimination and exclusion
- The deepening of entrenched power imbalances between the state and the public.
Even the strongest recommended safeguards cannot resolve these issues, nor the core problem: a mandatory digital ID scheme shifts power dramatically away from individuals and toward the state. Such schemes are pursued as a technological solution to offline problems, but they instead allow the state to determine what you can access, not just verify who you are, by functioning as a key that opens or closes doors to essential services and experiences.
No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. It is essential that the UK government listen to people in the country and say no to digital ID.
Read our submission in full here.
Rett syndrome study highlights potential for personalized treatments
Although many studies approach the developmental disorder Rett syndrome as a single condition arising from general loss of function in the gene MECP2, a new study by neuroscientists in The Picower Institute for Learning and Memory at MIT shows that two different mutations of the gene caused many distinct abnormalities in lab cultures. Moreover, correcting key differences made by each mutation required different treatments.
“Individual mutations matter,” says Mriganka Sur, senior author of the new open-access study in Nature Communications and the Newton Professor in the Picower Institute and the Department of Brain and Cognitive Sciences. “This is an approach to personalizing treatment, even for a single-gene disorder.”
The study employed advanced 3D human brain tissue cultures called “organoids” or “minibrains” derived from skin cells or blood cells donated by Rett syndrome patients with each mutation. Lead author Tatsuya Osaki, a Picower Institute research scientist, says that the organoids’ ability to model the specific consequences of each mutation enabled him to gain mutation-specific insights that haven’t emerged in prior studies, where scientists just knocked out MECP2 overall. The organoids also provided a novel opportunity to understand how each mutation affected different cell types and their interactions.
Distinct effects
More than 800 mutations in MECP2 can cause Rett syndrome, but just eight account for more than 60 percent of cases. Sur and Osaki chose one of these, R306C, which involves a difference of just one DNA base pair (916C>T), because it represents 7 to 8 percent of Rett syndrome cases. The other mutation they chose, V247X, is much rarer and more severe: a single DNA base deletion (705Gdel) cuts production of the gene’s protein product short, leaving the protein not just errant but incomplete.
In organoids cultured for three months, each mutation produced some common but also some distinct consequences compared to control organoids with non-mutated MECP2. For many of their experiments, the team used “three-photon” microscopes capable of cellular-level resolution all the way through the organoids’ roughly 1-millimeter thickness, resolving both their structure (via “third-harmonic generation” imaging) and the live activity patterns of their neurons (via calcium fluorescence).
For instance, the scientists observed that the V247X organoids exhibited several structural differences from their controls — they were larger and had different thicknesses of various layers — but the R306C ones were much more like their controls. Organoids harboring either mutation exhibited less-developed axon projections from their neurons, compared to their control comparators.
Looking at properties of neural activity and connectivity in the organoids, the scientists found some similar deficits across both mutations. Both showed reduced spiking activity and reduced synchrony between neurons compared to their controls.
But when the scientists looked at other properties, the organoids started to diverge from each other. In particular, a measure of the efficiency of their network structure called “small-world propensity” (SWP) was decreased in R306C organoids and increased in V247X ones, compared to controls. This means that both mutations altered the development of typical network structures for information processing, but in different directions.
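For a sense of what such a metric captures, here is a minimal, illustrative sketch in Python using the networkx library. Small-world propensity itself (which normalizes clustering and path length against both lattice and random null models) is not implemented in networkx, so the sketch uses the related small-world coefficient sigma as a stand-in, and the graph is synthetic rather than data from the study.

```python
import networkx as nx

# A synthetic stand-in for a functional-connectivity network inferred
# from neuronal activity: a Watts-Strogatz small-world graph.
G = nx.connected_watts_strogatz_graph(n=60, k=6, p=0.1, seed=1)

# Small-world networks pair high clustering (like a lattice) with short
# average path lengths (like a random graph).
C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

# networkx's sigma = (C / C_random) / (L / L_random); values above 1
# suggest small-world organization. SWP proper adds a lattice baseline.
sigma = nx.sigma(G, niter=20, nrand=5, seed=1)
print(f"clustering={C:.3f}, path length={L:.3f}, sigma={sigma:.2f}")
```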
To ensure that their results were meaningful for Rett syndrome patients, the team collaborated with Charles Nelson at Boston Children’s Hospital, whose team measured EEG in several children with different Rett mutations. Although the sample was small, the researchers found indications that SWP was altered in the volunteers’ EEG readings, much as in the organoids.
Finally, by labeling excitatory neurons to flash in one color and inhibitory neurons to flash in a different color, the scientists were able to see that connectivity between the different neural types differed significantly from controls in the V247X organoids.
Treatment tests
All the testing showed that each mutation caused several changes in organoid structure, activity, and connectivity, and that the deviations were often particular to the specific mutation.
To understand how these differences emerged, and how they might be corrected, Sur and Osaki’s team turned to examining how the cells in each kind of organoid might be expressing their genes differently from controls. Differences in gene expression often alter key molecular pathways in cells, disrupting their activity and function. Analysis with a technique called single-cell RNA sequencing indeed yielded hundreds of differences in each organoid type, with some genes expressed more than in controls and others underexpressed.
For instance, the analyses revealed that in R306C organoids a gene called HDAC2 was overexpressed. That protein is known for repressing expression of other genes. Meanwhile, in the V247X organoids, the scientists found reduced expression of genes for some receptors of the inhibitory neurotransmitter GABA. These organoids also showed defects in the function of astrocyte cells, which support many aspects of neural function.
Organoids with either mutation also exhibited aberrations in molecular pathways that enable the development of circuit connections between neurons, called synapses.
Given the specific defects they observed, the scientists decided to treat the organoids with a drug that can inhibit HDAC2 activity and another that increases GABA’s efficacy. The HDAC2 inhibitor restored neuronal activity and SWP to normal levels in the R306C organoids, and the GABA “agonist” baclofen restored SWP to control levels in the V247X organoids.
Osaki notes each of the treatment drugs has already been studied in other disease contexts, meaning they are well-understood drugs that could be repurposed.
Now that the researchers have developed an organoid platform for dissecting the consequences of individual mutations, both identifying their roots and testing treatments, they plan to apply it to studying four more mutations, Sur says, comparing all of them against a standardized control organoid.
In addition to Sur, Osaki, and Nelson, the paper’s other authors are Chloe Delepine, Yuma Osako, Devorah Kranz, April Levin, and Michela Fagiolini.
The National Institutes of Health, a MURI grant, The Freedom Together Foundation, and the Simons Foundation provided support for the research.
