MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Simulation-based pipeline tailors training data for dexterous robots

Fri, 07/11/2025 - 3:20pm

When ChatGPT or Gemini give what seems to be an expert response to your burning questions, you may not realize how much information it relies on to give that reply. Like other popular generative artificial intelligence (AI) models, these chatbots rely on backbone systems called foundation models that train on billions, or even trillions, of data points.

In a similar vein, engineers are hoping to build foundation models that train a range of robots on new skills like picking up, moving, and putting down objects in places like homes and factories. The problem is that it’s difficult to collect and transfer instructional data across robotic systems. You could teach your system by teleoperating the hardware step-by-step using technology like virtual reality (VR), but that can be time-consuming. Training on videos from the internet is less instructive, since the clips don’t provide a step-by-step, specialized task walk-through for particular robots.

A simulation-driven approach called “PhysicsGen” from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Robotics and AI Institute customizes robot training data to help robots find the most efficient movements for a task. The system can multiply a few dozen VR demonstrations into nearly 3,000 simulations per machine. These high-quality instructions are then mapped to the precise configurations of mechanical companions like robotic arms and hands. 

PhysicsGen creates data that generalize to specific robots and conditions via a three-step process. First, a VR headset tracks how humans manipulate objects like blocks using their hands. These interactions are mapped in a 3D physics simulator at the same time, visualizing the key points of our hands as small spheres that mirror our gestures. For example, if you flipped a toy over, you’d see 3D shapes representing different parts of your hands rotating a virtual version of that object.

The pipeline then remaps these points to a 3D model of the setup of a specific machine (like a robotic arm), moving them to the precise “joints” where a system twists and turns. Finally, PhysicsGen uses trajectory optimization — essentially simulating the most efficient motions to complete a task — so the robot knows the best ways to do things like repositioning a box.
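The trajectory-optimization step can be illustrated with a toy version: discretize a joint-angle path between fixed start and goal configurations, then iteratively minimize a motion cost (here, the summed squared velocity between steps). This is only a sketch of the general idea; PhysicsGen's actual optimizer and cost functions, which account for the robot's dynamics, are far richer.

```python
# Toy trajectory optimization: smooth a 1D joint-angle path between a fixed
# start and goal by gradient descent on the summed squared velocity.
# Hypothetical stand-in for the article's "trajectory optimization" step.

def optimize_trajectory(start, goal, steps=20, iters=5000, lr=0.1):
    # Straight-line initial guess, perturbed at interior points so the
    # optimizer has work to do; the endpoints act as boundary constraints.
    traj = [start + (goal - start) * i / (steps - 1) for i in range(steps)]
    for i in range(1, steps - 1):
        traj[i] += 0.5
    for _ in range(iters):
        for i in range(1, steps - 1):
            # Gradient of sum((q[i+1] - q[i])^2) with respect to interior
            # point i: pushes each point toward the average of its neighbors.
            grad = 2 * traj[i] - traj[i - 1] - traj[i + 1]
            traj[i] -= lr * grad
    return traj

path = optimize_trajectory(0.0, 1.0)  # relaxes toward the smoothest path
```

The interior points settle onto the lowest-cost (here, straight-line) path while the pinned endpoints preserve the task's start and goal configurations; a real planner would add collision, contact, and joint-limit constraints on top of this skeleton.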

Each simulation is a detailed training data point that walks a robot through potential ways to handle objects. When implemented into a policy (or the action plan that the robot follows), the machine has a variety of ways to approach a task, and can try out different motions if one doesn’t work.

“We’re creating robot-specific data without needing humans to re-record specialized demonstrations for each machine,” says Lujie Yang, an MIT PhD student in electrical engineering and computer science and CSAIL affiliate who is the lead author of a new paper introducing the project. “We’re scaling up the data in an autonomous and efficient way, making task instructions useful to a wider range of machines.”

Generating so many instructional trajectories for robots could eventually help engineers build a massive dataset to guide machines like robotic arms and dexterous hands. For example, the pipeline might help two robotic arms collaborate on picking up warehouse items and placing them in the right boxes for deliveries. The system may also guide two robots to work together in a household on tasks like putting away cups.

PhysicsGen’s potential also extends to converting data designed for older robots or different environments into useful instructions for new machines. “Despite being collected for a specific type of robot, we can revive these prior datasets to make them more generally useful,” adds Yang.

Addition by multiplication

PhysicsGen turned just 24 human demonstrations into thousands of simulated ones, helping both digital and real-world robots reorient objects.

Yang and her colleagues first tested their pipeline in a virtual experiment where a floating robotic hand needed to rotate a block into a target position. The digital robot executed the task with 81 percent accuracy by training on PhysicsGen’s massive dataset, a 60 percent improvement over a baseline that only learned from human demonstrations.

The researchers also found that PhysicsGen could improve how virtual robotic arms collaborate to manipulate objects. Their system created extra training data that helped two pairs of robots successfully accomplish tasks as much as 30 percent more often than a purely human-taught baseline.

In an experiment with a pair of real-world robotic arms, the researchers observed similar improvements as the machines teamed up to flip a large box into its designated position. When the robots deviated from the intended trajectory or mishandled the object, they were able to recover mid-task by referencing alternative trajectories from their library of instructional data.

Senior author Russ Tedrake, who is the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, adds that this imitation-guided data generation technique combines the strengths of human demonstration with the power of robot motion planning algorithms.

“Even a single demonstration from a human can make the motion planning problem much easier,” says Tedrake, who is also a senior vice president of large behavior models at the Toyota Research Institute and CSAIL principal investigator. “In the future, perhaps the foundation models will be able to provide this information, and this type of data generation technique will provide a type of post-training recipe for that model.”

The future of PhysicsGen

Soon, PhysicsGen may be extended to a new frontier: diversifying the tasks a machine can execute.

“We’d like to use PhysicsGen to teach a robot to pour water when it’s only been trained to put away dishes, for example,” says Yang. “Our pipeline doesn’t just generate dynamically feasible motions for familiar tasks; it also has the potential of creating a diverse library of physical interactions that we believe can serve as building blocks for accomplishing entirely new tasks a human hasn’t demonstrated.”

Creating lots of widely applicable training data may eventually help build a foundation model for robots, though MIT researchers caution that this is a somewhat distant goal. The CSAIL-led team is investigating how PhysicsGen can harness vast, unstructured resources — like internet videos — as seeds for simulation. The goal: transform everyday visual content into rich, robot-ready data that could teach machines to perform tasks no one explicitly showed them.

Yang and her colleagues also aim to make PhysicsGen even more useful for robots with diverse shapes and configurations in the future. To make that happen, they plan to leverage datasets with demonstrations of real robots, capturing how robotic joints move instead of human ones.

The researchers also plan to incorporate reinforcement learning, where an AI system learns by trial and error, to make PhysicsGen expand its dataset beyond human-provided examples. They may augment their pipeline with advanced perception techniques to help a robot perceive and interpret its environment visually, allowing the machine to analyze and adapt to the complexities of the physical world.

For now, PhysicsGen shows how AI can help us teach different robots to manipulate objects within the same category, particularly rigid ones. The pipeline may soon help robots find the best ways to handle soft items (like fruits) and deformable ones (like clay), but those interactions aren’t easy to simulate yet.

Yang and Tedrake wrote the paper with two CSAIL colleagues: co-lead author and MIT PhD student Hyung Ju “Terry” Suh SM ’22 and MIT PhD student Bernhard Paus Græsdal. Robotics and AI Institute researchers Tong Zhao ’22, MEng ’23, Tarik Kelestemur, Jiuguang Wang, and Tao Pang PhD ’23 are also authors. Their work was supported by the Robotics and AI Institute and Amazon.

The researchers recently presented their work at the Robotics: Science and Systems conference.

New AI system uncovers hidden cell subtypes, boosts precision medicine

Fri, 07/11/2025 - 2:40pm

In order to produce effective targeted therapies for cancer, scientists need to isolate the genetic and phenotypic characteristics of cancer cells, both within and across different tumors, because those differences impact how tumors respond to treatment.

Part of this work requires a deep understanding of the RNA or protein molecules each cancer cell expresses, where it is located in the tumor, and what it looks like under a microscope.

Traditionally, scientists have looked at one or more of these aspects separately, but now a new deep learning AI tool, CellLENS (Cell Local Environment and Neighborhood Scan), fuses all three domains together, using a combination of convolutional neural networks and graph neural networks to build a comprehensive digital profile for every single cell. This allows the system to group cells with similar biology — effectively separating even those that appear very similar in isolation, but behave differently depending on their surroundings.

The study, published recently in Nature Immunology, details the results of a collaboration between researchers from MIT, Harvard Medical School, Yale University, Stanford University, and University of Pennsylvania — an effort led by Bokai Zhu, an MIT postdoc and member of the Broad Institute of MIT and Harvard and the Ragon Institute of MGH, MIT, and Harvard.

Zhu explains the impact of this new tool: “Initially we would say, oh, I found a cell. This is called a T cell. Using the same dataset, by applying CellLENS, now I can say this is a T cell, and it is currently attacking a specific tumor boundary in a patient.

“I can use existing information to better define what a cell is, what is the subpopulation of that cell, what that cell is doing, and what is the potential functional readout of that cell. This method may be used to identify a new biomarker, which provides specific and detailed information about diseased cells, allowing for more targeted therapy development.”

This is an important advance because current methodologies often miss critical molecular or contextual information — for example, immunotherapies may target cells that only exist at the boundary of a tumor, limiting efficacy. By using deep learning, the researchers can detect many different layers of information with CellLENS, including morphology and where the cell is spatially in a tissue.

When applied to samples from healthy tissue and several types of cancer, including lymphoma and liver cancer, CellLENS uncovered rare immune cell subtypes and revealed how their activity and location relate to disease processes — such as tumor infiltration or immune suppression.

These discoveries could help scientists better understand how the immune system interacts with tumors and pave the way for more precise cancer diagnostics and immunotherapies.

“I’m extremely excited by the potential of new AI tools, like CellLENS, to help us more holistically understand aberrant cellular behaviors within tissues,” says co-author Alex K. Shalek, the director of the Institute for Medical Engineering and Science (IMES), the J. W. Kieckhefer Professor in IMES and Chemistry, and an extramural member of the Koch Institute for Integrative Cancer Research at MIT, as well as an Institute member of the Broad Institute and a member of the Ragon Institute. “We can now measure a tremendous amount of information about individual cells and their tissue contexts with cutting-edge, multi-omic assays. Effectively leveraging that data to nominate new therapeutic leads is a critical step in developing improved interventions. When coupled with the right input data and careful downstream validations, such tools promise to accelerate our ability to positively impact human health and wellness.”

Study shows a link between obesity and what’s on local restaurant menus

Fri, 07/11/2025 - 11:35am

For many years, health experts have been concerned about “food deserts,” places where residents lack good nutritional options. Now, an MIT-led study of three major global cities uses a new, granular method to examine the issue, and concludes that having fewer and less nutritious eating options nearby correlates with obesity and other health outcomes.

Rather than just mapping geographic areas, the researchers examined the dietary value of millions of food items on roughly 30,000 restaurant menus and derived a more precise assessment of the connection between neighborhoods and nutrition.

“We show that what is sold in a restaurant has a direct correlation to people’s health,” says MIT researcher Fabio Duarte, co-author of a newly published paper outlining the study’s results. “The food landscape matters.”

The open-access paper, “Data-driven nutritional assessment of urban food landscapes: insights from Boston, London, Dubai,” was published this week in Nature: Scientific Reports.

The co-authors are Michael Tufano, a PhD student at Wageningen University, in the Netherlands; Duarte, associate director of MIT’s Senseable City Lab, which uses data to study cities as dynamic systems; Martina Mazzarello, a postdoc at the Senseable City Lab; Javad Eshtiyagh, a research fellow at the Senseable City Lab; Carlo Ratti, professor of the practice and director of the Senseable City Lab; and Guido Camps, a senior researcher at Wageningen University.

Scanning the menu

To conduct the study, the researchers examined menus from Boston, Dubai, and London, in the summer of 2023, compiling a database of millions of items available through popular food-delivery platforms. The team then evaluated the food items as rated by the USDA’s FoodData Central database, an information bank with 375,000 kinds of food products listed. The study deployed two main metrics: the Meal Balance Index and the Nutrient-Rich Foods Index.

The researchers examined about 222,000 menu items from over 2,000 restaurants in Boston, about 1.6 million menu items from roughly 9,000 restaurants in Dubai, and about 3.1 million menu items from about 18,000 restaurants in London. In Boston, about 71 percent of the items were in the USDA database; in Dubai and London, that figure was 42 percent and 56 percent, respectively.

The team then rated the nutritional value of the items appearing on menus, and correlated the food data with health-outcome data from Boston and London. In London, they found a clear correlation between neighborhood menu offerings and obesity, or the lack thereof, with a slightly less firm correlation in Boston. Areas with food options that include a lot of dietary fiber, sometimes along with fruits and vegetables, tend to have better health data.
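The correlation step can be sketched at neighborhood level: score each area by the mean nutritional value of its menu items, then compute the correlation with a health outcome such as obesity rate. The data and scoring below are hypothetical, and the paper's actual metrics (the Meal Balance Index and Nutrient-Rich Foods Index) are more involved.

```python
# Illustrative sketch of correlating neighborhood menu quality with a health
# outcome. Scores and rates are made-up numbers, not the study's data.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical neighborhoods: mean menu nutrition score (higher = more
# nutritious) and observed obesity rate (percent of residents).
menu_scores   = [0.82, 0.61, 0.45, 0.70, 0.38]
obesity_rates = [18.0, 24.0, 31.0, 21.0, 33.0]

r = pearson(menu_scores, obesity_rates)  # negative: better menus, less obesity
```

A strongly negative coefficient is the shape of result the study reports for London; the real analysis works over millions of menu items rather than five toy neighborhoods.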

In Dubai, the researchers did not have the same types of health data available but did observe a strong correlation between rental prices and the nutritional value of neighborhood-level food, suggesting that wealthier residents have better nourishment options.

“At the item level, when we have less nutritional food, we see more cases of obesity,” Tufano says. “It’s true that not only do we have more fast food in poor neighborhoods, but the nutritional value is not the same.”

Re-mapping the food landscape

By conducting the study in this fashion, the scholars added a layer of analysis to past studies of food deserts. While past work has broken ground by identifying neighborhoods and areas lacking good food access, this research makes a more comprehensive assessment of what people consume. The research moves toward evaluating the complex mix of food available in any given area, a mix that exists even in areas with more limited options.

“We were not satisfied with this idea that if you only have fast food, it’s a food desert, but if you have a Whole Foods, it’s not,” Duarte says. “It’s not necessarily like that.”

For the Senseable City Lab researchers, the study is a new technique further enabling them to understand city dynamics and the effects of the urban environment on health. Past lab studies have often focused on issues such as urban mobility, while extending to matters such as air pollution, among other topics.

Being able to study food and health at the neighborhood level, though, is still another example of the ways that data-rich spheres of life can be studied in close detail.

“When we started working on cities and data, the data resolution was so low,” Ratti says. “Today the amount of data is so immense we see this great opportunity to look at cities and see the influence of the urban environment as a big determinant of health. We see this as one of the new frontiers of our lab. It’s amazing how we can now look at this very precisely in cities.”

How an MIT professor introduced hundreds of thousands of students to neuroscience

Fri, 07/11/2025 - 9:00am

From the very beginning, MIT Professor Mark Bear’s philosophy for the textbook “Neuroscience: Exploring the Brain” was to provide an accessible and exciting introduction to the field while still giving undergraduates a rigorous scientific foundation. In the 30 years since its first printing in 1995, the treasured 975-page tome has gone on to become the leading introductory neuroscience textbook, reaching hundreds of thousands of students at hundreds of universities around the world.

“We strive to present the hard science without making the science hard,” says Bear, the Picower Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. The fifth edition of the textbook is out today from the publisher Jones & Bartlett Learning.

Bear says the book is conceived, written, and illustrated to instill students with the state of knowledge in the field without assuming prior sophistication in science. When he first started writing it in the late 1980s — in an effort soon joined by his co-authors and former Brown University colleagues Barry Connors and Michael Paradiso — there simply were no undergraduate neuroscience textbooks. Up until then, first as a graduate teaching assistant and then as a young professor, Bear taught Brown’s pioneering introductory neuroscience class with a spiral-bound stack of photocopied studies and other scrounged readings.

Don’t overwhelm

Because universities were only beginning to launch neuroscience classes and majors at the time, Bear recalls that it was hard to find a publisher. The demand was just too uncertain. With an unsure market, Bear says, the original publisher, Williams & Wilkins, wanted to keep costs down by printing only in black and white. But Bear and his co-authors insisted on color. Consistent with their philosophy for the book, they wanted students, even before they began reading, to be able to learn from attractive, high-quality illustrations.

“Rather than those that speak a thousand words, we wanted to create illustrations that each make a single point,” Bear says. “We don’t want to overwhelm students with a bunch of detail. If people want to know what’s in our book, just look at the pictures.”

Indeed, if the book had struck students as impenetrable and dull, Bear says, he and his co-authors would have squandered the advantage they had in presenting their subject: the inherently fascinating and exciting brain.

“Most good scientists are extremely enthusiastic about the science. It’s exciting. It’s fun. It turns them on,” Bear says. “We try to communicate the joy. We’re so lucky because the object of our affection is the brain.”

To help bring that joy and excitement across, another signature of the book throughout its 30-year history has been the way it presents the process of discovery alongside the discoveries themselves, Bear says. While it’s instructive to provide students with the experimental evidence that supports the concepts they are learning, it would bog down the text to delineate the details of every experiment. Instead, Bear, Connors, and Paradiso have chosen to highlight the process of discovery via one-page guest essays by prominent neuroscientists who share their discovery stories personally. Each edition has featured about 25 such “Path of Discovery” essays, so more than 100 scientists have participated, including several Nobel Prize winners, such as the Picower Institute’s founding director, Susumu Tonegawa.

The new edition includes Path of Discovery essays by current Picower Institute Director Li-Huei Tsai and Picower Institute colleague Emery N. Brown. Tsai recounts her discovery that sensory stimulation of 40 Hz rhythms in the brain can trigger a health-promoting response among many different cell types. Brown writes about how various biological cycles and rhythms in the brain and body, such as the circadian rhythms and brain waves, help organize our daily lives.

Immense impact

Jones & Bartlett reports that more than 470 colleges and universities in 48 U.S. states and the District of Columbia have used the fourth edition of the book. Various editions have also been translated into seven other languages, including Chinese, French, Portuguese, and Spanish. There are hundreds of reviews on Amazon.com with an average around 4.6 stars. One reviewer wrote about the fourth edition: “I never knew it was possible to love a textbook before!”

The reviews sometimes go beyond mere internet postings. Once, after Bear received an award in Brazil, he found himself swarmed at the podium by scores of students eager for him to sign their copies of the book. And earlier this year, when Bear needed surgery, the anesthesiologist was excited to meet him.

“The anesthesiologist was like, ‘Are you the Mark Bear who wrote the textbook?,’ and she was so excited, because she said, ‘This book changed my life,’” Bear recalls. “After I recovered, she showed up in the ICU for me to sign it. All of us authors have had this experience that there are people whose lives we’ve touched.”

While Bear is proud that so many students have benefited from the book, he also notes that teaching and textbook writing have benefited him as a scientist. They have helped him present his research more clearly, he says, and have given him a broad perspective on what’s truly important in the field.

“Experience teaching will influence the impact of your own science by making you more able to effectively communicate it,” Bear says. “And the teacher has a difficult job of surveying a field and saying, ‘I’ve got to capture the important advances and set aside the less-important stuff.’ It gives you a perspective that helps you to discriminate between more-important and less-important problems in your own research.”

Over the course of 30 years via their carefully crafted book, Bear, Connors, and Paradiso have lent that perspective to generations of students. And the next generation will start with today’s publication of the new edition.

Gift from Dick Larson establishes Distinguished Professorship in Data, Systems, and Society

Thu, 07/10/2025 - 4:30pm

The MIT Institute for Data, Systems, and Society (IDSS) announced the creation of a new endowed chair made possible by the generosity of IDSS professor post-tenure and “MIT lifer” Richard “Dick” Larson. Effective July 1, the fund provides a full professorship for senior IDSS faculty: the Distinguished Professorship in Data, Systems, and Society.

“As a faculty member, MIT has not only accepted but embraced my several mid-career changes of direction,” says Larson. “I have called five different academic departments my home, starting with Electrical Engineering (that is what it was called in the 1960s) and now finalized with the interdepartmental, interdisciplinary IDSS — Institute for Data, Systems and Society. Those beautiful three words — data, systems, society — they represent my energy and commitment over the second half of my career. My gifted chair is an effort to keep alive those three words, with others following me doing research, teaching and mentoring centered around data, systems, society.”

Larson’s career has focused his operations research and systems expertise on a wide variety of problems, in both public and private sectors. His contributions span the fields of urban service systems (especially emergency response systems), disaster planning, pandemics, queueing, logistics, technology-enabled education, smart-energy houses, and workforce planning. His latest book, “Model Thinking for Everyday Life,” draws on decades of experience as a champion of STEM education at MIT and beyond, such as his leadership of MIT BLOSSOMS.

“Dick Larson has been making an impact at MIT for over half a century,” says IDSS Director Fotini Christia, the Ford International Professor in Political Science. “This gift extends his already considerable legacy and ensures his impact will continue to be felt for many years to come.”

Christia is pleased that IDSS and brain and cognitive science professor Alexander “Sasha” Rakhlin is the inaugural holder of the new professorship. The selection recognizes Rakhlin’s distinguished scholarly record, dedicated service to IDSS, excellence in teaching, and contributions to research in statistics and computation.

“Sasha’s analysis of neural network complexity, and his work developing tools for online prediction, are perfect examples of research which builds bridges across disciplines, and also connects different departments and units at MIT,” says Michale Fee, the Glen V. and Phyllis F. Dorflinger Professor of Neuroscience, and head of the Department of Brain and Cognitive Sciences. “It’s wonderful to see Sasha’s contributions recognized in this way, and I’m grateful to Dick Larson for supporting this vision.”

Rakhlin’s research is in machine learning, with an emphasis on statistics and computation. He is interested in formalizing the process of learning, in analyzing learning models, and in deriving and implementing emerging learning methods. A significant thrust of his research is in developing theoretical and algorithmic tools for online prediction, a learning framework where data arrives in a sequential fashion.

“I am honored to be the inaugural holder of the Distinguished Professorship in Data, Systems, and Society,” says Rakhlin. “Professor Larson’s commitment to education and service to MIT both serve as models to follow.”

Walk-through screening system enhances security at airports nationwide

Thu, 07/10/2025 - 4:20pm

A new security screener that people can simply walk past may soon be coming to an airport near you. Last year, U.S. airports nationwide began adopting HEXWAVE — a commercialized walkthrough security screening system based on microwave imaging technology developed at MIT Lincoln Laboratory — to satisfy a new Transportation Security Administration (TSA) mandate for enhanced employee screening to detect metallic and nonmetallic threats. The TSA is now in the process of evaluating HEXWAVE as a potential replacement for metal detectors to screen PreCheck passengers.

Typically, when you arrive at an airport security checkpoint line, you place your carry-on items on the conveyor belt, remove your shoes and any metallic items, and enter a body scanner. As you hold still for a few seconds with your feet spread apart and your arms extended over your head, the scanner creates a generic, featureless 3D body outline revealing any metallic or nonmetallic concealed weapons or other prohibited items.

Requiring individuals to stop, remove clothing and belongings, and pose for scans impedes traffic flow in airports and other highly populated venues, such as stadiums, shopping malls, mass transit stations, and schools. To enable more efficient screening of unstructured crowds and ensure public safety, the Department of Homeland Security (DHS) Science and Technology Directorate (S&T) sponsored Lincoln Laboratory to prototype a high-resolution imaging system capable of scanning people and their belongings as they walk by. This R&D effort was conducted as part of S&T's Surface Transportation Explosive Threat Detection Program, which aims to provide the surface-transportation end-user community (e.g., mass transit) with a layered and integrated capability to detect threat items at the speed of the traveling public.

The laboratory's prototype microwave imager, which consists of a set of antennas installed on flat panels, operates under the same fundamental principle as existing body scanners: low-energy radio waves (less powerful than those transmitted by a cellphone) are transmitted from antennas toward a person's body and reflect off skin and any hidden objects; the reflected waves return to the antennas and are processed by a computer to create an image, which security personnel then review to identify any potential concealed threats.
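The reflect-and-reconstruct principle can be sketched with a toy delay-and-sum imager: each antenna records the round-trip travel time of a reflected wave, and back-projecting those delays onto a grid makes a reflector light up where the delays agree. The geometry, kernel, and all numbers below are illustrative; the laboratory's actual reconstruction algorithms are far more sophisticated.

```python
# Toy delay-and-sum reconstruction: score each candidate pixel by how well
# it explains the round-trip delays measured at a panel of antennas.
import math

C = 3e8  # propagation speed (m/s)

def reconstruct(antennas, delays, grid):
    image = {}
    for px in grid:
        score = 0.0
        for ant, t in zip(antennas, delays):
            expected = 2 * math.dist(ant, px) / C   # round trip to this pixel
            score += math.exp(-((t - expected) * C) ** 2)  # delay agreement
        image[px] = score
    return image

# Simulate a point reflector seen by a small linear antenna panel.
antennas = [(x * 0.1, 0.0) for x in range(5)]
target = (0.2, 0.5)
delays = [2 * math.dist(a, target) / C for a in antennas]

grid = [(x * 0.1, y * 0.1) for x in range(5) for y in range(1, 8)]
image = reconstruct(antennas, delays, grid)
brightest = max(image, key=image.get)  # peaks at the reflector's location
```

Every pixel except the true reflector position mismatches at least one antenna's measured delay, so the image peaks where the reflector actually sits; a real system does this for millions of pixels at video rate, which is where the efficient algorithms mentioned above come in.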

The novelty of the laboratory's invention lies in its ability to discreetly handle a constant stream of subjects in motion, measuring each subject very quickly (within tens of milliseconds) and reconstructing 3D microwave images of each subject at a video rate. To meet these challenging requirements, the laboratory team developed a cost-effective antenna array and efficient image-reconstruction algorithms. Compared to existing systems, the laboratory's 3D microwave imager runs 100 times faster using the same computing hardware. In 2017, the team demonstrated the prototype's ability to detect various simulated threat items at varying distances on a rail platform at the Massachusetts Bay Transportation Authority (MBTA) Emergency Training Center in Boston.

"The goal of our work is to provide security staff with more effective tools to protect public spaces. To that end, microwave imaging technology can quickly and unobtrusively provide visibility of items carried into a venue," says William Moulder, who led the technology's development at Lincoln Laboratory.

In 2018, the security company Liberty Defense licensed the imaging technology and entered into a cooperative research and development agreement (CRADA) with Lincoln Laboratory. Transitioning technology to industry for commercialization is part of the laboratory's role as a federally funded research and development center, and CRADAs provide a mechanism for such transition to happen. Through the CRADA, Liberty Defense maintained Lincoln Laboratory's core image-reconstruction intellectual property and made the technology enhancements required for commercialization, including an entirely new hardware architecture, radio frequency (RF) antenna modules, and a transceiver system that meets Federal Communications Commission waveform and RF performance requirements for indoor and outdoor operation. The co-organizational team facilitating the transition of the technology was recognized by the Federal Laboratory Consortium for Technology Transfer with a 2019 Excellence in Technology Transfer Award for the Northeast region.

By 2021, Liberty Defense had prototyped a walk-through security screening system, HEXWAVE. That same year, through the TSA's On-Person Screening Capability Program, Liberty Defense received a contract award to demonstrate HEXWAVE's enhanced threat-detection and high-throughput capabilities for screening aviation workers. Following successful testing of HEXWAVE at sports complexes, entertainment arenas, and shopping centers, both nationally and internationally, Liberty Defense began offering the product for sale.

"HEXWAVE is a great example of how federally funded R&D can be successfully transitioned to industry to meet real-world security needs," says Asha Rajagopal, the laboratory's chief technology transfer officer. "By working with Liberty Defense, we helped accelerate the delivery of a critical capability into the hands of those protecting public spaces."

In 2023, TSA began testing HEXWAVE as a potential replacement for the metal detectors used to screen passengers in TSA PreCheck lanes. Airports across the United States started deploying HEXWAVE in 2024 to meet the TSA's employee screening mandate by the April 2026 deadline. Liberty Defense notes various other markets for HEXWAVE; the first units for commercial applications were delivered to Los Alamos National Laboratory in 2023, and the technology has since been deployed at other national labs, correctional facilities, government buildings, and courthouses.

"Liberty was extremely fortunate to license the technology from MIT Lincoln Laboratory," says Bill Frain, CEO of Liberty Defense. "From the outset, they've been a true partner — bringing not only deep innovation and technical expertise, but also a clear vision for commercial deployment. Together, we've successfully brought next-generation technology to market to help protect people in public spaces."

Designing across cultural and geographic divides

Thu, 07/10/2025 - 4:10pm

In addition to the typical rigors of MIT classes, Terrascope Subject 2.00C/1.016/EC.746 (Design for Complex Environmental Issues) poses some unusual hurdles for students to navigate: collaborating across time zones, bridging different cultural and institutional experiences, and trying to do hands-on work over Zoom. That’s because the class includes students from not only MIT, but also Diné College in Tsaile, Arizona, within the Navajo Nation, and the University of Puerto Rico-Ponce (UPRP).

Despite being thousands of miles apart, students work in teams to tackle a real-world problem for a client, based on the Terrascope theme for the year. “Understanding how to collaborate over long distances with people who are not like themselves will be an important item in many of these students’ toolbelts going forward, in some cases just as much as — or more than — any particular design technique,” says Ari Epstein, Terrascope associate director and senior lecturer. Over the past several years, Epstein has taught the class along with Joel Grimm of MIT Beaver Works and Libby Hsu of MIT D-Lab, as well as instructors from the two collaborating institutions. Undergraduate teaching fellows from all three schools are also key members of the instructional staff.

Since the partnership began three years ago (initially with Diné College, with the addition of UPRP two years ago), the class themes have included food security and sustainable agriculture in Navajo Nation; access to reliable electrical power in Puerto Rico; and this year, increasing museum visitors’ engagement with artworks depicting mining and landscape alteration in Nevada.

Each team — which includes students from all three colleges — meets with clients online early in the term to understand their needs; then, through an iterative process, teams work on designing prototypes. During MIT’s spring break, teams travel to meet with the clients onsite to get feedback and continue to refine their prototypes. At the end of the term, students present their final products to the clients, an expert panel, and their communities at a hybrid showcase event held simultaneously on all three campuses.

Free-range design engineering

“I really loved the class,” says Graciela Leon, a second-year mechanical engineering major who took the subject in 2024. “It was not at all what I was expecting,” she adds. While the learning objectives on the syllabus are fairly traditional — using an iterative engineering design process, developing teamwork skills, and deepening communication skills, to name a few — the approach is not. “Terrascope is just kind of like throwing you into a real-world problem … it feels a lot more like you are being trusted with this actual challenge,” Leon says.

The 2024 challenge was to find a way to help the clients, Puerto Rican senior citizens, turn on gasoline-powered generators when the electrical power grid fails; some of them struggle with the pull cords necessary to start the generators. The students were tasked with designing solutions to make starting the generators easier.

Terrascope instructors teach fundamental skills such as iterative design spirals and scrum workflow frameworks, but they also give students ample freedom to follow their ideas. Leon admits she was a bit frustrated at first, because she wasn’t sure what she was supposed to be doing. “I wanted to be building things and thought, ‘Wow, I have to do all these other things, I have to write some kind of client profile and understand my client’s needs.’ I was just like, ‘Hand me a drill! I want to design something!’”

When he took the class last year, Uziel Rodriguez-Andujar was also thrown off initially by the independence teams had. Now a second-year UPRP student in mechanical engineering, he’s accustomed to lecture-based classes. “What I found so interesting is the way [they] teach the class, which is, ‘You make your own project, and we need you to find a solution to this. How it will look, and when you have it — that’s up to you,’” he says.

Clearing hurdles

Teaching the course on three different campuses introduces a number of challenges for students and instructors to overcome — among them, operating in three different time zones, overcoming language barriers, navigating different cultural and institutional norms, communicating effectively, and designing and building prototypes over Zoom.

“The culture span is huge,” explains Epstein. “There are different ways of speaking, different ways of listening, and each organization has different resources.”

First-year MIT student EJ Rodriguez found that one of the biggest obstacles was trying to convey ideas to teammates clearly. He took the class this year, when the theme revolved around the environmental impacts of lithium mining. The client, the Nevada Museum of Art, wanted to find ways to engage visitors with its artwork collection related to mining-related landscape changes.

Rodriguez and his team designed a pendulum with a light affixed to it that illuminates a painting by a Native American artist. When the pendulum swings, it changes how the visitor experiences the artwork. The team built parts for the pendulum on different campuses, and they reached a point where they realized their pieces were incompatible. “We had different visions of what we wanted for the project, and different vocabulary we were using to describe our ideas. Sometimes there would be a misunderstanding … It required a lot of honesty from each campus to be like, ‘OK, I thought we were doing exactly this,’ and obviously in a really respectful way.”

It’s not uncommon for students at Diné College and UPRP to experience an initial hurdle that their MIT peers do not. Epstein notes, “There’s a tendency for some folks outside MIT to see MIT students as these brilliant people that they don’t belong in the same room with.” But the other students soon realize not only that they can hold their own intellectually, but also that their backgrounds and experiences are incredibly valuable. “Their life experiences actually put them way ahead of many MIT students in some ways, when you think about design and fabrication, like repairing farm equipment or rebuilding transmissions,” he adds.

That’s how Cauy Bia felt when he took the class in 2024. Currently a first-year graduate student in biology at Diné College, Bia questioned whether he’d be on par with the MIT students. “I’ve grown up on a farm, and we do a lot of building, a lot of calculations, a lot of hands-on stuff. But going into this, I was sweating it so hard [wondering], ‘Am I smart enough to work with these students?’ And then, at the end of the day, that was never an issue,” he says.

The value of reflection

Every two weeks, Terrascope students write personal reflections about their experiences in the class, which helps them appreciate their academic and personal development. “I really felt that I had undergone a process that made me grow as an engineer,” says Leon. “I understood the importance of people and engineering more, including teamwork, working with clients, and de-centering the project away from what I wanted to build and design.”

When Bia began the semester, he says, he was more of a “make-or-break-type person” and tended to see things in black and white. “But working with all three campuses, it kind of opened up my thought process so I can assess more ideas, more voices and opinions. And I can get broader perspectives and get bigger ideas from that point,” he says. It was also a powerful experience culturally for him, particularly “drawing parallels between Navajo history, Navajo culture, and seeing the similarities between that and Puerto Rican culture, seeing how close we are as two nations.”

Rodriguez-Andujar gained an appreciation for the “constant struggle between simplicity and complexity” in engineering. “You have all these engineers trying to over-engineer everything,” he says. “And after you get your client feedback [halfway through the semester], it turns out, ‘Oh, that doesn’t work for me. I’m sorry — you have to scale it down like a hundred times and make it a lot simpler.’”

For instructors, the students’ reflections are invaluable as they strive to make improvements every year. In many ways, you might say the class is an iterative design spiral, too. “The past three years have themselves been prototypes,” Epstein says, “and all of the instructional staff are looking forward to continuing these exciting partnerships.”

A bionic knee integrated into tissue can restore natural movement

Thu, 07/10/2025 - 2:00pm

MIT researchers have developed a new bionic knee that can help people with above-the-knee amputations walk faster, climb stairs, and avoid obstacles more easily than they could with a traditional prosthesis.

Unlike prostheses in which the residual limb sits within a socket, the new system is directly integrated with the user’s muscle and bone tissue. This enables greater stability and gives the user much more control over the movement of the prosthesis.

Participants in a small clinical study also reported that the limb felt more like a part of their own body, compared to people who had more traditional above-the-knee amputations.

“A prosthesis that’s tissue-integrated — anchored to the bone and directly controlled by the nervous system — is not merely a lifeless, separate device, but rather a system that is carefully integrated into human physiology, offering a greater level of prosthetic embodiment. It’s not simply a tool that the human employs, but rather an integral part of self,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Tony Shu PhD ’24 is the lead author of the paper, which appears today in Science.

Better control

Over the past several years, Herr’s lab has been working on new prostheses that can extract neural information from muscles left behind after an amputation and use that information to help guide a prosthetic limb.

During a traditional amputation, pairs of muscles that take turns stretching and contracting are usually severed, disrupting the normal agonist-antagonist relationship of the muscles. This disruption makes it very difficult for the nervous system to sense the position of a muscle and how fast it’s contracting.

Using the new surgical approach developed by Herr and his colleagues, known as agonist-antagonist myoneuronal interface (AMI), muscle pairs are reconnected during surgery so that they still dynamically communicate with each other within the residual limb. This sensory feedback helps the wearer of the prosthesis to decide how to move the limb, and also generates electrical signals that can be used to control the prosthetic limb.

In a 2024 study, the researchers showed that people with amputations below the knee who received the AMI surgery were able to walk faster and navigate around obstacles much more naturally than people with traditional below-the-knee amputations.

In the new study, the researchers extended the approach to better serve people with amputations above the knee. They wanted to create a system that could not only read out signals from the muscles using AMI but also be integrated into the bone, offering more stability and better sensory feedback.

To achieve that, the researchers developed a procedure to insert a titanium rod into the residual femur bone at the amputation site. This implant allows for better mechanical control and load bearing than a traditional prosthesis. Additionally, the implant contains 16 wires that collect information from electrodes located on the AMI muscles inside the body, which enables more accurate transduction of the signals coming from the muscles.

This bone-integrated system, known as e-OPRA, transmits AMI signals to a new robotic controller developed specifically for this study. The controller uses this information to calculate the torque necessary to move the prosthesis the way that the user wants it to move.

“All parts work together to better get information into and out of the body and better interface mechanically with the device,” Shu says. “We’re directly loading the skeleton, which is the part of the body that’s supposed to be loaded, as opposed to using sockets, which is uncomfortable and can lead to frequent skin infections.”

In this study, two subjects received the combined AMI and e-OPRA system, known as an osseointegrated mechanoneural prosthesis (OMP). These users were compared with eight who had the AMI surgery but not the e-OPRA implant, and seven users who had neither AMI nor e-OPRA. All subjects took a turn at using an experimental powered knee prosthesis developed by the lab.

The researchers measured the participants’ ability to perform several types of tasks, including bending the knee to a specified angle, climbing stairs, and stepping over obstacles. In most of these tasks, users with the OMP system performed better than the subjects who had the AMI surgery but not the e-OPRA implant, and much better than users of traditional prostheses.

“This paper represents the fulfillment of a vision that the scientific community has had for a long time — the implementation and demonstration of a fully physiologically integrated, volitionally controlled robotic leg,” says Michael Goldfarb, a professor of mechanical engineering and director of the Center for Intelligent Mechatronics at Vanderbilt University, who was not involved in the research. “This is really difficult work, and the authors deserve tremendous credit for their efforts in realizing such a challenging goal.”

A sense of embodiment

In addition to testing gait and other movements, the researchers also asked questions designed to evaluate participants’ sense of embodiment — that is, to what extent their prosthetic limb felt like a part of their own body.

Questions included whether the patients felt as if they had two legs, if they felt as if the prosthesis was part of their body, and if they felt in control of the prosthesis. Each question was designed to evaluate the participants’ feelings of agency, ownership of the device, and body representation.

The researchers found that as the study went on, the two participants with the OMP showed much greater increases in their feelings of agency and ownership than the other subjects.

“Another reason this paper is significant is that it looks into these embodiment questions and it shows large improvements in that sensation of embodiment,” Herr says. “No matter how sophisticated you make the AI systems of a robotic prosthesis, it’s still going to feel like a tool to the user, like an external device. But with this tissue-integrated approach, when you ask the human user what is their body, the more it’s integrated, the more they’re going to say the prosthesis is actually part of self.”

The AMI procedure is now done routinely on patients with below-the-knee amputations at Brigham and Women’s Hospital, and Herr expects it will soon become the standard for above-the-knee amputations as well. The combined OMP system will need larger clinical trials to receive FDA approval for commercial use, which Herr expects may take about five years.

The research was funded by the Yang Tan Collective and DARPA.

Supporting mission-driven space innovation, for Earth and beyond

Thu, 07/10/2025 - 12:00am

As spaceflight becomes more affordable and accessible, the story of human life in space is just beginning. Aurelia Institute wants to make sure that future benefits all of humanity — whether in space or here on Earth.

Founded by Ariel Ekblaw SM ’17, PhD ’20; Danielle DeLatte ’11; and former MIT research scientist Sana Sharma, the nonprofit institute serves as a research lab for space technology and architecture, a center for education and outreach, and a policy hub dedicated to inspiring more people to work in the space industry.

At the heart of the Aurelia Institute’s mission is a commitment to making space accessible to all people. A big part of that work involves annual microgravity flights that Ekblaw says are equal parts research mission, workforce training, and inspiration for the next generation of space enthusiasts.

“We’ve done that every year,” Ekblaw says of the flights. “We now have multiple cohorts of students that connect across years. It brings together people from very different backgrounds. We’ve had artists, designers, architects, ethicists, teachers, and others fly with us. In our R&D, we are interested in space infrastructure for the public good. That’s why we’re directing our technology portfolios toward near-term, massive infrastructure projects in low-Earth orbit that benefit life on Earth.”

From the annual flights to the Institute’s self-assembling space architecture technology known as TESSERAE, much of Aurelia’s work is an extension of projects Ekblaw started as a graduate student at MIT.

“My life trajectory changed when I came to MIT,” says Ekblaw, who is still a visiting researcher at MIT. “I am incredibly grateful for the education I got in the Media Lab and the Department of Aeronautics and Astronautics. MIT is what gave me the skill, the technology, and the community to be able to spin out Aurelia and do something important in the space industry at scale.”

“MIT changes lives”

Ekblaw has always been passionate about space. As an undergraduate at Yale University, she took part in a NASA microgravity flight as part of a research project. In the first year of her PhD program at MIT, she led the launch of the Space Exploration Initiative, a cross-Institute effort to drive innovation at the frontiers of space exploration. The ongoing initiative started as a research group but soon raised enough money to conduct microgravity flights and, more recently, missions to the International Space Station and the moon.

“The Media Lab was like magic in the years I was there,” Ekblaw says. “It had this sense of what we used to call ‘anti-disciplinary permission-lessness.’ You could get funding to explore really different and provocative ideas. Our mission was to democratize access to space.”

In 2016, while taking a class taught by Neri Oxman, then a professor in the Media Lab, Ekblaw got the idea for the TESSERAE Project, in which tiles autonomously self-assemble into spherical space structures.

“I was thinking about the future of human flight, and the class was a seeding moment for me,” Ekblaw says. “I realized self-assembly works OK on Earth, it works particularly well at small scales like in biology, but it generally struggles with the force of gravity once you get to larger objects. But microgravity in space was a perfect application for self-assembly.”

That semester, Ekblaw was also taking Professor Neil Gershenfeld’s class MAS.863 (How to Make (Almost) Anything), where she began building prototypes. Over the ensuing years of her PhD, subsequent versions of the TESSERAE system were tested on microgravity flights run by the Space Exploration Initiative, in a suborbital mission with the space company Blue Origin, and as part of a 30-day mission aboard the International Space Station.

“MIT changes lives,” Ekblaw says. “It completely changed my life by giving me access to real spaceflight opportunities. The capstone data for my PhD was from an International Space Station mission.”

After earning her PhD in 2020, Ekblaw decided to ask two researchers from the MIT community and the Space Exploration Initiative, Danielle DeLatte and Sana Sharma, to partner with her to further develop research projects, along with conducting space education and policy efforts. That collaboration turned into Aurelia.

“I wanted to scale the work I was doing with the Space Exploration Initiative, where we bring in students, introduce them to zero-g flights, and then some graduate to sub-orbital, and eventually flights to the International Space Station,” Ekblaw says. “What would it look like to bring that out of MIT and bring that opportunity to other students and mid-career people from all walks of life?”

Every year, Aurelia charters a microgravity flight, bringing about 25 people along to conduct 10 to 15 experiments. To date, nearly 200 people have participated in the flights across the Space Exploration Initiative and Aurelia, and more than 70 percent of those fliers have continued to pursue activities in the space industry post-flight.

Aurelia also offers open-source classes on designing research projects for microgravity environments and contributes to several education and community-building activities across academia, industry, and the arts.

In addition to those education efforts, Aurelia has continued testing and improving the TESSERAE system. In 2022, TESSERAE was brought on the first private mission to the International Space Station, where astronauts conducted tests around the system’s autonomous self-assembly, disassembly, and stability. Aurelia will return to the International Space Station in early 2026 for further testing as part of a recent grant from NASA.

The work led Aurelia to recently spin off the TESSERAE project into a separate, for-profit company. Ekblaw expects there to be more spinoffs out of Aurelia in coming years.

Designing for space, and Earth

The self-assembly work is only one project in Aurelia’s portfolio. Others are focused on designing human-scale pavilions and other habitats, including a space garden and a massive, 20-foot dome depicting the interior of space architectures in the future. This space habitat pavilion was recently deployed as part of a six-month exhibit at the Seattle Museum of Flight.

“The architectural work is asking, ‘How are we going to outfit these systems and actually make the habitats part of a life worth living?’” Ekblaw explains.

With all of its work, Aurelia’s team looks at space as a testbed to bring new technologies and ideas back to our own planet.

“When you design something for the rigors of space, you often hit on really robust technologies for Earth,” she says.

Changing the conversation in health care

Wed, 07/09/2025 - 4:50pm

Generative artificial intelligence is transforming the ways humans write, read, speak, think, empathize, and act within and across languages and cultures. In health care, gaps in communication between patients and practitioners can worsen patient outcomes and prevent improvements in practice and care. The Language/AI Incubator, made possible through funding from the MIT Human Insight Collaborative (MITHIC), offers a potential response to these challenges. 

The project envisions a research community rooted in the humanities that will foster interdisciplinary collaboration across MIT to deepen understanding of generative AI’s impact on cross-linguistic and cross-cultural communication. The project’s focus on health care and communication seeks to build bridges across socioeconomic, cultural, and linguistic strata.

The incubator is co-led by Leo Celi, a physician and the research director and senior research scientist with the Institute for Medical Engineering and Science (IMES), and Per Urlaub, professor of the practice in German and second language studies and director of MIT’s Global Languages program. 

“The basis of health care delivery is the knowledge of health and disease,” Celi says. “We’re seeing poor outcomes despite massive investments because our knowledge system is broken.”

A chance collaboration

Urlaub and Celi met during a MITHIC launch event. Conversations during the event reception revealed a shared interest in exploring improvements in medical communication and practice with AI.

“We’re trying to incorporate data science into health-care delivery,” Celi says. “We’ve been recruiting social scientists [at IMES] to help advance our work, because the science we create isn’t neutral.”

Language is a non-neutral mediator in health care delivery, the team believes, and can be a boon or barrier to effective treatment. “Later, after we met, I joined one of his working groups whose focus was metaphors for pain: the language we use to describe it and its measurement,” Urlaub continues. “One of the questions we considered was how effective communication can occur between doctors and patients.”

Technology, they argue, impacts casual communication, and its impact depends on both users and creators. As AI and large language models (LLMs) gain power and prominence, their use is broadening to include fields like health care and wellness. 

Rodrigo Gameiro, a physician and researcher with MIT’s Laboratory for Computational Physiology, is another program participant. He notes that work at the laboratory centers on responsible AI development and implementation. Designing systems that leverage AI effectively, particularly when considering challenges related to communicating across linguistic and cultural divides that can occur in health care, demands a nuanced approach. 

“When we build AI systems that interact with human language, we’re not just teaching machines how to process words; we’re teaching them to navigate the complex web of meaning embedded in language,” Gameiro says.

Language’s complexities can impact treatment and patient care. “Pain can only be communicated through metaphor,” Urlaub continues, “but metaphors don’t always match, linguistically and culturally.” Smiley faces and one-to-10 scales — pain measurement tools English-speaking medical professionals may use to assess their patients — may not travel well across racial, ethnic, cultural, and language boundaries.

“Science has to have a heart” 

LLMs can potentially help scientists improve health care, although there are some systemic and pedagogical challenges to consider. Science can focus on outcomes to the exclusion of the people it’s meant to help, Celi argues. “Science has to have a heart,” he says. “Measuring students’ effectiveness by counting the number of papers they publish or patents they produce misses the point.”

The point, Urlaub says, is to investigate carefully while simultaneously acknowledging what we don’t know, citing what philosophers call epistemic humility. Knowledge, the investigators argue, is provisional and always incomplete. Deeply held beliefs may require revision in light of new evidence. 

“No one’s mental view of the world is complete,” Celi says. “You need to create an environment in which people are comfortable acknowledging their biases.”

“How do we share concerns between language educators and others interested in AI?” Urlaub asks. “How do we identify and investigate the relationship between medical professionals and language educators interested in AI’s potential to aid in the elimination of gaps in communication between doctors and patients?” 

Language, in Gameiro’s estimation, is more than just a tool for communication. “It reflects culture, identity, and power dynamics,” he says. In situations where a patient might not be comfortable describing pain or discomfort because of the physician’s position as an authority, or because their culture demands yielding to those perceived as authority figures, misunderstandings can be dangerous. 

Changing the conversation

AI’s facility with language can help medical professionals navigate these areas more carefully, providing digital frameworks offering valuable cultural and linguistic contexts in which patient and practitioner can rely on data-driven, research-supported tools to improve dialogue. Institutions need to reconsider how they educate medical professionals and invite the communities they serve into the conversation, the team says. 

“We need to ask ourselves what we truly want,” Celi says. “Why are we measuring what we’re measuring?” The biases we bring with us to these interactions — doctors, patients, their families, and their communities — remain barriers to improved care, Urlaub and Gameiro say.

“We want to connect people who think differently, and make AI work for everyone,” Gameiro continues. “Technology without purpose is just exclusion at scale.”

“Collaborations like these can allow for deep processing and better ideas,” Urlaub says.

Creating spaces where ideas about AI and health care can potentially become actions is a key element of the project. The Language/AI Incubator hosted its first colloquium at MIT in May, which was led by Mena Ramos, a physician and the co-founder and CEO of the Global Ultrasound Institute.

The colloquium also featured presentations from Celi, as well as Alfred Spector, a visiting scholar in MIT’s Department of Electrical Engineering and Computer Science, and Douglas Jones, a senior staff member in the MIT Lincoln Laboratory’s Human Language Technology Group. A second Language/AI Incubator colloquium is planned for August.

Greater integration between the social and hard sciences can potentially increase the likelihood of developing viable solutions and reducing biases. Allowing for shifts in the ways patients and doctors view the relationship, while offering each shared ownership of the interaction, can help improve outcomes. Facilitating these conversations with AI may speed the integration of these perspectives. 

“Community advocates have a voice and should be included in these conversations,” Celi says. “AI and statistical modeling can’t collect all the data needed to treat all the people who need it.”

Community needs and improved educational opportunities and practices should be coupled with cross-disciplinary approaches to knowledge acquisition and transfer. The ways people see things are limited by their perceptions and other factors. “Whose language are we modeling?” Gameiro asks about building LLMs. “Which varieties of speech are being included or excluded?” Since meaning and intent can shift across those contexts, it’s important to account for them when designing AI tools. 

“AI is our chance to rewrite the rules”

While there’s lots of potential in the collaboration, there are serious challenges to overcome, including establishing and scaling the technological means to improve patient-provider communication with AI, extending opportunities for collaboration to marginalized and underserved communities, and reconsidering and revamping patient care. 

But the team isn’t daunted.

Celi believes there are opportunities to address the widening gap between people and practitioners while addressing gaps in health care. “Our intent is to reattach the string that’s been cut between society and science,” he says. “We can empower scientists and the public to investigate the world together while also acknowledging the limitations engendered in overcoming their biases.”

Gameiro is a passionate advocate for AI’s ability to change everything we know about medicine. “I’m a medical doctor, and I don’t think I’m being hyperbolic when I say I believe AI is our chance to rewrite the rules of what medicine can do and who we can reach,” he says.

“Education changes humans from objects to subjects,” Urlaub argues, describing the difference between disinterested observers and active and engaged participants in the new care model he hopes to build. “We need to better understand technology’s impact on the lines between these states of being.”

Celi, Gameiro, and Urlaub each advocate for MITHIC-like spaces across health care, places where innovation and collaboration are allowed to occur without the kinds of arbitrary benchmarks institutions have previously used to mark success.

“AI will transform all these sectors,” Urlaub believes. “MITHIC is a generous framework that allows us to embrace uncertainty with flexibility.”

“We want to employ our power to build community among disparate audiences while admitting we don’t have all the answers,” Celi says. “If we fail, it’s because we failed to dream big enough about how a reimagined world could look.”

AI shapes autonomous underwater “gliders”

Wed, 07/09/2025 - 4:35pm

Marine scientists have long marveled at how animals like fish and seals swim so efficiently despite having different shapes. Their bodies are optimized for efficient, hydrodynamic aquatic navigation so they can exert minimal energy when traveling long distances.

Autonomous vehicles can drift through the ocean in a similar way, collecting data about vast underwater environments. However, the shapes of these gliding machines are less diverse than what we find in marine life — go-to designs often resemble tubes or torpedoes, since they’re fairly hydrodynamic as well. Plus, testing new builds requires lots of real-world trial-and-error.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the University of Wisconsin at Madison propose that AI could help us explore uncharted glider designs more conveniently. Their method uses machine learning to test different 3D designs in a physics simulator, then molds them into more hydrodynamic shapes. The resulting models can be fabricated via a 3D printer and use significantly less energy than handmade ones.

The MIT scientists say that this design pipeline could create new, more efficient machines that help oceanographers measure water temperature and salt levels, gather more detailed insights about currents, and monitor the impacts of climate change. The team demonstrated this potential by producing two gliders roughly the size of a boogie board: a two-winged machine resembling an airplane, and a unique, four-winged object resembling a flat fish with four fins.

Peter Yichen Chen, MIT CSAIL postdoc and co-lead researcher on the project, notes that these designs are just a few of the novel shapes his team’s approach can generate. “We’ve developed a semi-automated process that can help us test unconventional designs that would be very taxing for humans to design,” he says. “This level of shape diversity hasn’t been explored previously, so most of these designs haven’t been tested in the real world.”

But how did AI come up with these ideas in the first place? First, the researchers found 3D models of over 20 conventional sea exploration shapes, such as submarines, whales, manta rays, and sharks. Then, they enclosed these models in “deformation cages” that map out different articulation points that the researchers pulled around to create new shapes.
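The deformation-cage idea can be illustrated with a minimal free-form-deformation sketch. Everything here is invented for illustration (2D points, a two-point cage, distance-based weights) rather than the team’s actual implementation, but it shows the core mechanic: each mesh vertex follows a weighted blend of the cage control points, so dragging a control point smoothly reshapes the whole model.

```python
# Minimal free-form-deformation sketch: mesh vertices follow cage control points.
# Illustrative only; the actual pipeline uses 3D cages around full mesh models.

def deform(vertices, cage_before, cage_after):
    """Move each vertex by a blend of cage-point displacements,
    weighted inversely by distance to each cage point."""
    deformed = []
    for vx, vy in vertices:
        wsum, dx, dy = 0.0, 0.0, 0.0
        for (cx, cy), (nx, ny) in zip(cage_before, cage_after):
            w = 1.0 / (((vx - cx) ** 2 + (vy - cy) ** 2) ** 0.5 + 1e-9)
            wsum += w
            dx += w * (nx - cx)
            dy += w * (ny - cy)
        deformed.append((vx + dx / wsum, vy + dy / wsum))
    return deformed

cage = [(0.0, 0.0), (1.0, 0.0)]
stretched = [(0.0, 0.0), (1.5, 0.0)]  # drag the right control point outward
# A vertex midway between the two cage points moves partway with the drag.
print(deform([(0.5, 0.0)], cage, stretched))
```

Pulling one control point outward stretches nearby geometry proportionally, which is how a torpedo-like template can be nudged toward a winged or fish-like shape.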

The CSAIL-led team built a dataset of conventional and deformed shapes before simulating how they would perform at different “angles-of-attack” — the direction a vessel will tilt as it glides through the water. For example, a swimmer may want to dive at a -30 degree angle to retrieve an item from a pool.

These diverse shapes and angles of attack were then used as inputs for a neural network that essentially anticipates how efficiently a glider shape will perform at particular angles and optimizes it as needed.

Giving gliding robots a lift

The team’s neural network simulates how a particular glider would react to underwater physics, aiming to capture how it moves forward and the force that drags against it. The goal: find the best lift-to-drag ratio, representing how much the glider is being held up compared to how much it’s being held back. The higher the ratio, the more efficiently the vehicle travels; the lower it is, the more the glider will slow down during its voyage.
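In code, the quantity the pipeline optimizes reduces to a simple quotient: lift force divided by drag force. The force values below are invented for illustration, not measurements from the paper.

```python
# Compare candidate glider shapes by lift-to-drag ratio (illustrative numbers).

def lift_to_drag(lift_n, drag_n):
    """Return the lift-to-drag ratio; higher means a more efficient glide."""
    return lift_n / drag_n

candidates = {
    "torpedo":  lift_to_drag(lift_n=8.0, drag_n=2.0),
    "two-wing": lift_to_drag(lift_n=12.0, drag_n=2.0),
}
best = max(candidates, key=candidates.get)
print(best, candidates[best])  # the higher-ratio shape glides farther per unit of energy
```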

Lift-to-drag ratios are key for flying planes: At takeoff, a plane needs enough lift to climb against the oncoming air, and when landing, it needs enough drag to slow to a full stop.

Niklas Hagemann, an MIT graduate student in architecture and CSAIL affiliate, notes that this ratio is just as useful if you want a similar gliding motion in the ocean.

“Our pipeline modifies glider shapes to find the best lift-to-drag ratio, optimizing its performance underwater,” says Hagemann, who is also a co-lead author on a paper that was presented at the International Conference on Robotics and Automation in June. “You can then export the top-performing designs so they can be 3D-printed.”

Going for a quick glide

While their AI pipeline seemed realistic, the researchers needed to ensure its predictions about glider performance were accurate by experimenting in more lifelike environments.

They first fabricated their two-wing design as a scaled-down vehicle resembling a paper airplane. This glider was taken to MIT’s Wright Brothers Wind Tunnel, an indoor space with fans that simulate wind flow. When the glider was placed at different angles, its predicted lift-to-drag ratio was only about 5 percent higher on average than the ratios recorded in the wind experiments — a small difference between simulation and reality.

A digital evaluation involving a visual, more complex physics simulator also supported the notion that the AI pipeline made fairly accurate predictions about how the gliders would move. It visualized how these machines would descend in 3D.

To truly evaluate these gliders in the real world, though, the team needed to see how their devices would fare underwater. They printed two designs that performed the best at specific angles-of-attack for this test: a jet-like device at 9 degrees and the four-wing vehicle at 30 degrees.

Both shapes were fabricated in a 3D printer as hollow shells with small holes that flood when fully submerged. This lightweight design makes the vehicle easier to handle outside of the water and requires less material to be fabricated. The researchers placed a tube-like device inside these shell coverings, which housed a range of hardware, including a pump to change the glider’s buoyancy, a mass shifter (a device that controls the machine’s angle-of-attack), and electronic components.

Each design outperformed a handmade torpedo-shaped glider by moving more efficiently across a pool. With higher lift-to-drag ratios than their counterpart, both AI-driven machines exerted less energy, similar to the effortless ways marine animals navigate the oceans.

As much as the project is an encouraging step forward for glider design, the researchers are looking to narrow the gap between simulation and real-world performance. They are also hoping to develop machines that can react to sudden changes in currents, making the gliders more adaptable to seas and oceans.

Chen adds that the team is looking to explore new types of shapes, particularly thinner glider designs. They intend to make their framework faster, perhaps bolstering it with new features that enable more customization, maneuverability, or even the creation of miniature vehicles.

Chen and Hagemann co-led research on this project with OpenAI researcher Pingchuan Ma SM ’23, PhD ’25. They authored the paper with Wei Wang, a University of Wisconsin at Madison assistant professor and recent CSAIL postdoc; John Romanishin ’12, SM ’18, PhD ’23; and two MIT professors and CSAIL members: lab director Daniela Rus and senior author Wojciech Matusik. Their work was supported, in part, by a Defense Advanced Research Projects Agency (DARPA) grant and the MIT-GIST Program.

Collaborating with the force of nature

Wed, 07/09/2025 - 4:30pm

Common sense tells us to run from molten lava flowing from active volcanoes. But MIT professors J. Jih, Cristina Parreño Alonso, and Skylar Tibbits — faculty in the Department of Architecture at the School of Architecture and Planning — have their bags packed to head to southwest Iceland in anticipation of an imminent volcanic eruption. The Nordic island nation is currently experiencing a period of intense seismic activity; seven volcanic eruptions have taken place in its southern peninsula in under a year.

Earlier this year, the faculty built and placed a series of lightweight, easily deployable steel structures close to the volcano, where a few of the recent eruptions have taken place; several more structures are on trucks waiting to be delivered to sites where fissures open and lava oozes out. Cameras are in place to record what happens when the lava meets these structures, to help the team understand the flows.

This new research explores what types of shapes and materials can be used to interact with lava and successfully divert it from heading in the direction of habitats or critical infrastructure that lie in its path. Their work is supported by a Professor Amar G. Bose Research Grant.

“We’re trying to imagine new ways of conceptualizing infrastructure when it relates to lava and volcanic eruptions,” says Jih, an associate professor of the practice. “Lovely for us as designers, physical prototyping is the only way you can test some of these ideas out.” 

Currently, the Icelandic Department of Civic Protection and Emergency Management and an engineering group, EFLA, are diverting the lava with massive berms (approximately 44 to 54 yards in length and 9 yards in height) made from earth and stone.

Berms protecting the town of Grindavik, a power plant, and the popular Blue Lagoon geothermal spa have met with mixed results. In November 2024, a volcano erupted for the seventh time in less than a year, forcing the evacuation of town residents and the Blue Lagoon’s guests and employees. The latter’s parking lot was consumed by lava.

Sigurdur Thorsteinsson, chief brand, design, and innovation officer of the Blue Lagoon, as well as a designer and a partner in Design Group Italia, was on site for this eruption and several others.

“Some magma went into the city of Grindavik and three or four houses were destroyed,” says Thorsteinsson. “One of our employees watched her house go under magma on television, which was an emotional moment.”

While staff at the Blue Lagoon have become very efficient at evacuating guests, says Thorsteinsson, each eruption forces the tourist destination to close and townspeople to evacuate, disrupting lives and livelihoods.

“You cannot really stop the magma,” says Thorsteinsson, who is working with the MIT faculty on this research project. “It’s too powerful.”

Tibbits, associate professor of design research and founder and co-director of the Self-Assembly Lab, agrees. His research explores how to guide or work with the forces of nature.

Last year, Tibbits and Jih were in Iceland on another research project when erupting volcanoes interrupted their work. The two started thinking about how the lava could be redirected.

“The question is: Can we find more strategic interventions in the field that could work with the lava, rather than fight it?” says Tibbits.

To investigate what kinds of materials would withstand this type of interaction, they invited Parreño Alonso, a senior lecturer in the Department of Architecture, to join them.

“Cristina, being the department authority on magma, was an obvious and important partner for us,” says Jih with a smile.

Parreño Alonso has been working with volcanic rock for years and has taught a series of design studios exploring volcanic rock as an architectural material. She has also proposed designing structures to engage directly with lava flows, and recently she has been examining volcanic rock in its molten state, melting basalt in MIT’s foundry with Michael Tarkanian, a senior lecturer in MIT’s Department of Materials Science and Engineering and director of its Metals Lab. For this project, she is exploring the potential of molten rock as a substitute for concrete, a widely used material because of its pliability.

“It’s exciting how this idea of working with volcanoes was taking shape in parallel, from different angles, within the same department,” says Parreño Alonso. “I love how these parallel interests have led to such a beautiful collaboration.”

She also sees other opportunities by collaborating with these forces of nature.

“We are interested in the potential of generating something out of the interaction with the lava,” she says. “Could it be a landscape that becomes a park? There are many possibilities.”

The steel structures were first tested at MIT’s Metals Lab with Tarkanian and then built onsite in Iceland. The team wanted to make the structures lightweight so they could be quickly set up in the field, but strong enough so they wouldn’t be easily destroyed. Various designs were created; this iteration of the design has V-shaped structures that can guide the lava to flow around them, or they can be reconfigured as ramps or tunnels.

“There is a road that has been hit by many of the recent eruptions and must keep being rebuilt,” says Tibbits. “We created two ramps that could in the future serve as tunnels, allowing the lava to flow over the road and create a type of lava cave where the cars could drive under the cooled lava.”

Tibbits says they see the structures in the field now as an initial intervention. After documenting and studying how they interact with the lava, the architects will develop new iterations of what they believe will eventually become critical infrastructure for locations around the world with active volcanoes.

“If we can show and prove what kinds of shapes and structures and what kinds of materials can divert magma flows, I think it’s incredibly valuable research,” says Thorsteinsson.

Thorsteinsson lives in Italy half of the year and says the volcanoes there — Mount Etna in Sicily and Mount Vesuvius in the Gulf of Naples — pose a greater danger than those in Iceland because of the densely populated neighborhoods nearby. Volcanoes in Hawaii and Japan are in similarly populated areas.

“Whatever information you can learn about diverting magma flows to other directions and what kinds of structures are needed — it would be priceless,” he says.

Implantable device could save diabetes patients from dangerously low blood sugar

Wed, 07/09/2025 - 5:00am

For people with Type 1 diabetes, developing hypoglycemia, or low blood sugar, is an ever-present threat. When glucose levels become extremely low, it creates a life-threatening situation for which the standard treatment is injecting a hormone called glucagon.

As an emergency backup, for cases where patients may not realize that their blood sugar is dropping to dangerous levels, MIT engineers have designed an implantable reservoir that can remain under the skin and be triggered to release glucagon when blood sugar levels get too low.

This approach could also help in cases where hypoglycemia occurs during sleep, or for diabetic children who are unable to administer injections on their own.

“This is a small, emergency-event device that can be placed under the skin, where it is ready to act if the patient’s blood sugar drops too low,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES), and the senior author of the study. “Our goal was to build a device that is always ready to protect patients from low blood sugar. We think this can also help relieve the fear of hypoglycemia that many patients, and their parents, suffer from.”

The researchers showed that this device could also be used to deliver emergency doses of epinephrine, a drug that is used to treat heart attacks and can also prevent severe allergic reactions, including anaphylactic shock.

Siddharth Krishnan, a former MIT research scientist who is now an assistant professor of electrical engineering at Stanford University, is the lead author of the study, which appears today in Nature Biomedical Engineering.

Emergency response

Most patients with Type 1 diabetes use daily insulin injections to help their body absorb sugar and prevent their blood sugar levels from getting too high. However, if their blood sugar levels get too low, they develop hypoglycemia, which can lead to confusion and seizures, and may be fatal if it goes untreated.

To combat hypoglycemia, some patients carry preloaded syringes of glucagon, a hormone that stimulates the liver to release glucose into the bloodstream. However, it isn’t always easy for people, especially children, to know when they are becoming hypoglycemic.

“Some patients can sense when they’re getting low blood sugar, and go eat something or give themselves glucagon,” Anderson says. “But some are unaware that they’re hypoglycemic, and they can just slip into confusion and coma. This is also a problem when patients sleep, as they are reliant on glucose sensor alarms to wake them when sugar drops dangerously low.”

To make it easier to counteract hypoglycemia, the MIT team set out to design an emergency device that could be triggered either by the person using it, or automatically by a sensor.

The device, which is about the size of a quarter, contains a small drug reservoir made of a 3D-printed polymer. The reservoir is sealed with a special material known as a shape-memory alloy, which can be programmed to change its shape when heated. In this case, the researchers used a nickel-titanium alloy that is programmed to curl from a flat slab into a U-shape when heated to 40 degrees Celsius.

Like many other protein or peptide drugs, glucagon tends to break down quickly, so the liquid form can’t be stored long-term in the body. Instead, the MIT team created a powdered version of the drug, which remains stable for much longer and stays in the reservoir until released.

Each device can carry either one or four doses of glucagon, and it also includes an antenna tuned to respond to a specific frequency in the radiofrequency range. That allows it to be remotely triggered to turn on a small electrical current, which is used to heat the shape-memory alloy. When the temperature reaches the 40-degree threshold, the slab bends into a U shape, releasing the contents of the reservoir.

Because the device can receive wireless signals, it could also be designed so that drug release is triggered by a glucose monitor when the wearer’s blood sugar drops below a certain level.

“One of the key features of this type of digital drug delivery system is that you can have it talk to sensors,” Krishnan says. “In this case, the continuous glucose-monitoring technology that a lot of patients use is something that would be easy for these types of devices to interface with.”
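A closed-loop trigger of the kind described above could, in outline, look like the sketch below. The threshold values, function names, and sensor interface are all hypothetical; the real device responds to a tuned radiofrequency signal whose induced current heats the alloy past its 40-degree transition point.

```python
# Hypothetical closed-loop sketch: a glucose reading below a threshold
# triggers heating; passing 40 degrees C releases the stored dose.

HYPO_THRESHOLD_MG_DL = 54   # illustrative hypoglycemia cutoff, not from the study
RELEASE_TEMP_C = 40         # the shape-memory alloy curls into a U at 40 C

def should_trigger(glucose_mg_dl):
    """Decide whether a continuous glucose monitor reading warrants release."""
    return glucose_mg_dl < HYPO_THRESHOLD_MG_DL

def release_dose(current_temp_c):
    """Simulate heating the alloy until it reaches its transition temperature."""
    temp = current_temp_c
    while temp < RELEASE_TEMP_C:
        temp += 1.0         # stand-in for resistive heating via the antenna
    return "dose released"

reading = 48                # mg/dL, dangerously low (illustrative)
if should_trigger(reading):
    print(release_dose(current_temp_c=37.0))
```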

Reversing hypoglycemia

After implanting the device in diabetic mice, the researchers used it to trigger glucagon release as the animals’ blood sugar levels were dropping. Within less than 10 minutes of activating the drug release, blood sugar levels began to level off, allowing them to remain within the normal range and avert hypoglycemia.

The researchers also tested the device with a powdered version of epinephrine. They found that within 10 minutes of drug release, epinephrine levels in the bloodstream became elevated and heart rate increased.

In this study, the researchers kept the devices implanted for up to four weeks, but they now plan to see if they can extend that time up to at least a year.

“The idea is you would have enough doses that can provide this therapeutic rescue event over a significant period of time. We don’t know exactly what that is — maybe a year, maybe a few years, and we’re currently working on establishing what the optimal lifetime is. But then after that, it would need to be replaced,” Krishnan says.

Typically, when a medical device is implanted in the body, scar tissue develops around the device, which can interfere with its function. However, in this study, the researchers showed that even after fibrotic tissue formed around the implant, they were able to successfully trigger the drug release.

The researchers are now planning for additional animal studies and hope to begin testing the device in clinical trials within the next three years.

“It’s really exciting to see our team accomplish this, which I hope will someday help diabetic patients and could more broadly provide a new paradigm for delivering any emergency medicine,” says Robert Langer, the David H. Koch Institute Professor at MIT and an author of the paper.

Other authors of the paper include Laura O’Keeffe, Arnab Rudra, Derin Gumustop, Nima Khatib, Claudia Liu, Jiawei Yang, Athena Wang, Matthew Bochenek, Yen-Chun Lu, Suman Bose, and Kaelan Reed.

The research was funded by the Leona M. and Harry B. Helmsley Charitable Trust, the National Institutes of Health, a JDRF postdoctoral fellowship, and the National Institute of Biomedical Imaging and Bioengineering.

Processing our technological angst through humor

Wed, 07/09/2025 - 12:00am

The first time Steve Jobs held a public demo of the Apple Macintosh, in early 1984, scripted jokes were part of the rollout. First, Jobs pulled the machine out of a bag. Then, using its built-in speech technology, the Macintosh made a quip about rival IBM’s mainframes: “Never trust a computer you can’t lift.”

There’s a reason Jobs was doing that. For the first few decades that computing became part of cultural life, starting in the 1950s, computers seemed unfriendly, grim, and liable to work against human interests. Take the 1968 film “2001: A Space Odyssey,” in which the onboard computer, HAL, turns against the expedition’s astronauts. It’s a famous cultural touchstone. Jobs, in selling the idea of a personal computer, was using humor to ease concerns about the machines.

“Against the sense of computing as cold and numbers-driven, the fact that this computer was using voice technology to deliver jokes made it seem less forbidding, less evil,” says MIT scholar Benjamin Mangrum.

In fact, this dynamic turns up throughout modern culture, in movies, television, fiction, and the theater. We often deal with our doubts and fears about computing through humor, whether reconciling ourselves to machines or critiquing them. Now, Mangrum analyzes this phenomenon in a new book, “The Comedy of Computation: Or, How I Learned to Stop Worrying and Love Obsolescence,” published this month by Stanford University Press.

“Comedy has been a form for making this technology seem ordinary,” says Mangrum, an associate professor in MIT’s literature program. “Where in other circumstances computing might seem inhuman or impersonal, comedy allows us to incorporate it into our lives in a way that makes it make sense.”

Reversals of fortune

Mangrum’s interest in the subject was sparked partly by William Marchant’s 1955 play, “The Desk Set” — a romantic comedy later turned into a film starring Katharine Hepburn and Spencer Tracy — which queries, among other things, how office workers will co-exist alongside computers.

Perhaps against expectations, romantic comedies have turned out to be one of the most prominent contemporary forms of culture that grapple with technology and its effects on us. Mangrum, in the book, explains why: Their plot structure often involves reversals, which sometimes are extended to technology, too. Computing might seem forbidding, but it might also pull people together.

“One of the common tropes about romantic comedies is that there are characters or factors in the drama that obstruct the happy union of two people,” Mangrum observes. “And often across the arc of the drama, the obstruction or obstructive character is transformed into a partner, or collaborator, and assimilated within the happy couple’s union. That provides a template for how some cultural producers want to present the experience of computing. It begins as an obstruction and ends as a partner.”

That plot structure, Mangrum notes, dates to antiquity and was common in Shakespeare’s day. Still, as he writes in the book, there is “no timeless reality called Comedy,” as the vehicles and forms of it change over time. Beyond that, specific jokes about computing can quickly become outmoded. Steve Jobs made fun of mainframes, and the 1998 Nora Ephron comedy “You’ve Got Mail” got laughs out of dial-up modems, but those jokes might leave most people puzzled today.

“Comedy is not a fixed resource,” Mangrum says. “It’s an ever-changing toolbox.”

Continuing this evolution into the 21st century, Mangrum observes that a lot of computational comedy centers on an entire category of commentary he calls “the Great Tech-Industrial Joke.” This focuses on the gap between noble-sounding declared aspirations of technology and the sometimes-dismal outcomes it creates.

Social media, for instance, promised new worlds of connectivity and social exploration, and has benefits people enjoy — but it has also generated polarization, misinformation, and toxicity. Technology’s social effects are complex. Whole television shows, such as “Silicon Valley,” have dug into this terrain.

“The tech industry announces that some of its products have revolutionary or utopian aims, but the achievements of many of them fall far short of that,” Mangrum says. “It’s a funny setup for a joke. People have been claiming we’re saving the world, when actually we’re just processing emails faster. But it’s a mode of criticism aimed at big tech, since its products are more complicated.”

A complicated, messy picture

“The Comedy of Computation” digs into several other facets of modern culture and technology. The notion of personal authenticity, as Mangrum observes, is a fairly recent and modern construct in society — and it’s another sphere of life that collides with computing, since social media is full of charges of inauthenticity.

“That ethics of authenticity connects to comedy, as we make jokes about people not being authentic,” Mangrum says.

“The Comedy of Computation” has received praise from other scholars. Mark Goble, a professor of English at the University of California at Berkeley, has called it “essential for understanding the technological world in its complexity, absurdity, and vibrancy.”

For his part, Mangrum emphasizes that his book is an exploration of the full complexity of technology, culture, and society.

“There’s this really complicated, messy picture,” Mangrum says. “And comedy sometimes finds a way of experiencing and finding pleasure in that messiness, and other times it neatly wraps it up in a lesson that can make things neater than they actually are.”

Mangrum adds that the book focuses on “the combination of the threat and pleasure that’s involved across the history of the computer, in the ways it’s been assimilated and shaped society, with real advances and benefits, along with real threats, for instance to employment. I’m interested in the duality, the simultaneous and seemingly conflicting features of that experience.”

Study could lead to LLMs that are better at complex reasoning

Tue, 07/08/2025 - 12:00am

For all their impressive capabilities, large language models (LLMs) often fall short when given challenging new tasks that require complex reasoning skills.

While an accounting firm’s LLM might excel at summarizing financial reports, that same model could fail unexpectedly if tasked with predicting market trends or identifying fraudulent transactions.

To make LLMs more adaptable, MIT researchers investigated how a certain training technique can be strategically deployed to boost a model’s performance on unfamiliar, difficult problems.

They show that test-time training, a method that involves temporarily updating some of a model’s inner workings during deployment, can lead to a sixfold improvement in accuracy. The researchers developed a framework for implementing a test-time training strategy that uses examples of the new task to maximize these gains.

Their work could improve a model’s flexibility, enabling an off-the-shelf LLM to adapt to complex tasks that require planning or abstraction. This could lead to LLMs that would be more accurate in many applications that require logical deduction, from medical diagnostics to supply chain management.

“Genuine learning — what we did here with test-time training — is something these models can’t do on their own after they are shipped. They can’t gain new skills or get better at a task. But we have shown that if you push the model a little bit to do actual learning, you see that huge improvements in performance can happen,” says Ekin Akyürek PhD ’25, lead author of the study.

Akyürek is joined on the paper by graduate students Mehul Damani, Linlu Qiu, Han Guo, and Jyothish Pari; undergraduate Adam Zweiger; and senior authors Yoon Kim, an assistant professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Jacob Andreas, an associate professor in EECS and a member of CSAIL. The research will be presented at the International Conference on Machine Learning.

Tackling hard domains

LLM users often try to improve the performance of their model on a new task using a technique called in-context learning. They feed the model a few examples of the new task as text prompts, which guide the model’s outputs.

But in-context learning doesn’t always work for problems that require logic and reasoning.

The MIT researchers investigated how test-time training can be used in conjunction with in-context learning to boost performance on these challenging tasks. Test-time training involves updating some model parameters — the internal variables it uses to make predictions — using a small amount of new data specific to the task at hand.

The researchers explored how test-time training interacts with in-context learning. They studied design choices that maximize the performance improvements one can coax out of a general-purpose LLM.

“We find that test-time training is a much stronger form of learning. While simply providing examples can modestly boost accuracy, actually updating the model with those examples can lead to significantly better performance, particularly in challenging domains,” Damani says.

In-context learning requires a small set of task examples, including problems and their solutions. The researchers use these examples to create a task-specific dataset needed for test-time training.

To expand the size of this dataset, they create new inputs by slightly changing the problems and solutions in the examples, such as by horizontally flipping some input data. They find that training the model on the outputs of this new dataset leads to the best performance.

In addition, the researchers only update a small number of model parameters using a technique called low-rank adaptation, which improves the efficiency of the test-time training process.

“This is important because our method needs to be efficient if it is going to be deployed in the real world. We find that you can get huge improvements in accuracy with a very small amount of parameter training,” Akyürek says.

Developing new skills

Streamlining the process is key, since test-time training is employed on a per-instance basis, meaning a user would need to do this for each individual task. The updates to the model are only temporary, and the model reverts to its original form after making a prediction.

A model that usually takes less than a minute to answer a query might take five or 10 minutes to provide an answer with test-time training, Akyürek adds.

“We wouldn’t want to do this for all user queries, but it is useful if you have a very hard task that you want the model to solve well. There also might be tasks that are too challenging for an LLM to solve without this method,” he says.

The researchers tested their approach on two benchmark datasets of extremely complex problems, such as IQ puzzles. It boosted accuracy as much as sixfold over techniques that use only in-context learning.

Tasks that involved structured patterns or completely unfamiliar types of data showed the largest performance improvements.

“For simpler tasks, in-context learning might be OK. But updating the parameters themselves might develop a new skill in the model,” Damani says.

In the future, the researchers want to use these insights toward the development of models that continually learn.

The long-term goal is an LLM that, given a query, can automatically determine if it needs to use test-time training to update parameters or if it can solve the task using in-context learning, and then implement the best test-time training strategy without the need for human intervention.

This work is supported, in part, by the MIT-IBM Watson AI Lab and the National Science Foundation.

MIT chemists boost the efficiency of a key enzyme in photosynthesis

Mon, 07/07/2025 - 2:00pm

During photosynthesis, an enzyme called rubisco catalyzes a key reaction — the incorporation of carbon dioxide into organic compounds to create sugars. However, rubisco, which is believed to be the most abundant enzyme on Earth, is very inefficient compared to the other enzymes involved in photosynthesis.

MIT chemists have now shown that they can greatly enhance a version of rubisco found in bacteria from a low-oxygen environment. Using a process known as directed evolution, they identified mutations that could boost rubisco’s catalytic efficiency by up to 25 percent.

The researchers now plan to apply their technique to forms of rubisco that could be used in plants to help boost their rates of photosynthesis, which could potentially improve crop yields.

“This is, I think, a compelling demonstration of successful improvement of a rubisco’s enzymatic properties, holding out a lot of hope for engineering other forms of rubisco,” says Matthew Shoulders, the Class of 1942 Professor of Chemistry at MIT.

Shoulders and Robert Wilson, a research scientist in the Department of Chemistry, are the senior authors of the new study, which appears this week in the Proceedings of the National Academy of Sciences. MIT graduate student Julie McDonald is the paper’s lead author.

Evolution of efficiency

When plants or photosynthetic bacteria absorb energy from the sun, they first convert it into energy-storing molecules such as ATP. In the next phase of photosynthesis, cells use that energy to transform a molecule known as ribulose bisphosphate into glucose, which requires several additional reactions. Rubisco catalyzes the first of those reactions, known as carboxylation. During that reaction, carbon from CO2 is added to ribulose bisphosphate.

Compared to the other enzymes involved in photosynthesis, rubisco is very slow, catalyzing only one to 10 reactions per second. Rubisco can also interact with oxygen, leading to a competing reaction that incorporates oxygen instead of carbon — a process that wastes some of the energy absorbed from sunlight.

“For protein engineers, that’s a really attractive set of problems because those traits seem like things that you could hopefully make better by making changes to the enzyme’s amino acid sequence,” McDonald says.

Previous research has led to improvement in rubisco’s stability and solubility, which resulted in small gains in enzyme efficiency. Most of those studies used directed evolution — a technique in which a naturally occurring protein is randomly mutated and then screened for the emergence of new, desirable features.

This process is usually done using error-prone PCR, a technique that first generates mutations in vitro (outside of the cell), typically introducing only one or two mutations in the target gene. In past studies on rubisco, this library of mutations was then introduced into bacteria that grow at a rate relative to rubisco activity. Limitations in error-prone PCR and in the efficiency of introducing new genes restrict the total number of mutations that can be generated and screened using this approach. Manual mutagenesis and selection steps also add more time to the process over multiple rounds of evolution.

The MIT team instead used a newer mutagenesis technique that the Shoulders Lab previously developed, called MutaT7. This technique allows the researchers to perform both mutagenesis and screening in living cells, which dramatically speeds up the process. Their technique also enables them to mutate the target gene at a higher rate.

“Our continuous directed evolution technique allows you to look at a lot more mutations in the enzyme than has been done in the past,” McDonald says.

Better rubisco

For this study, the researchers began with a version of rubisco, isolated from a family of semi-anaerobic bacteria known as Gallionellaceae, that is one of the fastest forms of rubisco found in nature. During the directed evolution experiments, which were conducted in E. coli, the researchers kept the microbes in an environment with atmospheric levels of oxygen, creating evolutionary pressure to adapt to oxygen.

After six rounds of directed evolution, the researchers identified three different mutations that improved the rubisco’s resistance to oxygen. Each of these mutations is located near the enzyme’s active site (where it performs carboxylation or oxygenation). The researchers believe that these mutations improve the enzyme’s ability to preferentially interact with carbon dioxide over oxygen, which leads to an overall increase in carboxylation efficiency.

“The underlying question here is: Can you alter and improve the kinetic properties of rubisco to operate better in environments where you want it to operate better?” Shoulders says. “What changed through the directed evolution process was that rubisco began to like to react with oxygen less. That allows this rubisco to function well in an oxygen-rich environment, where normally it would constantly get distracted and react with oxygen, which you don’t want it to do.”

In ongoing work, the researchers are applying this approach to other forms of rubisco, including rubisco from plants. Plants are believed to lose about 30 percent of the energy from the sunlight they absorb through a process called photorespiration, which occurs when rubisco acts on oxygen instead of carbon dioxide.

“This really opens the door to a lot of exciting new research, and it’s a step beyond the types of engineering that have dominated rubisco engineering in the past,” Wilson says. “There are definite benefits to agricultural productivity that could be leveraged through a better rubisco.”

The research was funded, in part, by the National Science Foundation, the National Institutes of Health, an Abdul Latif Jameel Water and Food Systems Lab Grand Challenge grant, and a Martin Family Society Fellowship for Sustainability.

Professor Emeritus Barry Vercoe, a pioneering force in computer music, dies at 87

Mon, 07/07/2025 - 12:30pm

MIT Professor Emeritus Barry Lloyd Vercoe, a pioneering force in computer music, a founding faculty member of the MIT Media Lab, and a leader in the development of MIT’s Music and Theater Arts Section, passed away on June 15. He was 87.

Vercoe’s life was a rich symphony of artistry, science, and innovation that led to profound enhancements of musical experience for expert musicians as well as for the general public — and especially young people.

Born in Wellington, New Zealand, on July 24, 1937, Vercoe earned bachelor’s degrees in music (in 1959) and mathematics (in 1962) from the University of Auckland, followed by a doctor of musical arts in music composition from the University of Michigan in 1968.

After completing postdoctoral research in digital audio processing at Princeton University and a visiting lectureship at Yale University, Vercoe joined MIT’s Department of Humanities (Music) in 1971, beginning a tenure in the department that lasted through 1984. During this period, he played a key role in advancing what would become MIT’s Music and Theater Arts (MTA) Section, helping to shape its forward-thinking curriculum and interdisciplinary philosophy. Vercoe championed the integration of musical creativity with scientific inquiry, laying the groundwork for MTA’s enduring emphasis on music technology and experimental composition.

In 1973, Vercoe founded MIT’s Experimental Music Studio (EMS) — the Institute’s first dedicated computer music facility, and one of the first in the world. Operated under the auspices of the music program, EMS became a crucible for innovation in algorithmic composition, digital synthesis, and computer-assisted performance. His leadership not only positioned MIT as a hub for music technology, but also influenced how the Institute approached the intersection of the arts with engineering. This legacy is honored today by a commemorative plaque in the Kendall Square MBTA station.

Violist, faculty founder of the MIT Chamber Music Society, and Institute Professor Marcus Thompson says: “Barry was first and foremost a fine musician, and composer for traditional instruments and ensembles. As a young professor, he taught our MIT undergraduates to write and sing Renaissance counterpoint as he envisioned how the act of traditional music-making offered a guide to potential artistic interaction between humans and computers. In 1976, he enlisted me to premiere what became his iconic, and my most-performed, work, ‘Synapse for Viola and Computer.’”

During a Guggenheim Fellowship in 1982–83, Vercoe developed the Synthetic Performer, a groundbreaking real-time interactive accompaniment system, while working closely with flautist Larry Beauregard at the Institute for Research and Coordination in Acoustics/Music (IRCAM) in Paris.

In 1984, Vercoe became a founding faculty member of the MIT Media Lab, where he launched the Music, Mind, and Machine group. His research spanned machine listening, music cognition, and real-time digital audio synthesis. His Csound language, created in 1985, is still widely used for music programming, and his contributions helped define the MPEG-4 Structured Audio standard.

He also served as associate academic head of the Media Lab’s graduate program in Media Arts and Sciences (MAS). Vercoe mentored many future leaders in digital music and sound computation, including two of his MAS graduate students — Anna Huang SM ’08 and Paris Smaragdis PhD ’01 — who have recently joined MIT’s music faculty, as well as Miller Puckette, an emeritus faculty member at the University of California at San Diego, and Richard Boulanger, a professor of electronic production and design at the Berklee College of Music.

“Barry Vercoe will be remembered by designers, developers, researchers, and composers for his greatest ‘composition,’ Csound, his free and open-source software synthesis language,” states Boulanger. “I know that, through Csound, Barry’s musical spirit will live on, not only in my teaching, my research, and my music, but in the apps, plugins, and musical compositions of generations to come.”

Tod Machover, faculty director of the MIT Media Lab and Muriel R. Cooper Professor of Music and Media, reflects, “Barry Vercoe was a giant in the field of computer music whose innovations in software synthesis, interactive performance, and educational tools for young people influenced and inspired many, including myself. He was a superb mentor, always making sure that artistic sensibility drove music tech innovation, and that sophisticated expression was at the core of Media Lab — and MIT — culture.”

Vercoe’s work earned numerous accolades. In addition to the Guggenheim Fellowship, he was also honored with the 1992 Computerworld Smithsonian Award for innovation and the 2004 SEAMUS Lifetime Achievement Award.

Beyond MIT, Vercoe consulted with Analog Devices and collaborated with international institutions like IRCAM under the direction of Pierre Boulez. His commitment to democratizing music technology was evident in his contributions to the One Laptop per Child initiative, which brought accessible digital sound tools to young people in underserved communities worldwide.

He is survived by his former wives, Kathryn Veda Vaughn and Elizabeth Vercoe; their children, Andrea Vercoe and Scott Vercoe; and generations of students and collaborators who continue to build on his groundbreaking work. A memorial service for family will be held in New Zealand later this summer, and a special event in his honor will take place at MIT in the fall. The Media Lab will share details about the MIT gathering as they become available.

Vercoe was named professor emeritus at the MIT Media Lab upon his retirement in 2010. His legacy embodies the lab’s — and MIT’s — vision of creative, ethical, interdisciplinary research at the convergence of art, science, and technology. His music, machines, and generously inventive spirit will forever shape the way we listen, learn, and communicate.

New postdoctoral fellowship program to accelerate innovation in health care

Mon, 07/07/2025 - 10:00am

The MIT Health and Life Sciences Collaborative (MIT HEALS) is launching the Biswas Postdoctoral Fellowship Program to advance the work of outstanding early-career researchers in health and life sciences. Supported by a gift from the Biswas Family Foundation, the program aims to help apply cutting-edge research to improve health care and the lives of millions.

The program will support exceptional postdocs dedicated to innovation in human health care through a full range of pathways, such as leveraging AI in health-related research, developing low-cost diagnostics, and exploring the convergence of life sciences with areas such as economics, business, policy, and the humanities. With initial funding of $12 million, five four-year fellowships will be awarded in each of the next four years, starting in early 2026.

“An essential goal of MIT HEALS is to find new ways and opportunities to deliver health care solutions at scale, and the Biswas Family Foundation shares our commitment to scalable innovation and broad impact. MIT is also in the talent business, and the foundation’s gift allows us to bring exceptional scholars to campus to explore some of the most pressing issues in human health and build meaningful connections across academia and industry. We look forward to welcoming the first cohort of Biswas Fellows to MIT,” says MIT president Sally Kornbluth.

“We are deeply honored to launch this world-class postdoctoral fellows program,” adds Anantha P. Chandrakasan, MIT’s chief innovation and strategy officer and head of MIT HEALS. “We fully expect to attract top candidates from around the globe to lead innovative cross-cutting projects in AI and health, cancer therapies, diagnostics, and beyond. These fellows will be selected through a rigorous process overseen by a distinguished committee, and will have the opportunity to collaborate with our faculty on the most promising and impactful ideas.”

Angela Koehler, faculty lead of MIT HEALS, professor in MIT’s Department of Biological Engineering, and associate director of the Koch Institute for Integrative Cancer Research, emphasized that the objectives of MIT HEALS align well with a stated goal of the Biswas Family Foundation: to leverage “scientific and technological advancements to revolutionize health care and make a lasting impact on global public health.”

“Health care is a team sport,” Koehler says. “MIT HEALS seeks to create connections involving investigators with diverse expertise across the Institute to tackle the most transformative problems impacting human health. Members of the MIT community are well poised to participate in teams and make an impact.”

MIT HEALS also seeks to maximize its effectiveness by expanding collaboration with medical schools and hospitals, starting with defining important problems that can be approached through research, and continuing all the way to clinical studies, Koehler says.

The Biswas Family Foundation has already demonstrated a similar strategy.

“The Biswas family has a history of enabling connections and partnerships between institutions that each bring a piece to the puzzle,” Koehler says. “This could be a dataset, an algorithm, an agent, a technology platform, or patients.”

Hope Biswas, co-founder of the Biswas Family Foundation with her husband, MIT alumnus Sanjit Biswas SM ’05, also highlighted the synergies between the foundation and MIT.

“The Biswas Family Foundation is proud to support the MIT HEALS initiative, which reimagines how scientific discovery can translate into real-world health impact. Its focus on promoting interdisciplinary collaboration to find new solutions to challenges in health care aligns closely with our mission to advance science and technology to improve health outcomes at scale,” Biswas says.

“As part of this commitment,” Biswas adds, “we are especially proud to support outstanding postdoctoral scholars focused on high-impact cross-disciplinary work in fields such as computational biology, nanoscale therapeutics, women’s health, and fundamental, curiosity-driven life sciences research. We are excited to contribute to an effort that brings together cutting-edge science and a deep commitment to translating knowledge into action.”

AI and machine-learning systems present a new universe of opportunities to investigate disease, biological mechanisms, therapeutics, and health care delivery using huge datasets.

“AI and computational systems biology can improve the accuracy of diagnostic approaches, enable the development of precision medicines, improve choices related to individualized treatment strategy, and improve operational efficiency within health care systems,” says Koehler. “Sanjit and Hope’s support of broad initiatives in AI and computational systems biology will help MIT researchers explore a variety of paths to impact human health on a large scale.”

Frontiers in health-related research are increasingly found where diverse fields converge, and Koehler provides the example of how advances in high-throughput experimentation to develop large datasets “may couple well with the development of new computation or AI tools.” She adds that the four-year funding term provided by the postdoctoral fellowship is “long enough to enable fellows to think big and take on projects at interfaces, emerging as bilingual researchers at the end of the program.”

Chandrakasan sees potential in the program for the Biswas Fellows to make revolutionary progress in health research.

“I’m incredibly grateful to the Biswas Family Foundation for their generous support in enabling transformative research at MIT,” Chandrakasan says.

Exploring data and its influence on political behavior

Mon, 07/07/2025 - 10:00am

Data and politics are becoming increasingly intertwined. Today’s political campaigns and voter mobilization efforts are deeply data-driven, and voters, pollsters, and elected officials rely on data to make choices that have local, regional, and national impacts.

A Department of Political Science course offers students tools to help make sense of these choices and their outcomes.

In class 17.831 (Data and Politics), students are introduced to principles and practices necessary to understand electoral and other types of political behavior. Taught by associate professor of political science Daniel Hidalgo, students use real-world datasets to explore topics like election polling and prediction, voter turnout, voter targeting, and shifts in public opinion over time.

The course aims to enable students to describe why and how the use of data and statistical methods has changed electoral politics, to understand the basic principles of social science statistics, and to analyze data using modern statistical computing tools. The course capstone is an original project involving the collection, analysis, and interpretation of original survey data of the kind used in modern campaigns.
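As a small illustration of the statistics such a course draws on (a generic textbook formula, not course material), the 95 percent margin of error for a proportion estimated from a simple random sample can be computed as:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an estimated proportion p
    from a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 respondents finding 52 percent support:
moe = margin_of_error(0.52, 1000)
print(f"52% +/- {moe * 100:.1f} points")  # 52% +/- 3.1 points
```

Real campaign surveys add design effects from weighting and stratification, which widen this naive interval — one reason courses like this emphasize survey design as much as the arithmetic.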

“I wanted to create an applied, practice-based course that would appeal to undergraduates and provide a foundation for parsing, understanding, and reporting on large datasets in politics,” says Hidalgo, who redesigned the course for the spring 2025 semester.

Hidalgo, who also works in the Political Methodology Lab at MIT, investigates the political economy of elections, campaigns, and representation in developing democracies, especially in Latin America, as well as quantitative methods in the social sciences.

Politics and modernity

The influence of, and access to, artificial intelligence and large language models makes a course like Data and Politics even more important, Hidalgo says. “You have to understand the people at the other end of the data,” he argues.

The course also centers the human element in politics, exploring conflict and bias, their structures, and their impacts, while working to improve information literacy and coherent storytelling.

“Data analysis and collection will never be perfect,” Hidalgo says. “But analyzing and understanding who holds which ideas, and why, and using the information to tell a coherent story is valuable in politics and elsewhere.”

The “always on” nature of news and related content, coupled with the variety of communications channels available to voters, has increased the complexity of the data collection process in polling and campaigns. “In the past, people would answer the phone when you called their homes,” Hidalgo notes, describing analog methods previously used to collect voter data. Now, political scientists, data analysts, and others must contend with the availability of streaming content, mobile devices, and other channels comprising a vast, fractured media ecosystem.

The course opens a window into what happens behind the scenes of local and national political campaigns, which appealed to second-year political science major Jackson Hamilton. “I took this class hoping to expand my ability to use coding for political science applications, and in order to better understand how political models and predictions work,” he says.

“We tailor-made our own sets of questions and experimental designs that we thought would be interesting,” Hamilton adds. “I found that political issues that get a lot of media coverage are not necessarily the same issues which divide lawmakers, at least locally.”

Transparency and accountability in politics and other areas

Teaching students to use tools like polling and data analysis effectively can improve their ability to identify and combat disinformation and misinformation. “As a political scientist, I’m substantively engaged,” Hidalgo says, “and I’d like to help others be engaged, too.”

“There’s lots of data available, and this course provides a foundation and the resources necessary to understand and visualize it,” Hidalgo continues. “The ability to design, implement, and understand surveys has value inside and outside the classroom.”

In politics, Hidalgo believes, equipping students to navigate these spaces effectively can increase civic engagement. Data, he says, can help defend ideas. “There’s so much information, it’s important to develop the skills and abilities necessary to understand and visualize it,” he says. “This has value for everyone.”

Second-year physics major Sean Wilson, who also took the class this spring, notes the value of data visualization and analysis both as a potential physicist and a voter. “Data analysis in both politics and in physics is essential work given that voting tendencies, public opinion, and government leadership change so often in the United States,” he says, “and that modeling can be used to support physical hypotheses and improve our understanding of how things work.”

For Wilson, the course can help anyone interested in understanding large groups’ behaviors. “Political scientists are constantly working to better understand how and why certain events occur in U.S. politics, and data analysis is an effective tool for doing so,” he says. “Members of a representative democracy can make better decisions with this kind of information.”

Hamilton, meanwhile, learned more about the behind-the-scenes machinery at work in electoral politics. “I had the opportunity to create a couple of budget trade-off questions, to get a sense of what people actually thought the government should spend money on when they had to make choices,” he says.

“Computer science and data science aren’t just useful for STEM applications; data science approaches can also be extremely useful in many social sciences,” Hamilton argues.

“[Hidalgo helped me realize] that I needed to understand and use data science approaches to gain a deeper understanding of my areas of interest,” Hamilton says. “He focuses on how different approaches in coding can be applied to different types of problems in political science.” 

Study shows how a common fertilizer ingredient benefits plants

Mon, 07/07/2025 - 8:00am

Lanthanides are a class of rare earth elements that in many countries are added to fertilizer as micronutrients to stimulate plant growth. But little is known about how they are absorbed by plants or influence photosynthesis, potentially leaving their benefits untapped.

Now, researchers from MIT have shed light on how lanthanides move through and operate within plants. These insights could help farmers optimize their use to grow some of the world’s most popular crops.

Published today in the Journal of the American Chemical Society, the study shows that a single nanoscale dose of lanthanides applied to seeds can make some of the world’s most common crops more resilient to UV stress. The researchers also uncovered the chemical processes by which lanthanides interact with the chlorophyll pigments that drive photosynthesis, showing that different lanthanide elements strengthen chlorophyll by replacing the magnesium at its center.

“This is a first step to better understand how these elements work in plants, and to provide an example of how they could be better delivered to plants, compared to simply applying them in the soil,” says Associate Professor Benedetto Marelli, who conducted the research with postdoc Giorgio Rizzo. “This is the first example of a thorough study showing the effects of lanthanides on chlorophyll, and their beneficial effects to protect plants from UV stress.”

Inside plant connections

Certain lanthanides are used as contrast agents in MRI and for applications including light-emitting diodes, solar cells, and lasers. Over the last 50 years, lanthanides have become increasingly used in agriculture to enhance crop yields, with China alone applying lanthanide-based fertilizers to nearly 4 million hectares of land each year.

“Lanthanides have been considered for a long time to be biologically irrelevant, but that’s changed in agriculture, especially in China,” says Rizzo, the paper’s first author. “But we largely don’t know how lanthanides work to benefit plants — nor do we understand their uptake mechanisms from plant tissues.”

Recent studies have shown that low concentrations of lanthanides can promote plant growth, root elongation, hormone synthesis, and stress tolerance, but higher doses can harm plants. Striking the right balance has been difficult because little is understood about how lanthanides are absorbed by plants or how they interact with the soil around roots.

For the study, the researchers leveraged seed coating and treatment technologies they previously developed to investigate the way the plant pigment chlorophyll interacts with lanthanides, both inside and outside of plants. Up until now, researchers haven’t been sure whether chlorophyll interacts with lanthanide ions at all.

Chlorophyll drives photosynthesis, but the pigments lose their ability to efficiently absorb light when the magnesium ion at their core is removed. The researchers discovered that lanthanides can fill that void, helping chlorophyll pigments partially recover some of their optical properties in a process known as re-greening.

“We found that lanthanides can boost several parameters of plant health,” Marelli says. “They mostly accumulate in the roots, but a small amount also makes its way to the leaves, and some of the new chlorophyll molecules made in leaves have lanthanides incorporated in their structure.”

This study also offers the first experimental evidence that lanthanides can increase plant resilience to UV stress, something the researchers say was completely unexpected.

“Chlorophylls are very sensitive pigments,” Rizzo says. “They can convert light to energy in plants, but when they are isolated from the cell structure, they rapidly hydrolyze and degrade. However, in the form with lanthanides at their center, they are pretty stable, even after extracting them from plant cells.”

Using several spectroscopic techniques, the researchers found that the benefits held across a range of staple crops, including chickpea, barley, corn, and soybeans.

The findings could be used to boost crop yield and increase the resilience of some of the world’s most popular crops to extreme weather.

“As we move into an environment where extreme heat and extreme climate events are more common, and particularly where we can have prolonged periods of sun in the field, we want to provide new ways to protect our plants,” Marelli says. “There are existing agrochemicals that can be applied to leaves for protecting plants from stressors such as UV, but they can be toxic, increase microplastics, and can require multiple applications. This could be a complementary way to protect plants from UV stress.”

Identifying new applications

The researchers also found that larger lanthanide elements like lanthanum were more effective at strengthening chlorophyll pigments than smaller ones. Lanthanum is considered a low-value byproduct of rare earths mining, and can become a burden to the rare earth element (REE) supply chain due to the need to separate it from more desirable rare earths. Increasing the demand for lanthanum could diversify the economics of REEs and improve the stability of their supply chain, the scientists suggest.

“This study shows what we could do with these lower-value metals,” Marelli says. “We know lanthanides are extremely useful in electronics, magnets, and energy. In the U.S., there’s a big push to recycle them. That’s why for the plant studies, we focused on lanthanum, being the most abundant, cheapest lanthanide ion.”

Moving forward, the team plans to explore how lanthanides work with other biological molecules, including proteins in the human body.

In agriculture, the team hopes to scale up its research to include field and greenhouse studies to continue testing the results of UV resilience on different crop types and in experimental farm conditions.

“Lanthanides are already widely used in agriculture,” Rizzo says. “We hope this study provides evidence that allows more conscious use of them and also a new way to apply them through seed treatments.”

The research was supported by the MIT Climate Grand Challenge and the Office for Naval Research.
