MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Exploring the cellular neighborhood

Mon, 03/11/2024 - 4:50pm

Cells rely on complex molecular machines composed of protein assemblies to perform essential functions such as energy production, gene expression, and protein synthesis. To better understand how these machines work, scientists capture snapshots of them by isolating proteins from cells and using various methods to determine their structures. However, isolating proteins from cells also removes them from the context of their native environment, including protein interaction partners and cellular location.

Recently, cryogenic electron tomography (cryo-ET) has emerged as a way to observe proteins in their native environment by imaging frozen cells at different angles to obtain three-dimensional structural information. This approach is exciting because it allows researchers to directly observe how and where proteins associate with each other, revealing the cellular neighborhood of those interactions within the cell.

With the technology available to image proteins in their native environment, MIT graduate student Barrett Powell wondered if he could take it one step further: What if molecular machines could be observed in action? In a paper published March 8 in Nature Methods, Powell describes the method he developed, called tomoDRGN, for modeling structural differences of proteins in cryo-ET data that arise from protein motions or proteins binding to different interaction partners. These variations are known as structural heterogeneity. 

Although Powell had joined the lab of MIT associate professor of biology Joey Davis as an experimental scientist, he recognized the potential impact of computational approaches in understanding structural heterogeneity within a cell. Previously, the Davis Lab developed a related methodology named cryoDRGN to understand structural heterogeneity in purified samples. As Powell and Davis saw cryo-ET rising in prominence in the field, Powell took on the challenge of re-imagining this framework to work in cells.

When researchers solve structures using purified samples, each particle is imaged only once. By contrast, cryo-ET data is collected by imaging each particle more than 40 times from different angles. That meant tomoDRGN needed to merge the information from more than 40 images, which was where the project hit a roadblock: the sheer amount of data led to an information overload.

To address this, Powell rebuilt the cryoDRGN model to prioritize only the highest-quality data. When the same particle is imaged multiple times, radiation damage accumulates, so the images acquired earlier tend to be of higher quality because the particles are less damaged.

“By excluding some of the lower-quality data, the results were actually better than using all of the data — and the computational performance was substantially faster,” Powell says.
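
As a rough illustration of that idea (not tomoDRGN's actual interface; the array names and the cutoff of eight images below are hypothetical), an exposure-based filter might simply keep the earliest tilt images collected for each particle:

```python
import numpy as np

def keep_early_tilts(tilt_images, acquisition_order, n_keep=8):
    """Keep the n_keep tilt images acquired earliest for one particle.

    tilt_images: array of shape (n_tilts, H, W), all views of one particle.
    acquisition_order: array of shape (n_tilts,), 0 = collected first.
    Earlier exposures have accumulated less radiation damage, so they
    tend to carry higher-resolution signal.
    """
    keep = np.argsort(acquisition_order)[:n_keep]
    return tilt_images[keep]

# Hypothetical example: 41 tilt images of a single ribosome particle.
images = np.random.rand(41, 64, 64)    # stand-in for real subtomogram tilt data
order = np.random.permutation(41)      # order in which the tilts were collected
subset = keep_early_tilts(images, order, n_keep=8)
print(subset.shape)                    # (8, 64, 64): only the least-damaged views
```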

Just as Powell was beginning work on testing his model, he had a stroke of luck: The authors of a groundbreaking new study that visualized, for the first time, ribosomes inside cells at near-atomic resolution, shared their raw data on the Electron Microscopy Public Image Archive (EMPIAR). This dataset was an ideal test case, and Powell used it to demonstrate that tomoDRGN could uncover structural heterogeneity within cryo-ET data.

According to Powell, one exciting result is what tomoDRGN found surrounding a subset of ribosomes in the EMPIAR dataset. Some of the ribosomal particles were associated with a bacterial cell membrane and engaged in a process called cotranslational translocation. This occurs when a protein is being simultaneously synthesized and transported across a membrane. Researchers can use this result to make new hypotheses about how the ribosome functions with other protein machinery integral to transporting proteins outside of the cell, now guided by a structure of the complex in its native environment. 

After seeing that tomoDRGN could resolve structural heterogeneity from a structurally diverse dataset, Powell was curious: How small a population could tomoDRGN identify? For that test, he chose a protein named apoferritin, which is a commonly used benchmark for cryo-ET and is often treated as structurally homogeneous. Ferritin is a protein used for iron storage and is referred to as apoferritin when it lacks iron.

Surprisingly, in addition to the expected particles, tomoDRGN revealed a previously unreported minor population of ferritin particles — with iron bound — making up just 2 percent of the dataset. This result further demonstrated tomoDRGN's ability to identify structural states that occur so infrequently that they would be averaged out of a 3D reconstruction.

Powell and other members of the Davis Lab are excited to see how tomoDRGN can be applied to further ribosomal studies and to other systems. Davis works on understanding how cells assemble, regulate, and degrade molecular machines, so the next steps include exploring ribosome biogenesis within cells in greater detail using this new tool.

“What are the possible states that we may be losing during purification?” Davis asks. “Perhaps more excitingly, we can look at how they localize within the cell and what partners and protein complexes they may be interacting with.”

A new sensor detects harmful “forever chemicals” in drinking water

Mon, 03/11/2024 - 3:00pm

MIT chemists have designed a sensor that detects tiny quantities of perfluoroalkyl and polyfluoroalkyl substances (PFAS) — chemicals found in food packaging, nonstick cookware, and many other consumer products.

These compounds, also known as “forever chemicals” because they do not break down naturally, have been linked to a variety of harmful health effects, including cancer, reproductive problems, and disruption of the immune and endocrine systems.

Using the new sensor technology, the researchers showed that they could detect PFAS levels as low as 200 parts per trillion in a water sample. The device they designed could offer a way for consumers to test their drinking water, and it could also be useful in industries that rely heavily on PFAS chemicals, including the manufacture of semiconductors and firefighting equipment.

“There’s a real need for these sensing technologies. We’re stuck with these chemicals for a long time, so we need to be able to detect them and get rid of them,” says Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT and the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences.

Other authors of the paper are former MIT postdoc and lead author Sohyun Park and MIT graduate student Collette Gordon.

Detecting PFAS

Coatings containing PFAS chemicals are used in thousands of consumer products. In addition to nonstick coatings for cookware, they are also commonly used in water-repellent clothing, stain-resistant fabrics, grease-resistant pizza boxes, cosmetics, and firefighting foams.

These fluorinated chemicals, which have been in widespread use since the 1950s, can be released into water, air, and soil from factories, sewage treatment plants, and landfills. They have been found in drinking water sources in all 50 states.

In 2023, the Environmental Protection Agency created an “advisory health limit” for two of the most hazardous PFAS chemicals, known as perfluorooctanoic acid (PFOA) and perfluorooctyl sulfonate (PFOS). These advisories call for a limit of 0.004 parts per trillion for PFOA and 0.02 parts per trillion for PFOS in drinking water.

Currently, the only way that a consumer could determine if their drinking water contains PFAS is to send a water sample to a laboratory that performs mass spectrometry testing. However, this process takes several weeks and costs hundreds of dollars.

To create a cheaper and faster way to test for PFAS, the MIT team designed a sensor based on lateral flow technology — the same approach used for rapid Covid-19 tests and pregnancy tests. Instead of a test strip coated with antibodies, the new sensor is embedded with a special polymer known as polyaniline, which can switch between semiconducting and conducting states when protons are added to the material.

The researchers deposited these polymers onto a strip of nitrocellulose paper and coated them with a surfactant that can pull fluorocarbons such as PFAS out of a drop of water placed on the strip. When this happens, protons from the PFAS are drawn into the polyaniline and turn it into a conductor, reducing the electrical resistance of the material. This change in resistance, which can be measured precisely using electrodes and sent to an external device such as a smartphone, gives a quantitative measurement of how much PFAS is present.
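
As a minimal sketch of how that readout could become a number, assuming a hypothetical calibration curve measured with known PFAS standards (none of the values below come from the study):

```python
import numpy as np

# Hypothetical calibration data: known PFAS concentrations (parts per trillion)
# versus the fractional drop in strip resistance measured for each standard.
cal_conc_ppt = np.array([0.0, 200.0, 400.0, 800.0, 1600.0])
cal_resistance_drop = np.array([0.00, 0.05, 0.09, 0.16, 0.28])

def estimate_pfas_ppt(r_baseline_ohm, r_sample_ohm):
    """Estimate PFAS concentration from the change in strip resistance.

    More PFAS means more protons donated to the polyaniline, which lowers
    the resistance. Linear interpolation against a calibration curve is a
    simplification of whatever readout model a real device would use.
    """
    drop = (r_baseline_ohm - r_sample_ohm) / r_baseline_ohm
    return float(np.interp(drop, cal_resistance_drop, cal_conc_ppt))

# Example reading from the electrodes (values are illustrative only).
print(estimate_pfas_ppt(r_baseline_ohm=1.0e5, r_sample_ohm=9.0e4))  # ~457 ppt
```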

This approach works only with PFAS that are acidic, which includes two of the most harmful PFAS — PFOA and perfluorobutanoic acid (PFBA).

A user-friendly system

The current version of the sensor can detect concentrations as low as 200 parts per trillion for PFBA, and 400 parts per trillion for PFOA. This is not quite low enough to meet the current EPA guidelines, but the sensor uses only a fraction of a milliliter of water. The researchers are now working on a larger-scale device that would be able to filter about a liter of water through a membrane made of polyaniline, and they believe this approach should increase the sensitivity by more than a hundredfold, with the goal of meeting the very low EPA advisory levels.
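
A back-of-the-envelope way to see why sampling a full liter could help (our rough estimate, not a calculation from the paper, and it assumes the captured PFAS mass scales roughly with the volume of water passed through the membrane):

```latex
\text{preconcentration factor} \;\approx\; \frac{V_{\text{filtered}}}{V_{\text{strip}}}
\;\approx\; \frac{1000~\text{mL}}{\lesssim 1~\text{mL}} \;\gtrsim\; 1000
```

Even if only a modest fraction of that ideal gain were realized in practice, it would be consistent with the more-than-hundredfold improvement the researchers expect.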

“We do envision a user-friendly, household system,” Swager says. “You can imagine putting in a liter of water, letting it go through the membrane, and you have a device that measures the change in resistance of the membrane.”

Such a device could offer a less expensive, rapid alternative to current PFAS detection methods. If PFAS are detected in drinking water, there are commercially available filters that can be used on household drinking water to reduce those levels. The new testing approach could also be useful for factories that manufacture products with PFAS chemicals, so they could test whether the water used in their manufacturing process is safe to release into the environment.

The research was funded by an MIT School of Science Fellowship to Gordon, a Bose Research Grant, and a Fulbright Fellowship to Park.

For people who speak many languages, there’s something special about their native tongue

Sun, 03/10/2024 - 8:01pm

A new study of people who speak many languages has found that there is something special about how the brain processes their native language.

In the brains of these polyglots — people who speak five or more languages — the same language regions light up when they listen to any of the languages that they speak. In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker’s native language. When listening to one’s native language, language network activity drops off significantly.

The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort, the researchers say.

“Something makes it a little bit easier to process — maybe it’s that you’ve spent more time using that language — and you get a dip in activity for the native language compared to other languages that you speak proficiently,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology Program at Harvard University, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor at Carleton University, are the lead authors of the paper, which appears today in the journal Cerebral Cortex.

Many languages, one network

The brain’s language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes. In a 2021 study, Fedorenko’s lab found that when polyglots listened to their native language, their language network was less active than that of people who speak only one language.

In the new study, the researchers wanted to expand on that finding and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency. Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.

“With polyglots, you can do all of the comparisons within one person. You have languages that vary along a continuum, and you can try to see how the brain modulates responses as a function of proficiency,” Fedorenko says.

For the study, the researchers recruited 34 polyglots, each of whom had at least some degree of proficiency in five or more languages and none of whom was bilingual or multilingual from infancy. Sixteen of the participants spoke 10 or more languages, including one who spoke 54 languages with at least some proficiency.

Each participant was scanned with functional magnetic resonance imaging (fMRI) as they listened to passages read in eight different languages. These included their native language, a language they were highly proficient in, a language they were moderately proficient in, and a language in which they described themselves as having low proficiency.

They were also scanned while listening to four languages they didn’t speak at all. Two of these were languages from the same family (such as Romance languages) as a language they could speak, and two were languages completely unrelated to any languages they spoke.

The passages used for the study came from two different sources, which the researchers had previously developed for other language studies. One was a set of Bible stories recorded in many different languages, and the other consisted of passages from “Alice in Wonderland” translated into many languages.

Brain scans revealed that the language network lit up the most when participants listened to languages in which they were the most proficient. However, that did not hold true for the participants’ native languages, which activated the language network much less than non-native languages in which they had similar proficiency. This suggests that people are so proficient in their native language that the language network doesn’t need to work very hard to interpret it.

“As you increase proficiency, you can engage linguistic computations to a greater extent, so you get these progressively stronger responses. But then if you compare a really high-proficiency language and a native language, it may be that the native language is just a little bit easier, possibly because you've had more experience with it,” Fedorenko says.
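
A toy illustration of that pattern, with made-up activation values for one hypothetical participant (the numbers are ours and only mirror the qualitative result described above):

```python
# Hypothetical mean language-network responses (arbitrary fMRI units) for one
# polyglot, averaged over the passages heard in each condition.
responses = {
    "native":                1.1,  # dips below similarly proficient languages
    "high proficiency":      1.6,
    "medium proficiency":    1.3,
    "low proficiency":       0.9,
    "unfamiliar, related":   0.6,
    "unfamiliar, unrelated": 0.4,
}

# The qualitative pattern reported: responses scale with proficiency, except
# for the native language, which evokes a weaker response than non-native
# languages of comparable proficiency.
for condition in sorted(responses, key=responses.get, reverse=True):
    print(f"{condition:22s} {responses[condition]:.1f}")
print("native-language dip:", responses["high proficiency"] - responses["native"])
```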

Brain engagement

The researchers saw a similar phenomenon when polyglots listened to languages that they don’t speak: Their language network was more engaged by languages related to one they could understand than by completely unfamiliar languages.

“Here we’re getting a hint that the response in the language network scales up with how much you understand from the input,” Malik-Moraleda says. “We didn’t quantify the level of understanding here, but in the future we’re planning to evaluate how much people are truly understanding the passages that they're listening to, and then see how that relates to the activation.”

The researchers also found that a brain network known as the multiple demand network, which turns on whenever the brain is performing a cognitively demanding task, becomes activated when listening to languages other than one’s native language.

“What we’re seeing here is that the language regions are engaged when we process all these languages, and then there’s this other network that comes in for non-native languages to help you out because it’s a harder task,” Malik-Moraleda says.

In this study, most of the polyglots began studying their non-native languages as teenagers or adults, but in future work, the researchers hope to study people who learned multiple languages from a very young age. They also plan to study people who learned one language from infancy but moved to the United States at a very young age and began speaking English as their dominant language, while becoming less proficient in their native language, to help disentangle the effects of proficiency versus age of acquisition on brain responses.

The research was funded by the McGovern Institute for Brain Research, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

Researchers enhance peripheral vision in AI models

Fri, 03/08/2024 - 12:00am

Peripheral vision enables humans to see shapes that aren’t directly in our line of sight, albeit with less detail. This ability expands our field of vision and can be helpful in many situations, such as detecting a vehicle approaching our car from the side.

Unlike humans, AI does not have peripheral vision. Equipping computer vision models with this ability could help them detect approaching hazards more effectively or predict whether a human driver would notice an oncoming object.

Taking a step in this direction, MIT researchers developed an image dataset that allows them to simulate peripheral vision in machine learning models. They found that training models with this dataset improved the models’ ability to detect objects in the visual periphery, although the models still performed worse than humans.

Their results also revealed that, unlike with humans, neither the size of objects nor the amount of visual clutter in a scene had a strong impact on the AI’s performance.

“There is something fundamental going on here. We tested so many different models, and even when we train them, they get a little bit better but they are not quite like humans. So, the question is: What is missing in these models?” says Vasha DuTell, a postdoc and co-author of a paper detailing this study.

Answering that question may help researchers build machine learning models that can see the world more like humans do. In addition to improving driver safety, such models could be used to develop displays that are easier for people to view.

Plus, a deeper understanding of peripheral vision in AI models could help researchers better predict human behavior, adds lead author Anne Harrington MEng ’23.

“Modeling peripheral vision, if we can really capture the essence of what is represented in the periphery, can help us understand the features in a visual scene that make our eyes move to collect more information,” she explains.

Their co-authors include Mark Hamilton, an electrical engineering and computer science graduate student; Ayush Tewari, a postdoc; Simon Stent, research manager at the Toyota Research Institute; and senior authors William T. Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Ruth Rosenholtz, principal research scientist in the Department of Brain and Cognitive Sciences and a member of CSAIL. The research will be presented at the International Conference on Learning Representations.

“Any time you have a human interacting with a machine — a car, a robot, a user interface — it is hugely important to understand what the person can see. Peripheral vision plays a critical role in that understanding,” Rosenholtz says.

Simulating peripheral vision

Extend your arm in front of you and put your thumb up — the small area around your thumbnail is seen by your fovea, the small depression in the middle of your retina that provides the sharpest vision. Everything else you can see is in your visual periphery. Your visual cortex represents a scene with less detail and reliability as it moves farther from that sharp point of focus.

Many existing approaches to model peripheral vision in AI represent this deteriorating detail by blurring the edges of images, but the information loss that occurs in the optic nerve and visual cortex is far more complex.

For a more accurate approach, the MIT researchers started with a technique used to model peripheral vision in humans. Known as the texture tiling model, this method transforms images to represent a human’s visual information loss.  

They modified this model so it could transform images similarly, but in a more flexible way that doesn’t require knowing in advance where the person or AI will point their eyes.

“That let us faithfully model peripheral vision the same way it is being done in human vision research,” says Harrington.

The researchers used this modified technique to generate a huge dataset of transformed images that appear more textural in certain areas, to represent the loss of detail that occurs when a human looks further into the periphery.
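
The transform itself is more sophisticated than anything shown here, but as a crude illustration of eccentricity-dependent detail loss (our simplified stand-in, not the texture tiling model), one could scramble local patches more aggressively the farther they fall from a chosen fixation point:

```python
import numpy as np

rng = np.random.default_rng(0)

def scramble_periphery(image, fix_y, fix_x, patch=8):
    """Crude proxy for eccentricity-dependent detail loss: shuffle pixels
    inside local patches, but mostly in patches far from the fixation point.
    (The texture tiling model pools image statistics over regions that grow
    with eccentricity; this is a much simpler stand-in.)"""
    out = image.copy()
    h, w = image.shape
    max_ecc = np.hypot(h, w) / 2
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            ecc = np.hypot(y + patch / 2 - fix_y, x + patch / 2 - fix_x) / max_ecc
            if rng.random() < ecc:          # farther patches scramble more often
                block = out[y:y + patch, x:x + patch].ravel()
                rng.shuffle(block)
                out[y:y + patch, x:x + patch] = block.reshape(patch, patch)
    return out

img = np.random.rand(128, 128)              # stand-in for a natural image
transformed = scramble_periphery(img, fix_y=64, fix_x=64)
```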

Then they used the dataset to train several computer vision models and compared their performance with that of humans on an object detection task.

“We had to be very clever in how we set up the experiment so we could also test it in the machine learning models. We didn’t want to have to retrain the models on a toy task that they weren’t meant to be doing,” she says.

Peculiar performance

Humans and models were shown pairs of transformed images that were identical except that one image had a target object located in the periphery. Each participant was then asked to pick the image with the target object.
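
A sketch of how such a two-alternative trial might be scored for a model, with a hypothetical detect_target function standing in for whatever detector is under test (this is illustrative and does not reflect the study's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def detect_target(image):
    """Hypothetical stand-in for the model under test: return a score for how
    strongly it believes a target object is present in the image."""
    return float(image.mean() + rng.normal(scale=0.002))

def two_afc_accuracy(image_pairs):
    """Each trial is (image_with_target, image_without_target); the 'choice'
    is whichever image receives the higher target score."""
    correct = sum(detect_target(with_t) > detect_target(without_t)
                  for with_t, without_t in image_pairs)
    return correct / len(image_pairs)

# Toy trials: the target is a faint bright patch placed in the periphery.
pairs = []
for _ in range(200):
    background = rng.random((64, 64))
    with_target = background.copy()
    with_target[48:56, 48:56] += 1.0       # peripheral target
    pairs.append((with_target, background))

print(two_afc_accuracy(pairs))             # close to 1.0 for this easy toy task
```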

“One thing that really surprised us was how good people were at detecting objects in their periphery. We went through at least 10 different sets of images that were just too easy. We kept needing to use smaller and smaller objects,” Harrington adds.

The researchers found that training models from scratch with their dataset led to the greatest performance boosts, improving their ability to detect and recognize objects. Fine-tuning a model with their dataset, a process that involves tweaking a pretrained model so it can perform a new task, resulted in smaller performance gains.

But in every case, the machines weren’t as good as humans, and they were especially bad at detecting objects in the far periphery. Their performance also didn’t follow the same patterns as humans.

“That might suggest that the models aren’t using context in the same way as humans are to do these detection tasks. The strategy of the models might be different,” Harrington says.

The researchers plan to continue exploring these differences, with a goal of finding a model that can predict human performance in the visual periphery. This could enable AI systems that alert drivers to hazards they might not see, for instance. They also hope to inspire other researchers to conduct additional computer vision studies with their publicly available dataset.

“This work is important because it contributes to our understanding that human vision in the periphery should not be considered just impoverished vision due to limits in the number of photoreceptors we have, but rather, a representation that is optimized for us to perform tasks of real-world consequence,” says Justin Gardner, an associate professor in the Department of Psychology at Stanford University who was not involved with this work. “Moreover, the work shows that neural network models, despite their advancement in recent years, are unable to match human performance in this regard, which should lead to more AI research to learn from the neuroscience of human vision. This future research will be aided significantly by the database of images provided by the authors to mimic peripheral human vision.”

This work is supported, in part, by the Toyota Research Institute and the MIT CSAIL METEOR Fellowship.

How sensory gamma rhythm stimulation clears amyloid in Alzheimer’s mice

Thu, 03/07/2024 - 5:40pm

Studies at MIT and elsewhere are producing mounting evidence that light flickering and sound clicking at the gamma brain rhythm frequency of 40 hertz (Hz) can reduce Alzheimer’s disease (AD) progression and treat symptoms in human volunteers as well as lab mice. In a new open-access study in Nature using a mouse model of the disease, MIT researchers reveal a key mechanism that may contribute to these beneficial effects: clearance of amyloid proteins, a hallmark of AD pathology, via the brain’s glymphatic system, a recently discovered “plumbing” network parallel to the brain’s blood vessels.

“Ever since we published our first results in 2016, people have asked me how does it work? Why 40Hz? Why not some other frequency?” says study senior author Li-Huei Tsai, Picower Professor of Neuroscience and director of The Picower Institute for Learning and Memory of MIT and MIT’s Aging Brain Initiative. “These are indeed very important questions we have worked very hard in the lab to address.”

The new paper describes a series of experiments, led by Mitch Murdock PhD '23 when he was a brain and cognitive sciences doctoral student at MIT, showing that when sensory gamma stimulation increases 40 Hz power and synchrony in the brains of mice, it prompts a particular type of neuron to release peptides. The study results further suggest that those short protein signals then drive specific processes that promote increased amyloid clearance via the glymphatic system.

“We do not yet have a linear map of the exact sequence of events that occurs,” says Murdock, who was jointly supervised by Tsai and co-author and collaborator Ed Boyden, Y. Eva Tan Professor of Neurotechnology at MIT, a member of the McGovern Institute for Brain Research and an affiliate member of the Picower Institute. “But the findings in our experiments support this clearance pathway through the major glymphatic routes.”

From gamma to glymphatics

Because prior research has shown that the glymphatic system is a key conduit for brain waste clearance and may be regulated by brain rhythms, Tsai and Murdock’s team hypothesized that it might help explain the lab’s prior observations that gamma sensory stimulation reduces amyloid levels in Alzheimer’s model mice.

Working with “5XFAD” mice, which genetically model Alzheimer’s, Murdock and co-authors first replicated the lab’s prior results that 40 Hz sensory stimulation increases 40 Hz neuronal activity in the brain and reduces amyloid levels. Then they set out to measure whether there was any correlated change in the fluids that flow through the glymphatic system to carry away wastes. Indeed, they measured increases in cerebrospinal fluid in the brain tissue of mice treated with sensory gamma stimulation compared to untreated controls. They also measured an increase in the rate of interstitial fluid leaving the brain. Moreover, in the gamma-treated mice they measured increased diameter of the lymphatic vessels that drain away the fluids, as well as increased accumulation of amyloid in cervical lymph nodes, which are the drainage site for that flow.

To investigate how this increased fluid flow might be happening, the team focused on the aquaporin 4 (AQP4) water channel of astrocyte cells, which enables the cells to facilitate glymphatic fluid exchange. When they blocked AQP4 function with a chemical, that prevented sensory gamma stimulation from reducing amyloid levels and prevented it from improving mouse learning and memory. And when, as an added test, they used a genetic technique for disrupting AQP4, that also interfered with gamma-driven amyloid clearance.

In addition to the fluid exchange promoted by AQP4 activity in astrocytes, another mechanism by which gamma waves promote glymphatic flow is by increasing the pulsation of neighboring blood vessels. Several measurements showed stronger arterial pulsatility in mice subjected to sensory gamma stimulation compared to untreated controls.

One of the best new techniques for tracking how a condition, such as sensory gamma stimulation, affects different cell types is to sequence their RNA to track changes in how they express their genes. Using this method, Tsai and Murdock’s team saw that gamma sensory stimulation indeed promoted changes consistent with increased astrocyte AQP4 activity.

Prompted by peptides

The RNA sequencing data also revealed that upon gamma sensory stimulation a subset of neurons, called “interneurons,” experienced a notable uptick in the production of several peptides. This was not surprising in the sense that peptide release is known to be dependent on brain rhythm frequencies, but it was still notable because one peptide in particular, VIP, is associated with Alzheimer’s-fighting benefits and helps to regulate vascular cells, blood flow, and glymphatic clearance.

Seizing on this intriguing result, the team ran tests that revealed increased VIP in the brains of gamma-treated mice. The researchers also used a sensor of peptide release and observed that sensory gamma stimulation resulted in an increase in peptide release from VIP-expressing interneurons.

But did this gamma-stimulated peptide release mediate the glymphatic clearance of amyloid? To find out, the team ran another experiment: They chemically shut down the VIP neurons. When they did so, and then exposed mice to sensory gamma stimulation, they found that there was no longer an increase in arterial pulsatility and there was no more gamma-stimulated amyloid clearance.

“We think that many neuropeptides are involved,” Murdock says. Tsai added that a major new direction for the lab’s research will be determining what other peptides or other molecular factors may be driven by sensory gamma stimulation.

Tsai and Murdock add that while this paper focuses on what is likely an important mechanism — glymphatic clearance of amyloid — by which sensory gamma stimulation helps the brain, it’s probably not the only underlying mechanism that matters. The clearance effects shown in this study occurred rather rapidly, but in lab experiments and clinical studies weeks or months of chronic sensory gamma stimulation have been needed to have sustained effects on cognition.

With each new study, however, scientists learn more about how sensory stimulation of brain rhythms may help treat neurological disorders.

In addition to Tsai, Murdock, and Boyden, the paper’s other authors are Cheng-Yi Yang, Na Sun, Ping-Chieh Pao, Cristina Blanco-Duque, Martin C. Kahn, Nicolas S. Lavoie, Matheus B. Victor, Md Rezaul Islam, Fabiola Galiana, Noelle Leary, Sidney Wang, Adele Bubnys, Emily Ma, Leyla A. Akay, TaeHyun Kim, Madison Sneve, Yong Qian, Cuixin Lai, Michelle M. McCarthy, Nancy Kopell, Manolis Kellis, and Kiryl D. Piatkevich.

Support for the study came from Robert A. and Renee E. Belfer, the Halis Family Foundation, Eduardo Eurnekian, the Dolby family, Barbara J. Weedon, Henry E. Singleton, the Hubolow family, the Ko Hahn family, Carol and Gene Ludwig Family Foundation, Lester A. Gimpelson, Lawrence and Debra Hilibrand, Glenda and Donald Mattes, Kathleen and Miguel Octavio, David B. Emmes, the Marc Haas Foundation, Thomas Stocky and Avni Shah, the JPB Foundation, the Picower Institute, and the National Institutes of Health.

Three MIT alumni graduate from NASA astronaut training

Thu, 03/07/2024 - 2:40pm

“It's been a wild ride,” says Christopher Williams PhD ’12, moments after he received his astronaut pin, signifying graduation into the NASA astronaut corps.

Williams, along with Marcos Berríos ’06 and Christina “Chris” Birch PhD ’15, was among the 12-member class of astronaut candidates to graduate from basic training at NASA’s Johnson Space Center in Houston, Texas, on Tuesday, March 5.

NASA Astronaut Group 23 is the newest generation of Artemis astronauts: 10 hail from the United States, and two from the United Arab Emirates trained alongside them.

During their more than two years of basic training, the group became proficient in such areas as spacewalking, robotics, space station systems, T-38 jets, and Russian language. The graduates also said that they asked endless questions about the functions of their spacesuit, which they wore while submerged in huge pools to practice spacewalks. They jumped into a frigid lake during a 10-day hike in Wyoming and shared the hauling of a 30-pound lava rock back to camp for more geology study, as well as the last bag of peanut M&Ms after running out of ready-to-eat meals during survival training in the Alabama back country.

“We feel ready to put our efforts and our energy into supporting NASA's science on the space station or in support of our return to the moon and this program,” says Birch. “All of the Flies feel a great sense of responsibility and excitement for what comes next.”

The team earned the nickname “The Flies” from the previous astronaut class, the “Turtles,” and even designed their team patch into a housefly shape. (Although the team prefers calling themselves the Swarm, “which has a little bit more pizzazz,” says Birch.) “Traditionally, these names are usually things that do not take well to flight,” Birch adds. “We were really surprised that they gave us a flying creature. I think they have a lot of faith in us and hope that we fly soon.”

The Turtles were the first class to graduate under NASA’s Artemis program, in 2020. They included three aeronautics and astronautics alumni: Raja Chari SM ’01, Jasmin Moghbeli ’05, and Warren “Woody” Hoburg ’08. Former Whitehead Institute for Biomedical Research research fellow Kate Rubins, who was selected as a NASA astronaut in 2009 and had served as a flight engineer aboard the International Space Station, also joined the team.

After the newest graduates received their silver NASA astronaut pins, they joined the other 36 current astronauts eligible “to sit on the pointy end of a rocket” for such initiatives as assignments to the International Space Station, future commercial destinations, deep-space missions to destinations including the moon on NASA’s Orion spacecraft and Space Launch System rocket, and eventually, missions to Mars. The Artemis initiative also includes plans for the first woman and first person of color to walk on the moon.

For now, the Flies will be supporting all of these initiatives while Earthbound.

“Hopefully within next two or three years, my name will be called to go to space,” says Berríos. For now, he will stay in Houston, where he’ll be working in the human landing system program, including with private companies such as SpaceX and Blue Origin. He’ll also continue his training in advanced robotics and Russian, and he is training in various international partner countries, working with space station modules.

Marcos Berríos

When he was selected to join the NASA astronaut program, Berríos had been serving as the commander of Detachment 1, 413th Flight Test Squadron and deputy director of the Combat Search and Rescue (CSAR) Combined Task Force. As a test pilot, he has accumulated more than 110 combat missions and 1,400 hours of flight time in more than 21 different aircraft.

Berríos calls Guaynabo, Puerto Rico, his hometown, and says he appreciated other Latino American astronauts, including Franklin R. Chang Diaz PhD ’77, serving as his role models and mentors. He hopes to do the same for others.

“Today, hopefully, marks another opportunity to open doors for others like me in the future, to recognize that the talent in the Latin American community is strong,” he said on the day of his graduation. His advice to those dreaming of being an astronaut is “to not give up, to stay curious, stay humble, be disciplined, and throughout all adversity, throughout all obstacles, that would all be worth it in the end.”

“I've always wanted to be an astronaut,” he says. He read a lot of astronaut autobiographies, and frequently Googled class 2.007 (Design and Manufacturing I), which led him to study mechanical engineering at MIT. He earned his master’s degree in mechanical engineering as well as a doctorate in aeronautics and astronautics from Stanford University, and then enrolled at the U.S. Naval Test Pilot School in Patuxent River, Maryland.

As a developmental test pilot at the CSAR Combined Test Force at Nellis Air Force Base in Nevada, he learned avionics, defensive systems, synthetic vision technologies, and electric vertical-takeoff-and-landing vehicles.

Berríos says that MIT, particularly while working with Professor Alexander Slocum, instilled within him the discipline required for his successes. “I don't want to admit how spending, like, 24 hours on problem set after problem set just provided that attitude and mentality of like, ‘Yeah, this is tough, this is hard,’ but you know we've got the skills, we've got the resources, we've got our colleagues, and we're going to figure it out … and we're going to find a pretty novel way to solve it.”

He says he found spacewalk training to be especially tough “physically, because you're in a pressurized spacesuit — it's stiff, it requires strength and stamina — but also mentally, because you have to be focused for six hours at a time and maintain high awareness of your surroundings as well as for your partner.”

The new astronaut says he identifies first as an engineer and researcher. “We're kind of a jack-of-all-trades,” he says. “One of the amazing things about being an astronaut, and certainly one of the things that was very captivating for me about this job, was all of the different subject matters that we get to touch on. I mean, it's incredible.”

Christina Birch  

An Arizona native, Birch graduated from the University of Arizona with bachelor’s degrees in mathematics, biochemistry, and molecular biophysics. As a doctoral candidate in biological engineering at MIT, she conducted original research at the intersection of synthetic biology, microfluidics, and infectious disease, and worked in the Jacquin Niles lab in the Department of Biological Engineering. “I really am grateful for [her advisor, Niles] taking me on, especially when he was starting up his lab.”

After graduation, she taught bioengineering at the University of California at Riverside, and scientific writing and communication at Caltech. But she didn’t forget the skills she gained while on the MIT cycling team; in 2018, she left academia to become a decorated track cyclist on the U.S. National Team. She was training for the 2020 Summer Olympics, while also working as a scientific consultant for startups in various technology sectors from robotics to vaccine development, when she was selected by NASA.

“I really need to give a shout out to the MIT cycling team,” she says. “They helped give me my start. It was just a fantastic place to get a taste of that cycling community, which I’m still a part of. I do still ride; I’m focused on longer-distance races, and I like to do gravel races.”

She’s also excited that the International Space Station has bike trainers, CEVIS and Teal CEVIS, that help reduce the muscle and bone loss experienced in microgravity.

Her next role is to support the Orion program.

“Last week, I was out in San Diego supporting the underway recovery training, which is the landing and recovery team’s practice to recover crew from the Orion capsule after a simulated splashdown in the Pacific. It was just such an incredible learning opportunity for me getting up to speed on this new vehicle. We're doing the Orion 2 mission, which is really an incredible test flight.”

“The more I learn about the program, the more I see how many different elements that we are building from scratch,” she says. “What really sets NASA apart is our dedication to safety, and I know that we will fly astronauts to the moon when we're ready, and now that comes under a little bit of my purview and my responsibilities.”

How does she incorporate her backgrounds in cycling and her biological engineering research into the space program? “The common link between my pursuit of the pointy edge of the bike race, and also original research at MIT, has always been the stepping into the unknown, comfort-pushing boundaries. Whether it's getting into the T-38 jet for the first time — I don't have any prior aviation experience — and standing up in front of an audience to give a scientific lecture or to make an attack on the bike, you know I've done that emotional practice.

“I think being comfortable in discomfort and the unknown, stepping through that process with a rigorous sort of like engineering-questioning, is because MIT set me up so well with a strong foundation of understanding engineering principles, and applying those to big questions. Places where we don't have full understanding of a system or how something works, and then there is spaceflight, how we are very much developing these technologies and testing them as we go. Ultimately, human lives are going to depend on asking really good questions.”

She says her biggest challenge so far has been diversifying her skill set.

“I had to make a pretty big transition when I arrived (to NASA training) because I had previously been in a mentality of trying to be the best in the world at something, be it the best in the world on the bike, or you know, being the expert in RNA aptamer malaria-targeting technologies, which is the research I was doing at MIT, and then having to switch to being both knowledgeable and skillful in a huge number of different areas that are required of an astronaut. I don't have an aviation background so that was something very new, very exciting, and very fun, it turns out. But also having to develop spacewalk skills, learning to speak Russian, learning to fly a robotic arm, and learning all about the International Space Station systems, so going from a specialist, really, to a generalist was a pretty big transition.

“One of the hardest things about astronaut training is finding balance, because we are switching between all of these different technical topics, sometimes in the span of a day. You might be in the jet in the morning and then you have to turn around and go to an emergency simulation for a space station in the afternoon. Reid Wiseman, the commander of the Artemis 2 mission, says, ‘Be where your feet are.’ And that was some of the best advice that he gave us coming into the office as candidates.”

Christopher Williams

Williams knew going into the training program that he would learn things in which he had no prior background.

“When you're flying in one of the T-38 jets you're having to do, you know, back-of-the-envelope math estimating things while operating in a dynamic environment,” he recalls. “Other things, like doing an underwater run in the spacesuit, to finding alternatives when conjugating Russian verbs … learning how to approach problems and to solve them came from my time at MIT. Going through the physics grad program there made me much stronger at taking new topics and just sort of digesting them, figuring out how to break them down and solve them.”

He did end up working with many MIT alumni. “Lots of MIT people have rotated through, so I've had lots of good conversations with Kate Rubins and a bunch of folks that passed through AeroAstro [the Department of Aeronautics and Astronautics].”

Williams grew up in Potomac, Maryland, dreaming of being an astronaut. A private pilot and Eagle Scout, Williams spent much of his high school and Stanford University years at the U.S. Naval Research Laboratory in Washington, studying supernovae using the Very Large Array radio telescope, and researching supernovae at NASA's Goddard Space Flight Center.   

At MIT, he pursued his doctorate in physics with a focus on astrophysics. When he wasn’t working as a campus emergency medical technician and volunteer firefighter, Williams and his advisor, Jackie Hewitt, built the Murchison Widefield Array, a low-frequency radio telescope array in Western Australia designed to study the epoch of reionization of the early universe. 

After graduation, he joined the faculty at Harvard Medical School, and was a medical physicist in the Radiation Oncology Department at the Brigham and Women’s Hospital and Dana-Farber Cancer Institute. As the lead physicist for the institute’s MRI-guided adaptive radiation therapy program, Williams focused on developing image guidance techniques for cancer treatments.  

He will be supporting the ongoing missions until it’s his turn to head to space. In the meantime, he looks forward to using his background in medicine to research how the human body is affected by space radiation and being in orbit.

“It’s strange, because as a scientist you know you're kind of in a different role. There are physics experiments on the space station, and tons of biology and chemistry experiments. It's actually really fun because I get to stretch different parts of my brain that I haven't had to before.”

“We're really representing all of NASA, all of America, all over the world,” he says. “That's a huge responsibility on us. I really want to make everybody proud.”

Encouraging the next generation of astronauts

After the graduation ceremonies ended, NASA announced that it is accepting applications for new astronaut candidates through April 2. 

Berríos advises MIT students that no matter what their background is, they should apply if they want to be an astronaut. “Try and express in words how your education, how your career, and how your hobbies relate to human space exploration. Chris [Birch] and I have very different backgrounds and combinations of skill sets … I guarantee the next class is going to have an individual from MIT that has a background that we haven't even thought of yet.”

Birch says that just interviewing for the Artemis program “absolutely changed my life. I knew that even if I didn't become an astronaut, I had met, you know, a real incredible group of people that inspired me to push further to do more to find another way to serve and so I would really just encourage people to apply. A lot of people (who were accepted) applied more than once.”

Adds Williams, “If you meet the requirements, just do it. If that's your dream, tell people about it — because people will be excited for you and want to help you to achieve.”

How the brain coordinates speaking and breathing

Thu, 03/07/2024 - 2:00pm

MIT researchers have discovered a brain circuit that drives vocalization and ensures that you talk only when you breathe out, and stop talking when you breathe in.

The newly discovered circuit controls two actions that are required for vocalization: narrowing of the larynx and exhaling air from the lungs. The researchers also found that this vocalization circuit is under the command of a brainstem region that regulates the breathing rhythm, which ensures that breathing remains dominant over speech.

“When you need to breathe in, you have to stop vocalization. We found that the neurons that control vocalization receive direct inhibitory input from the breathing rhythm generator,” says Fan Wang, an MIT professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Jaehong Park, a Duke University graduate student who is currently a visiting student at MIT, is the lead author of the study, which appears today in Science. Other authors of the paper include MIT technical associates Seonmi Choi and Andrew Harrahill, former MIT research scientist Jun Takatoh, and Duke University researchers Shengli Zhao and Bao-Xia Han.

Vocalization control

Located in the larynx, the vocal cords are two muscular bands that can open and close. When they are mostly closed, or adducted, air exhaled from the lungs generates sound as it passes through the cords.

The MIT team set out to study how the brain controls this vocalization process, using a mouse model. Mice communicate with each other using sounds known as ultrasonic vocalizations (USVs), which they produce using the unique whistling mechanism of exhaling air through a small hole between nearly closed vocal cords.

“We wanted to understand what are the neurons that control the vocal cord adduction, and then how do those neurons interact with the breathing circuit?” Wang says.

To figure that out, the researchers used a technique that allows them to map the synaptic connections between neurons. They knew that vocal cord adduction is controlled by laryngeal motor neurons, so they began by tracing backward to find the neurons that innervate those motor neurons.

This revealed that one major source of input is a group of premotor neurons found in the hindbrain region called the retroambiguus nucleus (RAm). Previous studies have shown that this area is involved in vocalization, but it wasn’t known exactly which part of the RAm was required or how it enabled sound production.

The researchers found that these synaptic tracing-labeled RAm neurons were strongly activated during USVs. This observation prompted the team to use an activity-dependent method to target these vocalization-specific RAm neurons, termed RAmVOC. They used chemogenetics and optogenetics to explore what would happen if they silenced or stimulated the neurons’ activity. When the researchers blocked the RAmVOC neurons, the mice were no longer able to produce USVs or any other kind of vocalization. Their vocal cords did not close, and their abdominal muscles did not contract, as they normally do during exhalation for vocalization.

Conversely, when the RAmVOC neurons were activated, the vocal cords closed, the mice exhaled, and USVs were produced. However, if the stimulation lasted two seconds or longer, these USVs would be interrupted by inhalations, suggesting that the process is under control of the same part of the brain that regulates breathing.

“Breathing is a survival need,” Wang says. “Even though these neurons are sufficient to elicit vocalization, they are under the control of breathing, which can override our optogenetic stimulation.”

Rhythm generation

Additional synaptic mapping revealed that neurons in a part of the brainstem called the pre-Bötzinger complex, which acts as a rhythm generator for inhalation, provide direct inhibitory input to the RAmVOC neurons.

“The pre-Bötzinger complex generates inhalation rhythms automatically and continuously, and the inhibitory neurons in that region project to these vocalization premotor neurons and essentially can shut them down,” Wang says.

This ensures that breathing remains dominant over speech production, and that we have to pause to breathe while speaking.

The researchers believe that although human speech production is more complex than mouse vocalization, the circuit they identified in mice plays a conserved role in speech production and breathing in humans.

“Even though the exact mechanism and complexity of vocalization in mice and humans is really different, the fundamental vocalization process, called phonation, which requires vocal cord closure and the exhalation of air, is shared in both the human and the mouse,” Park says.

The researchers now hope to study how other functions such as coughing and swallowing food may be affected by the brain circuits that control breathing and vocalization.

The research was funded by the National Institutes of Health.

Method rapidly verifies that a robot will avoid collisions

Thu, 03/07/2024 - 12:00am

Before a robot can grab dishes off a shelf to set the table, it must ensure its gripper and arm won’t crash into anything and potentially shatter the fine china. As part of its motion planning process, a robot typically runs “safety check” algorithms that verify its trajectory is collision-free.

However, sometimes these algorithms generate false positives, claiming a trajectory is safe when the robot would actually collide with something. Other methods that can avoid false positives are typically too slow for robots in the real world.

Now, MIT researchers have developed a safety check technique which can prove with 100 percent accuracy that a robot’s trajectory will remain collision-free (assuming the model of the robot and environment is itself accurate). Their method, which is so precise it can discriminate between trajectories that differ by only millimeters, provides proof in only a few seconds.

But a user doesn’t need to take the researchers’ word for it — the mathematical proof generated by this technique can be checked quickly with relatively simple math.

The researchers accomplished this using a special algorithmic technique, called sum-of-squares programming, and adapted it to effectively solve the safety check problem. Using sum-of-squares programming enables their method to generalize to a wide range of complex motions.

This technique could be especially useful for robots that must move rapidly and avoid collisions in spaces crowded with objects, such as food preparation robots in a commercial kitchen. It is also well-suited for situations where robot collisions could cause injuries, like home health robots that care for frail patients.

“With this work, we have shown that you can solve some challenging problems with conceptually simple tools. Sum-of-squares programming is a powerful algorithmic idea, and while it doesn’t solve every problem, if you are careful in how you apply it, you can solve some pretty nontrivial problems,” says Alexandre Amice, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

Amice is joined on the paper by fellow EECS graduate student Peter Werner and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The work will be presented at the International Conference on Robotics and Automation.

Certifying safety

Many existing methods that check whether a robot’s planned motion is collision-free do so by simulating the trajectory and checking every few seconds to see whether the robot hits anything. But these static safety checks can’t tell if the robot will collide with something in the intermediate seconds.

This might not be a problem for a robot wandering around an open space with few obstacles, but for robots performing intricate tasks in small spaces, a few seconds of motion can make an enormous difference.
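
A toy example of that failure mode (ours, not the researchers' code): a point robot passes through a thin obstacle entirely between two sampled time steps, so the sampled check declares the trajectory safe.

```python
import numpy as np

# A point robot moves in one dimension from x = 0 to x = 1 over one second;
# a thin obstacle occupies the interval [0.48, 0.52] in its path.
def position(t):
    return t                               # straight-line trajectory

def in_collision(x):
    return 0.48 <= x <= 0.52

# Sampled safety check: only test the trajectory at a handful of time points.
sample_times = np.linspace(0.0, 1.0, 4)    # t = 0, 1/3, 2/3, 1
sampled_safe = not any(in_collision(position(t)) for t in sample_times)
print(sampled_safe)                        # True: the check misses the obstacle

# A dense check (a crude stand-in for a continuous-time certificate) reveals
# that the robot does pass through the obstacle between the samples.
dense_safe = not any(in_collision(position(t)) for t in np.linspace(0.0, 1.0, 10001))
print(dense_safe)                          # False
```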

Conceptually, one way to prove that a robot is not headed for a collision would be to hold up a piece of paper that separates the robot from any obstacles in the environment. Mathematically, this piece of paper is called a hyperplane. Many safety check algorithms work by generating this hyperplane at a single point in time. However, each time the robot moves, a new hyperplane needs to be recomputed to perform the safety check.

Instead, this new technique generates a hyperplane function that moves with the robot, so it can prove that an entire trajectory is collision-free rather than working one hyperplane at a time.

The researchers used sum-of-squares programming, an algorithmic toolbox that can effectively turn a static problem into a function. This function is an equation that describes where the hyperplane needs to be at each point in the planned trajectory so it remains collision-free.

Sum-of-squares can generalize the optimization program to find a family of collision-free hyperplanes. Often, sum-of-squares is considered a heavy optimization that is only suitable for offline use, but the researchers have shown that for this problem it is extremely efficient and accurate.

“The key here was figuring out how to apply sum-of-squares to our particular problem. The biggest challenge was coming up with the initial formulation. If I don’t want my robot to run into anything, what does that mean mathematically, and can the computer give me an answer?” Amice says.

In the end, as the name suggests, sum-of-squares produces a function that is the sum of several squared values. The function can never be negative, since the square of any number can never be negative.
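
In rough notation (ours, not the paper's), the idea for a single robot point p(t) and a single obstacle point o is to find polynomial hyperplane coefficients a(t) and b(t), and a margin epsilon > 0, such that

```latex
a(t)^{\top} p(t) - b(t) \;\ge\; \epsilon,
\qquad
a(t)^{\top} o - b(t) \;\le\; -\epsilon,
\qquad \text{for all } t \in [0, 1].

% Certifying "g(t) >= 0 on [0, 1]" for either left-hand side (minus the margin):
g(t) \;=\; \sigma_0(t) \;+\; t\,(1 - t)\,\sigma_1(t),
\qquad
\sigma_i(t) \;=\; \sum_k s_{ik}(t)^2 \;\ge\; 0.
```

Every term on the right is nonnegative whenever t lies in [0, 1], so the inequality holds along the entire trajectory, and the certificate can be checked with ordinary polynomial arithmetic.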

Trust but verify

By double-checking that the hyperplane function contains squared values, a human can easily verify that the function is positive, which means the trajectory is collision-free, Amice explains.

While the method certifies with perfect accuracy, this assumes the user has an accurate model of the robot and environment; the mathematical certifier is only as good as the model.

“One really nice thing about this approach is that the proofs are really easy to interpret, so you don’t have to trust me that I coded it right because you can check it yourself,” he adds.

They tested their technique in simulation by certifying that complex motion plans for robots with one and two arms were collision-free. At its slowest, their method took just a few hundred milliseconds to generate a proof, making it much faster than some alternate techniques.

“This new result suggests a novel approach to certifying that a complex trajectory of a robot manipulator is collision free, elegantly harnessing tools from mathematical optimization, turned into surprisingly fast (and publicly available) software. While not yet providing a complete solution to fast trajectory planning in cluttered environments, this result opens the door to several intriguing directions of further research,” says Dan Halperin, a professor of computer science at Tel Aviv University, who was not involved with this research.

While their approach is fast enough to be used as a final safety check in some real-world situations, it is still too slow to be implemented directly in a robot motion planning loop, where decisions need to be made in microseconds, Amice says.

The researchers plan to accelerate their process by ignoring situations that don’t require safety checks, like when the robot is far away from any objects it might collide with. They also want to experiment with specialized optimization solvers that could run faster.

“Robots often get into trouble by scraping obstacles due to poor approximations that are made when generating their routes. Amice, Werner, and Tedrake have come to the rescue with a powerful new algorithm to quickly ensure that robots never overstep their bounds, by carefully leveraging advanced methods from computational algebraic geometry,” adds Steven LaValle, professor in the Faculty of Information Technology and Electrical Engineering at the University of Oulu in Finland, who was not involved with this work.

This work was supported, in part, by Amazon and the U.S. Air Force Research Laboratory.

Deciphering the cellular mechanisms behind ALS

Wed, 03/06/2024 - 4:00pm

At a time in which scientific research is increasingly cross-disciplinary, Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering, stands out as both a very early adopter of drawing from different scientific fields and a great advocate of the practice today.

When Fraenkel’s students find themselves at an impasse in their work, he suggests they approach their problem from a different angle or look for inspiration in a completely unrelated field.

“I think the thing that I always come back to is try going around it from the side,” Fraenkel says. “Everyone in the field is working in exactly the same way. Maybe you’ll come up with a solution by doing something different.”

Fraenkel’s work untangling the often-complicated mechanisms of disease to develop targeted therapies employs methods from the world of computer science, including algorithms that bring focus to processes most likely to be relevant. Using such methods, he has decoded fundamental aspects of Huntington’s disease and glioblastoma, and he and his collaborators are working to understand the mechanisms behind amyotrophic lateral sclerosis (ALS), also known as Lou Gehrig’s disease.

Very early on, Fraenkel was exposed to a merging of scientific disciplines. One of his teachers in high school, who was a student at Columbia University, started a program in which chemistry, physics, and biology were taught together. The teacher encouraged Fraenkel to visit a lab at Columbia run by Cyrus Levinthal, a physicist who taught one of the first biophysics classes at MIT. Fraenkel not only worked at the lab for a summer, he left high school (later earning an equivalency diploma) and started working at the lab full time and taking classes at Columbia.

“Here was a lab that was studying really important questions in biology, but the head of it had trained in physics,” Fraenkel says. “The idea that you could get really important insights by cross-fertilization, that’s something that I’ve always really appreciated. And now, we can see how this approach can impact how people are being treated for diseases or reveal really important fundamentals of science.”

Breaking barriers

At MIT, Fraenkel works in the Department of Biological Engineering and co-directs the Computational Systems Biology graduate program. For the study of ALS, he and his collaborators at Massachusetts General Hospital (MGH), including neurologist and neuroscientist Merit Cudkowicz, were recently awarded $1.25 million each from the nonprofit EverythingALS organization. The strategy behind the gift, Fraenkel says, is to encourage MIT and MGH to increase their collaboration, eventually enlisting other organizations as well, to form a hub for ALS research “to break down barriers in the field and really focus on the core problems.”

Fraenkel has been working with EverythingALS and their data scientists in collaboration with doctors James Berry of MGH and Lyle Ostrow of Temple University. He also works extensively with the nonprofit Answer ALS, a consortium of scientists studying the disease.

Fraenkel first got interested in ALS and other neurodegenerative diseases because traditional molecular biology research had not yielded effective therapies or, in the case of ALS, much insight into the disease’s causes.

“I was interested in places where the traditional approaches of molecular biology” — in which researchers hypothesize that a certain protein or gene or pathway is key to understanding a disease — “were not having a lot of luck or impact,” Fraenkel says. “Those are the places where if you come at it from another direction, the field could really advance.”

Fraenkel says that while traditional molecular biology has produced many valuable discoveries, it’s not very systematic. “If you start with the wrong hypothesis, you’re not going to get very far,” he says.

Systems biology, on the other hand, measures many cellular changes — including the transcription of genes, protein-DNA interactions, the levels of thousands of chemical compounds, and protein modifications — and can apply artificial intelligence and machine learning to those measurements to collectively identify the most important interactions.

“The goal of systems biology is to systematically measure as many cellular changes as possible, integrate this data, and let the data guide you to the most promising hypotheses,” Fraenkel says.

The Answer ALS project, with which Fraenkel works, involves approximately a thousand people with ALS who provided clinical information about their disease as well as blood cells. Their blood cells were reprogrammed to be pluripotent stem cells, meaning that the cells could be used to grow neurons that are studied and compared to neurons from a control group.

Emotional connection

While Fraenkel was intellectually inspired to apply systems biology to the challenging problem of understanding ALS — there is no known cause or cure for 80 to 90 percent of people with ALS — he also felt a strong emotional connection to the community of people with ALS and their advocates.

He tells a story of going to meet the director of an ALS organization in Israel who was trying to encourage scientists to work on the disease. Fraenkel knew the man had ALS. What he didn’t know before arriving at the meeting was that the man was immobilized, lying in a hospital bed in his living room and able to communicate only with eye-blinking software.

“I sat down so we could both see the screen he was using to type characters out,” Fraenkel says, “and we had this fascinating conversation.”

“Here was a young guy in the prime of life, suffering in a way that’s unimaginable. At the same time, he was doing something amazing, running this organization to try to make a change. And he wasn’t the only one,” he says. “You meet one, and then another and then another — people who are sometimes on their last breaths and are still pushing to make a difference and cure the disease.”

The gift from EverythingALS — which was founded by Indu Navar after losing her husband, Peter Cohen, to ALS and later merged with CureALS, founded by Bill Nuti, who is living with ALS — aims to research the root causes of the disease, in the hope of finding therapies to stop its progression, and natural healing processes that could possibly restore function of damaged nerves.

To achieve those goals, Fraenkel says it is crucial to measure molecular changes in the cells of people with ALS and also to quantify the symptoms of ALS, which presents very differently from person to person. Fraenkel refers to how understanding the differences in various types of cancer has led to much better treatments, pointing out that ALS is nowhere near as well categorized or understood.

“The subtyping is really going to be what the field needs,” he says. “The prognosis for more than 80 percent of people with ALS is not appreciably different than it would have been 20, or maybe even 100, years ago.”

In the same way that Fraenkel was fascinated as a high school student by doing biology in a physicist’s lab, he says he loves that at MIT, different disciplines work together easily.

“You reach out to MIT colleagues in other departments, and they’re not surprised to hear from someone who’s not in their field,” Fraenkel says. “We’re a goal-oriented institution that focuses on solving hard problems.”

A noninvasive treatment for “chemo brain”

Wed, 03/06/2024 - 2:00pm

Patients undergoing chemotherapy often experience cognitive effects such as memory impairment and difficulty concentrating — a condition commonly known as “chemo brain.”

MIT researchers have now shown that a noninvasive treatment that stimulates gamma frequency brain waves may hold promise for treating chemo brain. In a study of mice, they found that daily exposure to light and sound with a frequency of 40 hertz protected brain cells from chemotherapy-induced damage. The treatment also helped to prevent memory loss and impairment of other cognitive functions.

This treatment, which was originally developed as a way to treat Alzheimer’s disease, appears to have widespread effects that could help with a variety of neurological disorders, the researchers say.

“The treatment can reduce DNA damage, reduce inflammation, and increase the number of oligodendrocytes, which are the cells that produce myelin surrounding the axons,” says Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory and the Picower Professor in the MIT Department of Brain and Cognitive Sciences. “We also found that this treatment improved learning and memory, and enhanced executive function in the animals.”

Tsai is the senior author of the new study, which appears today in Science Translational Medicine. The paper’s lead author is TaeHyun Kim, an MIT postdoc.

Protective brain waves

Several years ago, Tsai and her colleagues began exploring the use of light flickering at 40 hertz (cycles per second) as a way to improve the cognitive symptoms of Alzheimer’s disease. Previous work had suggested that Alzheimer’s patients have impaired gamma oscillations — brain waves that range from 25 to 80 hertz and are believed to contribute to brain functions such as attention, perception, and memory.

Tsai’s studies in mice have found that exposure to light flickering at 40 hertz or sounds with a pitch of 40 hertz can stimulate gamma waves in the brain, which has many protective effects, including preventing the formation of amyloid beta plaques. Using light and sound together provides even more significant protection. The treatment also appears promising in humans: Phase 1 clinical trials in people with early-stage Alzheimer’s disease have found the treatment is safe and does offer some neurological and behavioral benefits.
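
For readers curious what such a stimulus looks like in practice, here is a minimal computational sketch of a 40-hertz audio click train and light flicker. The 40-hertz rate comes from the work described here, but every other parameter (sample rate, tone frequency, burst length) is an illustrative assumption, not a value reported by the researchers.

    import numpy as np

    fs = 44100            # audio sample rate in Hz (assumed, not from the study)
    f_gamma = 40          # gamma-band stimulation rate in Hz, as in the studies
    duration = 1.0        # seconds of stimulus to generate

    # 40 Hz click train: a brief 1 kHz tone burst at the start of each 25 ms cycle
    cycle = np.zeros(int(fs / f_gamma))
    burst = int(0.001 * fs)                                   # 1 ms burst (assumed)
    cycle[:burst] = np.sin(2 * np.pi * 1000 * np.arange(burst) / fs)
    audio = np.tile(cycle, int(duration * f_gamma))

    # 40 Hz light flicker: a square wave completing 40 on/off cycles per second
    t = np.arange(int(fs * duration)) / fs
    light = (np.floor(2 * f_gamma * t) % 2).astype(int)       # 1 = LED on, 0 = off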

In the new study, the researchers set out to see whether this treatment could also counteract the cognitive effects of chemotherapy treatment. Research has shown that these drugs can induce inflammation in the brain, as well as other detrimental effects such as loss of white matter — the networks of nerve fibers that help different parts of the brain communicate with each other. Chemotherapy drugs also promote loss of myelin, the protective fatty coating that allows neurons to propagate electrical signals. Many of these effects are also seen in the brains of people with Alzheimer’s.

“Chemo brain caught our attention because it is extremely common, and there is quite a lot of research on what the brain is like following chemotherapy treatment,” Tsai says. “From our previous work, we know that this gamma sensory stimulation has anti-inflammatory effects, so we decided to use the chemo brain model to test whether sensory gamma stimulation can be beneficial.”

As an experimental model, the researchers used mice that were given cisplatin, a chemotherapy drug often used to treat testicular, ovarian, and other cancers. The mice were given cisplatin for five days, then taken off of it for five days, then on again for five days. One group received chemotherapy only, while another group was also given 40-hertz light and sound therapy every day.

After three weeks, mice that received cisplatin but not gamma therapy showed many of the expected effects of chemotherapy: brain volume shrinkage, DNA damage, demyelination, and inflammation. These mice also had reduced populations of oligodendrocytes, the brain cells responsible for producing myelin.

However, mice that received gamma therapy along with cisplatin treatment showed significant reductions in all of those symptoms. The gamma therapy also had beneficial effects on behavior: Mice that received the therapy performed much better on tests designed to measure memory and executive function.

“A fundamental mechanism”

Using single-cell RNA sequencing, the researchers analyzed the gene expression changes that occurred in mice that received the gamma treatment. They found that in those mice, inflammation-linked genes and genes that trigger cell death were suppressed, especially in oligodendrocytes, the cells responsible for producing myelin.

In mice that received gamma treatment along with cisplatin, some of the beneficial effects could still be seen up to four months later. However, the gamma treatment was much less effective if it was started three months after the chemotherapy ended.

The researchers also showed that the gamma treatment improved the signs of chemo brain in mice that received a different chemotherapy drug, methotrexate, which is used to treat breast, lung, and other types of cancer.

“I think this is a very fundamental mechanism to improve myelination and to promote the integrity of oligodendrocytes. It seems that it’s not specific to the agent that induces demyelination, be it chemotherapy or another source of demyelination,” Tsai says.

Because of its widespread effects, Tsai’s lab is also testing gamma treatment in mouse models of other neurological diseases, including Parkinson’s disease and multiple sclerosis. Cognito Therapeutics, a company founded by Tsai and MIT Professor Edward Boyden, has finished a phase 2 trial of gamma therapy in Alzheimer’s patients, and plans to begin a phase 3 trial this year.

“My lab’s major focus now, in terms of clinical application, is Alzheimer’s; but hopefully we can test this approach for a few other indications, too,” Tsai says.

The research was funded by the JPB Foundation, the Ko Hahn Seed Fund, and the National Institutes of Health.

MIT scientists use a new type of nanoparticle to make vaccines more powerful

Wed, 03/06/2024 - 2:00pm

Many vaccines, including vaccines for hepatitis B and whooping cough, consist of fragments of viral or bacterial proteins. These vaccines often include other molecules called adjuvants, which help to boost the immune system’s response to the protein.

Most of these adjuvants consist of aluminum salts or other molecules that provoke a nonspecific immune response. A team of MIT researchers has now shown that a type of nanoparticle called a metal organic framework (MOF) can also provoke a strong immune response, by activating the innate immune system — the body’s first line of defense against any pathogen — through cell proteins called toll-like receptors.

In a study of mice, the researchers showed that this MOF could successfully encapsulate and deliver part of the SARS-CoV-2 spike protein, while also acting as an adjuvant once the MOF is broken down inside cells.

While more work would be needed to adapt these particles for use as vaccines, the study demonstrates that this type of structure can be useful for generating a strong immune response, the researchers say.

“Understanding how the drug delivery vehicle can enhance an adjuvant immune response is something that could be very helpful in designing new vaccines,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research and one of the senior authors of the new study.

Robert Langer, an MIT Institute Professor and member of the Koch Institute, and Dan Barouch, director of the Center for Virology and Vaccine Research at Beth Israel Deaconess Medical Center and a professor at Harvard Medical School, are also senior authors of the paper, which appears today in Science Advances. The paper’s lead author is former MIT postdoc and Ibn Khaldun Fellow Shahad Alsaiari.

Immune activation

In this study, the researchers focused on a MOF called ZIF-8, which consists of a lattice of tetrahedral units made up of a zinc ion attached to four molecules of imidazole, an organic compound. Previous work has shown that ZIF-8 can significantly boost immune responses, but it wasn’t known exactly how this particle activates the immune system.

To try to figure that out, the MIT team created an experimental vaccine consisting of the receptor-binding domain (RBD) of the SARS-CoV-2 spike protein embedded within ZIF-8 particles. These particles are between 100 and 200 nanometers in diameter, a size that allows them to get into the body’s lymph nodes directly or through immune cells such as macrophages.

Once the particles enter the cells, the MOFs are broken down, releasing the viral proteins. The researchers found that the imidazole components then activate toll-like receptors (TLRs), which help to stimulate the innate immune response.

“This process is analogous to establishing a covert operative team at the molecular level to transport essential elements of the Covid-19 virus to the body’s immune system, where they can activate specific immune responses to boost vaccine efficacy,” Alsaiari says.

RNA sequencing of cells from the lymph nodes showed that mice vaccinated with ZIF-8 particles carrying the viral protein strongly activated a TLR pathway known as TLR-7, which led to greater production of cytokines and other molecules involved in inflammation.

Mice vaccinated with these particles generated a much stronger response to the viral protein than mice that received the protein on its own.

“Not only are we delivering the protein in a more controlled way through a nanoparticle, but the compositional structure of this particle is also acting as an adjuvant,” Jaklenec says. “We were able to achieve very specific responses to the Covid protein, and with a dose-sparing effect compared to using the protein by itself to vaccinate.”

Vaccine access

While this study and others have demonstrated ZIF-8’s immunogenic ability, more work needs to be done to evaluate the particles’ safety and potential to be scaled up for large-scale manufacturing. If ZIF-8 is not developed as a vaccine carrier, the findings from the study should help to guide researchers in developing similar nanoparticles that could be used to deliver subunit vaccines, Jaklenec says.

“Most subunit vaccines usually have two separate components: an antigen and an adjuvant,” Jaklenec says. “Designing new vaccines that utilize nanoparticles with specific chemical moieties which not only aid in antigen delivery but can also activate particular immune pathways have the potential to enhance vaccine potency.”

One advantage to developing a subunit vaccine for Covid-19 is that such vaccines are usually easier and cheaper to manufacture than mRNA vaccines, which could make it easier to distribute them around the world, the researchers say.

“Subunit vaccines have been around for a long time, and they tend to be cheaper to produce, so that opens up more access to vaccines, especially in times of pandemic,” Jaklenec says.

The research was funded by Ibn Khaldun Fellowships for Saudi Arabian Women and in part by the Koch Institute Support (core) Grant from the U.S. National Cancer Institute.

New exhibits showcase trailblazing MIT women

Wed, 03/06/2024 - 10:00am

This spring, two new exhibits on campus are shining a light on the critical contributions of pathbreaking women at the Institute. They are part of MIT Libraries’ Women@MIT Archival Initiative in the Department of Distinctive Collections. Launched in 2017, the initiative not only adds to the historical record by collecting and preserving the papers of MIT-affiliated women, it shares their lives and work with global audiences through exhibits, multimedia projects, educational materials, and more.

Under the Lens

“Under the Lens: Women Biologists and Chemists at MIT 1865-2024,” examines the work of women in science and engineering at MIT beginning with Ellen Swallow Richards, the Institute’s first female student and instructor, through the present day, when a number of women with backgrounds in biology, biological engineering, chemistry, and chemical engineering — the subjects of focus in this exhibit — hold leadership positions at the Institute, including President Sally Kornbluth, Vice Provost for Faculty Paula Hammond, and Professor Amy Keating, who heads the Department of Biology. 

Exhibit curator Thera Webb, Women@MIT project archivist, explains the exhibit title’s double meaning: “The women featured in 'Under the Lens' are scientists whose work engages with the materials of our world on a molecular level, using the lens of a microscope,” she says. “The title also plays on the fact that women’s ability to work as scientists and academics has been scrutinized through the lens of public opinion since Victorian-era debates about co-education.”

Items for the exhibit, selected from Distinctive Collections, demonstrate the experiences of women students, research staff, and faculty. They include the 1870 handwritten faculty meeting notes admitting Richards, then Ellen Henrietta Swallow, as MIT’s first female student, stating “the Faculty are of the opinion that the admission of women as special students is as yet in the nature of an experiment.” Materials from alumna and late professor ChoKyun Rha’s “Rheological Characterization of Printing Ink,” circa 1979, include images of the development process of ink and data from experiments. Also on display are a lab coat and rodent brain tissue slides from the neuroscience laboratory of Susan Hockfield, MIT’s 16th president.

“The collections we have related to women at MIT not only show us what their academic and professional interests were, with items like lab notebooks and drafts of papers, but also how our MIT community has been actively supporting women in science,” says Webb. “Many of our alumnae and faculty have been involved with the founding of groups like the Association of American University Women, the MIT Women’s Association, the Association for Women in Science, and the Women in Chemistry Group.”

“Under the Lens: Women Biologists and Chemists at MIT 1865-2024” is on view in the Maihaugen Gallery (Room 14N-130) through June 21. There is an accompanying digital exhibit available on the MIT Libraries’ website.

Sisters in Making

“Sisters in Making: Prototyping and the Feminine Resilience,” on view in Rotch Library, explores the unseen women, often referred to as “weavers,” who were instrumental to the development of computers. The exhibit, the work of Deborah Tsogbe SM '23 and Soala Ajienka, a current architecture graduate student, spotlights the women who built the core rope memory and magnetic core memory for the Apollo Guidance Computer.

“While we ultimately know the names of the first men on the Moon, and of those who spearheaded the engineering initiatives behind the Apollo 11 mission, the names of the countless women who had a vital hand in realizing these feats have been missing from historical discourse,” Tsogbe and Ajienka write. “The focus of our work has been to uncover the names and faces of these women, who held important positions including overseeing communications, checking codes, running calculations, and weaving memory.”

Working in the archives, Tsogbe and Ajienka sought to identify the women involved in this endeavor, going through personnel logs, press releases, and other historical artifacts. Originally focused on the women who worked on rope memory, they broadened their scope to women involved across the journey to the Moon and were able to name 534 women across 29 classes of work and nine organizations. Tsogbe and Ajienka fabricated a core memory prototype with the names of some of these women stored; they were technicians, data key punchers, engineers, librarians, and office staff from MIT, Raytheon, and NASA. Called the “memory dialer,” the prototype is intended to be a living archive.

Tsogbe and Ajienka created “Sisters in Making” as 2023 Women@MIT Fellows. This fellowship invites scholars, artists, and others to showcase materials from Distinctive Collections in engaging ways that contribute to greater understanding of the history of women at MIT and in STEM. The project also received a grant from the Council for the Arts at MIT.

“Deborah and Soala’s exhibit shows the variety of ways that the rich materials in the Women@MIT collections can be used,” says Webb. “Projects like these really highlight the value of historical collections in ways outside of traditional scholarly publications.”

“Sisters in Making: Prototyping and the Feminine Resilience” is on view in Rotch Library (Room 7-238) through April 8.

Nicole McGaa: Ensuring safe travels in space

Wed, 03/06/2024 - 12:00am

What do meteor showers, medicine, and MIT have in common? Aerospace engineering major Nicole McGaa.

The senior has long been drawn to both space and medicine. Growing up in Pittsburgh, Pennsylvania, she would search for good hillsides for watching meteor showers with her brother and father. Meanwhile, her favorite TV shows featured doctors and healers as main characters. The “Star Trek” series was a particular favorite, not just for characters like the physician Beverly Crusher but also for its scientific subject matter and diverse cast.

“I saw space as a place that was open for possibilities. The fact that ‘Star Trek’ is in a space setting is what invites people to think about what the future will be like and whether it will be better,” McGaa says. “Can we use space as a catalyst to make society more equitable?”

When it came time to choose a path after high school, McGaa says, “I thought, ‘Space and medicine are the two things that I really enjoy, I'll pick one of them eventually.’ But I got to MIT, and I realized by fate that MIT was one of the few places in the world that did space medicine, and things took off from there.”

McGaa’s research in bioastronautics, which is the study of biological systems in space, centers around making space travel safer for human bodies and minds. In the future, she envisions herself working with astronauts in a clinical setting, researching and characterizing the physiological impacts of spaceflight and creating countermeasures for such effects through physical, mechanical, or pharmaceutical solutions.

Emergency medicine

McGaa credits her time as a certified EMT with MIT Emergency Medical Services for guiding her path in bioastronautics and giving her the clinical perspective necessary for her work. “Space medicine is very much tied to emergency medicine,” she explains. “A lot of the people who first did space medicine then work in the ER, and many continue to this day to do both. It’s been good for me to help people directly while I'm also trying to help people at a more aspirational level through space.”

McGaa joined MIT EMS during her first year at the Institute, inspired by the kindness and care she received from an ER nurse in her past. As an EMT, she wished to provide such compassion for others, or, better yet, help them avoid medical emergencies completely.

Participating in MIT EMS is one of the most rewarding things she’s done at MIT, according to McGaa. She says responding to emergency calls on campus and throughout Boston and Cambridge, and learning how to provide care alongside other passionate volunteers, have been invaluable to her life goals as a medical provider.

Indigenous science

Indigenous representation at MIT and in the scientific community at large is significant to McGaa, who is Oglala Lakota. With the Native American and Indigenous Association, of which she is now the co-president, she has worked to advance initiatives supporting Indigenous people at the Institute, through efforts such as establishing the Indigenous Peoples’ Center, revising MIT’s land acknowledgment, and successfully advocating for the hiring of MIT’s first tenure-track Native American professor.

McGaa continues to work on expanding inclusionary measures for Native students on campus. She is seeking approval for a smudging policy that would allow Indigenous students to engage in the religious practice of burning sage in select areas. Creating a space for students to participate in cultural traditions that they have been historically deprived of is an important way to promote community, according to McGaa. “Native students are, like me, trying to understand and reconnect with our traditions and culture. My generation is really trying to decolonize our identities to heal the kind of pain that our parents and grandparents went through.”

Last year, McGaa assembled an Indigenous rocketry team for First Nations Launch, a national competition in which students compete through designing, building, and launching a high-powered rocket. This was MIT’s first time sending a team, and McGaa headed the project as captain, elected by her peers.

Out-of-this-world research

The bioastronautics field offers a broad array of research topics. McGaa’s focus is on understanding the physiology of astronauts and designing countermeasures for the effects of space exploration that could be useful for people on Earth as well.

With graduate student Rachel Bellisle and Professor and Media Lab Director Dava Newman, McGaa has worked on MIT’s Gravity Loading Countermeasure Skinsuit, which helps astronauts avoid muscle and bone loss during long-duration spaceflight. This research aligns with McGaa’s overall goal to address different “physiological detriments” caused by space. She also hopes to study spaceflight-associated neuro-ocular syndrome, or SANS, a poorly understood condition involving brain and eye changes that affect astronauts. She plans to make this the focus of her studies moving forward, in a PhD program, likely followed by an MD degree.

As an undergraduate, McGaa also interned at the NASA Neil A. Armstrong Flight Research Center with Northrop Grumman Co., where she worked in flight test. And last summer, she worked at Blue Origin in fault management and systems autonomy in aerospace engineering. Noting the contrast between the longstanding government agency and the much newer company, she credited these experiences with strengthening her discipline and initiative, respectively.

To McGaa, all the areas she has explored at MIT, while seemingly varied, fall together in a cohesive way. “Emergency medicine, Indigenous science and advocacy, and space medicine, all connect to my Indigenous values, of excellence in engineering, and caretaking, and community,” she says. Making conditions better for humans in space, the “most hostile environment possible,” will translate to benefits for humanity on Earth as well. “The whole point of going to space is to solve hard things,” she says. “Space is not just for operational drive, it’s clearly for inspirational ambition, as well.”

“This MIT Bootcamp shook everything upside down and has given me the spirit of innovation”

Tue, 03/05/2024 - 4:15pm

A new MIT Bootcamps hybrid program recently convened 34 innovators to tackle substance use disorder from multiple perspectives. Together, they built and pitched new ventures with the goal of bringing life-saving innovations to the field.

The Substance Use Disorder (SUD) Ventures program featured workshops, case studies, and interactive sessions with researchers, entrepreneurs, and doctors who brought a multidisciplinary approach to tackling early detection, access to care and health equity, dual diagnosis, treatment, and relapse prevention. Through a rigorous selection process, the program cohort was chosen for their complementary, diverse backgrounds along with their passion for solving problems related to substance use.

Hybrid by design, the first three months of the program consisted of foundational work online, including a new asynchronous SUD 101 course led by Brown University Professor Carolina Haass-Koffler and live online sessions focused on topics like intellectual property and technology transfer. The program concluded with a five-day MIT Bootcamp on campus, where learners built and pitched a new venture to a panel of judges.

“Building a venture in the substance use disorder space is exceptionally challenging,” says Hanna Adeyema, director of MIT Bootcamps. “Our goal was not only to educate our learners but also to inspire and to ignite a sense of community. We achieved it by building relationships in a diverse group united by a shared vision to bring lifesaving products to market.”

Helping to solve an epidemic

In 2021, more than 46 million people suffered from substance use disorder in the United States. This means one out of every seven people in the U.S. can benefit from innovations in this field. In 2022, MIT Open Learning received a grant from the National Institute on Drug Abuse (NIDA) to create an entrepreneurship program for substance use disorder researchers. As the primary source of early-stage funding in this space, the National Institutes of Health (NIH) and NIDA are focused on initiatives, like the MIT Bootcamps SUD Ventures program, to help bring innovation to the field.

Armed with a deep expertise in innovation and immersive educational experiences, MIT Open Learning’s team, including MIT Bootcamps, hit the ground running to build the SUD Ventures program. Other team members included Cynthia Breazeal, Erdin Beshimov, Carolina Haass-Koffler, Aikaterini “Katerina” Bagiati, and Andrés Felipe Salazar-Gómez. 

"The program connected substance use disorder knowledge and resources, including funding opportunities, to entrepreneurial competences and multifaceted skills of the learners,” says Cynthia Breazeal, dean for digital learning at MIT Open Learning and principal investigator for the project. “We have delivered a dynamic learning experience, sensitive to the root causes behind the innovation deficit in this field.”  

Instilling the spirit of innovation  

With 10-hour days, the immersive program blended formal and informal instruction to deliver a holistic and practical educational experience on substance use disorder and innovation. Learners attended case studies with health care companies like Prapela, Invistics, and RTM Vital Signs, moderated by Erdin Beshimov, the founder of MIT Bootcamps. They also attended workshops by MIT faculty, lectures by members of the NIH and NIDA, and interactive sessions with local startup veterans and medical professionals. 

Learners walked away from the sessions motivated to solve problems, equipped with tangible next steps for their businesses. Bill Aulet inspired learners to leverage their own innovation ecosystems and shared how MIT is “raising the bar” of the quality of entrepreneurship education. Professor Eric von Hippel, a pioneer of user innovation, encouraged learners to tap into clinicians, nurses, and individuals with lived and living experiences as an important source of innovation within the health-care system. To give the clinical perspective from Massachusetts General Hospital, cardiac anesthesiologist Nathaniel Sims and former MGH Innovation Support Center director Harry DeMonaco energized learners with a personal story of successfully bringing medical device innovation to market and how to work with hospitals and early-stage adopters.

“This MIT Bootcamp shook everything upside down and has given me the spirit of innovation and what it looks like to be able to work in a big way, and to be able to think in an even bigger way,” says learner Melissa “Dr. Mo” Dittberner. A resident of Volin, South Dakota, Dittberner is the CEO and founder of Straight Up Care, a platform for peer specialists to help people with mental health and substance use disorders. As an entrepreneur in the substance use disorder space, Dittberner knows what it takes to bring a business to life.

Bridging disciplines to create impact

In the evenings, the cohort broke out into teams of five to collaborate on building a venture related to substance use disorder. Coaches provided guidance and the tough feedback teams need in order to build a venture that solves a real problem. With vast differences in age, background, industry, and how they came to make an impact on substance use disorder, each team had experts in many different verticals, ultimately leading learners to a more thoughtful and potent solution. 

“One of the things MIT Bootcamps does really well is bring multiple disciplines to innovate together,” says Smit Patel, a pharmacist and digital health strategist who participated in the program. “We have seen a lot of silo innovation happening [in health care]. We have also seen problems being solved in piecemeal. How can we come together as a collective force — clinician and entrepreneur, a technologist, someone who has gone through this experience themselves — to build a solution?”

Dittberner echoed Patel’s sentiment, emphasizing the strength of the MIT Bootcamps community. “They’ve all kind of brought this different flavor,” Dittberner says. “I have created friendships and bonds that will last forever, which is so crucial to being able to be successful in the [SUD] space.” 

Intent on building a community of domain expert entrepreneurs, the SUD Ventures program will continue to bring together innovators to solve acute problems in the substance use space. With another three years of funding for this program, Adeyema says MIT Bootcamps’ goal is to nurture the community of innovators brought together by this program, enabling them to bring their ventures to life and create meaningful impact for society.

This program and its research are supported by the National Institute on Drug Abuse of the National Institutes of Health. This award is subject to the Cooperative Agreement Terms and Conditions of Award as set forth in RFA DA-22-020, entitled "Growing Great Ideas: Research Education Course in Product Development and Entrepreneurship for Life Science Researchers." The content of this publication is solely the responsibility of the authors and does not necessarily represent the views of the National Institutes of Health. 

Using generative AI to improve software testing

Tue, 03/05/2024 - 12:00am

Generative AI is getting plenty of attention for its ability to create text and images. But those media represent only a fraction of the data that proliferate in our society today. Data are generated every time a patient goes through a medical system, a storm impacts a flight, or a person interacts with a software application.

Using generative AI to create realistic synthetic data around those scenarios can help organizations more effectively treat patients, reroute planes, or improve software platforms — especially in scenarios where real-world data are limited or sensitive.

For the last three years, the MIT spinout DataCebo has offered a generative software system called the Synthetic Data Vault to help organizations create synthetic data to do things like test software applications and train machine learning models.

The Synthetic Data Vault, or SDV, has been downloaded more than 1 million times, with more than 10,000 data scientists using the open-source library for generating synthetic tabular data. The founders — Principal Research Scientist Kalyan Veeramachaneni and alumna Neha Patki ’15, SM ’16 — believe the company’s success is due to SDV’s ability to revolutionize software testing.

SDV goes viral

In 2016, Veeramachaneni’s group in the Data to AI Lab unveiled a suite of open-source generative AI tools to help organizations create synthetic data that matched the statistical properties of real data.

Companies can use synthetic data instead of sensitive information in programs while still preserving the statistical relationships between datapoints. Companies can also use synthetic data to run new software through simulations to see how it performs before releasing it to the public.

Veeramachaneni’s group came across the problem because it was working with companies that wanted to share their data for research.

“MIT helps you see all these different use cases,” Patki explains. “You work with finance companies and health care companies, and all those projects are useful to formulate solutions across industries.”

In 2020, the researchers founded DataCebo to build more SDV features for larger organizations. Since then, the use cases have been as impressive as they’ve been varied.

With DataCebo's new flight simulator, for instance, airlines can plan for rare weather events in a way that would be impossible using only historic data. In another application, SDV users synthesized medical records to predict health outcomes for patients with cystic fibrosis. A team from Norway recently used SDV to create synthetic student data to evaluate whether various admissions policies were meritocratic and free from bias.

In 2021, the data science platform Kaggle hosted a competition for data scientists that used SDV to create synthetic data sets to avoid using proprietary data. Roughly 30,000 data scientists participated, building solutions and predicting outcomes based on the company’s realistic data.

And as DataCebo has grown, it’s stayed true to its MIT roots: All of the company’s current employees are MIT alumni.

Supercharging software testing

Although their open-source tools are being used for a variety of use cases, the company is focused on growing its traction in software testing.

“You need data to test these software applications,” Veeramachaneni says. “Traditionally, developers manually write scripts to create synthetic data. With generative models, created using SDV, you can learn from a sample of data collected and then sample a large volume of synthetic data (which has the same properties as real data), or create specific scenarios and edge cases, and use the data to test your application.”

For example, if a bank wanted to test a program designed to reject transfers from accounts with no money in them, it would have to simulate many accounts simultaneously transacting. Doing that with data created manually would take a lot of time. With DataCebo’s generative models, customers can create any edge case they want to test.
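
A minimal sketch of that workflow with the open-source SDV library might look like the following. The class and method names follow SDV's 1.x single-table API and may differ in other versions, and the accounts.csv file and its columns are hypothetical stand-ins for a bank's real data.

    import pandas as pd
    from sdv.metadata import SingleTableMetadata
    from sdv.single_table import GaussianCopulaSynthesizer

    # A sample of real (or de-identified) tabular data -- hypothetical file and columns
    real_data = pd.read_csv("accounts.csv")   # e.g., account_id, balance, last_transfer

    # Describe the table so the synthesizer knows each column's type
    metadata = SingleTableMetadata()
    metadata.detect_from_dataframe(real_data)

    # Learn the statistical properties of the sample ...
    synthesizer = GaussianCopulaSynthesizer(metadata)
    synthesizer.fit(real_data)

    # ... then generate as many synthetic rows as the tests require
    synthetic_accounts = synthesizer.sample(num_rows=10_000)

SDV also provides ways to steer sampling toward particular scenarios, which is how edge cases like the empty-account transfers described above would typically be constructed.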

“It’s common for industries to have data that is sensitive in some capacity,” Patki says. “Often when you’re in a domain with sensitive data you’re dealing with regulations, and even if there aren’t legal regulations, it’s in companies’ best interest to be diligent about who gets access to what at which time. So, synthetic data is always better from a privacy perspective.”

Scaling synthetic data

Veeramachaneni believes DataCebo is advancing the field of what it calls synthetic enterprise data, or data generated from user behavior on large companies’ software applications.

“Enterprise data of this kind is complex, and there is no universal availability of it, unlike language data,” Veeramachaneni says. “When folks use our publicly available software and report back if it works on a certain pattern, we learn a lot of these unique patterns, and it allows us to improve our algorithms. From one perspective, we are building a corpus of these complex patterns, which for language and images is readily available.”

DataCebo also recently released features to improve SDV’s usefulness, including tools to assess the “realism” of the generated data, called the SDMetrics library, as well as a way to compare models’ performance, called SDGym.

“It’s about ensuring organizations trust this new data,” Veeramachaneni says. “[Our tools offer] programmable synthetic data, which means we allow enterprises to insert their specific insight and intuition to build more transparent models.”

As companies in every industry rush to adopt AI and other data science tools, DataCebo is ultimately helping them do so in a way that is more transparent and responsible.

“In the next few years, synthetic data from generative models will transform all data work,” Veeramachaneni says. “We believe 90 percent of enterprise operations can be done with synthetic data.”

At Sustainability Connect 2024, a look at how MIT is decarbonizing its campus

Mon, 03/04/2024 - 5:30pm

How is MIT working to meet its goal of decarbonizing the campus by 2050? How are local journalists communicating climate impacts and solutions to diverse audiences? What can each of us do to bring our unique skills and insight to tackle the challenges of climate and sustainability?

These are all questions asked — and answered — at Sustainability Connect, the yearly forum hosted by the MIT Office of Sustainability that offers an inside look at this transformative and comprehensive work that is the foundation for MIT’s climate and sustainability leadership on campus. The event invites individuals in every role at MIT to learn more about the sustainability and climate work happening on campus and to share their ideas, highlight important work, and find new ways to plug into ongoing efforts. “This event is a reminder of the remarkable, diverse, and committed group of colleagues we are all part of at MIT,” said Director of Sustainability Julie Newman as the event kicked off alongside Interfaith Chaplain and Spiritual Advisor to the Indigenous Community Nina Lytton, who offered a moment of connection to attendees. At the event, that diverse and committed group was made up of more than 130 community members representing more than 70 departments, labs, and centers.

This year, Sustainability Connect was timed with the announcement of the new Climate Project at MIT, with Vice Provost Richard Lester joining the event to expound on MIT’s deep commitment to tackling the climate challenge over the next 10 years through a series of climate missions — many of which build upon the ongoing research taking place across campus already. In introducing the Climate Project at MIT, Lester echoed the theme of connection and collaboration. “This plan is about helping bridge the gap between what we would accomplish as a collection of energetic, talented, ambitious individuals, and what we're capable of if we act together,” he said.

Highlighting one of the many collaborative efforts to address MIT’s contributions to climate change was the Decarbonizing the Campus panel, which provided a real-time look at MIT’s work to eliminate carbon emissions from campus by 2050. Newman and Vice President for Campus Services and Stewardship Joe Higgins, along with Senior Campus Planner Vasso Mathes, Senior Sustainability Project Manager Steve Lanou, and PhD student Chenhan Shao, shared the many ways MIT is working to decarbonize its campus now and respond to evolving technologies and policies in the future. “A third of MIT's faculty and researchers … are working to identify ways in which MIT can amplify its contributions to addressing the world's climate crisis. But part and parcel to that goal is we're putting significant effort into decarbonizing MIT's own carbon footprint here on our campus,” Higgins said before highlighting how MIT continues to work on projects focused on building efficiency, renewable energy on campus and off, and support of a cleaner grid, among many decarbonization strategies.

Newman shared the way in which climate education and research play an important role through the Decarbonization Working Group research streams, and courses like class 4.s42 (Carbon Reduction Pathways for the MIT Campus) offered by Professor Christoph Reinhart. Lanou and Shao also showcased how MIT is optimizing its response to Cambridge’s Building Energy Use Disclosure Ordinance, which is aimed at tracking and reducing emissions from large commercial properties in the city with a goal of net-zero buildings by 2035. “We’ve been able [to create] pathways that would be practical, innovative, have a high degree of accountability, and that could work well within the structures and the limitations that we have,” Lanou said before debuting a dashboard he and Shao developed during Independent Activities Period to track and forecast work to meet the Cambridge goal.

MIT’s robust commitment to decarbonize its campus goes beyond energy systems, as highlighted by the work of many staff members who led roundtables as part of Sustainability in Motion, where attendees were invited to sit down with colleagues from across campus responsible for implementing the numerous climate and sustainability commitments. Teams reported out on progress to date on a range of efforts including sustainable food systems, safe and sustainable labs, and procurement. “Tackling the unprecedented challenges of a changing planet in and around MIT takes the support of individuals and teams from all corners of the Institute,” said Assistant Director of Sustainability Brian Goldberg in leading the session. “Whether folks have sustainability or climate in their job title, or they’ve contributed countless volunteer hours to the cause, our community members are leading many meaningful efforts to transform MIT.”

The day culminated with a panel on climate in the media, taking the excitement from the room and putting it in context — how do you translate this work, these solutions, and these challenges for a diverse audience with an ever-changing appetite for these kinds of stories? Laur Hesse Fisher, program director for the Environmental Solutions Initiative (ESI); Barbara Moran, climate and environment reporter at WBUR radio; and independent climate journalist Annie Ropeik joined the panel moderated by Knight Science Journalism Program at MIT Director Deborah Blum. Blum spoke of the current mistrust of not only the media but of news stories of climate impacts and even solutions. “To those of us telling the story of climate change, how do we reach resistant audiences? How do we gain their trust?” she asked.

Fisher, who hosts the TIL Climate podcast and leads the ESI Journalism Fellowship, explained how she shifts her approach depending on her audience. “[With TIL Climate], a lot of what we do is, we try to understand what kinds of questions people have,” she said. “We have people submit questions to us, and then we answer them in language that they can understand.”

For Moran, reaching audiences relies on finding the right topic to bridge to deeper issues. On a recent story about solar arrays and their impact on forests and the landscape around them, Moran saw bees and pollinators as the way in. “I can talk about bees and flowers. And that will hook people enough to get in. And then through that, we can address this issue of forest versus commercial solar and this tension, and what can be done to address that, and what's working and what's not,” she said.

The panel highlighted that even as climate solutions and challenges become clearer, communicating them can remain a challenge. “Sustainability Connect is invaluable when it comes to sharing our work and bringing more people in, but over the years, it’s become clear how many people are still outside of these conversations,” said Newman. “Capping the day off with this conversation on climate in the media served as a jumping-off point for all of us to think how we can better communicate our efforts and tackle the challenges that keep us from bringing everyone to the table to help us find and share solutions for addressing climate change. It’s just the beginning of this conversation.”

School of Science announces 2024 Infinite Expansion Awards

Mon, 03/04/2024 - 5:20pm

The MIT School of Science has announced nine postdocs and research scientists as recipients of the 2024 Infinite Expansion Award, which highlights extraordinary members of the MIT community.

The following are the 2024 School of Science Infinite Expansion winners:

  • Sarthak Chandra, a research scientist in the Department of Brain and Cognitive Sciences, was nominated by Professor Ila Fiete, who wrote, “He has expanded the research abilities of my group by being a versatile and brilliant scientist, by drawing connections with a different area that he was an expert in from his PhD training, and by being a highly involved and caring mentor.”
     
  • Michal Fux, a research scientist in the Department of Brain and Cognitive Sciences, was nominated by Professor Pawan Sinha, who wrote, “She is one of those figurative beams of light that not only brilliantly illuminate scientific questions, but also enliven a research team.”
     
  • Andrew Savinov, a postdoc in the Department of Biology, was nominated by Associate Professor Gene-Wei Li, who wrote, “Andrew is an extraordinarily creative and accomplished biophysicist, as well as an outstanding contributor to the broader MIT community.”
     
  • Ho Fung Cheng, a postdoc in the Department of Chemistry, was nominated by Professor Jeremiah Johnson, who wrote, “His impact on research and our departmental community during his time at MIT has been outstanding, and I believe that he will be a world-class teacher and research group leader in his independent career next year.”
     
  • Gabi Wenzel, a postdoc in the Department of Chemistry, was nominated by Assistant Professor Brett McGuire, who wrote, “In the one year since Gabi joined our team, she has become an indispensable leader, demonstrating exceptional skill, innovation, and dedication in our challenging research environment.”
     
  • Yu-An Zhang, a postdoc in the Department of Chemistry, was nominated by Professor Alison Wendlandt, who wrote, “He is a creative, deep-thinking scientist and a superb organic chemist. But above all, he is an off-scale mentor and a cherished coworker.”
     
  • Wouter Van de Pontseele, a senior postdoc in the Laboratory for Nuclear Science, was nominated by Professor Joseph Formaggio, who wrote, “He is a talented scientist with an intense creativity, scholarship, and student mentorship record. In the time he has been with my group, he has led multiple facets of my experimental program and has been a wonderful citizen of the MIT community.”
     
  • Alexander Shvonski, a lecturer in the Department of Physics, was nominated by Assistant Professor Andrew Vanderburg, who wrote, “… I have been blown away by Alex’s knowledge of education research and best practices, his skills as a teacher and course content designer, and I have been extremely grateful for his assistance.”
     
  • David Stoppel, a research scientist in The Picower Institute for Learning and Memory, was nominated by Professor Mark Bear and his research group, who wrote, “As impressive as his research achievements might be, David’s most genuine qualification for this award is his incredible commitment to mentorship and the dissemination of knowledge.”

Winners are honored with a monetary award and will be celebrated with family, friends, and nominators at a later date, along with recipients of the Infinite Mile Award.

Exposure to different kinds of music influences how the brain interprets rhythm

Mon, 03/04/2024 - 5:00am

When listening to music, the human brain appears to be biased toward hearing and producing rhythms composed of simple integer ratios — for example, a series of four beats separated by equal time intervals (forming a 1:1:1 ratio).

However, the favored ratios can vary greatly between different societies, according to a large-scale study led by researchers at MIT and the Max Planck Institute for Empirical Aesthetics and carried out in 15 countries. The study included 39 groups of participants, many of whom came from societies whose traditional music contains distinctive patterns of rhythm not found in Western music.

“Our study provides the clearest evidence yet for some degree of universality in music perception and cognition, in the sense that every single group of participants that was tested exhibits biases for integer ratios. It also provides a glimpse of the variation that can occur across cultures, which can be quite substantial,” says Nori Jacoby, the study’s lead author and a former MIT postdoc, who is now a research group leader at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany.

The brain’s bias toward simple integer ratios may have evolved as a natural error-correction system that makes it easier to maintain a consistent body of music, which human societies often use to transmit information.

“When people produce music, they often make small mistakes. Our results are consistent with the idea that our mental representation is somewhat robust to those mistakes, but it is robust in a way that pushes us toward our preexisting ideas of the structures that should be found in music,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines.

McDermott is the senior author of the study, which appears today in Nature Human Behaviour. The research team also included scientists from more than two dozen institutions around the world.

A global approach

The new study grew out of a smaller analysis that Jacoby and McDermott published in 2017. In that paper, the researchers compared rhythm perception in groups of listeners from the United States and the Tsimane’, an Indigenous society located in the Bolivian Amazon rainforest.

To measure how people perceive rhythm, the researchers devised a task in which they play a randomly generated series of four beats and then ask the listener to tap back what they heard. The rhythm produced by the listener is then played back to the listener, and they tap it back again. Over several iterations, the tapped sequences become dominated by the listener’s internal biases, also known as priors.

“The initial stimulus pattern is random, but at each iteration the pattern is pushed by the listener’s biases, such that it tends to converge to a particular point in the space of possible rhythms,” McDermott says. “That can give you a picture of what we call the prior, which is the set of internal implicit expectations for rhythms that people have in their heads.”
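To make the convergence idea concrete, here is a minimal Python sketch of an iterated tap-back loop, assuming a toy set of integer-ratio prototypes, a simple pull-toward-the-nearest-prototype update, and Gaussian motor noise. None of this is the study’s actual stimulus or analysis code; the prototype list, pull strength, and noise level are invented purely for illustration.

```python
import random

# Hypothetical "prior" rhythm prototypes: integer ratios of the three
# intervals between four beats (an assumption made for this sketch).
PROTOTYPES = [(1, 1, 1), (1, 1, 2), (1, 2, 1), (2, 1, 1),
              (2, 3, 3), (3, 2, 3), (3, 3, 2)]

def normalize(intervals):
    """Scale the intervals so they sum to 1, removing overall tempo."""
    total = sum(intervals)
    return tuple(x / total for x in intervals)

def nearest_prototype(intervals):
    """Return the integer-ratio prototype closest to the given intervals."""
    point = normalize(intervals)
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip(point, normalize(proto)))
    return min(PROTOTYPES, key=dist)

def reproduce(intervals, pull=0.3, noise=0.02):
    """One simulated tap-back: drift toward the nearest prototype, plus motor noise."""
    current = normalize(intervals)
    target = normalize(nearest_prototype(intervals))
    new = [c + pull * (t - c) + random.gauss(0, noise) for c, t in zip(current, target)]
    new = [max(x, 1e-3) for x in new]  # keep every interval positive
    return normalize(new)

# Start from a random rhythm and iterate, as in the tap-back experiment.
rhythm = normalize([random.uniform(0.2, 1.0) for _ in range(3)])
for i in range(8):
    rhythm = reproduce(rhythm)
    print(i, [round(x, 3) for x in rhythm], "->", nearest_prototype(rhythm))
```

Run a few times, and the randomly seeded rhythm settles near whichever prototype happens to be closest, which is the sense in which the tapped sequences come to reflect the listener’s priors rather than the original random stimulus.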

When the researchers first did this experiment, with American college students as the test subjects, they found that people tended to produce time intervals that are related by simple integer ratios. Furthermore, most of the rhythms they produced, such as those with ratios of 1:1:2 and 2:3:3, are commonly found in Western music.

The researchers then went to Bolivia and asked members of the Tsimane’ society to perform the same task. They found that Tsimane’ also produced rhythms with simple integer ratios, but their preferred ratios were different and appeared to be consistent with those that have been documented in the few existing records of Tsimane’ music.

“At that point, it provided some evidence that there might be very widespread tendencies to favor these small integer ratios, and that there might be some degree of cross-cultural variation. But because we had just looked at this one other culture, it really wasn’t clear how this was going to look at a broader scale,” Jacoby says.

To try to get that broader picture, the MIT team began seeking collaborators around the world who could help them gather data on a more diverse set of populations. They ended up studying listeners from 39 groups, representing 15 countries on five continents — North America, South America, Europe, Africa, and Asia.

“This is really the first study of its kind in the sense that we did the same experiment in all these different places, with people who are on the ground in those locations,” McDermott says. “That hasn’t really been done before at anything close to this scale, and it gave us an opportunity to see the degree of variation that might exist around the world.”

Cultural comparisons

Just as they had in their original 2017 study, the researchers found that in every group they tested, people tended to be biased toward simple integer ratios of rhythm. However, not every group showed the same biases. People from North America and Western Europe, who have likely been exposed to the same kinds of music, were more likely to generate rhythms with the same ratios. By contrast, many groups, for example those in Turkey, Mali, Bulgaria, and Botswana, showed a bias for other rhythms.

“There are certain cultures where there are particular rhythms that are prominent in their music, and those end up showing up in the mental representation of rhythm,” Jacoby says.

The researchers believe their findings reveal a mechanism that the brain uses to aid in the perception and production of music.

“When you hear somebody playing something and they have errors in their performance, you’re going to mentally correct for those by mapping them onto where you implicitly think they ought to be,” McDermott says. “If you didn’t have something like this, and you just faithfully represented what you heard, these errors might propagate and make it much harder to maintain a musical system.”

Among the groups that they studied, the researchers took care to include not only college students, who are easy to study in large numbers, but also people living in traditional societies, who are more difficult to reach. Participants from those more traditional groups showed significant differences from college students living in the same countries, and from people who live in those countries but performed the test online.

“What’s very clear from the paper is that if you just look at the results from undergraduate students around the world, you vastly underestimate the diversity that you see otherwise,” Jacoby says. “And the same was true of experiments where we tested groups of people online in Brazil and India, because you’re dealing with people who have internet access and presumably have more exposure to Western music.”

The researchers now hope to run additional studies of different aspects of music perception, taking this global approach.

“If you’re just testing college students around the world or people online, things look a lot more homogenous. I think it’s very important for the field to realize that you actually need to go out into communities and run experiments there, as opposed to taking the low-hanging fruit of running studies with people in a university or on the internet,” McDermott says.

The research was funded by the James S. McDonnell Foundation, the Canadian National Science and Engineering Research Council, the South African National Research Foundation, the United States National Science Foundation, the Chilean National Research and Development Agency, the Austrian Academy of Sciences, the Japan Society for the Promotion of Science, the Keio Global Research Institute, the United Kingdom Arts and Humanities Research Council, the Swedish Research Council, and the John Fell Fund.

Tests show high-temperature superconducting magnets are ready for fusion

Mon, 03/04/2024 - 12:00am

In the predawn hours of Sept. 5, 2021, engineers achieved a major milestone in the labs of MIT’s Plasma Science and Fusion Center (PSFC), when a new type of magnet, made from high-temperature superconducting material, achieved a world-record magnetic field strength of 20 tesla for a large-scale magnet. That’s the intensity needed to build a fusion power plant that is expected to produce a net output of power and potentially usher in an era of virtually limitless power production.

The test was immediately declared a success, having met all the criteria established for the design of the new fusion device, dubbed SPARC, for which the magnets are the key enabling technology. Champagne corks popped as the weary team of experimenters, who had labored long and hard to make the achievement possible, celebrated their accomplishment.

But that was far from the end of the process. Over the ensuing months, the team tore apart and inspected the components of the magnet, pored over and analyzed the data from hundreds of instruments that recorded details of the tests, and performed two additional test runs on the same magnet, ultimately pushing it to its breaking point in order to learn the details of any possible failure modes.

All of this work has now culminated in a detailed report by researchers at PSFC and MIT spinout company Commonwealth Fusion Systems (CFS), published as a collection of six peer-reviewed papers in a special March issue of IEEE Transactions on Applied Superconductivity. Together, the papers describe the design and fabrication of the magnet and the diagnostic equipment needed to evaluate its performance, as well as the lessons learned from the process. Overall, the team found, the predictions and computer modeling were spot-on, verifying that the magnet’s unique design elements could serve as the foundation for a fusion power plant.

Enabling practical fusion power

The successful test of the magnet, says Hitachi America Professor of Engineering Dennis Whyte, who recently stepped down as director of the PSFC, was “the most important thing, in my opinion, in the last 30 years of fusion research.”

Before the Sept. 5 demonstration, the best-available superconducting magnets were powerful enough to potentially achieve fusion energy — but only at sizes and costs that could never be practical or economically viable. Then, when the tests showed the practicality of such a strong magnet at a greatly reduced size, “overnight, it basically changed the cost per watt of a fusion reactor by a factor of almost 40 in one day,” Whyte says.

“Now fusion has a chance,” Whyte adds. Tokamaks, the most widely used design for experimental fusion devices, “have a chance, in my opinion, of being economical because you’ve got a quantum change in your ability, with the known confinement physics rules, about being able to greatly reduce the size and the cost of objects that would make fusion possible.”

The comprehensive data and analysis from the PSFC’s magnet test, as detailed in the six new papers, has demonstrated that plans for a new generation of fusion devices — the one designed by MIT and CFS, as well as similar designs by other commercial fusion companies — are built on a solid foundation in science.

The superconducting breakthrough

Fusion, the process of combining light atoms to form heavier ones, powers the sun and stars, but harnessing that process on Earth has proved to be a daunting challenge, with decades of hard work and many billions of dollars spent on experimental devices. The long-sought, but never yet achieved, goal is to build a fusion power plant that produces more energy than it consumes. Such a power plant could produce electricity without emitting greenhouse gases during operation, while generating very little radioactive waste. Fusion’s fuel, a form of hydrogen that can be derived from seawater, is virtually limitless.

But to make it work requires compressing the fuel at extraordinarily high temperatures and pressures, and since no known material could withstand such temperatures, the fuel must be held in place by extremely powerful magnetic fields. Producing such strong fields requires superconducting magnets, but all previous fusion magnets have been made with a superconducting material that requires frigid temperatures of about 4 degrees above absolute zero (4 kelvins, or -270 degrees Celsius). In the last few years, a newer material nicknamed REBCO, for rare-earth barium copper oxide, has been added to fusion magnets, allowing them to operate at 20 kelvins, a temperature that, despite being only 16 kelvins warmer, brings significant advantages in terms of material properties and practical engineering.

Taking advantage of this new higher-temperature superconducting material was not just a matter of substituting it in existing magnet designs. Instead, “it was a rework from the ground up of almost all the principles that you use to build superconducting magnets,” Whyte says. The new REBCO material is “extraordinarily different than the previous generation of superconductors. You’re not just going to adapt and replace, you’re actually going to innovate from the ground up.” The new papers in Transactions on Applied Superconductivity describe the details of that redesign process, now that patent protection is in place.

A key innovation: no insulation

One of the dramatic innovations, which had many others in the field skeptical of its chances of success, was the elimination of insulation around the thin, flat ribbons of superconducting tape that formed the magnet. Like virtually all electrical wires, conventional superconducting magnets are fully protected by insulating material to prevent short-circuits between the wires. But in the new magnet, the tape was left completely bare; the engineers relied on REBCO’s much greater conductivity to keep the current flowing through the material.

“When we started this project, in let’s say 2018, the technology of using high-temperature superconductors to build large-scale high-field magnets was in its infancy,” says Zach Hartwig, the Robert N. Noyce Career Development Professor in the Department of Nuclear Science and Engineering. Hartwig has a co-appointment at the PSFC and is the head of its engineering group, which led the magnet development project. “The state of the art was small benchtop experiments, not really representative of what it takes to build a full-size thing. Our magnet development project started at benchtop scale and ended up at full scale in a short amount of time,” he adds, noting that the team built a 20,000-pound magnet that produced a steady, even magnetic field of just over 20 tesla — far beyond any such field ever produced at large scale.

“The standard way to build these magnets is you would wind the conductor and you have insulation between the windings, and you need insulation to deal with the high voltages that are generated during off-normal events such as a shutdown.” Eliminating the layers of insulation, he says, “has the advantage of being a low-voltage system. It greatly simplifies the fabrication processes and schedule.” It also leaves more room for other elements, such as more cooling or more structure for strength.

The magnet assembly is a slightly smaller-scale version of the ones that will form the donut-shaped chamber of the SPARC fusion device now being built by CFS in Devens, Massachusetts. It consists of 16 plates, called pancakes, each bearing a spiral winding of the superconducting tape on one side and cooling channels for helium gas on the other.

But the no-insulation design was considered risky, and a lot was riding on the test program. “This was the first magnet at any sufficient scale that really probed what is involved in designing and building and testing a magnet with this so-called no-insulation no-twist technology,” Hartwig says. “It was very much a surprise to the community when we announced that it was a no-insulation coil.”

Pushing to the limit … and beyond

The initial test, described in previous papers, proved that the design and manufacturing process not only worked but was highly stable — something that some researchers had doubted. The next two test runs, also performed in late 2021, then pushed the device to the limit by deliberately creating unstable conditions, including a complete shutoff of incoming power that can lead to a catastrophic overheating. Known as quenching, this is considered a worst-case scenario for the operation of such magnets, with the potential to destroy the equipment.

Part of the mission of the test program, Hartwig says, was “to actually go off and intentionally quench a full-scale magnet, so that we can get the critical data at the right scale and the right conditions to advance the science, to validate the design codes, and then to take the magnet apart and see what went wrong, why did it go wrong, and how do we take the next iteration toward fixing that. … It was a very successful test.”

That final test, which ended with the melting of one corner of one of the 16 pancakes, produced a wealth of new information, Hartwig says. For one thing, the team had been using several different computational models to design and predict various aspects of the magnet’s performance, and for the most part, the models agreed in their overall predictions and were well-validated by the series of tests and real-world measurements. But in predicting the effect of the quench, the model predictions diverged, so it was necessary to get the experimental data to evaluate the models’ validity.

“The highest-fidelity models that we had predicted almost exactly how the magnet would warm up, to what degree it would warm up as it started to quench, and where the resulting damage to the magnet would be,” he says. As described in detail in one of the new reports, “That test actually told us exactly the physics that was going on, and it told us which models were useful going forward and which to leave by the wayside because they’re not right.”
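As a schematic illustration of that model-selection step, the sketch below scores a handful of hypothetical quench predictions against a measured temperature trace and ranks them by error. The numbers, model names, and metric are invented for the example; they are not the team’s design codes or test data.

```python
import numpy as np

# Hypothetical quench measurements and model predictions (made-up numbers):
# peak coil temperature in kelvins at a series of times after quench onset.
measured = np.array([20.0, 45.0, 110.0, 240.0, 390.0])

model_predictions = {
    "lumped_thermal":   np.array([20.0, 40.0,  95.0, 205.0, 330.0]),
    "high_fidelity_3d": np.array([20.0, 47.0, 115.0, 235.0, 400.0]),
    "simplified_1d":    np.array([20.0, 60.0, 160.0, 340.0, 560.0]),
}

def rms_error(predicted, observed):
    """Root-mean-square error between a model prediction and the measurement."""
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# Rank the candidate models by how closely they track the measured quench.
scores = {name: rms_error(pred, measured) for name, pred in model_predictions.items()}
for name, err in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: RMS error = {err:.1f} K")
```

In practice such a comparison draws on far more quantities than a single temperature trace, given the hundreds of instruments recording each test, but the basic logic of keeping the models that track the measurements and setting aside those that diverge is the same.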

Whyte says, “Basically we did the worst thing possible to a coil, on purpose, after we had tested all other aspects of the coil performance. And we found that most of the coil survived with no damage,” while one isolated area sustained some melting. “It’s like a few percent of the volume of the coil that got damaged.” And that led to revisions in the design that are expected to prevent such damage in the actual fusion device magnets, even under the most extreme conditions.

Hartwig emphasizes that a major reason the team was able to accomplish such a radical new record-setting magnet design, and get it right the very first time and on a breakneck schedule, was thanks to the deep level of knowledge, expertise, and equipment accumulated over decades of operation of the Alcator C-Mod tokamak, the Francis Bitter Magnet Laboratory, and other work carried out at PSFC. “This goes to the heart of the institutional capabilities of a place like this,” he says. “We had the capability, the infrastructure, and the space and the people to do these things under one roof.”

The collaboration with CFS was also key, he says, with MIT and CFS combining the most powerful aspects of an academic institution and private company to do things together that neither could have done on their own. “For example, one of the major contributions from CFS was leveraging the power of a private company to establish and scale up a supply chain at an unprecedented level and timeline for the most critical material in the project: 300 kilometers (186 miles) of high-temperature superconductor, which was procured with rigorous quality control in under a year, and integrated on schedule into the magnet.”

The integration of the two teams, those from MIT and those from CFS, also was crucial to the success, he says. “We thought of ourselves as one team, and that made it possible to do what we did.”

Study determines the original orientations of rocks drilled on Mars

Mon, 03/04/2024 - 12:00am

As it trundles around an ancient lakebed on Mars, NASA’s Perseverance rover is assembling a one-of-a-kind rock collection. The car-sized explorer is methodically drilling into the Red Planet’s surface and pulling out cores of bedrock that it’s storing in sturdy titanium tubes. Scientists hope to one day return the tubes to Earth and analyze their contents for traces of embedded microbial life.

Since it touched down on the surface of Mars in 2021, the rover has filled 20 of its 43 tubes with cores of bedrock. Now, MIT geologists have remotely determined a crucial property of the rocks collected to date, which will help scientists answer key questions about the planet’s past.

In a study appearing today in the journal Earth and Space Science, an MIT team reports that they have determined the original orientation of most bedrock samples collected by the rover to date. By using the rover’s own engineering data, such as the positioning of the vehicle and its drill, the scientists could estimate the orientation of each sample of bedrock before it was drilled out from the Martian ground.

The results represent the first time scientists have oriented samples of bedrock on another planet. The team’s method can be applied to future samples that the rover collects as it expands its exploration outside the ancient basin. Piecing together the orientations of multiple rocks at various locations can then give scientists clues to the conditions on Mars in which the rocks originally formed.

“There are so many science questions that rely on being able to know the orientation of the samples we’re bringing back from Mars,” says study author Elias Mansbach, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

“The orientation of rocks can tell you something about any magnetic field that may have existed on the planet,” adds Benjamin Weiss, professor of planetary sciences at MIT. “You can also study how water and lava flowed on the planet, the direction of the ancient wind, and tectonic processes, like what was uplifted and what sunk. So it’s a dream to be able to orient bedrock on another planet, because it’s going to open up so many scientific investigations.”

Weiss and Mansbach’s co-authors are Tanja Bosak and Jennifer Fentress at MIT, along with collaborators at multiple institutions including the Jet Propulsion Laboratory at Caltech.

Profound shift

The Perseverance rover, nicknamed “Percy,” is exploring the floor of Jezero Crater, a large impact crater layered with igneous rocks, which may have been deposited from past volcanic eruptions, as well as sedimentary rocks that likely formed from long-dried-out rivers that fed into the basin.

“Mars was once warm and wet, and there’s a possibility there was life there at one time,” Weiss says. “It’s now cold and dry, and something profound must have happened on the planet.”

Many scientists, including Weiss, suspect that Mars, like Earth, once harbored a magnetic field that shielded the planet from the solar wind. Conditions then may have been favorable for water and life, at least for a time.

“Once that magnetic field went away, the sun’s solar wind — this plasma that boils off the sun and moves faster than the speed of sound — just slammed into Mars’ atmosphere and may have removed it over billions of years,” Weiss says. “We want to know what happened, and why.”

The rocks beneath the Martian surface likely hold a record of the planet’s ancient magnetic field. When rocks first form on a planet’s surface, the direction of their magnetic minerals is set by the surrounding magnetic field. The orientation of rocks can thus help to retrace the direction and intensity of the planet’s magnetic field and how it changed over time.

Since the Perseverance rover was collecting samples of bedrock, along with surface soil and air, as part of its exploratory mission, Weiss, who is a member of the rover’s science team, and Mansbach looked for ways to determine the original orientation of the rover’s bedrock samples as a first step toward reconstructing Mars’ magnetic history.

“It was an amazing opportunity, but initially there was no mission requirement to orient bedrock,” Mansbach notes.

Roll with it

Over several months, Mansbach and Weiss met with NASA engineers to hash out a plan for how to estimate the original orientation of each sample of bedrock before it was drilled out of the ground. The problem was a bit like predicting what direction a small circle of sheet cake is pointing, before twisting a round cookie cutter in to pull out a piece. Similarly, to sample bedrock, Perseverance corkscrews a tube-shaped drill into the ground at a perpendicular angle, then pulls the drill directly back out, along with any rock that it penetrates.

To estimate the orientation of the rock before it was drilled out of the ground, the team realized they needed to measure three angles: the hade, azimuth, and roll, which are similar to the pitch, yaw, and roll of a boat. The hade is essentially the tilt of the sample, while the azimuth is the absolute direction the sample is pointing relative to true north. The roll refers to how much a sample must turn before returning to its original position.

In talking with engineers at NASA, the MIT geologists found that the three angles they required were related to measurements that the rover takes on its own in the course of its normal operations. They realized that to estimate a sample’s hade and azimuth they could use the rover’s measurements of the drill’s orientation, as they could assume the tilt of the drill is parallel to any sample that it extracts.

To estimate a sample’s roll, the team took advantage of one of the rover’s onboard cameras, which snaps an image of the surface where the drill is about to sample. They reasoned that they could use any distinguishing features on the surface image to determine how much the sample would have to turn in order to return to its original orientation.

In cases where the surface bore no distinguishing features, the team used the rover’s onboard laser to make a mark in the rock, in the shape of the letter “L,” before drilling out a sample — a move that was jokingly referred to at the time as the first graffiti on another planet.

By combining all the rover’s positioning, orienting, and imaging data, the team estimated the original orientations of all 20 of the Martian bedrock samples collected so far, with a precision that is comparable to orienting rocks on Earth.
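A minimal sketch of how three such angles might be composed into a rotation that restores a sample’s original orientation is shown below, assuming a NumPy implementation and a particular choice of axis conventions. It is an illustration only, not the team’s actual pipeline; the rotation order, frame definitions, and example numbers are assumptions made for the sketch.

```python
import numpy as np

def rot_z(angle_deg):
    """Rotation about the vertical (z) axis by angle_deg degrees."""
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def rot_y(angle_deg):
    """Rotation about the y axis by angle_deg degrees (used here for the hade tilt)."""
    a = np.radians(angle_deg)
    return np.array([[ np.cos(a), 0.0, np.sin(a)],
                     [ 0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

def sample_to_mars_frame(hade_deg, azimuth_deg, roll_deg):
    """
    Compose an illustrative rotation from sample coordinates to an assumed
    Mars surface frame: first the roll about the core's own long axis, then
    the hade tilt away from vertical, then the azimuth toward the measured
    compass direction. The order and axis conventions are assumptions.
    """
    return rot_z(azimuth_deg) @ rot_y(hade_deg) @ rot_z(roll_deg)

# Example: a core drilled 15 degrees off vertical, pointing 40 degrees east
# of north, rotated 75 degrees about its own axis (made-up numbers).
R = sample_to_mars_frame(hade_deg=15.0, azimuth_deg=40.0, roll_deg=75.0)

# A direction measured in the sample's own frame (also made up) can then be
# re-expressed in the planet's frame for directional analyses.
m_sample = np.array([0.0, 0.0, 1.0])
m_mars = R @ m_sample
print(np.round(m_mars, 3))
```

With a reconstruction like this in hand, a direction recorded by the rock’s magnetic minerals can be mapped back into the planet’s frame, which is what makes the magnetic-field and other directional studies described above possible.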

“We know the orientations to within 2.7 degrees uncertainty, which is better than what we can do with rocks in the Earth,” Mansbach says. “We’re working with engineers now to automate this orienting process so that it can be done with other samples in the future.”

“The next phase will be the most exciting,” Weiss says. “The rover will drive outside the crater to get the oldest known rocks on Mars, and it’s an incredible opportunity to be able to orient these rocks, and hopefully uncover a lot of these ancient processes.”

This research was supported, in part, by NASA and the Mars 2020 Participating Scientist program.
