MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

High-performance computing, with much less code

Thu, 03/13/2025 - 4:30pm

Many companies invest heavily in hiring talent to create the high-performance library code that underpins modern artificial intelligence systems. NVIDIA, for instance, developed some of the most advanced high-performance computing (HPC) libraries, creating a competitive moat that has proven difficult for others to breach.

But what if a couple of students, within a few months, could compete with state-of-the-art HPC libraries with a few hundred lines of code, instead of tens or hundreds of thousands?

That’s what researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have shown with a new programming language called Exo 2.

Exo 2 belongs to a new category of programming languages that MIT Professor Jonathan Ragan-Kelley calls “user-schedulable languages” (USLs). Instead of hoping that an opaque compiler will auto-generate the fastest possible code, USLs put programmers in the driver's seat, allowing them to write “schedules” that explicitly control how the compiler generates code. This enables performance engineers to transform simple programs that specify what they want to compute into complex programs that do the same thing as the original specification, but much, much faster.
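
To make that division of labor concrete, here is a minimal sketch in plain Python rather than Exo 2’s actual syntax: a simple specification of what to compute, alongside the kind of restructured loop nest a schedule might derive from it. In a USL the second version is not written by hand; scheduling operations such as loop splitting and reordering rewrite the first into it, and the compiler checks that the rewrite preserves the original meaning.

```python
# Conceptual sketch in plain Python, not Exo 2 syntax: a USL keeps the simple
# specification (what to compute) separate from the optimized form that a
# schedule derives from it (how to compute it fast).

def matmul_spec(A, B, C, M, N, K):
    """Specification: a naive triple loop that defines the desired result."""
    for i in range(M):
        for j in range(N):
            for k in range(K):
                C[i][j] += A[i][k] * B[k][j]

def matmul_tiled(A, B, C, M, N, K, T=4):
    """The kind of loop nest a schedule might derive: same result, tiled for cache locality."""
    for ii in range(0, M, T):
        for jj in range(0, N, T):
            for kk in range(0, K, T):
                for i in range(ii, min(ii + T, M)):
                    for j in range(jj, min(jj + T, N)):
                        for k in range(kk, min(kk + T, K)):
                            C[i][j] += A[i][k] * B[k][j]
```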

One of the limitations of existing USLs (like the original Exo) is their relatively fixed set of scheduling operations, which makes it difficult to reuse scheduling code across different “kernels” (the individual components in a high-performance library).

In contrast, Exo 2 enables users to define new scheduling operations externally to the compiler, facilitating the creation of reusable scheduling libraries. Lead author Yuka Ikarashi, an MIT PhD student in electrical engineering and computer science and CSAIL affiliate, says that Exo 2 can reduce total schedule code by a factor of 100 and deliver performance competitive with state-of-the-art implementations on multiple different platforms, including Basic Linear Algebra Subprograms (BLAS) that power many machine learning applications. This makes it an attractive option for engineers in HPC focused on optimizing kernels across different operations, data types, and target architectures.

“It’s a bottom-up approach to automation, rather than doing an ML/AI search over high-performance code,” says Ikarashi. “What that means is that performance engineers and hardware implementers can write their own scheduling library, which is a set of optimization techniques to apply on their hardware to reach the peak performance.”

One major advantage of Exo 2 is that it reduces the amount of coding effort needed at any one time by reusing the scheduling code across applications and hardware targets. The researchers implemented a scheduling library with roughly 2,000 lines of code in Exo 2, encapsulating reusable optimizations that are linear-algebra specific and target-specific (AVX512, AVX2, Neon, and Gemmini hardware accelerators). This library consolidates scheduling efforts across more than 80 high-performance kernels with up to a dozen lines of code each, delivering performance comparable to, or better than, MKL, OpenBLAS, BLIS, and Halide.

Exo 2 includes a novel mechanism called “Cursors” that provides what the researchers call a “stable reference” for pointing at the object code throughout the scheduling process. Ikarashi says that a stable reference is essential for users to encapsulate schedules within a library function, as it renders the scheduling code independent of object-code transformations.
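
As a rough illustration of the idea, and not Exo 2’s actual interface, the toy below models a cursor as a stable, name-based handle: scheduling helpers look loops up by name each time they run, so the same helper keeps working after earlier rewrites have reshaped the code. Every name in this sketch is invented for the example.

```python
# Toy model (invented names, not the Exo 2 interface) of a cursor-like stable
# reference: scheduling helpers locate loops by name each time they run, so a
# library of reusable rewrites keeps working as the object code is transformed.

from dataclasses import dataclass, field

@dataclass
class Loop:
    name: str
    body: list = field(default_factory=list)   # nested Loops or statement strings

def find(loop, name):
    """Cursor-like lookup: a stable way to point at a loop inside the nest."""
    if loop.name == name:
        return loop
    for child in loop.body:
        if isinstance(child, Loop):
            hit = find(child, name)
            if hit is not None:
                return hit
    return None

def split(root, name):
    """A reusable scheduling op: wrap loop `name` in an outer/inner pair (tile size omitted in this toy)."""
    target = find(root, name)
    inner = Loop(name + "_inner", target.body)
    target.name, target.body = name + "_outer", [inner]
    return root

# The same helper applies to any kernel's loops, referenced by stable name.
nest = Loop("i", [Loop("j", ["C[i][j] += A[i][k] * B[k][j]"])])
split(nest, "i")
split(nest, "j")   # still found correctly after the first rewrite reshaped the nest
```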

“We believe that USLs should be designed to be user-extensible, rather than having a fixed set of operations,” says Ikarashi. “In this way, a language can grow to support large projects through the implementation of libraries that accommodate diverse optimization requirements and application domains.”

Exo 2’s design allows performance engineers to focus on high-level optimization strategies while ensuring that the underlying object code remains functionally equivalent through the use of safe primitives. In the future, the team hopes to expand Exo 2’s support for different types of hardware accelerators, like GPUs. Several ongoing projects aim to improve the compiler analysis itself, in terms of correctness, compilation time, and expressivity.

Ikarashi and Ragan-Kelley co-authored the paper with graduate students Kevin Qian and Samir Droubi, Alex Reinking of Adobe, and former CSAIL postdoc Gilbert Bernstein, now a professor at the University of Washington. This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and the U.S. National Science Foundation, while the first author was also supported by Masason, Funai, and Quad Fellowships.

MIT engineers turn skin cells directly into neurons for cell therapy

Thu, 03/13/2025 - 11:00am

Converting one type of cell to another — for example, a skin cell to a neuron — can be done through a process that requires the skin cell to be induced into a “pluripotent” stem cell, then differentiated into a neuron. Researchers at MIT have now devised a simplified process that bypasses the stem cell stage, converting a skin cell directly into a neuron.

Working with mouse cells, the researchers developed a conversion method that is highly efficient and can produce more than 10 neurons from a single skin cell. If replicated in human cells, this approach could enable the generation of large quantities of motor neurons, which could potentially be used to treat patients with spinal cord injuries or diseases that impair mobility.

“We were able to get to yields where we could ask questions about whether these cells can be viable candidates for the cell replacement therapies, which we hope they could be. That’s where these types of reprogramming technologies can take us,” says Katie Galloway, the W. M. Keck Career Development Professor in Biomedical Engineering and Chemical Engineering.

As a first step toward developing these cells as a therapy, the researchers showed that they could generate motor neurons and engraft them into the brains of mice, where they integrated with host tissue.

Galloway is the senior author of two papers describing the new method, which appear today in Cell Systems. MIT graduate student Nathan Wang is the lead author of both papers.

From skin to neurons

Nearly 20 years ago, scientists in Japan showed that by delivering four transcription factors to skin cells, they could coax them to become induced pluripotent stem cells (iPSCs). Similar to embryonic stem cells, iPSCs can be differentiated into many other cell types. This technique works well, but it takes several weeks, and many of the cells don’t end up fully transitioning to mature cell types.

“Oftentimes, one of the challenges in reprogramming is that cells can get stuck in intermediate states,” Galloway says. “So, we’re using direct conversion, where instead of going through an iPSC intermediate, we’re going directly from a somatic cell to a motor neuron.”

Galloway’s research group and others have demonstrated this type of direct conversion before, but with very low yields — fewer than 1 percent. In Galloway’s previous work, she used a combination of six transcription factors plus two other proteins that stimulate cell proliferation. Each of those eight genes was delivered using a separate viral vector, making it difficult to ensure that each was expressed at the correct level in each cell.

In the first of the new Cell Systems papers, Galloway and her students reported a way to streamline the process so that skin cells can be converted to motor neurons using just three transcription factors, plus the two genes that drive cells into a highly proliferative state.

Using mouse cells, the researchers started with the original six transcription factors and experimented with dropping them out, one at a time, until they reached a combination of three — NGN2, ISL1, and LHX3 — that could successfully complete the conversion to neurons.

Once the number of genes was down to three, the researchers could use a single modified virus to deliver all three of them, allowing them to ensure that each cell expresses each gene at the correct levels.

Using a separate virus, the researchers also delivered genes encoding p53DD and a mutated version of HRAS. These genes drive the skin cells to divide many times before they start converting to neurons, allowing for a much higher yield of neurons: about 1,100 percent, or roughly 11 neurons for every starting skin cell.

“If you were to express the transcription factors at really high levels in nonproliferative cells, the reprogramming rates would be really low, but hyperproliferative cells are more receptive. It’s like they’ve been potentiated for conversion, and then they become much more receptive to the levels of the transcription factors,” Galloway says.

The researchers also developed a slightly different combination of transcription factors that allowed them to perform the same direct conversion using human cells, but with a lower efficiency rate — between 10 and 30 percent, the researchers estimate. This process takes about five weeks, which is slightly faster than converting the cells to iPSCs first and then turning them into neurons.

Implanting cells

Once the researchers identified the optimal combination of genes to deliver, they began working on the best ways to deliver them, which was the focus of the second Cell Systems paper.

They tried out three different delivery viruses and found that a retrovirus achieved the most efficient rate of conversion. Reducing the density of cells grown in the dish also helped to improve the overall yield of motor neurons. This optimized process, which takes about two weeks in mouse cells, achieved a yield of more than 1,000 percent.

Working with colleagues at Boston University, the researchers then tested whether these motor neurons could be successfully engrafted into mice. They delivered the cells to a part of the brain known as the striatum, which is involved in motor control and other functions.

After two weeks, the researchers found that many of the neurons had survived and seemed to be forming connections with other brain cells. When grown in a dish, these cells showed measurable electrical activity and calcium signaling, suggesting the ability to communicate with other neurons. The researchers now hope to explore the possibility of implanting these neurons into the spinal cord.

The MIT team also hopes to increase the efficiency of this process for human cell conversion, which could allow for the generation of large quantities of neurons that could be used to treat spinal cord injuries or diseases that affect motor control, such as ALS. Clinical trials using neurons derived from iPSCs to treat ALS are now underway, but expanding the number of cells available for such treatments could make it easier to test and develop them for more widespread use in humans, Galloway says.

The research was funded by the National Institute of General Medical Sciences and the National Science Foundation Graduate Research Fellowship Program.

Five ways to succeed in sports analytics

Thu, 03/13/2025 - 12:00am

Sports analytics is fueled by fans, and funded by teams. The 19th annual MIT Sloan Sports Analytics Conference (SSAC), held last Friday and Saturday, showed more clearly than ever how both groups can join forces.

After all, for decades, the industry’s main energy source has been fans weary of bad strategies: too much bunting in baseball, too much punting in football, and more. The most enduring analytics icon, Bill James, was a teacher and night watchman until his annual “Baseball Abstract” books began to upend a century of conventional wisdom in the 1980s. After that, sports analytics became a profession.

Meanwhile, franchise valuations keep rising, women’s sports are booming, and U.S. college sports are professionalizing. All of it should create more analytics jobs, as “Moneyball” author Michael Lewis noted during a Friday panel.

“This whole analytics movement is a byproduct of the decisions becoming really expensive decisions,” Lewis said. “It didn’t matter if you got it wrong if you were paying someone $50,000 a year. But if you’re going to pay them $50 million, you better get it right. So, all of a sudden, someone who can give you a little bit more of an edge in that decision-making has more value.”

Would you like to be a valued sports analytics professional? Here are five ideas, gleaned from MIT’s industry-leading event, about how to gain traction in the field.

1. You can jump into this industry.

Bill James, as it happens, was the first speaker on the opening Friday-morning panel at SSAC, held at the Hynes Convention Center in Boston. His theme: the value of everyone’s work, since today’s amateurs become tomorrow’s professionals.

“Time will reveal that the people doing really important work here are not the people sitting on the stages, but the people in the audience,” James said.

This year, the conference drew 2,500 attendees from 44 U.S. states, 42 countries, and over 220 academic institutions, along with dozens of panels, a research paper competition, and thousands of hallway conversations among networking attendees. SSAC was co-founded in 2007 by Daryl Morey SM ’00, president of basketball operations for the Philadelphia 76ers, and Jessica Gelman, CEO of KAGR, the Kraft Analytics Group. The first three conferences were held in MIT classrooms.

But even now, sports analytics remains largely a grassroots thing. Why? Because fans can observe sports intensively, without being bound to its conventions, then study it quantitatively.

“The driving thing for a lot of people is they want to take this [analytical] way of thinking and apply it to sports,” soccer journalist Ryan O’Hanlon of ESPN said to MIT News, in one of those hallway conversations.

O’Hanlon’s 2022 book, “Net Gains,” chronicles the work of several people who held non-sports jobs, made useful advances in soccer analytics, then jumped into the industry. Soon, the sport may have more landing spots, between the growth of Major League Soccer in the U.S. and women’s soccer everywhere. Also, in O’Hanlon’s estimation, only three of the 20 clubs in England’s Premier League are deeply invested in analytics: Brentford, Brighton, and (league-leading) Liverpool. That could change.

In any case, most of the people who leap from fandom to professional status are willing to examine issues that others take for granted.

“I think it’s not being afraid to question the way everyone is doing things,” O’Hanlon added. “Whether that’s how a game is played, how we acquire players, how we think about anything. Pretty much anyone who gets to a high level and has impact [in analytics] has asked those questions and found a way to answer some.”

2. Make friends with the video team.

Suppose you love a sport, start analyzing it, produce good work that gets some attention, and — jackpot! — get hired by a pro team to do analytics.

Well, as former NBA player Shane Battier pointed out during a basketball panel at SSAC, you still won’t spend any time talking to players about your beloved data. That just isn’t how professional teams work, not even stat-savvy ones.

But there is good news: Analysts can still reach coaches and athletes through skilled use of video clips. Most European soccer managers ignore data, but will pay attention to the team’s video analysts. Basketball coaches love video. In American football, film study is essential. And technology has made it easier than ever to link data to video clips.

So analysts should become buddies with the video group. Importantly, analytics professionals now grasp this better than ever, something evident at SSAC across sports.

“Video in football [soccer] is the best way to communicate and get on the same page,” said Sarah Rudd, co-founder and CTO of src | ftbl, and a former analyst for Arsenal, at Friday’s panel on soccer analytics.

3. Seek opportunities in women’s sports analytics.

Have we mentioned that women’s sports are booming? The WNBA is expanding, the size of the U.S. transfer market in women’s soccer has doubled for three straight years, and you can now find women’s college volleyball in a basic cable package.

That growth is starting to fund greater data collection, in the WNBA and elsewhere, a frequent conversation topic at SSAC.

As Jennifer Rizzotti, president of the WNBA’s Connecticut Sun, noted of her own playing days in the 1990s: “We didn’t have statistics, we didn’t have [opponents’] tendencies that were being explained to us. So, when I think of what players have access to now and how far we’ve come, it’s really impressive.” And yet, she added, the amount of data in men’s basketball remains well ahead of the women’s game: “It gives you an awareness of how far we have to go.”

Some women’s sports still lack the cash needed for basic analytics infrastructure. One Friday panelist, LPGA golfer Stacy Lewis, a 13-time winner on tour, noted that the popular ball-tracking analytics system used in men’s golf costs $1 million per week, beyond budget for the women’s game.

And at a Saturday panel, Gelman said that full data parity between men’s and women’s sports was not imminent. “Sadly, I think we’re years away because we just need more investment into it,” she said.

But there is movement. At one Saturday talk, data developer Charlotte Eisenberg detailed how the website Sports Reference — a key resource of free public data — has been adding play-by-play data for WNBA games. Such data can help in evaluating individual players, particularly over long time periods, and has long been available for NBA games.

In short, as women’s sports grow, their analytics opportunities will, too.

4. Don’t be daunted by someone’s blurry “eye test.”

A subtle trip-wire in sports analytics, even at SSAC, is the idea that analytics should match the so-called “eye test,” or seemingly intuitive sports observations.

Here’s the problem: There is no one “eye test” in any sport, because people’s intuitions differ. For some basketball coaches, an unselfish role player stands out. To others, a flashy off-the-dribble shooter passes the eye test, even without a high shooting percentage. That tension would exist even if statistics did not.

Enter analytics, which confirms the high value of efficient shooting (as well as old-school virtues like defense, rebounding, and avoiding turnovers). But in a twist, the definition of a good shot in basketball has famously changed. In 1979-80, the NBA introduced the three-point line; in 1985, teams were attempting 3.1 three-pointers per game; now, in 2024-25, teams are averaging 37.5 three-point attempts per game, with great efficiency. What happened?

“People didn’t use [the three-point shot] well at the beginning,” Morey said on a Saturday panel, quipping that “they were too dumb to know that three is greater than two.”

Granted, players weren’t used to shooting threes in 1980. But it also took a long time to change intuitions in the sport. Today, analytics shows that a contested three-pointer is a higher-value shot than an open 18-foot two-pointer. That might still run counter to someone’s “eye test.”
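
The arithmetic behind that claim is straightforward expected value: points per attempt equal make probability times shot value. The shooting percentages below are rough, illustrative figures, not official statistics.

```python
# Rough, illustrative shooting percentages (not official statistics): expected
# points per attempt is make probability times shot value.

def expected_points_per_attempt(make_prob, shot_value):
    return make_prob * shot_value

print(expected_points_per_attempt(0.33, 3))   # contested three: ~0.99 points per attempt
print(expected_points_per_attempt(0.42, 2))   # open 18-footer:  ~0.84 points per attempt
```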

Incidentally, always following analytically informed coaching might also lead to a more standardized, less interesting game, as Morey and basketball legend Sue Bird suggested at the same panel.

“There’s a little bit of instinct that is now removed from the game,” Bird said. Shooting threes makes sense, she concurred, but “You’re only focused on the three-point line, and it takes away all the other things.”

5. Think about absolute truths, but solve for current tactics. 

Bill James set the bar high for sports analytics: His breakthrough equation, “runs created,” described how baseball works with almost Newtonian simplicity. In its basic form, team runs are roughly the product of times on base (hits plus walks) and total bases, divided by plate appearances. This applies to individual players, too.
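
For readers who want to try the basic formula, here is a direct translation into code, using at-bats plus walks as the stand-in for plate appearances; the season line in the example is invented for illustration, not a real team’s statistics.

```python
def runs_created(hits, walks, total_bases, at_bats):
    """Basic 'runs created': (hits + walks) * total bases / (at-bats + walks)."""
    return (hits + walks) * total_bases / (at_bats + walks)

# Invented season line for illustration only, not a real team's statistics:
print(round(runs_created(hits=1400, walks=500, total_bases=2300, at_bats=5500)))  # ~728 runs
```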

But it’s almost impossible to replicate that kind of fundamental formula in other sports.

“I think in soccer there’s still a ton to learn about how the game works,” O’Hanlon told MIT News. Should a team patiently build possession, play long balls, or press up high? And how do we value players with wildly varying roles?

That sometimes leads to situations where, O’Hanlon notes, “No one really knows the right questions that the data should be asking, because no one really knows the right way to play soccer.”

Happily, the search for underlying truths can also produce some tactical insights. Consider one of the three finalists in the conference’s research paper competition, “A Machine Learning Approach to Player Value and Decision Making in Professional Ultimate Frisbee,” by Braden Eberhard, Jacob Miller, and Nathan Sandholtz.

In it, the authors examine playing patterns in ultimate, seeing if teams score more by using a longer string of higher-percentage short-range passes, or by trying longer, high-risk throws. They found that players tend to try higher-percentage passes, although there is some variation, including among star players. That suggests tactical flexibility matters. If the defense is trying to take away short passes, throw long sometimes.
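
The paper estimates these quantities with machine learning from game data; the sketch below is only a back-of-the-envelope version of the same risk-reward comparison, with invented completion probabilities and the simplifying assumption that a turnover is worth zero.

```python
# Back-of-the-envelope version of the short-vs-long tradeoff; the completion
# probabilities are invented, and a turnover is treated as worth zero points.

def chain_value(per_pass_completion, n_passes, score_value=1.0):
    """Expected value of a possession that needs n completions in a row."""
    return (per_pass_completion ** n_passes) * score_value

five_short = chain_value(0.95, n_passes=5)   # ~0.77 expected points
one_deep   = chain_value(0.60, n_passes=1)   # ~0.60 expected points
print(f"five short passes: {five_short:.2f} vs one deep throw: {one_deep:.2f}")
```

Under these invented numbers the patient chain still comes out ahead, but the gap closes quickly if a defense drives down per-pass completion rates, which is the case for mixing in the occasional long throw.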

It is a classic sports issue: The right way to play often depends on how your opponent is playing. In the search for ultimate truths, analysts can reveal the usefulness of short-term tactics. That helps teams win, which helps analytics types stay employed. But none of this would come to light if analysts weren’t digging into the sports they love, searching for answers and trying to let the world know what they find.

“There is nothing happening here that will change your life if you don’t follow through on it,” James said. “But there are many things happening here that will change your life if you do.” 

Making airfield assessments automatic, remote, and safe

Thu, 03/13/2025 - 12:00am

In 2022, Randall Pietersen, a civil engineer in the U.S. Air Force, set out on a training mission to assess damage at an airfield runway, practicing “base recovery” protocol after a simulated attack. For hours, his team walked over the area in chemical protection gear, radioing in geocoordinates as they documented damage and looked for threats like unexploded munitions.

The work is standard for all Air Force engineers before they deploy, but it held special significance for Pietersen, who has spent the last five years developing faster, safer approaches for assessing airfields as a master’s student and now a PhD candidate and MathWorks Fellow at MIT. For Pietersen, the time-intensive, painstaking, and potentially dangerous work underscored the potential for his research to enable remote airfield assessments.

“That experience was really eye-opening,” Pietersen says. “We’ve been told for almost a decade that a new, drone-based system is in the works, but it is still limited by an inability to identify unexploded ordnances; from the air, they look too much like rocks or debris. Even ultra-high-resolution cameras just don’t perform well enough. Rapid and remote airfield assessment is not the standard practice yet. We’re still only prepared to do this on foot, and that’s where my research comes in.”

Pietersen’s goal is to create drone-based automated systems for assessing airfield damage and detecting unexploded munitions. This has taken him down a number of research paths, from deep learning to small uncrewed aerial systems to “hyperspectral” imaging, which captures passive electromagnetic radiation across a broad spectrum of wavelengths. Hyperspectral imaging is getting cheaper, faster, and more durable, which could make Pietersen’s research increasingly useful in a range of applications including agriculture, emergency response, mining, and building assessments.
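
For readers unfamiliar with the data type, here is a minimal sketch, unrelated to Pietersen’s actual system: a hyperspectral image is a three-dimensional cube (height by width by spectral bands), so every pixel carries a full spectrum rather than three RGB values, and even a crude per-pixel comparison against reference spectra can separate materials that look identical to an ordinary camera. The cube, the reference spectra, and the material names below are synthetic placeholders.

```python
# Minimal sketch, not the actual system described here: a hyperspectral image is a
# 3D cube (height x width x spectral bands), and a simple per-pixel comparison
# against reference spectra can label materials.

import numpy as np

rng = np.random.default_rng(0)
H, W, BANDS = 64, 64, 100                     # synthetic cube dimensions
cube = rng.normal(size=(H, W, BANDS))         # stand-in for a calibrated scene

# Hypothetical reference spectra; in practice these would come from field
# measurements or labeled training data.
references = {
    "soil":  rng.normal(size=BANDS),
    "metal": rng.normal(size=BANDS),
}

def classify_pixel(spectrum, references):
    """Label a pixel by cosine similarity to each reference spectrum."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(references, key=lambda name: cos(spectrum, references[name]))

labels = np.array([[classify_pixel(cube[i, j], references) for j in range(W)]
                   for i in range(H)])
print(labels.shape)  # (64, 64): one material label per pixel
```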

Finding computer science and community

Growing up in a suburb of Sacramento, California, Pietersen gravitated toward math and physics in school. But he was also a cross country athlete and an Eagle Scout, and he wanted a way to put his interests together.

“I liked the multifaceted challenge the Air Force Academy presented,” Pietersen says. “My family doesn’t have a history of serving, but the recruiters talked about the holistic education, where academics were one part, but so was athletic fitness and leadership. That well-rounded approach to the college experience appealed to me.”

Pietersen majored in civil engineering as an undergrad at the Air Force Academy, where he first began learning how to conduct academic research. This required him to learn a little bit of computer programming.

“In my senior year, the Air Force research labs had some pavement-related projects that fell into my scope as a civil engineer,” Pietersen recalls. “While my domain knowledge helped define the initial problems, it was very clear that developing the right solutions would require a deeper understanding of computer vision and remote sensing.”

The projects, which dealt with airfield pavement assessments and threat detection, also led Pietersen to start using hyperspectral imaging and machine learning, which he built on when he came to MIT to pursue his master’s and PhD in 2020.

“MIT was a clear choice for my research because the school has such a strong history of research partnerships and multidisciplinary thinking that helps you solve these unconventional problems,” Pietersen says. “There’s no better place in the world than MIT for cutting-edge work like this.”

By the time Pietersen got to MIT, he’d also embraced extreme sports like ultra-marathons, skydiving, and rock climbing. Some of that stemmed from his participation in infantry skills competitions as an undergrad. The multiday competitions are military-focused races in which teams from around the world traverse mountains and perform graded activities like tactical combat casualty care, orienteering, and marksmanship.

“The crowd I ran with in college was really into that stuff, so it was sort of a natural consequence of relationship-building,” Pietersen says. “These events would run you around for 48 or 72 hours, sometimes with some sleep mixed in, and you get to compete with your buddies and have a good time.”

Since coming to MIT with his wife and two children, Pietersen has embraced the local running community and even worked as an indoor skydiving instructor in New Hampshire, though he admits the East Coast winters have been tough for him and his family to adjust to.

Pietersen went remote from 2022 to 2024, but he wasn’t doing his research from the comfort of a home office. The training that showed him the reality of airfield assessments took place in Florida, and then he was deployed to Saudi Arabia. He happened to write one of his PhD journal publications from a tent in the desert.

Now back at MIT and nearing the completion of his doctorate this spring, Pietersen is thankful for all the people who have supported him throughout his journey.

“It has been fun exploring all sorts of different engineering disciplines, trying to figure things out with the help of all the mentors at MIT and the resources available to work on these really niche problems,” Pietersen says.

Research with a purpose

In the summer of 2020, Pietersen did an internship with the HALO Trust, a humanitarian organization working to clear landmines and other explosives from areas impacted by war. The experience demonstrated another powerful application for his work at MIT.

“We have post-conflict regions around the world where kids are trying to play and there are landmines and unexploded ordnances in their backyards,” Pietersen says. “Ukraine is a good example of this in the news today. There are always remnants of war left behind. Right now, people have to go into these potentially dangerous areas and clear them, but new remote-sensing techniques could speed that process up and make it far safer.”

Although Pietersen’s master’s work primarily revolved around assessing normal wear and tear of pavement structures, his PhD has focused on ways to detect unexploded ordnance and more severe damage.

“If the runway is attacked, there would be bombs and craters all over it,” Pietersen says. “This makes for a challenging environment to assess. Different types of sensors extract different kinds of information and each has its pros and cons. There is still a lot of work to be done on both the hardware and software side of things, but so far, hyperspectral data appears to be a promising discriminator for deep learning object detectors.”

After graduation, Pietersen will be stationed in Guam, where Air Force engineers regularly perform the same airfield assessment simulations he participated in in Florida. He hopes someday soon, those assessments will be done not by humans in protective gear, but by drones.

“Right now, we rely on visible lines of sight,” Pietersen says. “If we can move to spectral imaging and deep-learning solutions, we can finally conduct remote assessments that make everyone safer.”

2025 MacVicar Faculty Fellows named

Thu, 03/13/2025 - 12:00am

Three outstanding educators have been named MacVicar Faculty Fellows: associate professor in comparative media studies/writing Paloma Duong, associate professor of economics Frank Schilbach, and associate professor of urban studies and planning Justin Steil.

For more than 30 years, the MacVicar Faculty Fellows Program has recognized exemplary and sustained contributions to undergraduate education at MIT. The program is named in honor of Margaret MacVicar, MIT’s first dean for undergraduate education and founder of the Undergraduate Research Opportunities Program. Fellows are chosen through a highly competitive, annual nomination process. The MIT Registrar’s Office coordinates and administers the award on behalf of the Office of the Vice Chancellor; nominations are reviewed by an advisory committee, and final selections are made by the provost.

Paloma Duong: Equipping students with a holistic, global worldview

Paloma Duong is the Ford International Career Development Associate Professor of Latin American and Media Studies. Her work has helped to reinvigorate Latin American subject offerings, increase the number of Spanish minors, and build community at the Institute.

Duong brings an interdisciplinary perspective to teaching Latin American culture in dialogue with media theory and political philosophy in the Comparative Media Studies/Writing (CMS/W) program. Her approach is built on a foundation of respect for each student’s unique academic journey and underscores the importance of caring for the whole student, honoring where they can go as intellectuals, and connecting them to a world bigger than themselves.

Senior Alex Wardle says that Professor Duong “broadened my worldview and made me more receptive to new concepts and ideas … her class has deepened my critical thinking skills in a way that very few other classes at MIT have even attempted to.”

Duong’s Spanish language classes and seminars incorporate a wide range of practices — including cultural analyses, artifacts, guest speakers, and hands-on multimedia projects — to help students engage with the material, think critically, and challenge preconceived notions while learning about Latin American history. CMS/W head and professor of science writing Seth Mnookin notes, “students become conversant with region-specific vocabularies, worldviews, and challenges.” This approach makes students feel “deeply respected” and treats them as “learning partners — interlocutors in their own right,” observes Bruno Perreau, the Cynthia L. Reed Professor of French Studies and Language.

Outside the classroom, Duong takes the time to mentor and get to know students by supporting and attending programs connected to MIT Cubanos, Cena a las Seis, and Global Health Alliance. She also serves as an advisor for comparative media studies and Spanish majors, is the undergraduate officer for CMS/W, and is a member of the School of Humanities, Arts, and Social Sciences Education Advisory Committee and the Committee on Curricula.

“Subject areas like Spanish and Latin American Studies play an important role at MIT,” writes T.L. Taylor, professor in comparative media studies/writing and MacVicar Faculty Fellow. “Students find a sense of community and support in these spaces, something that should be at the heart of our attention more than ever these days. We are lucky to have such a dynamic and engaged educator like Professor Duong.”

On receiving this award, Duong says, “I’m positively elated! I’m very grateful to my students and colleagues for the nomination and am honored to become part of such a remarkable group of fellow teachers and mentors. Teaching undergraduates at MIT is always a beautiful challenge and an endless source of learning; I feel super lucky to be in this position.”

Frank Schilbach: Bringing energy and excitement to the curriculum

Frank Schilbach is the Gary Loveman Career Development Associate Professor of Economics. His connection and dedication to undergraduates, combined with his efforts in communicating the importance of economics as a field of study, were key components in the revitalization of Course 14.

When Schilbach arrived at MIT in 2015, there were only three sophomore economics majors. “A less committed teacher would have probably just taken it as a given and got on with their research,” writes professor of economics Abhijit Banerjee. “Frank, instead, took it as a challenge … his patient efforts in convincing students that they need to make economics a part of their general education was a key reason why innovations [to broaden the major] succeeded. The department now has more than 40 sophomores.”

In addition to bolstering enrollment, Schilbach had a hand in curricular improvements. Among them, he created a “next step” for students completing class 14.01 (Principles of Microeconomics) with a revised class 14.13 (Psychology and Economics) that goes beyond classic topics in behavioral economics to explore links with poverty, mental health, happiness, and identity.

Even more significant is the thoughtful and inclusive approach to teaching that Schilbach brings. “He is considerate and careful, listening to everyone, explaining concepts while making students understand that we care about them … it is just a joy to see how the students revel in the activities and the learning,” writes Esther Duflo, the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics. Erin Grela ’20 notes, “Professor Schilbach goes above and beyond to solicit student feedback so that he can make real-time changes to ensure that his classes are serving his students as best they can.”

His impacts extend beyond MIT as well. Professor of economics David Atkin writes: “Many of these students are inspired by their work with Frank to continue their studies at the graduate level, with an incredible 29 of his students going on to PhD studies at many of the best programs in the country. For someone who has only recently been promoted to a tenured professor, this is a remarkable record of advising.”

“I am delighted to be selected as a MacVicar Fellow,” says Schilbach. “I am thrilled that students find my courses valuable, and it brings me great joy to think that my teaching may help some students improve their well-being and inspire them to use their incredible talents to better the lives of others.”

Justin Steil: Experiential learning meets public service

“I am honored to join the MacVicar Faculty Fellows,” writes associate professor of law and urban planning Justin Steil. “I am deeply grateful to have the chance to teach and learn with such hard-working and creative students who are enthusiastic about collaborating to discover new knowledge and solve hard problems, in the classroom and beyond.”

Professor Steil uses his background as a lawyer, a sociologist, and an urban planner to combine experiential learning with opportunities for public service. In class 11.469 (Urban Sociology in Theory and Practice), he connects students with incarcerated individuals to examine inequality at one of the state’s largest prisons, MCI Norfolk. In another undergraduate seminar, students meet with leaders of local groups like GreenRoots in Chelsea, Massachusetts; Alternatives for Community and Environment in Roxbury, Massachusetts; and the Dudley Street Neighborhood Initiative in Roxbury to work on urban environmental hazards. Ford Professor of Urban Design and Planning and MacVicar Faculty Fellow Lawrence Vale calls Steil’s classes “life-altering.”

In addition to teaching, Steil is also a paramedic and has volunteered as an EMT for MIT Emergency Medical Service (EMS), where he continues to transform routine activities into teachable moments. “There are numerous opportunities at MIT to receive mentorship and perform research. Justin went beyond that. My conversations with Justin have inspired me to go to graduate school to research medical devices in the EMS context,” says Abigail Schipper ’24.

“Justin is truly devoted to the complete education of our undergraduate students in ways that meaningfully serve the broader MIT community as well as the residents of Cambridge and Boston,” says Andrew (1956) and Erna Viterbi Professor of Biological Engineering Katharina Ribbeck. Miho Mazereeuw, associate professor of architecture and urbanism and director of the Urban Risk Lab, concurs: “through his teaching, advising, mentoring, and connections with community-based organizations and public agencies, Justin has knit together diverse threads into a coherent undergraduate experience.”

Student testimonials also highlight Steil’s ability to make each student feel special by delivering undivided attention and individualized mentorship. A former student writes: “I was so grateful to have met an instructor who believed in his students so earnestly … despite being one of the busiest people I’ve ever known, [he] … unerringly made the students he works with feel certain that he always has time for them.”

Since joining MIT in 2015, Steil has received a Committed to Caring award in 2018; the Harold E. Edgerton Award for exceptional contributions in research, teaching, and service in 2021; and a First Year Advising Award from the Office of the First Year in 2022.

Learn more about the MacVicar Faculty Fellows Program on the Registrar’s Office website. 

QS World University Rankings rates MIT No. 1 in 11 subjects for 2025

Wed, 03/12/2025 - 6:00am

QS World University Rankings has placed MIT in the No. 1 spot in 11 subject areas for 2025, the organization announced today.

The Institute received a No. 1 ranking in the following QS subject areas: Chemical Engineering; Civil and Structural Engineering; Computer Science and Information Systems; Data Science and Artificial Intelligence; Electrical and Electronic Engineering; Linguistics; Materials Science; Mechanical, Aeronautical, and Manufacturing Engineering; Mathematics; Physics and Astronomy; and Statistics and Operational Research.

MIT also placed second in seven subject areas: Accounting and Finance; Architecture/Built Environment; Biological Sciences; Business and Management Studies; Chemistry; Earth and Marine Sciences; and Economics and Econometrics.

For 2025, universities were evaluated in 55 specific subjects and five broader subject areas. MIT was ranked No. 1 in the broader subject area of Engineering and Technology and No. 2 in Natural Sciences.

Quacquarelli Symonds Limited subject rankings, published annually, are designed to help prospective students find the leading schools in their field of interest. Rankings are based on research quality and accomplishments, academic reputation, and graduate employment.

MIT has been ranked as the No. 1 university in the world by QS World University Rankings for 13 straight years.

Want to climb the leadership ladder? Try debate training

Wed, 03/12/2025 - 12:00am

For those looking to climb the corporate ladder in the U.S., here’s an idea you might not have considered: debate training.

According to a new research paper, people who learn the basics of debate are more likely to advance to leadership roles in U.S. organizations, compared to those who do not receive this training. One key reason is that being equipped with debate skills makes people more assertive in the workplace.

“Debate training can promote leadership emergence and advancement by fostering individuals’ assertiveness, which is a key, valued leadership characteristic in U.S. organizations,” says MIT Associate Professor Jackson Lu, one of the scholars who conducted the study.

The research is based on two experiments and provides empirical insights into leadership development, a subject more often discussed anecdotally than studied systematically.

“Leadership development is a multi-billion-dollar industry, where people spend a lot of money trying to help individuals emerge as leaders,” Lu says. “But the public doesn’t actually know what would be effective, because there hasn’t been a lot of causal evidence. That’s exactly what we provide.”

The paper, “Breaking Ceilings: Debate Training Promotes Leadership Emergence by Increasing Assertiveness,” was published Monday in the Journal of Applied Psychology. The authors are Lu, an associate professor at the MIT Sloan School of Management; Michelle X. Zhao, an undergraduate student at the Olin Business School of Washington University in St. Louis; Hui Liao, a professor and assistant dean at the University of Maryland’s Robert H. Smith School of Business; and Lu Doris Zhang, a doctoral student at MIT Sloan.

Assertiveness in the attention economy

The researchers conducted two experiments. In the first, 471 employees in a Fortune 100 firm were randomly assigned to receive either nine weeks of debate training or no training. Examined 18 months later, those receiving debate training were more likely to have advanced to leadership roles, by about 12 percentage points. This effect was statistically explained by increased assertiveness among those with debate training.

The second experiment, conducted with 975 university participants, further tested the causal effects of debate training in a controlled setting. Participants were randomly assigned to receive debate training, an alternative non-debate training, or no training. Consistent with the first experiment, participants receiving the debate training were more likely to emerge as leaders in subsequent group activities, an effect statistically explained by their increased assertiveness.
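
The phrase “statistically explained by” refers to a mediation-style analysis: training should raise assertiveness, assertiveness should predict leadership emergence, and controlling for assertiveness should shrink the direct effect of training. Below is a minimal sketch of that regression logic on synthetic data, using ordinary least squares; it is a simplified stand-in, not the study’s data or exact statistical model.

```python
# Minimal sketch on synthetic data (not the study's data or exact model) of the
# mediation logic behind "statistically explained by increased assertiveness":
# training raises the mediator, the mediator predicts the outcome, and the direct
# training coefficient shrinks once the mediator is controlled for.

import numpy as np

rng = np.random.default_rng(1)
n = 1000
trained = rng.integers(0, 2, n)                          # random assignment to debate training
assertiveness = 0.8 * trained + rng.normal(size=n)       # mediator
leadership = 0.5 * assertiveness + rng.normal(size=n)    # outcome driven only through the mediator

def slopes(y, *xs):
    """Ordinary-least-squares slope estimates for y ~ intercept + xs."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

print("training -> assertiveness:", slopes(assertiveness, trained))
print("training -> leadership (total effect):", slopes(leadership, trained))
print("training -> leadership (controlling for assertiveness):", slopes(leadership, trained, assertiveness))
```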

“The inclusion of a non-debate training condition allowed us to causally claim that debate training, rather than just any training, improved assertiveness and increased leadership emergence,” Zhang says. 

To some people, increasing assertiveness might not seem like an ideal recipe for success in an organizational setting, as it might seem likely to increase tensions or decrease cooperation. But as the authors note, the American Psychological Association conceptualizes assertiveness as “an adaptive style of communication in which individuals express their feelings and needs directly, while maintaining respect for others.”

Lu adds: “Assertiveness is conceptually different from aggressiveness. To speak up in meetings or classrooms, people don’t need to be aggressive jerks. You can ask questions politely, yet still effectively express opinions. Of course, that’s different from not saying anything at all.”

Moreover, in the contemporary world where we all must compete for attention, refined communication skills may be more important than ever.

“Whether it is cutting filler or mastering pacing, knowing how to assert our opinions helps us sound more leader-like,” Zhang says.

How firms identify leaders

The research also finds that debate training benefits people across demographics: Its impact was not significantly different for men or women, for those born in the U.S. or outside it, or for different ethnic groups.

However, the findings raise still other questions about how firms identify leaders. As the results show, individuals might have incentive to seek debate training and other general workplace skills. But how much responsibility do firms have to understand and recognize the many kinds of skills, beyond assertiveness, that employees may have?

“We emphasize that the onus of breaking leadership barriers should not fall on individuals themselves,” Lu says. “Organizations should also recognize and appreciate different communication and leadership styles in the workplace.”

Lu also notes that ongoing work is needed to understand whether firms are properly valuing the attributes of their own leaders.

“There is an important distinction between leadership emergence and leadership effectiveness,” Lu says. “Our paper looks at leadership emergence. It’s possible that people who are better listeners, who are more cooperative, and humbler, should also be selected for leadership positions because they are more effective leaders.”

This research was partly funded by the Society for Personality and Social Psychology.

Building trust in science through conversation and empathy

Wed, 03/12/2025 - 12:00am

How do we foster trust in science in an increasingly polarized world? A group including scientists, journalists, policymakers and more gathered at MIT on March 10 to discuss how to bridge the gap between scientific expertise and understanding.

The conference, titled “Building Trust in Science for a More Informed Future,” was organized by the MIT Press and the nonprofit Aspen Institute’s Science and Society Program. It featured talks about the power of storytelling, the role of social media and generative artificial intelligence in our information landscape, and why discussions about certain science topics can become so emotionally heated.

A common theme was the importance of empathy between science communicators and the public.

“The idea that disagreement is often seen as disrespect is insightful,” said MIT’s Ford Professor of Political Science Lily Tsai. “One way to communicate respect is genuine curiosity along with the willingness to change one’s mind. We’re often focused on the facts and evidence and saying, ‘Don’t you understand the facts?’ But the ideal conversation is more like, ‘You value ‘x.’ Tell me why you value ‘x’ and let’s see if we can connect on how the science and research helps you to fulfill those values, even if I don’t agree with them.’”

Many participants discussed the threat of misinformation, a problem exacerbated by the emergence of social media and generative AI. But it’s not all bad news for the scientific community. MIT Provost Cindy Barnhart opened the event by citing surveys showing a high level of trust broadly in scientists across the globe. Still, she also pointed to a U.S. survey showing communication was seen as an area of relative weakness for scientists.

Barnhart noted MIT’s long commitment to science communication and commended communication efforts affiliated with MIT including MIT Press, MIT Technology Review, and MIT News.

“We’re working hard to communicate the value of science to society as we fight to build public support for the scientific research, discovery, and evidence that is needed in our society,” Barnhart said. “At MIT, an essential way we do that is by shining a bright light on the groundbreaking work of our faculty, research scientists, staff, postdocs, and students.”

Another theme was the importance of storytelling in science communication, and participants including the two keynote speakers offered plenty of their own stories. Francis Collins, who directed the National Institutes of Health between 2009 and 2021, and Sudanese climate journalist Lina Yassin delivered a joint keynote address moderated by MIT Vice President for Communications Alfred Ironside.

Recalling his time leading the NIH through the Covid-19 pandemic, Collins said the Covid-19 vaccine development was a major success, but the scientific community failed to explain to the public the way science evolves based on new evidence.

“We missed a chance to use the pandemic as a teachable moment,” Collins said. “In March of 2020, we were just starting to learn about the virus and how it spread, but we had to make recommendations to the public, which would often change a month or two later. So people began to doubt the information they were getting was reliable because it kept changing. If you’re in a circumstance where you’re communicating scientific evidence, start by saying, ‘This is a work in progress.’”

Collins said the government should have had a better plan for communicating information to the public when the pandemic started.

“Our health system was badly broken at the time because it had been underinvested in for far too long, so community-based education wasn’t really possible,” Collins said, noting his agency should have done more to empower physicians who were trusted voices in rural communities. “Far too much of our communication was top down.”

In her keynote address, Yassin shared her experience trying to get people in her home country to evacuate ahead of natural disasters. She said many people initially ignored her advice, citing their faith in God’s plan for them. But when she reframed her messaging to incorporate the teachings of Islam, a religion most of the country practices, she said people were much more receptive.

That was another recurring lesson participants shared: Science discussions don’t occur in a vacuum. Any conversation that ignores a person’s existing values and experiences will be less effective.

“Personal experience, as well as personal faith and belief, are critically important filters that we encounter every time we talk to people about science,” Ironside said.

Making solar projects cheaper and faster with portable factories

Wed, 03/12/2025 - 12:00am

As the price of solar panels has plummeted in recent decades, installation costs have taken up a greater share of the technology’s overall price tag. The long installation process for solar farms is also emerging as a key bottleneck in the deployment of solar energy.

Now the startup Charge Robotics is developing solar installation factories to speed up the process of building large-scale solar farms. The company’s factories are shipped to the site of utility solar projects, where components including tracks, mounting brackets, and panels are fed into the system and automatically assembled. A robotic vehicle autonomously puts the finished product — which amounts to a completed section of solar farm — in its final place.

“We think of this as the Henry Ford moment for solar,” says CEO Banks Hunter ’15, who founded Charge Robotics with fellow MIT alumnus Max Justicz ’17. “We’re going from a very bespoke, hands-on, manual installation process to something much more streamlined and set up for mass manufacturing. There are all kinds of benefits that come along with that, including consistency, quality, speed, cost, and safety.”

Last year, solar energy accounted for 81 percent of new electric capacity in the U.S., and Hunter and Justicz see their factories as necessary for continued acceleration in the industry.

The founders say they were met with skepticism when they first unveiled their plans. But in the beginning of last year, they deployed a prototype system that successfully built a solar farm with SOLV Energy, one of the largest solar installers in the U.S. Now, Charge has raised $22 million for its first commercial deployments later this year.

From surgical robots to solar robots

While majoring in mechanical engineering at MIT, Hunter found plenty of excuses to build things. One such excuse was Course 2.009 (Product Engineering Processes), where he and his classmates built a smart watch for communication in remote areas.

After graduation, Hunter worked for the MIT alumni-founded startups Shaper Tools and Vicarious Surgical. Vicarious Surgical is a medical robotics company that has raised more than $450 million to date. Hunter was the second employee and worked there for five years.

“A lot of really hands-on, project-based classes at MIT translated directly into my first roles coming out of school and set me up to be very independent and run large engineering projects,” Hunter says. “Course 2.009, in particular, was a big launch point for me. The founders of Vicarious Surgical got in touch with me through the 2.009 network.”

As early as 2017, Hunter and Justicz, who majored in mechanical engineering and computer science, had discussed starting a company together. But they had to decide where to apply their broad engineering and product skillsets.

“Both of us care a lot about climate change. We see climate change as the biggest problem impacting the greatest number of people on the planet,” Hunter says. “Our mentality was if we can build anything, we might as well build something that really matters.”

In the process of cold calling hundreds of people in the energy industry, the founders decided solar was the future of energy production because its price was decreasing so quickly.

“It’s becoming cheaper faster than any other form of energy production in human history,” Hunter says.

When the founders began visiting construction sites for the large, utility-scale solar farms that make up the bulk of energy generation, it wasn’t hard to find the bottlenecks. The first site they traveled to was in the Mojave Desert in California. Hunter describes it as a massive dust bowl where thousands of workers spent months repeating tasks like moving material and assembling the same parts, over and over again.

“The site had something like 2 million panels on it, and every single one was assembled and fastened the same way by hand,” Hunter says. “Max and I thought it was insane. There’s no way that can scale to transform the energy grid in a short window of time.”

Hunter says he heard from each of the largest solar companies in the U.S. that their biggest limitation for scaling was labor shortages. The problem was slowing growth and killing projects.

Hunter and Justicz founded Charge Robotics in 2021 to break through that bottleneck. Their first step was to order utility solar parts and assemble them by hand in their backyards.

“From there, we came up with this portable assembly line that we could ship out to construction sites and then feed in the entire solar system, including the steel tracks, mounting brackets, fasteners, and the solar panels,” Hunter explains. “The assembly line robotically assembles all those pieces to produce completed solar bays, which are chunks of a solar farm.”

Each bay represents a 40-foot piece of the solar farm and weighs about 800 pounds. A robotic vehicle brings it to its final location in the field. Hunter says Charge’s system automates all mechanical installation except for the process of pile driving the first metal stakes into the ground.

Charge’s assembly lines also have machine-vision systems that scan each part to ensure quality, and the systems work with the most common solar parts and panel sizes.

From pilot to product

When the founders started pitching their plans to investors and construction companies, people didn’t believe it was possible.

“The initial feedback was basically, ‘This will never work,’” Hunter says. “But as soon as we took our first system out into the field and people saw it operating, they got much more excited and started believing it was real.”

Since that first deployment, Charge’s team has been making its system faster and easier to operate. The company plans to set up its factories at project sites and run them in partnership with solar construction companies. The factories could even run alongside human workers.

“With our system, people are operating robotic equipment remotely rather than putting in the screws themselves,” Hunter explains. “We can essentially deliver the assembled solar to customers. Their only responsibility is to deliver the materials and parts on big pallets that we feed into our system.”

Hunter says multiple factories could be deployed at the same site and could also operate 24/7 to dramatically speed up projects.

“We are hitting the limits of solar growth because these companies don’t have enough people,” Hunter says. “We can build much bigger sites much faster with the same number of people by just shipping out more of our factories. It’s a fundamentally new way of scaling solar energy.”

Compassionate leadership

Tue, 03/11/2025 - 5:25pm

Professors Emery Brown and Hamsa Balakrishnan work in vastly different fields, but are united by their deep commitment to mentoring students. While each has contributed to major advancements in their respective areas — statistical neuroscience for Brown, and large-scale transportation systems for Balakrishnan — their students might argue that their greatest impact comes from the guidance, empathy, and personal support they provide. 

Emery Brown: Holistic mentorship

Brown is the Edward Hood Professor of Medical Engineering and Computational Neuroscience at MIT and a practicing anesthesiologist at Massachusetts General Hospital. Brown’s experimental research has made important contributions toward understanding the neuroscience of how anesthetics act in the brain to create the states of general anesthesia. 

One of the biggest challenges in academic environments is knowing how to chart a course. Brown takes the time to connect with students individually, helping them identify meaningful pathways that they may not have considered for themselves. In addition to mentoring his graduate students and postdocs, Brown also hosts clinicians and faculty from around the world. Their presence in the lab exposes students to a number of career opportunities and connections outside of MIT’s academic environment.

Brown also continues to support former students beyond their time in his lab, offering guidance on personal and professional development even after they have moved on to other roles. “Knowing that I have Emery at my back as someone I can always turn to … is such a source of confidence and strength as I go forward into my own career,” one nominator wrote. 

When Brown faced a major career decision recently, he turned to his students to ask how his choice might affect them. He met with students individually to understand the personal impact that each might experience. Brown was adamant in ensuring that his professional advancement would not jeopardize his students, and invested a great deal of thought and effort in ensuring a positive outcome for them. 

Brown is deeply committed to the health and well-being of his students, with many nominators sharing examples of his constant support through challenging personal circumstances. When one student reached out to Brown, overwhelmed by research, recent personal loss, and career uncertainty, Brown created a safe space for vulnerable conversations. 

“He listened, supported me, and encouraged me to reflect on my aspirations for the next five years, assuring me that I should pursue them regardless of any obstacles,” the nominator shared. “Following our conversation, I felt more grounded and regained momentum in my research project.”

Ultimately, the student felt that Brown’s advice was “simple, yet enlightening, and exactly what I needed to hear at that moment.”

Hamsa Balakrishnan: Unequivocal advocacy

Balakrishnan is the William E. Leonhard Professor of Aeronautics and Astronautics at MIT. She leads the Dynamics, Infrastructure Networks, and Mobility (DINaMo) Research Group. Her current research interests are in the design, analysis, and implementation of control and optimization algorithms for large-scale cyber-physical infrastructures, with an emphasis on air transportation systems. 

Her nominators commended Balakrishnan for her efforts to support and advocate for all of her students. In particular, she connects her students to academic mentors within the community, which contributes to their sense of acceptance within the field. 

Balakrishnan’s mindfulness in respecting personal expression and her proactive approach to making everyone feel welcome have made a lasting impact on her students. “Hamsa’s efforts have encouraged me to bring my full self to the workplace,” one student wrote; “I will be forever grateful for her mentorship and kindness as an advisor.”

One student shared their experience of moving from a difficult advising situation to working with Balakrishnan, describing how her mentorship was crucial in the nominator’s successful return to research: “Hamsa’s mentorship has been vital to building up my confidence as a researcher, as she [often] provides helpful guidance and positive affirmation.”

Balakrishnan frequently gives her students freedom to independently explore and develop their research interests. When students wanted to delve into new areas like space research — far removed from her expertise in air traffic management and uncrewed aerial vehicles — Balakrishnan embraced the challenge and learned about these topics in order to provide better guidance. 

One student described how Balakrishnan consistently encouraged the lab to work on topics that interested them. This led the student to develop a novel research topic and publish a first-author paper within months of joining the lab.

Balakrishnan is deeply committed to promoting a healthy work-life balance for her students. She ensures that mentees do not feel compelled to overwork by encouraging them to take time off. Even if students do not have significant updates, Balakrishnan encourages weekly meetings to foster an open line of communication. She helps them set attainable goals, especially when it comes to tasks like paper reading and writing, and never pressures them to work late hours in order to meet paper or conference deadlines. 

How nature organizes itself, from brain cells to ecosystems

Mon, 03/10/2025 - 5:30pm

Look around, and you’ll see it everywhere: the way trees form branches, the way cities divide into neighborhoods, the way the brain organizes into regions. Nature loves modularity — a limited number of self-contained units that combine in different ways to perform many functions. But how does this organization arise? Does it follow a detailed genetic blueprint, or can these structures emerge on their own?

A new study from MIT Professor Ila Fiete suggests a surprising answer.

In findings published Feb. 18 in Nature, Fiete, an associate investigator in the McGovern Institute for Brain Research and director of the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, reports that a mathematical model called peak selection can explain how modules emerge without strict genetic instructions. Her team’s findings, which apply to brain systems and ecosystems, help explain how modularity occurs across nature, no matter the scale.

Joining two big ideas

“Scientists have debated how modular structures form. One hypothesis suggests that various genes are turned on at different locations to begin or end a structure. This explains how insect embryos develop body segments, with genes turning on or off at specific concentrations of a smooth chemical gradient in the insect egg,” says Fiete, who is the senior author of the paper. Mikail Khona PhD '25, a former graduate student and K. Lisa Yang ICoN Center graduate fellow, and postdoc Sarthak Chandra also led the study.

Another idea, inspired by mathematician Alan Turing, suggests that a structure could emerge from competition — small-scale interactions can create repeating patterns, like the spots on a cheetah or the ripples in sand dunes.

Both ideas work well in some cases, but fail in others. The new research suggests that nature need not pick one approach over the other. The authors propose a simple mathematical principle called peak selection, showing that when a smooth gradient is paired with local interactions that are competitive, modular structures emerge naturally. “In this way, biological systems can organize themselves into sharp modules without detailed top-down instruction,” says Chandra.
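
To make that intuition concrete, here is a toy numerical sketch, an illustration of the general idea only and not the model in the Nature paper: a smoothly graded cell property, once local competition lets the strongest nearby unit dominate its neighbors, collapses into a small number of flat plateaus. The interaction radius and noise level are arbitrary choices for illustration.

```python
import numpy as np

# Toy sketch of the peak-selection intuition, not the model in the paper:
# a smoothly graded cell property plus local winner-take-all competition
# collapses into a small number of flat plateaus, i.e., "modules".

n = 200                                   # units along one spatial dimension
gradient = np.linspace(1.0, 2.0, n)       # smoothly varying cell property
rng = np.random.default_rng(0)
drive = gradient + 0.2 * rng.standard_normal(n)   # noisy local activity

radius = 25                               # hypothetical interaction radius
module_value = np.empty(n)
for i in range(n):
    lo, hi = max(0, i - radius), min(n, i + radius + 1)
    winner = lo + int(np.argmax(drive[lo:hi]))     # locally dominant "peak"
    module_value[i] = gradient[winner]             # neighbors adopt its value

# Adjacent units that selected the same peak share a value: count plateaus.
n_modules = 1 + int(np.count_nonzero(np.abs(np.diff(module_value)) > 1e-12))
print(f"a smooth gradient of {n} values collapsed into {n_modules} modules")
```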

Modular systems in the brain

The researchers tested their idea on grid cells, which play a critical role in spatial navigation as well as the storage of episodic memories. Grid cells fire in a repeating triangular pattern as animals move through space, but they don’t all work at the same scale — they are organized into distinct modules, each responsible for mapping space at slightly different resolutions.

No one knows how these modules form, but Fiete’s model shows that gradual variations in cellular properties along one dimension in the brain, combined with local neural interactions, could explain the entire structure. The grid cells naturally sort themselves into distinct groups with clear boundaries, without external maps or genetic programs telling them where to go. “Our work explains how grid cell modules could emerge. The explanation tips the balance toward the possibility of self-organization. It predicts that there might be no gene or intrinsic cell property that jumps when the grid cell scale jumps to another module,” notes Khona.

Modular systems in nature

The same principle applies beyond neuroscience. Imagine a landscape where temperatures and rainfall vary gradually over a space. You might expect species to be spread out and to vary smoothly across this region. But in reality, ecosystems often form species clusters with sharp boundaries — distinct ecological “neighborhoods” that don’t overlap.

Fiete’s study suggests why: local competition, cooperation, and predation between species interact with the global environmental gradients to create natural separations, even when the underlying conditions change gradually. This phenomenon can be explained using peak selection — and suggests that the same principle that shapes brain circuits could also be at play in forests and oceans.

A self-organizing world

One of the researchers’ most striking findings is that modularity in these systems is remarkably robust. Change the size of the system, and the number of modules stays the same — they just scale up or down. That means a mouse brain and a human brain could use the same fundamental rules to form their navigation circuits, just at different sizes.

The model also makes testable predictions. If it’s correct, grid cell modules should follow simple spacing ratios. In ecosystems, species distributions should form distinct clusters even without sharp environmental shifts.

Fiete notes that their work adds another conceptual framework to biology. “Peak selection can inform future experiments, not only in grid cell research but across developmental biology.”

Study: Climate change will reduce the number of satellites that can safely orbit in space

Mon, 03/10/2025 - 11:00am

MIT aerospace engineers have found that greenhouse gas emissions are changing the environment of near-Earth space in ways that, over time, will reduce the number of satellites that can sustainably operate there.

In a study appearing today in Nature Sustainability, the researchers report that carbon dioxide and other greenhouse gases can cause the upper atmosphere to shrink. An atmospheric layer of special interest is the thermosphere, where the International Space Station and most satellites orbit today. When the thermosphere contracts, the decreasing density reduces atmospheric drag — a force that pulls old satellites and other debris down to altitudes where they will encounter air molecules and burn up.

Less drag therefore means extended lifetimes for space junk, which will litter sought-after regions for decades and increase the potential for collisions in orbit.
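
The scaling behind that statement can be sketched with a standard back-of-the-envelope drag estimate; the numbers below are illustrative placeholders, not values from the study.

```python
import math

# Back-of-the-envelope sketch with illustrative numbers, not values from the
# study: for a roughly circular orbit, drag shrinks the orbit at a rate of
# about da/dt = -sqrt(mu * a) * rho * (Cd * A / m), so altitude loss scales
# directly with atmospheric density rho.

MU = 3.986e14          # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6      # mean Earth radius, m

def altitude_loss_per_day(alt_km, rho, cd=2.2, area_m2=1.0, mass_kg=100.0):
    """Approximate altitude lost per day (m) by a satellite at alt_km."""
    a = R_EARTH + alt_km * 1e3
    ballistic = cd * area_m2 / mass_kg            # drag-to-mass factor, m^2/kg
    return math.sqrt(MU * a) * rho * ballistic * 86400

rho_now = 1e-12        # rough thermospheric density near 500 km, kg/m^3
for fraction in (1.0, 0.5):                       # 0.5 ~ a contracted thermosphere
    loss = altitude_loss_per_day(500, rho_now * fraction)
    print(f"density x{fraction:.1f}: ~{loss:.0f} m of altitude lost per day")
```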

The team carried out simulations of how carbon emissions affect the upper atmosphere and orbital dynamics, in order to estimate the “satellite carrying capacity” of low Earth orbit. These simulations predict that by the year 2100, the carrying capacity of the most popular regions could be reduced by 50-66 percent due to the effects of greenhouse gases.

“Our behavior with greenhouse gases here on Earth over the past 100 years is having an effect on how we operate satellites over the next 100 years,” says study author Richard Linares, associate professor in MIT’s Department of Aeronautics and Astronautics (AeroAstro).

“The upper atmosphere is in a fragile state as climate change disrupts the status quo,” adds lead author William Parker, a graduate student in AeroAstro. “At the same time, there’s been a massive increase in the number of satellites launched, especially for delivering broadband internet from space. If we don’t manage this activity carefully and work to reduce our emissions, space could become too crowded, leading to more collisions and debris.”

The study includes co-author Matthew Brown of the University of Birmingham.

Sky fall

The thermosphere naturally contracts and expands every 11 years in response to the sun’s regular activity cycle. When the sun’s activity is low, the Earth receives less radiation, and its outermost atmosphere temporarily cools and contracts before expanding again during solar maximum.

In the 1990s, scientists wondered what response the thermosphere might have to greenhouse gases. Their preliminary modeling showed that, while the gases trap heat in the lower atmosphere, where we experience global warming and weather, the same gases radiate heat at much higher altitudes, effectively cooling the thermosphere. With this cooling, the researchers predicted that the thermosphere should shrink, reducing atmospheric density at high altitudes.

In the last decade, scientists have been able to measure changes in drag on satellites, which has provided some evidence that the thermosphere is contracting in response to something more than the sun’s natural, 11-year cycle.

“The sky is quite literally falling — just at a rate that’s on the scale of decades,” Parker says. “And we can see this by how the drag on our satellites is changing.”

The MIT team wondered how that response will affect the number of satellites that can safely operate in Earth’s orbit. Today, there are over 10,000 satellites drifting through low Earth orbit, which describes the region of space up to 1,200 miles (2,000 kilometers) from Earth’s surface. These satellites deliver essential services, including internet, communications, navigation, weather forecasting, and banking. The satellite population has ballooned in recent years, requiring operators to perform regular collision-avoidance maneuvers to keep safe. Any collisions that do occur can generate debris that remains in orbit for decades or centuries, increasing the chance for follow-on collisions with satellites, both old and new.

“More satellites have been launched in the last five years than in the preceding 60 years combined,” Parker says. “One of the key things we’re trying to understand is whether the path we’re on today is sustainable.”

Crowded shells

In their new study, the researchers simulated different greenhouse gas emissions scenarios over the next century to investigate impacts on atmospheric density and drag. For each “shell,” or altitude range of interest, they then modeled the orbital dynamics and the risk of satellite collisions based on the number of objects within the shell. They used this approach to identify each shell’s “carrying capacity” — a term that is typically used in studies of ecology to describe the number of individuals that an ecosystem can support.

“We’re taking that carrying capacity idea and translating it to this space sustainability problem, to understand how many satellites low Earth orbit can sustain,” Parker explains.
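
A rough way to picture that translation, and emphatically not the study’s actual model, which simulates orbital dynamics in detail, is to treat debris production in a shell as growing with the square of the satellite count and debris removal as proportional to atmospheric density. Under that toy assumption, capacity scales with the square root of density:

```python
import math

# Toy scaling sketch, not the study's model: if collisions produce debris at a
# rate ~ N^2 for N satellites in a shell, and drag clears debris at a rate
# ~ atmospheric density rho, then the steady-state debris load scales like
# N^2 / rho, and the largest tolerable N (the "capacity") scales like sqrt(rho).

def relative_capacity(rho_future, rho_baseline):
    """Shell capacity under a thinner atmosphere, relative to the baseline."""
    return math.sqrt(rho_future / rho_baseline)

for fraction in (1.0, 0.7, 0.4):   # hypothetical density fractions by 2100
    print(f"density at {fraction:.0%} of baseline -> "
          f"capacity at {relative_capacity(fraction, 1.0):.0%} of baseline")
```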

The team compared several scenarios: one in which greenhouse gas concentrations remain at their level from the year 2000 and others where emissions change according to the Intergovernmental Panel on Climate Change (IPCC) Shared Socioeconomic Pathways (SSPs). They found that scenarios with continuing increases in emissions would lead to a significantly reduced carrying capacity throughout low Earth orbit.

In particular, the team estimates that by the end of this century, the number of satellites safely accommodated between altitudes of 200 and 1,000 kilometers could be reduced by 50 to 66 percent compared with a scenario in which emissions remain at year-2000 levels. If satellite capacity is exceeded, even in a local region, the researchers predict that the region will experience a “runaway instability,” or a cascade of collisions that would create so much debris that satellites could no longer safely operate there.

Their predictions forecast out to the year 2100, but the team says that certain shells in the atmosphere today are already crowding up with satellites, particularly from recent “megaconstellations” such as SpaceX’s Starlink, which comprises fleets of thousands of small internet satellites.

“The megaconstellation is a new trend, and we’re showing that because of climate change, we’re going to have a reduced capacity in orbit,” Linares says. “And in local regions, we’re close to approaching this capacity value today.”

“We rely on the atmosphere to clean up our debris. If the atmosphere is changing, then the debris environment will change too,” Parker adds. “We show the long-term outlook on orbital debris is critically dependent on curbing our greenhouse gas emissions.”

This research is supported, in part, by the U.S. National Science Foundation, the U.S. Air Force, and the U.K. Natural Environment Research Council.

Study: Tuberculosis relies on protective genes during airborne transmission

Mon, 03/10/2025 - 12:00am

Tuberculosis lives and thrives in the lungs. When the bacteria that cause the disease are coughed into the air, they are thrust into a comparatively hostile environment, with drastic changes to their surrounding pH and chemistry. How these bacteria survive their airborne journey is key to their persistence, but very little is known about how they protect themselves as they waft from one host to the next.

Now MIT researchers and their collaborators have discovered a family of genes that becomes essential for survival specifically when the pathogen is exposed to the air, likely protecting the bacterium during its flight.

Many of these genes were previously considered to be nonessential, as they didn’t seem to have any effect on the bacteria’s role in causing disease when injected into a host. The new work suggests that these genes are indeed essential, though for transmission rather than proliferation.

“There is a blind spot that we have toward airborne transmission, in terms of how a pathogen can survive these sudden changes as it circulates in the air,” says Lydia Bourouiba, who is the head of the Fluid Dynamics of Disease Transmission Laboratory, an associate professor of civil and environmental engineering and mechanical engineering, and a core faculty member in the Institute for Medical Engineering and Science at MIT. “Now we have a sense, through these genes, of what tools tuberculosis uses to protect itself.”

The team’s results, appearing this week in the Proceedings of the National Academy of Sciences, could provide new targets for tuberculosis therapies that simultaneously treat infection and prevent transmission.

“If a drug were to target the product of these same genes, it could effectively treat an individual, and even before that person is cured, it could keep the infection from spreading to others,” says Carl Nathan, chair of the Department of Microbiology and Immunology and R.A. Rees Pritchett Professor of Microbiology at Weill Cornell Medicine.

Nathan and Bourouiba are co-senior authors of the study, which includes MIT co-authors and mentees of Bourouiba in the Fluids and Health Network: co-lead author postdoc Xiaoyi Hu, postdoc Eric Shen, and student mentees Robin Jahn and Luc Geurts. The study also includes collaborators from Weill Cornell Medicine, the University of California at San Diego, Rockefeller University, Hackensack Meridian Health, and the University of Washington.

Pathogen’s perspective

Tuberculosis is a respiratory disease caused by Mycobacterium tuberculosis, a bacterium that most commonly affects the lungs and is transmitted through droplets that an infected individual expels into the air, often through coughing or sneezing. Tuberculosis is the single leading cause of death from infection, except during the major global pandemics caused by viruses.

“In the last 100 years, we have had the 1918 influenza, the 1981 HIV/AIDS epidemic, and the 2019 SARS-CoV-2 pandemic,” Nathan notes. “Each of those viruses has killed an enormous number of people. And as they have settled down, we are left with a ‘permanent pandemic’ of tuberculosis.”

Much of the research on tuberculosis centers on its pathophysiology — the mechanisms by which the bacteria take over and infect a host — as well as ways to diagnose and treat the disease. For their new study, Nathan and Bourouiba focused on transmission of tuberculosis, from the perspective of the bacterium itself, to investigate what defenses it might rely on to help it survive its airborne transmission.

“This is one of the first attempts to look at tuberculosis from the airborne perspective, in terms of what is happening to the organism, at the level of being protected from these sudden changes and very harsh biophysical conditions,” Bourouiba says.

Critical defense

At MIT, Bourouiba studies the physics of fluids and the ways in which droplet dynamics can spread particles and pathogens. She teamed up with Nathan, who studies tuberculosis, and the genes that the bacteria rely on throughout their life cycle.

To get a handle on how tuberculosis can survive in the air, the team aimed to mimic the conditions that the bacterium experiences during transmission. The researchers first looked to develop a fluid that is similar in viscosity and droplet sizes to what a patient would cough or sneeze out into the air. Bourouiba notes that much of the experimental work that has been done on tuberculosis in the past has been based on a liquid solution that scientists use to grow the bacteria. But the team found that this liquid has a chemical composition that is very different from the fluid that tuberculosis patients actually cough and sneeze into the air.

Additionally, Bourouiba notes that fluid commonly sampled from tuberculosis patients is based on sputum that a patient spits out, for instance for a diagnostic test. “The fluid is thick and gooey and it’s what most of the tuberculosis world considers to represent what is happening in the body,” she says. “But it’s extraordinarily inefficient in spreading to others because it’s too sticky to break into inhalable droplets.”

Through Bourouiba’s work with fluid and droplet physics, the team determined the more realistic viscosity and likely size distribution of tuberculosis-carrying microdroplets that would be transmitted through the air. The team also characterized the droplet compositions, based on analyses of patient samples of infected lung tissues. They then created a more realistic fluid, with a composition, viscosity, surface tension and droplet size that is similar to what would be released into the air from exhalations.

Then, the researchers deposited different fluid mixtures onto plates in tiny individual droplets and measured in detail how they evaporate and what internal structure they leave behind. They observed that the new fluid tended to shield the bacteria at the center of the droplet as the droplet evaporated, compared to conventional fluids where bacteria tended to be more exposed to the air. The more realistic fluid was also capable of retaining more water.

Additionally, the team infused each droplet with bacteria containing genes with various knockdowns, to see whether the absence of certain genes would affect the bacteria’s survival as the droplets evaporated.

In this way, the team assessed the activity of over 4,000 tuberculosis genes and discovered a family of several hundred genes that seemed to become important specifically as the bacteria adapted to airborne conditions. Many of these genes are involved in repairing damage to oxidized proteins, such as proteins that have been exposed to air. Other activated genes have to do with destroying damaged proteins that are beyond repair.

“What we turned up was a candidate list that’s very long,” Nathan says. “There are hundreds of genes, some more prominently implicated than others, that may be critically involved in helping tuberculosis survive its transmission phase.”
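
Screens of this kind are typically scored by comparing each knockdown’s abundance after droplet drying with its abundance in an undried control; genes whose mutants drop out only after drying become candidates. The short sketch below illustrates that bookkeeping with simulated counts and an arbitrary cutoff, and is not the study’s analysis pipeline.

```python
import numpy as np
import pandas as pd

# Schematic of how a screen like this is typically scored, not the study's
# pipeline: compare each knockdown's abundance after droplet drying with an
# undried control; strong depletion suggests the gene matters specifically
# for surviving airborne-like conditions. Counts and cutoffs are simulated.

rng = np.random.default_rng(0)
genes = [f"gene_{i:04d}" for i in range(4000)]          # ~4,000 genes screened
control = rng.poisson(200, size=len(genes)).astype(float) + 1.0
dried = control * rng.lognormal(mean=0.0, sigma=0.3, size=len(genes))
dried[:300] *= 0.1        # pretend a few hundred genes are depleted after drying

scores = pd.DataFrame({
    "gene": genes,
    "log2_fold_change": np.log2(dried / control),
})
candidates = scores[scores["log2_fold_change"] < -2.0]   # arbitrary cutoff
print(f"{len(candidates)} candidate transmission-survival genes")
```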

The team acknowledges the experiments are not a complete analog of the bacteria’s biophysical transmission. In reality, tuberculosis is carried in droplets that fly through the air, evaporating as they go. In order to carry out their genetic analyses, the team had to work with droplets sitting on a plate. Under these constraints, they mimicked the droplet transmission as best they could, by setting the plates in an extremely dry chamber to accelerate the droplets’ evaporation, analogous to what they would experience in flight.

Going forward, the researchers have started experimenting with platforms that allow them to study the droplets in flight, in a range of conditions. They plan to focus on the new family of genes in even more realistic experiments, to confirm whether the genes do indeed shield Mycobacterium tuberculosis as it is transmitted through the air, potentially opening the way to weakening its airborne defenses.

“The idea of waiting to find someone with tuberculosis, then treating and curing them, is a totally inefficient way to stop the pandemic,” Nathan says. “Most people who exhale tuberculosis do not yet have a diagnosis. So we have to interrupt its transmission. And how do you do that, if you don’t know anything about the process itself? We have some ideas now.”

This work was supported, in part, by the National Institutes of Health, the Abby and Howard P. Milstein Program in Chemical Biology and Translational Medicine, the Potts Memorial Foundation, the National Science Foundation Center for Analysis and Prediction of Pandemic Expansion (APPEX), Inditex, the NASA Translational Research Institute for Space Health, and Analog Devices, Inc.

Robotic helper making mistakes? Just nudge it in the right direction

Fri, 03/07/2025 - 12:00am

Imagine that a robot is helping you clean the dishes. You ask it to grab a soapy bowl out of the sink, but its gripper slightly misses the mark.

Using a new framework developed by MIT and NVIDIA researchers, you could correct that robot’s behavior with simple interactions. The method would allow you to point to the bowl or trace a trajectory to it on a screen, or simply give the robot’s arm a nudge in the right direction.

Unlike other methods for correcting robot behavior, this technique does not require users to collect new data and retrain the machine-learning model that powers the robot’s brain. It enables a robot to use intuitive, real-time human feedback to choose a feasible action sequence that gets as close as possible to satisfying the user’s intent.

When the researchers tested their framework, its success rate was 21 percent higher than an alternative method that did not leverage human interventions.

In the long run, this framework could enable a user to more easily guide a factory-trained robot to perform a wide variety of household tasks even though the robot has never seen their home or the objects in it.

“We can’t expect laypeople to perform data collection and fine-tune a neural network model. The consumer will expect the robot to work right out of the box, and if it doesn’t, they would want an intuitive mechanism to customize it. That is the challenge we tackled in this work,” says Felix Yanwei Wang, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this method.

His co-authors include Lirui Wang PhD ’24 and Yilun Du PhD ’24; senior author Julie Shah, an MIT professor of aeronautics and astronautics and the director of the Interactive Robotics Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL); as well as Balakumar Sundaralingam, Xuning Yang, Yu-Wei Chao, Claudia Perez-D’Arpino PhD ’19, and Dieter Fox of NVIDIA. The research will be presented at the International Conference on Robotics and Automation.

Mitigating misalignment

Recently, researchers have begun using pre-trained generative AI models to learn a “policy,” or a set of rules, that a robot follows to complete an action. Generative models can solve multiple complex tasks.

During training, the model only sees feasible robot motions, so it learns to generate valid trajectories for the robot to follow.

While these trajectories are valid, that doesn’t mean they always align with a user’s intent in the real world. The robot might have been trained to grab boxes off a shelf without knocking them over, but it could fail to reach the box on top of someone’s bookshelf if the shelf is oriented differently than those it saw in training.

To overcome these failures, engineers typically collect data demonstrating the new task and re-train the generative model, a costly and time-consuming process that requires machine-learning expertise.

Instead, the MIT researchers wanted to allow users to steer the robot’s behavior during deployment when it makes a mistake.

But if a human interacts with the robot to correct its behavior, that could inadvertently cause the generative model to choose an invalid action. It might reach the box the user wants, but knock books off the shelf in the process.

“We want to allow the user to interact with the robot without introducing those kinds of mistakes, so we get a behavior that is much more aligned with user intent during deployment, but that is also valid and feasible,” Wang says.

Their framework accomplishes this by providing the user with three intuitive ways to correct the robot’s behavior, each of which offers certain advantages.

First, the user can point to the object they want the robot to manipulate in an interface that shows its camera view. Second, they can trace a trajectory in that interface, allowing them to specify how they want the robot to reach the object. Third, they can physically move the robot’s arm in the direction they want it to follow.

“When you are mapping a 2D image of the environment to actions in a 3D space, some information is lost. Physically nudging the robot is the most direct way of specifying user intent without losing any of the information,” says Wang.

Sampling for success

To ensure these interactions don’t cause the robot to choose an invalid action, such as colliding with other objects, the researchers use a specific sampling procedure. This technique lets the model choose an action from the set of valid actions that most closely aligns with the user’s goal.

“Rather than just imposing the user’s will, we give the robot an idea of what the user intends but let the sampling procedure oscillate around its own set of learned behaviors,” Wang explains.
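
In outline, that procedure amounts to sampling many candidate trajectories from the learned policy and keeping the one that best matches the user’s correction. The snippet below is a minimal illustration of that idea with a stand-in random policy and a hypothetical pointed-at target; it is not the paper’s algorithm.

```python
import numpy as np

# Minimal sketch of the idea, not the paper's exact algorithm: draw many
# candidate trajectories from the learned policy (all feasible by construction)
# and execute the one that best matches the user's correction, such as a point
# clicked in the camera view, instead of following the correction directly.

rng = np.random.default_rng(0)

def sample_policy(n_candidates, horizon=20):
    """Stand-in for a generative policy: smooth random 2D end-effector paths."""
    steps = rng.normal(scale=0.02, size=(n_candidates, horizon, 2))
    return np.cumsum(steps, axis=1)               # shape (n_candidates, horizon, 2)

def pick_aligned(candidates, user_target):
    """Keep the feasible trajectory whose endpoint lands closest to the target."""
    endpoints = candidates[:, -1, :]
    distances = np.linalg.norm(endpoints - user_target, axis=1)
    return candidates[np.argmin(distances)]

user_target = np.array([0.15, -0.05])             # hypothetical pointed-at location
best = pick_aligned(sample_policy(256), user_target)
print("chosen endpoint:", best[-1], "target:", user_target)
```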

This sampling method enabled the researchers’ framework to outperform the other methods they compared it to during simulations and experiments with a real robot arm in a toy kitchen.

While their method might not always complete the task right away, it offers users the advantage of being able to immediately correct the robot if they see it doing something wrong, rather than waiting for it to finish and then giving it new instructions.

Moreover, after a user nudges the robot a few times until it picks up the correct bowl, it could log that corrective action and incorporate it into its behavior through future training. Then, the next day, the robot could pick up the correct bowl without needing a nudge.

“But the key to that continuous improvement is having a way for the user to interact with the robot, which is what we have shown here,” Wang says.

In the future, the researchers want to boost the speed of the sampling procedure while maintaining or improving its performance. They also want to experiment with robot policy generation in novel environments.

SMART researchers pioneer nanosensor for real-time iron detection in plants

Thu, 03/06/2025 - 11:00am

Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, in collaboration with Temasek Life Sciences Laboratory (TLL) and MIT, have developed a groundbreaking near-infrared (NIR) fluorescent nanosensor capable of simultaneously detecting and differentiating between iron forms — Fe(II) and Fe(III) — in living plants. 

Iron is crucial for plant health, supporting photosynthesis, respiration, and enzyme function. It primarily exists in two forms: Fe(II), which is readily available for plants to absorb and use, and Fe(III), which must first be converted into Fe(II) before plants can utilize it effectively. Traditional methods only measure total iron, missing the distinction between these forms — a key factor in plant nutrition. Distinguishing between Fe(II) and Fe(III) provides insights into iron uptake efficiency, helps diagnose deficiencies or toxicities, and enables precise fertilization strategies in agriculture, reducing waste and environmental impact while improving crop productivity.

The first-of-its-kind nanosensor developed by SMART researchers enables real-time, nondestructive monitoring of iron uptake, transport, and changes between its different forms — providing precise and detailed observations of iron dynamics. Its high spatial resolution allows precise localization of iron in plant tissues or subcellular compartments, enabling the measurement of even minute changes in iron levels within plants — changes that can inform how a plant handles stress and uses nutrients. 

Traditional detection methods are destructive, or limited to a single form of iron. This new technology enables the diagnosis of deficiencies and optimization of fertilization strategies. By identifying insufficient or excessive iron intake, adjustments can be made to enhance plant health, reduce waste, and support more sustainable agriculture. While the nanosensor was tested on spinach and bok choy, it is species-agnostic, allowing it to be applied across a diverse range of plant species without genetic modification. This capability enhances our understanding of iron dynamics in various ecological settings, providing comprehensive insights into plant health and nutrient management. As a result, it serves as a valuable tool for both fundamental plant research and agricultural applications, supporting precision nutrient management, reducing fertilizer waste, and improving crop health.

“Iron is essential for plant growth and development, but monitoring its levels in plants has been a challenge. This breakthrough sensor is the first of its kind to detect both Fe(II) and Fe(III) in living plants with real-time, high-resolution imaging. With this technology, we can ensure plants receive the right amount of iron, improving crop health and agricultural sustainability,” says Duc Thinh Khong, DiSTAP research scientist and co-lead author of the paper.

“In enabling non-destructive real-time tracking of iron speciation in plants, this sensor opens new avenues for understanding plant iron metabolism and the implications of different iron variations for plants. Such knowledge will help guide the development of tailored management approaches to improve crop yield and more cost-effective soil fertilization strategies,” says Grace Tan, TLL research scientist and co-lead author of the paper.

The research, recently published in Nano Letters and titled, “Nanosensor for Fe(II) and Fe(III) Allowing Spatiotemporal Sensing in Planta,” builds upon SMART DiSTAP’s established expertise in plant nanobionics, leveraging the Corona Phase Molecular Recognition (CoPhMoRe) platform pioneered by the Strano Lab at SMART DiSTAP and MIT. The new nanosensor features single-walled carbon nanotubes (SWNTs) wrapped in a negatively charged fluorescent polymer, forming a helical corona phase structure that interacts differently with Fe(II) and Fe(III). Upon introduction into plant tissues and interaction with iron, the sensor emits distinct NIR fluorescence signals based on the iron type, enabling real-time tracking of iron movement and chemical changes.

The CoPhMoRe technique was used to develop highly selective fluorescent responses, allowing precise detection of iron oxidation states. The NIR fluorescence of SWNTs offers superior sensitivity, selectivity, and tissue transparency while minimizing interference, making it more effective than conventional fluorescent sensors. This capability allows researchers to track iron movement and chemical changes in real time using NIR imaging. 

“This sensor provides a powerful tool to study plant metabolism, nutrient transport, and stress responses. It supports optimized fertilizer use, reduces costs and environmental impact, and contributes to more nutritious crops, better food security, and sustainable farming practices,” says Professor Daisuke Urano, TLL senior principal investigator, DiSTAP principal investigator, National University of Singapore adjunct assistant professor, and co-corresponding author of the paper.

“This set of sensors gives us access to an important type of signalling in plants, and a critical nutrient necessary for plants to make chlorophyll. This new tool will not just help farmers to detect nutrient deficiency, but also give access to certain messages within the plant. It expands our ability to understand the plant response to its growth environment,” says Professor Michael Strano, DiSTAP co-lead principal investigator, Carbon P. Dubbs Professor of Chemical Engineering at MIT, and co-corresponding author of the paper.

Beyond agriculture, this nanosensor holds promise for environmental monitoring, food safety, and health sciences, particularly in studying iron metabolism, iron deficiency, and iron-related diseases in humans and animals. Future research will focus on leveraging this nanosensor to advance fundamental plant studies on iron homeostasis, nutrient signaling, and redox dynamics. Efforts are also underway to integrate the nanosensor into automated nutrient management systems for hydroponic and soil-based farming and expand its functionality to detect other essential micronutrients. These advancements aim to enhance sustainability, precision, and efficiency in agriculture.

The research is carried out by SMART, and supported by the National Research Foundation under its Campus for Research Excellence And Technological Enterprise program.

3 Questions: Visualizing research in the age of AI

Thu, 03/06/2025 - 11:00am

For over 30 years, science photographer Felice Frankel has helped MIT professors, researchers, and students communicate their work visually. Throughout that time, she has seen the development of various tools to support the creation of compelling images: some helpful, and some antithetical to the effort of producing a trustworthy and complete representation of the research. In a recent opinion piece published in Nature magazine, Frankel discusses the burgeoning use of generative artificial intelligence (GenAI) in images and the challenges and implications it has for communicating research. On a more personal note, she questions whether there will still be a place for a science photographer in the research community.

Q: You’ve mentioned that as soon as a photo is taken, the image can be considered “manipulated.” There are ways you’ve manipulated your own images to create a visual that more successfully communicates the desired message. Where is the line between acceptable and unacceptable manipulation?

A: In the broadest sense, the decisions made on how to frame and structure the content of an image, along with which tools are used to create the image, are already a manipulation of reality. We need to remember the image is merely a representation of the thing, and not the thing itself. Decisions have to be made when creating the image. The critical issue is not to manipulate the data, and in the case of most images, the data is the structure. For example, for an image I made some time ago, I digitally deleted the petri dish in which a yeast colony was growing, to bring attention to the stunning morphology of the colony. The data in the image is the morphology of the colony. I did not manipulate that data. However, I always indicate in the text if I have done something to an image. I discuss the idea of image enhancement in my handbook, “The Visual Elements, Photography.”

Q: What can researchers do to make sure their research is communicated correctly and ethically?

A: With the advent of AI, I see three main issues concerning visual representation: the difference between illustration and documentation, the ethics around digital manipulation, and a continuing need for researchers to be trained in visual communication. For years, I have been trying to develop a visual literacy program for the present and upcoming classes of science and engineering researchers. MIT has a communication requirement which mostly addresses writing, but what about the visual, which is no longer tangential to a journal submission? I will bet that most readers of scientific articles go right to the figures, after they read the abstract. 

We need to require students to learn how to critically look at a published graph or image and decide if there is something weird going on with it. We need to discuss the ethics of “nudging” an image to look a certain predetermined way. I describe in the article an incident when a student altered one of my images (without asking me) to match what the student wanted to visually communicate. I didn’t permit it, of course, and was disappointed that the ethics of such an alteration were not considered. We need to develop, at the very least, conversations on campus and, even better, create a visual literacy requirement along with the writing requirement.

Q: Generative AI is not going away. What do you see as the future for communicating science visually?

A: For the Nature article, I decided that a powerful way to question the use of AI in generating images was by example. I used one of the diffusion models to create an image using the following prompt:

“Create a photo of Moungi Bawendi’s nano crystals in vials against a black background, fluorescing at different wavelengths, depending on their size, when excited with UV light.”

The results of my AI experimentation were often cartoon-like images that could hardly pass as reality — let alone documentation — but there will be a time when they will be. In conversations with colleagues in research and computer-science communities, all agree that we should have clear standards on what is and is not allowed. And most importantly, a GenAI visual should never be allowed as documentation.

But AI-generated visuals will, in fact, be useful for illustration purposes. If an AI-generated visual is to be submitted to a journal (or, for that matter, be shown in a presentation), I believe the researcher MUST

  • clearly label if an image was created by an AI model;
  • indicate what model was used;
  • include what prompt was used; and
  • include the image, if there is one, that was used to help the prompt.

A leg up for STEM majors

Thu, 03/06/2025 - 11:00am

Senior Kevin Guo, a computer science major, and junior Erin Hovendon, studying mechanical engineering, are on widely divergent paths at MIT. But their lives do intersect in one dimension: They share an understanding that their political science and public policy minors provide crucial perspectives on their research and future careers.

For Guo, the connection between computer science and policy emerged through his work at MIT's Election Data and Science Lab. “When I started, I was just looking for a place to learn how to code and do data science,” he reflects. “But what I found was this fascinating intersection where technical skills could directly shape democratic processes.”

Hovendon is focused on sustainable methods for addressing climate change. She is currently participating in a multisemester research project at MIT's Environmental Dynamics Lab (ENDLab) developing monitoring technology for marine carbon dioxide removal (mCDR).

She believes the success of her research today and in the future depends on understanding its impact on society. Her academic track in policy provides that grounding. “When you’re developing a new technology, you need to focus as well on how it will be applied,” she says. “This means learning about the policies required to scale it up, and about the best ways to convey the value of what you’re working on to the public.”

Bridging STEM and policy

For both Hovendon and Guo, interdisciplinary study is proving to be a valuable platform for tangibly addressing real-world challenges.

Guo came to MIT from Andover, Massachusetts, the son of parents who specialize in semiconductors and computer science. While math and computer science were a natural track for him, Guo was also keenly interested in geopolitics. He enrolled in class 17.40 (American Foreign Policy). “It was my first engagement with MIT political science and I liked it a lot, because it dealt with historical episodes I wanted to learn more about, like World War II, the Korean War, and Vietnam,” says Guo.

He followed up with a class on American Military History and on the Rise of Asia, where he found himself enrolled with graduate students and active duty U.S. military officers. “I liked attending a course with people who had unusual insights,” Guo remarks. “I also liked that these humanities classes were small seminars, and focused a lot on individual students.”

From coding to elections

It was in class 17.835 (Machine Learning and Data Science in Politics) that Guo first realized he could directly connect his computer science and math expertise to the humanities. “They gave us big political science datasets to analyze, which was a pretty cool application of the skills I learned in my major,” he says.

Guo springboarded from this class to a three-year, undergraduate research project in the Election Data and Science Lab. “The hardest part is data collection, which I worked on for an election audit project that looked at whether there were significant differences between original vote counts and audit counts in all the states, at the precinct level,” says Guo. “We had to scrape data, raw PDFs, and create a unified dataset, standardized to our format, that we could publish.”
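
The flavor of that standardization work can be sketched in a few lines; the file names and column labels below are hypothetical stand-ins, not the lab’s actual schema.

```python
import pandas as pd

# Illustrative sketch only; file names and column labels are hypothetical.
# The idea: pull per-state audit files into one standardized table and flag
# precincts where the audited count differs from the originally reported count.

frames = []
for state, path in [("MA", "ma_audit.csv"), ("GA", "ga_audit.csv")]:
    df = pd.read_csv(path)
    df = df.rename(columns=str.lower)[["precinct", "original_votes", "audit_votes"]]
    df["state"] = state
    frames.append(df)

audits = pd.concat(frames, ignore_index=True)
audits["discrepancy"] = audits["audit_votes"] - audits["original_votes"]
flagged = audits[audits["discrepancy"] != 0]
print(flagged.sort_values("discrepancy", key=abs, ascending=False).head())
```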

The data analysis skills he acquired in the lab have come in handy in the professional sphere in which he has begun training: investment finance.

“The workflow is very similar: clean the data to see what you want, analyze it to see if I can find an edge, and then write some code to implement it,” he says. “The biggest difference between finance and the lab research is that the development cycle is a lot faster, where you want to act on a dataset in a few days, rather than weeks or months.”

Engineering environmental solutions

Hovendon, a native of North Carolina with a deep love for the outdoors, arrived at MIT committed “to doing something related to sustainability and having a direct application in the world around me,” she says.

Initially, she headed toward environmental engineering, “but then I realized that pretty much every major can take a different approach to that topic,” she says. “So I ended up switching to mechanical engineering because I really enjoy the hands-on aspects of the field.”

In parallel to her design and manufacturing, and mechanics and materials courses, Hovendon also immersed herself in energy and environmental policy classes. One memorable anthropology class, 21A.404 (Living through Climate Change), asked students to consider whether technological or policy solutions could be fully effective on their own for combating climate change. “It was useful to apply holistic ways of exploring human relations to the environment,” says Hovendon.

Hovendon brings this well-rounded perspective to her research at ENDLab in marine carbon capture and fluid dynamics. She is helping to develop verification methods for mCDR at a pilot treatment plant in California. The facility aims to remove 100 tons of carbon dioxide directly from the ocean by enhancing natural processes. Hovendon hopes to design cost-efficient monitoring systems to demonstrate the efficacy of this new technology. If scaled up, mCDR could enable oceans to store significantly more atmospheric carbon, helping cool the planet.

But Hovendon is well aware that innovation with a major impact cannot emerge on the basis of technical efficacy alone.

“You're going to have people who think that you shouldn't be trying to replicate or interfere with a natural system, and if you're putting one of these facilities somewhere in water, then you're using public spaces and resources,” she says. “It's impossible to come up with any kind of technology, but especially any kind of climate-related technology, without first getting the public to buy into it.”

She recalls class 17.30J (Making Public Policy), which emphasized the importance of both economic and social analysis to the successful passage of highly impactful legislation, such as the Affordable Care Act.

“I think that breakthroughs in science and engineering should be evaluated not just through their technological prowess, but through the success of their implementation for general societal benefit,” she says. “Understanding the policy aspects is vital for improving accessibility for scientific advancements.”

Beyond the dome

Guo will soon set out for a career as a quantitative financial trader, and he views his political science background as essential to his success. While his expertise in data cleaning and analysis will come into play, he believes other skills will as well: “Understanding foreign policy, considering how U.S. policy impacts other places, that's actually very important in finance,” he explains. “Macroeconomic changes and politics affect trading volatility and markets in general, so it's very important to understand what's going on.”

With one year to go, Hovendon is contemplating graduate school in mechanical engineering, perhaps designing renewable energy technologies. “I just really hope that I'm working on something I'm genuinely passionate about, something that has a broader purpose,” she says. “In terms of politics and technology, I also hope that at least some government research and development will still go to climate work, because I'm sure there will be an urgent need for it.”

Knitted microtissue can accelerate healing

Wed, 03/05/2025 - 3:10pm

Treating severe or chronic injury to soft tissues such as skin and muscle is a challenge in health care. Current treatment methods can be costly and ineffective, and the frequency of chronic wounds in general from conditions such as diabetes and vascular disease, as well as an increasingly aging population, is only expected to rise.

One promising treatment method involves implanting biocompatible materials seeded with living cells (i.e., microtissue) into the wound. The materials provide a scaffolding for stem cells, or other precursor cells, to grow into the wounded tissue and aid in repair. However, current techniques to construct these scaffolding materials suffer a recurring setback. Human tissue moves and flexes in a unique way that traditional soft materials struggle to replicate, and if the scaffolds stretch, they can also stretch the embedded cells, often causing those cells to die. The dead cells hinder the healing process and can also trigger an inadvertent immune response in the body.

"The human body has this hierarchical structure that actually un-crimps or unfolds, rather than stretches," says Steve Gillmer, a researcher in MIT Lincoln Laboratory's Mechanical Engineering Group. "That's why if you stretch your own skin or muscles, your cells aren't dying. What's actually happening is your tissues are uncrimping a little bit before they stretch."

Gillmer is part of a multidisciplinary research team that is searching for a solution to this stretching setback. He is working with Professor Ming Guo from MIT's Department of Mechanical Engineering and the laboratory's Defense Fabric Discovery Center (DFDC) to knit new kinds of fabrics that can uncrimp and move just as human tissue does.

The idea for the collaboration came while Gillmer and Guo were teaching a course at MIT. Guo had been researching how to grow stem cells on new forms of materials that could mimic the uncrimping of natural tissue. He chose electrospun nanofibers, which worked well, but were difficult to fabricate at long lengths, preventing him from integrating the fibers into larger knit structures for larger-scale tissue repair.

"Steve mentioned that Lincoln Laboratory had access to industrial knitting machines," Guo says. These machines allowed him to switch focus to designing larger knits, rather than individual yarns. "We immediately started to test new ideas through internal support from the laboratory."

Gillmer and Guo worked with the DFDC to discover which knit patterns could move similarly to different types of soft tissue. They started with three basic knit constructions called interlock, rib, and jersey.

"For jersey, think of your T-shirt. When you stretch your shirt, the yarn loops are doing the stretching," says Emily Holtzman, a textile specialist at the DFDC. "The longer the loop length, the more stretch your fabric can accommodate. For ribbed, think of the cuff on your sweater. This fabric construction has a global stretch that allows the fabric to unfold like an accordion."

Interlock is similar to ribbed but is knitted in a denser pattern and contains twice as much yarn per inch of fabric. By having more yarn, there is more surface area on which to embed the cells. "Knit fabrics can also be designed to have specific porosities, or hydraulic permeability, created by the loops of the fabric and yarn sizes," says Erin Doran, another textile specialist on the team. "These pores can help with the healing process as well."

So far, the team has conducted a number of tests embedding mouse embryonic fibroblast cells and mesenchymal stem cells within the different knit patterns and seeing how they behave when the patterns are stretched. Each pattern had variations that affected how much the fabric could uncrimp, in addition to how stiff it became after it started stretching. All showed a high rate of cell survival, and in 2024 the team received an R&D 100 award for their knit designs.
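
That two-stage behavior, compliant while the loops uncrimp and much stiffer once the yarn itself begins to stretch, can be pictured as a simple piecewise spring. The sketch below uses illustrative parameters only, not the laboratory’s measurements.

```python
# Toy two-stage spring, with illustrative parameters rather than measured ones:
# the fabric is very compliant while its loops uncrimp, then much stiffer once
# the yarn itself begins to stretch, so embedded cells see little load early on.

def fabric_stress(strain, uncrimp_limit=0.3, soft_k=0.1, stiff_k=5.0):
    """Piecewise-linear stress response: soft uncrimping, then yarn stretching."""
    uncrimp = min(strain, uncrimp_limit)
    stretch = max(strain - uncrimp_limit, 0.0)
    return soft_k * uncrimp + stiff_k * stretch

for eps in (0.1, 0.3, 0.5):
    print(f"applied strain {eps:.0%} -> stress {fabric_stress(eps):.2f} (arbitrary units)")
```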

Gillmer explains that although the project began with treating skin and muscle injuries in mind, their fabrics have the potential to mimic many different types of human soft tissue, such as cartilage or fat. The team recently filed a provisional patent that outlines how to create these patterns and identifies the appropriate materials that should be used to make the yarn. This information can be used as a toolbox to tune different knitted structures to match the mechanical properties of the injured tissue to which they are applied.

"This project has definitely been a learning experience for me," Gillmer says. "Each branch of this team has a unique expertise, and I think the project would be impossible without them all working together. Our collaboration as a whole enables us to expand the scope of the work to solve these larger, more complex problems."

Study: The ozone hole is healing, thanks to global reduction of CFCs

Wed, 03/05/2025 - 12:00pm

A new MIT-led study confirms that the Antarctic ozone layer is healing, as a direct result of global efforts to reduce ozone-depleting substances.

Scientists including the MIT team have observed signs of ozone recovery in the past. But the new study is the first to show, with high statistical confidence, that this recovery is due primarily to the reduction of ozone-depleting substances, versus other influences such as natural weather variability or increased greenhouse gas emissions to the stratosphere.

“There’s been a lot of qualitative evidence showing that the Antarctic ozone hole is getting better. This is really the first study that has quantified confidence in the recovery of the ozone hole,” says study author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry. “The conclusion is, with 95 percent confidence, it is recovering. Which is awesome. And it shows we can actually solve environmental problems.”

The new study appears today in the journal Nature. Graduate student Peidong Wang from the Solomon group in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) is the lead author. His co-authors include Solomon and EAPS Research Scientist Kane Stone, along with collaborators from multiple other institutions.

Roots of ozone recovery

Within the Earth’s stratosphere, ozone is a naturally occurring gas that acts as a sort of sunscreen, protecting the planet from the sun’s harmful ultraviolet radiation. In 1985, scientists discovered a “hole” in the ozone layer over Antarctica that opened up during the austral spring, between September and December. This seasonal ozone depletion was suddenly allowing UV rays to filter down to the surface, leading to skin cancer and other adverse health effects.

In 1986, Solomon, who was then working at the National Oceanic and Atmospheric Administration (NOAA), led expeditions to the Antarctic, where she and her colleagues gathered evidence that quickly confirmed the ozone hole’s cause: chlorofluorocarbons, or CFCs — chemicals that were then used in refrigeration, air conditioning, insulation, and aerosol propellants. When CFCs drift up into the stratosphere, they can break down ozone under certain seasonal conditions.

The following year, those revelations led to the drafting of the Montreal Protocol — an international treaty that aimed to phase out the production of CFCs and other ozone-depleting substances, in hopes of healing the ozone hole.

In 2016, Solomon led a study reporting key signs of ozone recovery. The ozone hole seemed to be shrinking with each year, especially in September, the time of year when it opens up. Still, those observations were qualitative: the study left large uncertainties about how much of the recovery was due to concerted efforts to reduce ozone-depleting substances and how much was a result of other “forcings,” such as year-to-year weather variability from El Niño, La Niña, and the polar vortex.

“While detecting a statistically significant increase in ozone is relatively straightforward, attributing these changes to specific forcings is more challenging,” says Wang.

Anthropogenic healing

In their new study, the MIT team took a quantitative approach to identify the cause of Antarctic ozone recovery. The researchers borrowed a method from the climate change community, known as “fingerprinting,” which was pioneered by Klaus Hasselmann, who was awarded the Nobel Prize in Physics in 2021 for the technique. In the context of climate, fingerprinting refers to a method that isolates the influence of specific climate factors, apart from natural, meteorological noise. Hasselmann applied fingerprinting to identify, confirm, and quantify the anthropogenic fingerprint of climate change.

Solomon and Wang looked to apply the fingerprinting method to identify another anthropogenic signal: the effect of human reductions in ozone-depleting substances on the recovery of the ozone hole.

“The atmosphere has really chaotic variability within it,” Solomon says. “What we’re trying to detect is the emerging signal of ozone recovery against that kind of variability, which also occurs in the stratosphere.”

The researchers started with simulations of the Earth’s atmosphere and generated multiple “parallel worlds,” or simulations of the same global atmosphere, under different starting conditions. For instance, they ran simulations under conditions that assumed no increase in greenhouse gases or ozone-depleting substances. Under these conditions, any changes in ozone should be the result of natural weather variability. They also ran simulations with only increasing greenhouse gases, as well as only decreasing ozone-depleting substances.

They compared these simulations to observe how ozone in the Antarctic stratosphere changed, both with season and across altitudes, in response to the different starting conditions. From these simulations, they mapped out the times and altitudes where ozone recovered from month to month, over several decades, and identified a key “fingerprint,” or pattern, of ozone recovery that was specifically due to conditions of declining ozone-depleting substances.

The team then looked for this fingerprint in actual satellite observations of the Antarctic ozone hole from 2005 to the present day. They found that, over time, the fingerprint that they identified in simulations became clearer and clearer in observations. In 2018, the fingerprint was at its strongest, and the team could say with 95 percent confidence that ozone recovery was due mainly to reductions in ozone-depleting substances.
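The heart of the method can be illustrated with a short sketch: project each year’s observed ozone anomaly field onto the simulation-derived fingerprint, then compare the trend in those projections against the spread of trends that unforced variability alone can produce. The Python sketch below uses mock random data, and the array dimensions, variable names, and detection threshold are illustrative assumptions rather than the study’s actual data or code; it shows only the pattern-projection and signal-to-noise idea behind fingerprinting.

import numpy as np

rng = np.random.default_rng(0)

n_months, n_levels = 12, 20   # season-by-altitude grid (assumed dimensions)
n_years = 20                  # length of the observational record, e.g. 2005 onward
n_control = 200               # unforced "parallel world" samples

# Fingerprint: the pattern of ozone change attributable to declining
# ozone-depleting substances, derived in the study from forced simulations
# (mock random data here).
fingerprint = rng.normal(size=(n_months, n_levels))
fingerprint /= np.linalg.norm(fingerprint)

# Mock observations: the fingerprint emerging over time, buried in weather noise.
signal_strength = np.linspace(0.0, 2.0, n_years)
obs = (signal_strength[:, None, None] * fingerprint
       + rng.normal(scale=1.0, size=(n_years, n_months, n_levels)))

# Project each year's anomaly field onto the fingerprint pattern.
proj = np.tensordot(obs, fingerprint, axes=([1, 2], [0, 1]))

# Natural variability: the same projection applied to unforced control fields.
control = rng.normal(scale=1.0, size=(n_control, n_months, n_levels))
control_proj = np.tensordot(control, fingerprint, axes=([1, 2], [0, 1]))

# Signal-to-noise: the trend in the observed projections versus the spread of
# trends obtained from noise-only series of the same length.
years = np.arange(n_years)
obs_trend = np.polyfit(years, proj, 1)[0]
noise_trends = np.array([
    np.polyfit(years, rng.choice(control_proj, size=n_years), 1)[0]
    for _ in range(1000)
])
snr = obs_trend / noise_trends.std()
print(f"signal-to-noise ratio: {snr:.1f}")  # a ratio near 2 corresponds to roughly 95 percent confidence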

“After 15 years of observational records, we see this signal to noise with 95 percent confidence, suggesting there’s only a very small chance that the observed pattern similarity can be explained by variability noise,” Wang says. “This gives us confidence in the fingerprint. It also gives us confidence that we can solve environmental problems. What we can learn from ozone studies is how different countries can swiftly follow these treaties to decrease emissions.”

If the trend continues, and the fingerprint of ozone recovery grows stronger, Solomon anticipates that soon there will be a year, here and there, when the ozone layer stays entirely intact. And eventually, the ozone hole should stay shut for good.

“By something like 2035, we might see a year when there’s no ozone hole depletion at all in the Antarctic. And that will be very exciting for me,” she says. “And some of you will see the ozone hole go away completely in your lifetimes. And people did that.”

This research was supported, in part, by the National Science Foundation and NASA.

Why rationality can push people in different directions

Wed, 03/05/2025 - 12:00am

It’s not a stretch to suggest that when we disagree with other people, we often regard them as being irrational. Kevin Dorst PhD ’19 has developed a body of research with surprising things to say about that.

Dorst, an associate professor of philosophy at MIT, studies rationality: how we apply it, or think we do, and how that bears out in society. The goal is to help us think clearly and perhaps with fresh eyes about something we may take for granted.

Throughout his work, Dorst specializes in exploring the nuances of rationality. To take just one instance, consider how ambiguity can interact with rationality. Suppose there are two studies about the effect of a new housing subdivision on local traffic patterns: One shows there will be a substantial increase in traffic, and one shows a minor effect. Even if both studies are sound in their methods and data, neither may have a totally airtight case. People who regard themselves as rationally assessing the numbers will likely disagree about which study is more valid, and, though this may not be entirely rational, they may use their prior beliefs to poke holes in the study that contradicts those beliefs.

Among other things, this process calls into question the widespread “Bayesian” conception that people’s views shift and come into alignment as they are presented with new evidence. It may be that, instead, people apply rationality even as their views diverge rather than converge.
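The contrast with convergence can be made concrete with a toy calculation. In the minimal Python sketch below, the priors and likelihoods are invented for illustration and are not drawn from Dorst’s work: two agents update on the same two traffic studies by Bayes’ rule, but ambiguity about each study’s soundness leads them to weigh the same evidence differently, and their views end up further apart than they started.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | evidence) from a prior and two likelihoods."""
    numerator = prior * p_e_given_h
    return numerator / (numerator + (1 - prior) * p_e_given_not_h)

# Hypothesis H: the new subdivision will substantially increase traffic.
# Both agents see the same two studies, but each scrutinizes the study that
# conflicts with their prior more harshly, so they assign different
# likelihoods (P(evidence | H), P(evidence | not H)) to the same evidence.
agents = {
    "A": {"prior": 0.6, "study_pro": (0.9, 0.3), "study_con": (0.45, 0.5)},
    "B": {"prior": 0.4, "study_pro": (0.5, 0.45), "study_con": (0.3, 0.9)},
}

for name, agent in agents.items():
    p = agent["prior"]
    p = bayes_update(p, *agent["study_pro"])  # study showing a substantial increase
    p = bayes_update(p, *agent["study_con"])  # study showing a minor effect
    print(f"Agent {name}: prior {agent['prior']:.2f} -> posterior {p:.2f}")

# Agent A moves from 0.60 to about 0.80 and Agent B from 0.40 to about 0.20:
# both update by Bayes' rule on identical evidence, yet their views drift apart.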

This is also the kind of phenomenon Dorst explores in the paper “Rational Polarization,” published in The Philosophical Review in 2023; currently Dorst is working on a book about how people can take rational approaches but still wind up with different conclusions about the world. Dorst combines careful argumentation, mathematically structured descriptions of thinking, and even experimental evidence about cognition and people’s views, an increasing trend in philosophy.

“There’s something freeing about how methodologically open philosophy is,” says Dorst, a good-humored and genial conversationalist. “A question can be philosophical if it’s important and we don’t yet have settled methods for answering it, because in philosophy it’s always okay to ask what methods we should be using. It’s one of the exciting things about philosophy.”

For his research and teaching, Dorst was awarded tenure at MIT last year.

Show me your work

Dorst grew up in Missouri not exactly expecting to become a philosopher, but he began following in the academic footsteps of his older brother, who had become interested in philosophy.

“We didn’t know what philosophy was growing up, but once my brother started getting interested, there was a little bootstrapping, egging each other on, and having someone to talk to,” Dorst says.

As an undergraduate at Washington University in St. Louis, Dorst majored in philosophy and political science. By graduation, he had become sold on studying philosophy full-time, and was accepted into MIT’s program as a doctoral student.

At the Institute, he began specializing in the problems he now studies full-time, concerning how we know things and the extent to which we think rationally, while working with Roger White as his primary adviser, along with faculty members Robert Stalnaker and Kieran Setiya of MIT and Branden Fitelson of Northeastern University.

After earning his PhD, Dorst spent a year as a fellow at Oxford University’s Magdalen College, then joined the faculty of the University of Pittsburgh. He returned to MIT, this time as a faculty member, in 2022. Now settled in the philosophy department, Dorst tries to continue its tradition of engaged teaching with his students.

“They wrestle like everyone does with the conceptual and philosophical questions, but the speed with which you can get through technical things in a course is astounding,” Dorst says of MIT undergraduates.

New methods, time-honored issues

At present Dorst, who has published widely in philosophy journals, is grinding through the process of writing a book manuscript about the complexity of rationality. Chapter subjects include hindsight bias, confirmation bias, overconfidence, and polarization.

In the process, Dorst is also developing and conducting more experiments than ever before, to look at the way people process information and regard themselves as being rational.

“There’s this whole movement of experimental philosophy, using experimental data, being sensitive to cognitive science and being interested in connecting questions we have to it,” Dorst says.

In his case, he adds, “The big picture is trying to connect the theoretical work on rationality with the more empirical work about what leads to polarization.” The salience of the work, meanwhile, applies to a wide range of subjects: “People have been polarized forever over everything.”

As he explains all of this, Dorst looks up at the whiteboard in his office, where an extensive set of equations represents the output of some experiments and his ongoing effort to comprehend the results, as part of the book project. When he finishes, he hopes to have work broadly useful in philosophy, cognitive science, and other fields.

“We might use some different models in philosophy,” he says, “but let’s all try to figure out how people process information and regard arguments.”
