MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Four-peat: MIT students take first place in the 84th Putnam Math Competition

Fri, 03/01/2024 - 4:15pm

For the fourth time in the history of the annual William Lowell Putnam Mathematical Competition, and for the fourth year in a row, all five of the top spots in the contest, known as Putnam Fellows, came from a single school: MIT.

Putnam Fellows include three individuals who ranked in the top five in previous years — sophomores Papon Lapate and Luke Robitaille and junior Brian Liu — plus junior Ankit Bisain and first-year Jiangqi Dai. Each receives an award of $2,500.

MIT’s 2023 Putnam Team, made up of Bisain, Lapate, and Robitaille, also finished in first place — MIT’s eighth first-place win in the past 10 competitions. Teams are based on the three top scorers from each institution. The institution with the first-place team receives a $25,000 award, and each team member receives $1,000.

The competition's top-scoring woman, first-year Isabella Zhu, received the Elizabeth Lowell Putnam Prize, which includes a $1,000 award. She is the seventh MIT student to receive this honor since the award began in 1992.

In total, 68 of the top 100 scorers on the exam, administered Dec. 2, 2023, were MIT students. Beyond the top five scorers, MIT students took eight of the next 11 spots (each awarded $1,000), seven of the next 10 after that (each awarded $250), and 48 out of a total of 75 honorable mentions.

The contest also listed 29 MIT students who finished in the 101-200 spots, which means a total of 97 of the top 200 Putnam participants — nearly half — were MIT undergraduates. Another 52 MIT students finished among the 201-500 participants.

“I am incredibly proud of our students’ amazing effort and performance at the Putnam Competition,” says associate professor of mathematics Yufei Zhao ’10, PhD ’15. Zhao is also a three-time Putnam Fellow.   

This exam is considered the most prestigious university-level mathematics competition in the United States and Canada. MIT students filled Walker Memorial in December to take the notoriously difficult exam; while a perfect score is 120, the median score this year was just 10 points. Even simply showing up for the six-hour exam drew applause from the Department of Mathematics.

"Beyond the truly stellar achievements of our undergraduate population, it is also amazing to see the participation rate, another sign that MIT students love mathematics!" says Professor Michel Goemans, head of the MIT Department of Mathematics.

“Our performance is historically unprecedented and astonishing,” says MIT Math Community and Outreach Officer Michael King, who has also taken the exam. “The atmosphere in the testing room, with hundreds of students wrestling intensely with challenging problems, was amazing. Any student who participated, whether they made some progress on one problem or completely solved many, should be celebrated.”

There are several ways that students can prepare for the grueling test. The Undergraduate Mathematics Association hosts fun Putnam practice events, and Zhao teaches class 18.A34 (Mathematical Problem Solving), known as the Putnam Seminar, which brings together first-year students who are interested in the annual competition. Zhao notes that his seminar, and the competition in general, also helps new students to form a supportive community. 

The math department offers other ways to encourage students to bond over their love of problem-solving, such as Pi Day and Puzzle Nights. “MIT is truly a unique place to be a math major,” says Zhao.

Half of the top scorers are alumni of another STEM-student magnet, MIT math’s PRIMES (Program for Research in Mathematics, Engineering and Science) high school outreach program. Three of this year’s Putnam Fellows (Bisain, Liu, and Robitaille) are PRIMES alumni, as are four of the next top 11, and six out of the next 10 winners, along with many of the students receiving honorable mentions.

“Every year, former PRIMES students take a prominent place among Putnam winners,” says Pavel Etingof, a math professor who is also PRIMES’s chief research advisor. “For the third year in a row, three out of five Putnam Fellows are PRIMES alumni, all of them from MIT. Through PRIMES, MIT recruits the best mathematical talent in the nation.”

Many of the Putnam competition officials have MIT ties, including the Putnam Problems Committee’s Karl Mahlburg, a 2006 MIT math postdoc, and Greta Panova ’05; and among those contributing additional competition problems were math professor and former MIT Putnam coach Richard Stanley, Gabriel Drew Carroll PhD ’12,  and Darij Grinberg PhD ’16.

First-year MIT students gain hands-on research experience in supportive peer community

Fri, 03/01/2024 - 3:40pm

During MIT's Independent Activities Period (IAP) this January, first-year students interested in civil and environmental engineering (CEE) participated in a four-week undergraduate research opportunities program known as the mini-UROP (1.097). The six-unit subject pairs first-year students with a CEE graduate student or postdoc mentor, providing them with an inside look at the interdisciplinary research being conducted in the department. Overall, eight labs in the department opened their doors to the 2024 cohort, who were eager to take advantage of the opportunity to collaborate with current students and build a community around their interests.

“The mini-UROP presents an opportunity for first-year students to experience the diverse climate and sustainability research happening in our department,” says CEE department head and JR East Professor Ali Jadbabaie. “Fostering hands-on experiences in a collaborative, supportive educational environment is central to our mission of preparing students with the skills needed to positively shape the future of our society, systems, and planet.”

The mini-UROP also benefits the graduate students and postdocs who take on the role of mentor. Mentor support is a key component to completing a successful mini-UROP project and requires graduate students and postdocs to hone their leadership and teaching skills.

“I’m always interested in mentoring undergraduate students and in having someone help me with my project,” says postdoc and mentor Yue Hu. “Participating in this project made me excited that my research attracted undergraduates’ interest.”

Guiding students through interactive workshops

Preparation for this year’s mini-UROP began at the end of November, when participants attended the Lightning Lectures, an event that served as an opportunity for the mentors to give lightning-fast pitches on their research projects. First-year students then ranked the projects that they were interested in working on and were matched according to their preferences.

The interdisciplinary nature of the department’s research offered participants a wide range of projects to work on, from redefining autonomous vehicle deployment to mitigating the effects of drought on crops. Once each of the 11 participants was matched to a project, the mini-UROP Kick-off Lunch brought students and mentors together and ensured each group had an open line of communication.

Throughout the duration of the mini-UROP, participants attended three workshops led by Jared Berezin, the manager of the Civil and Environmental Engineering Communication Lab (CEE Comm Lab). The communication lab is a free resource to undergraduates, graduates, and postdocs in the CEE community, providing one-on-one coaching and interactive workshops. Held on Fridays during IAP, the workshops helped students contextualize their research and ensure they were able to explain the scientific concept of their work during presentations.

“Students were fortunate to have research mentors in the lab, and my goal was to provide communication mentorship outside of the lab,” says Berezin. “Our weekly workshops focused on scientific communication strategies, but perhaps more importantly I’d prompt them to talk about their projects, ask questions, and brainstorm together. They really embraced the opportunity to foster a supportive peer community, which I think is a core part of the CEE experience.”

A significant challenge students face while completing the program is condensing their research down to a clear and concise two-minute presentation. To assist with this task, the third workshop featured a presentation by CEE Comm Lab fellow Matthew Goss, providing students with a preview of how their own presentations may take shape. Students also had the option to meet with Comm Lab fellows to practice presenting and get feedback.

“The final talks were impressive, and I was proud of the students for approaching both their research and communication challenges with such curiosity and thoughtfulness,” Berezin remarks.

Reinforcing research interests

Iraira Rivera Rojas, a first-year student interested in materials science and environmental engineering, worked with Yue Hu, a postdoc in Associate Professor Benedetto Marelli’s lab. Their project used biodegradable polymers, specifically silk fibroin, to make particles that can be used to encapsulate agrochemicals, lessening their negative impact on the environment. Regenerated from silk cocoons, the silk fibroins are used as a building block to revolutionize the agriculture and food industry.

“When I saw the project description, it was a mix of both of my interests,” Rojas says. “I thought it would be a good way to try out both fields.” While she is still deciding which course she will major in, she says that participating in the mini-UROP confirmed her interest in the field.

Working with mentor Jie Yun, a graduate student in Associate Professor David Des Marais’s lab, Sheila Nguyen and Ved Ganesh used biodiversity to increase crop drought resistance. Nguyen and Ganesh studied barley, oat, wheat, and Brachypodium, and compared how these plants grow under conditions of drought stress. Currently, a separate model must be trained for each plant species and each cell type. The project aimed to develop a machine learning model that can generalize across different plant species and cell types.

Vinn Nguyen and Diego Del Rio worked with mentor Cameron Hickert, a graduate student in Assistant Professor Cathy Wu’s Lab. Their project focused on making autonomous vehicles safer and more reliable, specifically where vehicles transition on and off highways. As self-driving cars gain popularity, reports of crashes and similar incidents demonstrate deficiencies in the current system. Nguyen and Del Rio sourced satellite imagery and applied computer vision techniques to investigate the safety of these areas. The goal of their project was to design an infrastructure-supported approach to autonomous vehicles that allows passengers to comfortably work, play, and connect with partial autonomy.

Jordyn Goldson worked in the Terrer Lab with her mentor Kathryn Wheeler, a graduate student in Assistant Professor Cesar Terrer’s lab, on a project focused on plant senescence. As warmer temperatures lengthen plants’ growing period each year, total annual photosynthesis increases along with the amount of carbon plants remove from the atmosphere. Her project investigated whether model performance differs between predicting visually assessed senescence timing and remotely sensed timing. The findings can help advance knowledge of the mechanisms behind forest canopy color change and the ability of forests to capture more carbon by growing longer, mitigating climate change.

Based on the success of her mini-UROP project, Mairin O’Shaughnessy, who worked in Professor Heidi Nepf’s lab with graduate student Ernie Lee, will be continuing her research on “Computer Vision for Plant Density Analysis” in the spring.

“When Heidi and Ernie, the grad student advisor for the project, proposed continuing the project in spring, I was interested in continuing to learn more and explore vision processing for counting real plants,” O’Shaughnessy says.

Jennifer Espinoza, another student who worked in the Nepf Lab, plans to continue her research with graduate student James Brice on “Characterizing Flow Conditions.”

“One of the main things I loved most about working in the lab was the passion that my mentor, James, portrayed for his work, as well as his willingness to teach me anything without complaint,” says Espinoza. “Most of all, though, I became extremely passionate about my work because it has the potential to make an impact in not only society but the natural environment. The significance of my work and the welcoming working environment have prompted me to continue researching at Nepf Lab in the spring.”

Three Lincoln Laboratory inventions named IEEE Milestones

Fri, 03/01/2024 - 3:00pm

The Institute of Electrical and Electronics Engineers (IEEE) designated three historical MIT Lincoln Laboratory technologies as IEEE Milestones. The technologies are the Mode S air traffic control (ATC) radar beacon system, 193-nanometer (nm) photolithography, and the semiconductor laser. The latter recognition is shared by Lincoln Laboratory, General Electric, and IBM.

As the world's largest technical professional organization, the IEEE's mission is to "advance technology for the benefit of humanity." The Milestone program commemorates innovations developed at least 25 years ago that have done just that.

All three technologies are integral to everyday life. Anyone who has flown on commercial aircraft has benefited from Mode S, the system that air traffic controllers use to track planes. The integrated circuits that power modern computing and communication devices were manufactured using 193 nm photolithography. Perhaps most ubiquitous of all is the semiconductor laser — a micrometer-sized light-emitting device that has made possible high-speed internet, among many other technologies underpinning today's information society.

"MIT Lincoln Laboratory has been a leader in fostering innovations that were previously only considered possible in science fiction. The three IEEE Milestones presented are a testament to those accomplishments and a celebration of the diversity of ingenuity and teamwork that created these game-changing technologies," says Karen Panetta, vice chair of IEEE Boston Section, which presented the awards to Lincoln Laboratory at a ceremony on Feb. 2.

Lincoln Laboratory holds three previous IEEE Milestones for pioneering the use of packet networks for speech communications, for developing the nation's first air defense system, and for creating the Whirlwind high-speed digital computer in collaboration with MIT campus.

Tracking aircraft globally

The Mode S ATC radar beacon system was developed to address challenges facing the ATC beacon-radar system in use in the late 1960s. Commercial air traffic was growing quickly, causing interference between beacon replies and interrogations from ATC ground radars. This interference threatened to disrupt aircraft surveillance in high-density airspace.

Under Federal Aviation Administration (FAA) sponsorship, Lincoln Laboratory led the technology developments necessary to address this safety issue. The advanced communication architecture of Mode S allowed radars to select a specific aircraft to interrogate. To selectively communicate, the system design included improved aircraft transponders, each assigned a unique address code. Upgrades to radar antennas and signal processing also allowed Mode S to accurately determine airplane position with far fewer air-to-ground messages than required by prior systems. Mode S also provided a datalink capability that enabled other key safety systems, such as the Traffic Alert and Collision Avoidance System.
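The selective-interrogation idea can be illustrated with a toy sketch. This is not the actual Mode S protocol, just a minimal Python illustration of the addressing concept the article describes (Mode S transponder addresses are 24 bits; the specific addresses below are made up):

```python
# Toy sketch of Mode S-style selective interrogation: each transponder
# carries a unique 24-bit address, so a ground radar can elicit a reply
# from one specific aircraft instead of from every plane in its beam.

class Transponder:
    def __init__(self, icao_address: int):
        assert 0 <= icao_address < 2**24  # Mode S addresses are 24 bits
        self.address = icao_address

    def reply(self, interrogated_address: int):
        # Only the addressed aircraft answers; the rest stay silent,
        # avoiding the overlapping replies that plagued earlier beacons.
        if interrogated_address == self.address:
            return f"reply from {self.address:06X}"
        return None

fleet = [Transponder(a) for a in (0xA1B2C3, 0x4D5E6F, 0x7F8091)]
replies = [t.reply(0x4D5E6F) for t in fleet]
print([r for r in replies if r])  # exactly one aircraft responds
```

Contrast this with the earlier beacon system, in which every transponder in the radar's beam replied to every interrogation.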

Today, Mode S is a worldwide industry standard. An estimated 100,000 aircraft are equipped with Mode S transponders, and more than 900 Mode S radars are deployed across the globe. The technology is also the foundation for the FAA's newest ATC surveillance system, which allows continuous flight tracking independent of ground radars by using aircraft-broadcast position and velocity information.

"This technology touches everybody who flies, every time they fly, for the entire duration of their flight," says Wesley Olson, a group leader in the laboratory's Homeland Protection and Air Traffic Control Division, where Mode S was first envisioned. "If it wasn't for Mode S, we would have a very different air transportation system today, one that would be far less efficient and far less safe."

Powering the microelectronics industry

The 193 nm projection photolithography technique has enabled the fabrication of every chip in every laptop, smartphone, military system, and data center for the past 20 years.

Photolithography uses light to print tiny patterns onto a silicon chip. The patterns are projected over a silicon wafer, which is coated with a chemical that changes its solubility when exposed to light. The soluble parts are etched out, leaving behind tiny structures that become the transistors and other devices on the chip. 

Shorter wavelengths of light allow for printing smaller features, enabling more densely packed chips. By the 1980s, the accepted wisdom in the industry was that 248 nm was the shortest wavelength possible for photolithography.
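The wavelength-to-feature-size relationship can be sketched with the standard Rayleigh resolution criterion, CD = k1 · λ / NA. The k1 and numerical-aperture values below are illustrative assumptions for a back-of-the-envelope comparison, not figures from Lincoln Laboratory's systems:

```python
# Rough sketch of why a shorter exposure wavelength prints smaller
# features, using the Rayleigh criterion CD = k1 * wavelength / NA.
# k1 and NA here are illustrative assumptions, not historical values.

def min_feature_nm(wavelength_nm: float, k1: float = 0.4,
                   numerical_aperture: float = 0.6) -> float:
    """Approximate smallest printable feature for a given wavelength."""
    return k1 * wavelength_nm / numerical_aperture

for wl in (248, 193):
    print(f"{wl} nm light -> ~{min_feature_nm(wl):.0f} nm features")
```

Under these assumed optics, moving from 248 nm to 193 nm shrinks the minimum printable feature proportionally, which is the leverage the article describes.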

Despite widespread skepticism and technical obstacles, Lincoln Laboratory pioneered photolithography at the 193 nm wavelength, fabricating the world's first microelectronic devices using the technique. The first-ever 193 nm projection system was installed at the laboratory in 1993. Soon after, the laboratory opened its doors to industrial partners to guide 193 nm semiconductor manufacturing and pave the way toward its widespread adoption. Today, it is the industry's mainstream technique and has enabled increasingly powerful integrated circuits.

"Photolithography at 193 nm has enabled the microelectronics industry to continue its path of miniaturization as charted by Moore's law, thus impacting every aspect of our increasingly digital lives. It is also a prime example of the impact that close collaborations between Lincoln Laboratory and industrial partners have had on society," says Mordechai Rothschild, who was one of the key developers of the 193 nm technique and today is a principal staff member in the Advanced Technology Division.

Lighting up a world of new technologies

In fall 1962, General Electric, IBM, and Lincoln Laboratory each independently reported the first demonstrations of the semiconductor laser. In the 62 years since, it has become the most widespread laser in the world and a foundational element in a vast range of technologies: DVDs, CDs, computer mice, laser pointers, barcode scanners, medical imagers, and printers, to name a few. However, its greatest impact is arguably in communications. Every second, a semiconductor laser encodes information onto light that is transmitted through fiber-optic cables across oceans and into many homes, forming the backbone of the internet.

While the laser itself was invented a few years earlier, in 1960, the semiconductor type was exceptional because it realized all laser elements — light generation and amplification, lenses, and mirrors — within a piece of semiconducting material no bigger than a grain of rice. When injected with electrical current, the material is extremely efficient at converting the electrical energy to light. These attributes attracted the imagination of scientists and engineers worldwide.

"I'm pretty sure that we wouldn't be streaming movies to our homes or searching for the best restaurants from our phones without the low cost and manufacturability of semiconductor lasers," says Paul Juodawlkis, an expert in photonic devices and integrated circuits, and leader of the laboratory's Quantum Information and Integrated Nanosystems Group. "It's great to know that Lincoln Laboratory has played an important role in advancing this technology for government and commercial applications for the past 60-plus years and is poised to continue doing so in the future."

Honoring inventors and their legacy

The 2024 IEEE President-elect Kathleen Kramer presented the three awards to Lincoln Laboratory Director Eric Evans during the dedication ceremony. The ceremony was held in the auditorium at Lincoln Laboratory in Lexington, Massachusetts. Evans was joined on stage by inventors or their descendants to receive each plaque. Many Lincoln Laboratory staff and retirees who contributed to these innovations were also in attendance.

Vincent Orlando, who devoted his 50-year career at the laboratory to developing Mode S technology, joined Evans to accept that award. Mordechai Rothschild and David Shaver unveiled the 193 nm photolithography plaque. Both were lead developers of that technology.

For some, the ceremony was a touching celebration of their parent's legacy, and a return to fond memories. Richard Rediker, a son of semiconductor laser inventor Robert Rediker, recalled playing in a lab as a child with his father more than 60 years ago, the last time he visited Lincoln Laboratory. He accepted the semiconductor laser plaque alongside Susan Zeiger and Robert Lax, children of co-inventors Herbert Zeiger and Benjamin Lax, respectively.

"It was so rewarding to meet the other children of my father's colleagues and to fully appreciate what the inventions of our fathers mean to society today. Although my father passed away five years ago, this ceremony brought him back to life for an afternoon," says Rediker, adding that it was an experience he will never forget.

Likewise, these technologies have left an indelible mark on the world.

"By celebrating the pride and prestige of our profession's contributions to history, we demonstrate how engineers, scientists, and technologists have contributed not only to our local communities, but also to our global community," Kramer said, before presenting the plaques. "It is my pleasure to recognize these pioneering events and people behind them. They serve as landmarks in the progress of technology and civilization."

A careful rethinking of the Iraq War

Fri, 03/01/2024 - 12:00am

The term “fog of war” expresses the chaos and uncertainty of the battlefield. Often, it is only in hindsight that people can grasp what was unfolding around them.

Now, additional clarity about the Iraq War has arrived in the form of a new book by MIT political scientist Roger Petersen, which dives into the war’s battlefield operations, political dynamics, and long-term impact. The U.S. launched the Iraq War in 2003 and formally wrapped it up in 2011, but Petersen analyzes the situation in Iraq through the current day and considers what the future holds for the country.

After a decade of research, Petersen identifies four key factors for understanding Iraq’s situation. First, the U.S. invasion created chaos and a lack of clarity in terms of the hierarchy among Shia, Sunni, and Kurdish groups. Second, given these conditions, organizations that comprised a mix of militias, political groups, and religious groups came to the fore and captured elements of the new state the U.S. was attempting to set up. Third, by about 2018, the Shia groups became dominant, establishing a hierarchy, and along with that dominance, sectarian violence has fallen. Finally, the hybrid organizations established many years ago are now highly integrated into the Iraqi state.

Petersen has also come to believe two things about the Iraq War are not fully appreciated. One is how widely U.S. strategy varied over time in response to shifting circumstances.

“This was not one war,” says Petersen. “This was many different wars going on. We had at least five strategies on the U.S. side.”

And while the expressed goal of many U.S. officials was to build a functioning democracy in Iraq, the intense factionalism of Iraqi society led to further military struggles, between and among religious and ethnic groups. Thus, U.S. military strategy shifted as this multisided conflict evolved.

“What really happened in Iraq, and the thing the United States and Westerners did not understand at first, is how much this would become a struggle for dominance among Shias, Sunnis, and Kurds,” says Petersen. “The United States thought they would build a state, and the state would push down and penetrate society. But it was society that created militias and captured the state.”

Attempts to construct a well-functioning state, in Iraq or elsewhere, must confront this factor, Petersen adds. “Most people think in terms of groups. They think in terms of group hierarchies, and they’re motivated when they believe their own group is not in a proper space in the hierarchy. This is the emotion of resentment. I think this is just human nature.”

Petersen’s book, “Death, Dominance, and State-Building: The U.S. in Iraq and the Future of American Military Intervention,” is published today by Oxford University Press. Petersen is the Arthur and Ruth Sloan Professor of Political Science at MIT and a member of the Security Studies Program based at MIT’s Center for International Studies.

Research on the ground

Petersen spent years interviewing people who were on the ground in Iraq during the war, from U.S. military personnel to former insurgents to regular Iraqi citizens, while extensively analyzing data about the conflict.

“I didn’t really come to conclusions about Iraq until six or seven years of applying this method,” he says.

Ultimately, one core fact about the country heavily influenced the trajectory of the war. Iraq’s Sunni Muslims made up about 20 percent or less of the country’s population but had been politically dominant before the U.S. took military action. After the U.S. toppled former dictator Saddam Hussein, it created an opening for the Shia majority to grasp more power.

“The United States said, ‘We’re going to have democracy and think in individual terms,’ but this is not the way it played out,” Petersen says. “The way it played out was, over the years, the Shia organizations became the dominant force. The Sunnis and Kurds are now basically subordinate within this Shia-dominated state. The Shias also had advantages in organizing violence over the Sunnis, and they’re the majority. They were going to win.”

As Petersen details in the book, a central unit of power became the political militia, based on ethnic and religious identification. One Shia militia, the Badr Organization, had trained professionally for years in Iran. The local Iraqi leader Moqtada al-Sadr could recruit Shia fighters from among the 2 million people living in the Sadr City slum. And no political militia wanted to back a strong multiethnic government.

“They liked this weaker state,” Petersen says. “The United States wanted to build a new Iraqi state, but what we did was create a situation where multiple and large Shia militia make deals with each other.”

A captain’s war

In turn, these dynamics meant the U.S. had to shift military strategies numerous times, occasionally in high-profile ways. The five strategies Petersen identifies are clear, hold, build (CHB); decapitation; community mobilization; homogenization; and war-fighting.

“The war from the U.S. side was highly decentralized,” Petersen says. Military captains, who typically command about 140 to 150 soldiers, had fairly wide latitude in how they chose to fight.

“It was a captain’s war in a lot of ways,” Petersen adds.

The point is emphatically driven home in one chapter, “Captain Wright goes to Baghdad,” co-authored with Col. Timothy Wright PhD ’18, who wrote his MIT political science dissertation based on his experience in company command during the surge period.

As Petersen also notes, drawing on government data, the U.S. managed to suppress violence fairly effectively at times, particularly before 2006 and after 2008. “The professional soldiers tried to do a good job, but some of the problems they weren’t going to solve,” Petersen says.

Still, all of this raises a conundrum. If trying to start a new state in Iraq was always likely to lead to an increase in Shia power, is there really much the U.S. could have done differently?

“That’s a million-dollar question,” Petersen says.

Perhaps the best way to engage with it, Petersen notes, is to recognize the importance of studying how factional groups grasp power through use of violence, and how that emerges in society. It is a key issue running throughout Petersen’s work, and one, he notes, that has often been studied by his graduate students in MIT’s Security Studies Program.

“Death, Dominance, and State-Building” has received praise from foreign-policy scholars. Paul Staniland, a political scientist at the University of Chicago, has said the work combines “intellectual creativity with careful attention to on-the-ground dynamics,” and is “a fascinating macro-level account of the politics of group competition in Iraq. This book is required reading for anyone interested in civil war, U.S. foreign policy, or the politics of violent state-building."

Petersen, for his part, allows that he was pleased when one marine who served in Iraq read the manuscript in advance and found it interesting.

“He said, ‘This is good, and it’s not the way we think about it,’” Petersen says. “That’s my biggest compliment, to have a practitioner say it made them think. If I can get that kind of reaction, I’ll be pleased.”

Startup accelerates progress toward light-speed computing

Fri, 03/01/2024 - 12:00am

Our ability to cram ever-smaller transistors onto a chip has enabled today’s age of ubiquitous computing. But that approach is finally running into limits, with some experts declaring an end to Moore’s Law and a related principle known as Dennard scaling.

Those developments couldn’t be coming at a worse time. Demand for computing power has skyrocketed in recent years thanks in large part to the rise of artificial intelligence, and it shows no signs of slowing down.

Now Lightmatter, a company founded by three MIT alumni, is continuing the remarkable progress of computing by rethinking the lifeblood of the chip. Instead of relying solely on electricity, the company also uses light for data processing and transport. The company’s first two products, a chip specializing in artificial intelligence operations and an interconnect that facilitates data transfer between chips, use both photons and electrons to drive more efficient operations.

“The two problems we are solving are ‘How do chips talk?’ and ‘How do you do these [AI] calculations?’” Lightmatter co-founder and CEO Nicholas Harris PhD ’17 says. “With our first two products, Envise and Passage, we’re addressing both of those questions.”

In a nod to the size of the problem and the demand for AI, Lightmatter raised just north of $300 million in 2023 at a valuation of $1.2 billion. Now the company is demonstrating its technology with some of the largest technology companies in the world in hopes of reducing the massive energy demand of data centers and AI models.

"We’re going to enable platforms on top of our interconnect technology that are made up of hundreds of thousands of next-generation compute units,” Harris says. “That simply wouldn’t be possible without the technology that we’re building.”

From idea to $100K

Prior to MIT, Harris worked at the semiconductor company Micron Technology, where he studied the fundamental devices behind integrated chips. The experience made him see how the traditional approach for improving computer performance — cramming more transistors onto each chip — was hitting its limits.

“I saw how the roadmap for computing was slowing, and I wanted to figure out how I could continue it,” Harris says. “What approaches can augment computers? Quantum computing and photonics were two of those pathways.”

Harris came to MIT to work on photonic quantum computing for his PhD under Dirk Englund, an associate professor in the Department of Electrical Engineering and Computer Science. As part of that work, he built silicon-based integrated photonic chips that could send and process information using light instead of electricity.

The work led to dozens of patents and more than 80 research papers in prestigious journals like Nature. But another technology also caught Harris’s attention at MIT.

“I remember walking down the hall and seeing students just piling out of these auditorium-sized classrooms, watching relayed live videos of lectures to see professors teach deep learning,” Harris recalls, referring to the artificial intelligence technique. “Everybody on campus knew that deep learning was going to be a huge deal, so I started learning more about it, and we realized that the systems I was building for photonic quantum computing could actually be leveraged to do deep learning.”

Harris had planned to become a professor after his PhD, but he realized he could attract more funding and innovate more quickly through a startup, so he teamed up with Darius Bunandar PhD ’18, who was also studying in Englund’s lab, and Thomas Graham MBA ’18. The co-founders successfully launched into the startup world by winning the 2017 MIT $100K Entrepreneurship Competition.

Seeing the light

Lightmatter’s Envise chip takes the part of computing that electrons do well, like memory, and combines it with what light does well, like performing the massive matrix multiplications of deep-learning models.

“With photonics, you can perform multiple calculations at the same time because the data is coming in on different colors of light,” Harris explains. “In one color, you could have a photo of a dog. In another color, you could have a photo of a cat. In another color, maybe a tree, and you could have all three of those operations going through the same optical computing unit, this matrix accelerator, at the same time. That drives up operations per area, and it reuses the hardware that's there, driving up energy efficiency.”
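The parallelism Harris describes can be pictured in a few lines of code. This is a conceptual sketch only, not Lightmatter’s actual hardware or numbers: one matrix of weights is “programmed” into a shared optical unit, and inputs riding on different wavelengths pass through it simultaneously, each getting its own result.

```python
# Conceptual sketch (not Lightmatter's design): one optical matrix unit
# holds a single weight matrix W; data arriving on different wavelengths
# ("colors") passes through it at the same time, so one pass through the
# hardware yields one matrix-vector product per wavelength.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

W = [[1, 0], [0, 2]]  # weights programmed into the shared optical unit

# Three inputs riding on three colors of light (e.g., dog, cat, tree)
wavelengths = {"red": [1, 1], "green": [2, 3], "blue": [0, 5]}

# Each wavelength gets its own independent result from the same W,
# which is what drives up operations per area on a photonic chip.
results = {color: matvec(W, x) for color, x in wavelengths.items()}
print(results)  # {'red': [1, 2], 'green': [2, 6], 'blue': [0, 10]}
```

In electronics, running the three inputs would take three sequential passes through the multiplier; the wavelength trick reuses the same physical hardware for all of them at once.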

Passage exploits light’s latency and bandwidth advantages to link processors, much as fiber-optic cables use light to send data over long distances. It also enables chips as big as entire wafers to act as a single processor. Sending information between chips is central to running the massive server farms that power cloud computing and run AI systems like ChatGPT.

Both products are designed to bring energy efficiencies to computing, which Harris says are needed to keep up with rising demand without bringing huge increases in power consumption.

“By 2040, some predict that around 80 percent of all energy usage on the planet will be devoted to data centers and computing, and AI is going to be a huge fraction of that,” Harris says. “When you look at computing deployments for training these large AI models, they’re headed toward using hundreds of megawatts. Their power usage is on the scale of cities.”

Lightmatter is currently working with chipmakers and cloud service providers for mass deployment. Harris notes that because the company’s equipment runs on silicon, it can be produced by existing semiconductor fabrication facilities without massive changes in process.

The ambitious plans are designed to open up a new path forward for computing that would have huge implications for the environment and economy.

“We’re going to continue looking at all of the pieces of computers to figure out where light can accelerate them, make them more energy efficient, and faster, and we’re going to continue to replace those parts,” Harris says. “Right now, we’re focused on interconnect with Passage and on compute with Envise. But over time, we’re going to build out the next generation of computers, and it’s all going to be centered around light.”

Dealing with the limitations of our noisy world

Fri, 03/01/2024 - 12:00am

Tamara Broderick first set foot on MIT’s campus when she was a high school student, as a participant in the inaugural Women’s Technology Program. The monthlong summer academic experience gives young women a hands-on introduction to engineering and computer science.

What is the probability that she would return to MIT years later, this time as a faculty member?

That’s a question Broderick could probably answer quantitatively using Bayesian inference, a statistical approach to probability that tries to quantify uncertainty by continuously updating one’s assumptions as new data are obtained.
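The updating loop at the heart of Bayesian inference can be shown with a textbook Beta-Binomial model. This is a minimal illustration of the idea, not drawn from Broderick’s research, which tackles far richer models: start with a prior belief about a coin’s bias, then sharpen it as each batch of flips arrives.

```python
# Minimal Bayesian updating: a Beta(alpha, beta) prior over a coin's
# bias combined with Binomial flip data yields another Beta posterior,
# so each new batch of data just increments the two counts.
def update(alpha, beta, heads, tails):
    return alpha + heads, beta + tails

alpha, beta = 1, 1  # uniform prior: no initial opinion about the bias
for heads, tails in [(3, 1), (2, 2), (7, 1)]:  # batches of new data
    alpha, beta = update(alpha, beta, heads, tails)

posterior_mean = alpha / (alpha + beta)
print(round(posterior_mean, 3))  # 0.722 -- the belief after 16 flips
```

The posterior after one batch becomes the prior for the next, which is the “continuously updating one’s assumptions as new data are obtained” that the article describes.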

In her lab at MIT, the newly tenured associate professor in the Department of Electrical Engineering and Computer Science (EECS) uses Bayesian inference to quantify uncertainty and measure the robustness of data analysis techniques.

“I’ve always been really interested in understanding not just ‘What do we know from data analysis,’ but ‘How well do we know it?’” says Broderick, who is also a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society. “The reality is that we live in a noisy world, and we can’t always get exactly the data that we want. How do we learn from data but at the same time recognize that there are limitations and deal appropriately with them?”

Broadly, her focus is on helping people understand the confines of the statistical tools available to them and, sometimes, working with them to craft better tools for a particular situation.

For instance, her group recently collaborated with oceanographers to develop a machine-learning model that can make more accurate predictions about ocean currents. In another project, she and others worked with degenerative disease specialists on a tool that helps severely motor-impaired individuals utilize a computer’s graphical user interface by manipulating a single switch.

A common thread woven through her work is an emphasis on collaboration.

“Working in data analysis, you get to hang out in everybody’s backyard, so to speak. You really can’t get bored because you can always be learning about some other field and thinking about how we can apply machine learning there,” she says.

Hanging out in many academic “backyards” is especially appealing to Broderick, who struggled even from a young age to narrow down her interests.

A math mindset

Growing up in a suburb of Cleveland, Ohio, Broderick had an interest in math for as long as she can remember. She recalls being fascinated by the idea of what would happen if you kept adding a number to itself, starting with 1+1=2 and then 2+2=4.

“I was maybe 5 years old, so I didn’t know what ‘powers of two’ were or anything like that. I was just really into math,” she says.

Her father recognized her interest in the subject and enrolled her in a Johns Hopkins program called the Center for Talented Youth, which gave Broderick the opportunity to take three-week summer classes on a range of subjects, from astronomy to number theory to computer science.

Later, in high school, she conducted astrophysics research with a postdoc at Case Western Reserve University. In the summer of 2002, she spent four weeks at MIT as a member of the first class of the Women’s Technology Program.

She especially enjoyed the freedom offered by the program, and its focus on using intuition and ingenuity to achieve high-level goals. For instance, the cohort was tasked with building a device with LEGOs that they could use to biopsy a grape suspended in Jell-O.

The program showed her how much creativity is involved in engineering and computer science, and piqued her interest in pursuing an academic career.

“But when I got into college at Princeton, I could not decide — math, physics, computer science — they all seemed super-cool. I wanted to do all of it,” she says.

She settled on pursuing an undergraduate math degree but took all the physics and computer science courses she could cram into her schedule.

Digging into data analysis

After receiving a Marshall Scholarship, Broderick spent two years at Cambridge University in the United Kingdom, earning a master of advanced study in mathematics and a master of philosophy in physics.

In the UK, she took a number of statistics and data analysis classes, including her first class on Bayesian data analysis in the field of machine learning.

It was a transformative experience, she recalls.

“During my time in the U.K., I realized that I really like solving real-world problems that matter to people, and Bayesian inference was being used in some of the most important problems out there,” she says.

Back in the U.S., Broderick headed to the University of California at Berkeley, where she joined the lab of Professor Michael I. Jordan as a grad student. She earned a PhD in statistics with a focus on Bayesian data analysis. 

She decided to pursue a career in academia and was drawn to MIT by the collaborative nature of the EECS department and by how passionate and friendly her would-be colleagues were.

Her first impressions panned out, and Broderick says she has found a community at MIT that helps her be creative and explore hard, impactful problems with wide-ranging applications.

“I’ve been lucky to work with a really amazing set of students and postdocs in my lab — brilliant and hard-working people whose hearts are in the right place,” she says.

One of her team’s recent projects involves a collaboration with an economist who studies the use of microcredit, or the lending of small amounts of money at very low interest rates, in impoverished areas.

The goal of microcredit programs is to raise people out of poverty. Economists run randomized controlled trials in which some villages in a region receive microcredit and others do not. They then want to generalize the results, predicting the expected outcome of extending microcredit to villages outside the study.

But Broderick and her collaborators have found that results of some microcredit studies can be very brittle. Removing one or a few data points from the dataset can completely change the results. One issue is that researchers often use empirical averages, where a few very high or low data points can skew the results.

Using machine learning, she and her collaborators developed a method that can determine how many data points must be dropped to change the substantive conclusion of the study. With their tool, a scientist can see how brittle the results are.

“Sometimes dropping a very small fraction of data can change the major results of a data analysis, and then we might worry how far those conclusions generalize to new scenarios. Are there ways we can flag that for people? That is what we are getting at with this work,” she explains.
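The question this tool asks can be made concrete with a toy example. Broderick and collaborators’ published method uses fast influence approximations rather than brute force, and the data below are made up; the sketch only conveys the underlying question: how few observations must be dropped to flip the sign of an estimated effect?

```python
# Toy brittleness check on an empirical average: greedily remove the
# observations pulling hardest in the estimate's direction and count
# how many removals it takes to flip the sign of the mean.
def points_to_flip_sign(data):
    est = sum(data) / len(data)
    # Sort so the most influential points (for this sign) come first
    remaining = sorted(data, reverse=(est > 0))
    dropped = 0
    # Keep dropping while the mean still has the original sign
    while remaining and (sum(remaining) / len(remaining)) * est > 0:
        remaining.pop(0)
        dropped += 1
    return dropped

# 99 small negative outcomes plus one huge positive one: the positive
# "average effect" hinges entirely on a single observation.
data = [-0.1] * 99 + [20.0]
print(points_to_flip_sign(data))  # 1
```

A result that survives only until one data point is removed generalizes very differently from one that takes half the sample to overturn, which is exactly the brittleness the article describes.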

At the same time, she is continuing to collaborate with researchers in a range of fields, such as genetics, to understand the pros and cons of different machine-learning techniques and other data analysis tools.

Happy trails

Exploration is what drives Broderick as a researcher, and it also fuels one of her passions outside the lab. She and her husband enjoy collecting patches they earn by hiking all the trails in a park or trail system.

“I think my hobby really combines my interests of being outdoors and spreadsheets,” she says. “With these hiking patches, you have to explore everything and then you see areas you wouldn’t normally see. It is adventurous, in that way.”

They’ve discovered some amazing hikes they would never have known about, but they’ve also embarked on more than a few “total disaster hikes,” she says. Each hike, though, whether a hidden gem or an overgrown mess, offers its own rewards.

And just like in her research, curiosity, open-mindedness, and a passion for problem-solving have never led her astray.

Eight from MIT named 2024 Sloan Research Fellows

Thu, 02/29/2024 - 4:50pm

Eight members of the MIT faculty are among 126 early-career researchers honored with 2024 Sloan Research Fellowships by the Alfred P. Sloan Foundation. Representing the departments of Chemistry, Electrical Engineering and Computer Science, and Physics, and the MIT Sloan School of Management, the awardees will receive a two-year, $75,000 fellowship to advance their research.

“Sloan Research Fellowships are extraordinarily competitive awards involving the nominations of the most inventive and impactful early-career scientists across the U.S. and Canada,” says Adam F. Falk, president of the Alfred P. Sloan Foundation. “We look forward to seeing how fellows take leading roles shaping the research agenda within their respective fields.”

Jacob Andreas is an associate professor in the Department of Electrical Engineering and Computer Science (EECS) as well as the Computer Science and Artificial Intelligence Laboratory (CSAIL). His research aims to build intelligent systems that can communicate effectively using language and learn from human guidance. Andreas has been named a Kavli Fellow by the National Academy of Sciences, and has received the NSF CAREER award, MIT’s Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML, and NAACL.

Adam Belay, Jamieson Career Development Associate Professor of EECS in CSAIL, focuses on operating systems and networking, specifically developing practical and efficient methods for microsecond-scale distributed computing, which has many applications pertaining to resource management in data centers. His operating system, Caladan, reallocates server resources on a microsecond scale, resulting in high CPU utilization with low tail latency. Additionally, Belay has contributed to load balancing, and Application-Integrated Far Memory in OS designs.

Soonwon Choi, assistant professor of physics, is a researcher in the Center for Theoretical Physics, a division of the Laboratory for Nuclear Science. His research is focused on the intersection of quantum information and out-of-equilibrium dynamics of quantum many-body systems, specifically exploring the dynamical phenomena that occur in strongly interacting quantum many-body systems far from equilibrium and designing their novel applications for quantum information science. Recent contributions from Choi, recipient of the Inchon Award, include the development of simple methods to benchmark the quality of analog quantum simulators. His work allows for efficiently and easily characterizing quantum simulators, accelerating the goal of utilizing them in studying exotic phenomena in quantum materials that are difficult to synthesize in a laboratory.

Maryam Farboodi, the Jon D. Gruber Career Development Assistant Professor of Finance in the MIT Sloan School of Management, studies the economics of big data. She explores how big data technologies have changed trading strategies and financial outcomes, as well as the consequences of the emergence of big data for technological growth in the real economy. She also works on developing methodologies to estimate the value of data. Furthermore, Farboodi studies intermediation and network formation among financial institutions, and the spillovers to the real economy. She is also interested in how information frictions shape the local and global economic cycles.

Lina Necib PhD ’17, an assistant professor of physics and a member of the MIT Kavli Institute for Astrophysics and Space Research, explores the origin of dark matter through a combination of simulations and observational data that correlate the dynamics of dark matter with that of the stars in the Milky Way. She has investigated the local dynamic structures in the solar neighborhood using the Gaia satellite, contributed to building a catalog of local accreted stars using machine learning techniques, and discovered a new stream called Nyx. Necib is interested in employing Gaia in conjunction with other spectroscopic surveys to understand the dark matter profile in the local solar neighborhood, the center of the galaxy, and in dwarf galaxies.

Arvind Satyanarayan is an assistant professor of computer science and leader of the CSAIL Visualization Group. Satyanarayan uses interactive data visualization as a petri dish to study intelligence augmentation, asking how computational representations and software systems help amplify our cognition and creativity while respecting our agency. His work has been recognized with an NSF CAREER award, best paper awards at academic venues such as ACM CHI and IEEE VIS, and honorable mentions among practitioners including Kantar’s Information is Beautiful Awards. Systems he helped develop are widely used in industry, on Wikipedia, and in the Jupyter/Python data science communities.

Assistant professor of physics and a member of the Kavli Institute Andrew Vanderburg explores the use of machine learning, especially deep neural networks, in the detection of exoplanets, or planets that orbit stars other than the sun. He is interested in developing cutting-edge techniques and methods to discover new planets outside of our solar system, and studying the planets we find to learn their detailed properties. Vanderburg conducts astronomical observations using facilities on Earth like the Magellan Telescopes in Chile as well as space-based observatories like the Transiting Exoplanet Survey Satellite and the James Webb Space Telescope. Once the data from these telescopes are in hand, he and his team develop new analysis methods that help extract as much scientific value as possible.

Xiao Wang is a core institute member of the Broad Institute of MIT and Harvard, and the Thomas D. and Virginia Cabot Assistant Professor of Chemistry. She started her lab in 2019 to develop and apply new chemical, biophysical, and genomic tools to better probe and understand tissue function and dysfunction at the molecular level. Specifically, with in situ sequencing of nucleic acids as the core approach, Wang aims to develop high-resolution and highly multiplexed molecular imaging methods across multiple scales toward understanding the physical and chemical basis of brain wiring and function. She is the recipient of a Packard Fellowship, NIH Director’s New Innovator Award, and is a Searle Scholar.

Power when the sun doesn’t shine

Thu, 02/29/2024 - 4:45pm

In 2016, at the huge Houston energy conference CERAWeek, MIT materials scientist Yet-Ming Chiang found himself talking to a Tesla executive about a thorny problem: how to store the output of solar panels and wind turbines for long durations.        

Chiang, the Kyocera Professor of Materials Science and Engineering, and Mateo Jaramillo, a vice president at Tesla, knew that utilities lacked a cost-effective way to store renewable energy to cover peak levels of demand and to bridge the gaps during windless and cloudy days. They also knew that the scarcity of raw materials used in conventional energy storage devices needed to be addressed if renewables were ever going to displace fossil fuels on the grid at scale.

Energy storage technologies can facilitate access to renewable energy sources, boost the stability and reliability of power grids, and ultimately accelerate grid decarbonization. The global market for these systems — essentially large batteries — is expected to grow tremendously in the coming years. A study by the nonprofit LDES (Long Duration Energy Storage) Council pegs the long-duration energy storage market at between 80 and 140 terawatt-hours by 2040. “That’s a really big number,” Chiang notes. “Every 10 people on the planet will need access to the equivalent of one EV [electric vehicle] battery to support their energy needs.”

In 2017, one year after they met in Houston, Chiang and Jaramillo joined forces to co-found Form Energy in Somerville, Massachusetts, with MIT graduates Marco Ferrara SM ’06, PhD ’08 and William Woodford PhD ’13, and energy storage veteran Ted Wiley.

“There is a burgeoning market for electrical energy storage because we want to achieve decarbonization as fast and as cost-effectively as possible,” says Ferrara, Form’s senior vice president in charge of software and analytics.

Investors agreed. Over the next six years, Form Energy would raise more than $800 million in venture capital.

Bridging gaps

The simplest battery consists of an anode, a cathode, and an electrolyte. During discharge, electrons flow through an external circuit from the negative anode to the positive cathode, while ions travel through the electrolyte to balance the charge. During charge, an external voltage reverses the process: the anode becomes the positive terminal, the cathode becomes the negative terminal, and electrons move back to where they started. The materials used for the anode, cathode, and electrolyte determine the battery’s weight, power, and cost “entitlement,” which is the total cost at the component level.

During the 1980s and 1990s, the use of lithium revolutionized batteries, making them smaller, lighter, and able to hold a charge for longer. The storage devices Form Energy has devised are rechargeable batteries based on iron, which has several advantages over lithium. A big one is cost.

Chiang once declared to the MIT Club of Northern California, “I love lithium-ion.” Two of the four MIT spinoffs Chiang founded center on innovative lithium-ion batteries. But at hundreds of dollars a kilowatt-hour (kWh) and with a storage capacity typically measured in hours, lithium-ion was ill-suited for the use he now had in mind.

The approach Chiang envisioned had to be cost-effective enough to boost the attractiveness of renewables. Making solar and wind energy reliable enough for millions of customers meant storing it long enough to fill the gaps created by extreme weather, grid outages, lulls in the wind, and stretches of cloudy days.

To be competitive with legacy power plants, Chiang’s method had to come in at around $20 per kilowatt-hour of stored energy — one-tenth the cost of lithium-ion battery storage.

But how to transition from expensive batteries that store and discharge over a couple of hours to some as-yet-undefined, cheap, longer-duration technology?

“One big ball of iron”

That’s where Ferrara comes in. Ferrara has a PhD in nuclear engineering from MIT and a PhD in electrical engineering and computer science from the University of L’Aquila in his native Italy. In 2017, as a research affiliate at the MIT Department of Materials Science and Engineering, he worked with Chiang to model the grid’s need to manage renewables’ intermittency.

How intermittent depends on where you are. In the United States, for instance, there’s the windy Great Plains; the sun-drenched, relatively low-wind deserts of Arizona, New Mexico, and Nevada; and the often-cloudy Pacific Northwest.

Ferrara, in collaboration with Professor Jessika Trancik of MIT’s Institute for Data, Systems, and Society and her MIT team, modeled four representative locations in the United States and concluded that energy storage with capacity costs below roughly $20/kWh and discharge durations of multiple days would allow a wind-solar mix to provide cost-competitive, firm electricity in resource-abundant locations.

Now that they had a time frame, they turned their attention to materials. At the price point Form Energy was aiming for, lithium was out of the question. Chiang looked at plentiful and cheap sulfur. But a sulfur, sodium, water, and air battery had technical challenges.

Thomas Edison once used iron as an electrode, and iron-air batteries were first studied in the 1960s. They were too heavy to make good transportation batteries. But this time, Chiang and team were looking at a battery that sat on the ground, so weight didn’t matter. Their priorities were cost and availability.

“Iron is produced, mined, and processed on every continent,” Chiang says. “The Earth is one big ball of iron. We wouldn’t ever have to worry about even the most ambitious projections of how much storage the world might use by mid-century.” If Form ever moves into the residential market, “it’ll be the safest battery you’ve ever parked at your house,” Chiang laughs. “Just iron, air, and water.”

Scientists call it reversible rusting. While discharging, the battery takes in oxygen and converts iron to rust. Applying an electrical current converts the rusty pellets back to iron, and the battery “breathes out” oxygen as it charges. “In chemical terms, you have iron, and it becomes iron hydroxide,” Chiang says. “That means electrons were extracted. You get those electrons to go through the external circuit, and now you have a battery.”
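The “reversible rusting” Chiang describes can be written out explicitly. These are the textbook half-reactions for an alkaline iron-air cell on discharge, shown for illustration; the article does not detail Form Energy’s exact cell chemistry.

```latex
% Textbook alkaline iron-air half-reactions on discharge (illustrative)
\begin{aligned}
\text{anode:}\quad & \mathrm{Fe} + 2\,\mathrm{OH^-} \rightarrow \mathrm{Fe(OH)_2} + 2\,e^- \\
\text{cathode:}\quad & \mathrm{O_2} + 2\,\mathrm{H_2O} + 4\,e^- \rightarrow 4\,\mathrm{OH^-} \\
\text{overall:}\quad & 2\,\mathrm{Fe} + \mathrm{O_2} + 2\,\mathrm{H_2O} \rightarrow 2\,\mathrm{Fe(OH)_2}
\end{aligned}
```

Charging runs the overall reaction in reverse: the iron hydroxide (rust) gives up its oxygen, which the battery “breathes out,” and metallic iron is restored at the electrode.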

Form Energy’s battery modules are approximately the size of a washer-and-dryer unit. They are stacked in 40-foot containers, and several containers are electrically connected with power conversion systems to build storage plants that can cover several acres.

The right place at the right time

The modules don’t look or act like anything utilities have contracted for before.

That’s one of Form’s key challenges. “There is not widespread knowledge of needing these new tools for decarbonized grids,” Ferrara says. “That’s not the way utilities have typically planned. They’re looking at all the tools in the toolkit that exist today, which may not contemplate a multi-day energy storage asset.”

Form Energy’s customers are largely traditional power companies seeking to expand their portfolios of renewable electricity. Some are in the process of decommissioning coal plants and shifting to renewables.

Ferrara’s research pinpointing the need for very low-cost multi-day storage provides key data for power suppliers seeking to determine the most cost-effective way to integrate more renewable energy.

Using the same modeling techniques, Ferrara and team show potential customers how the technology fits in with their existing system, how it competes with other technologies, and how, in some cases, it can operate synergistically with other storage technologies.

“They may need a portfolio of storage technologies to fully balance renewables on different timescales of intermittency,” he says. But other than the technology developed at Form, “there isn’t much out there, certainly not within the cost entitlement of what we’re bringing to market.” Thanks to Chiang and Jaramillo’s chance encounter in Houston, Form has a several-year lead on other companies working to address this challenge.

In June 2023, Form Energy closed its biggest deal to date for a single project: Georgia Power’s order for a 15-megawatt/1,500-megawatt-hour system. That order brings Form’s total amount of energy storage under contracts with utility customers to 40 megawatts/4 gigawatt-hours. To meet the demand, Form is building a new commercial-scale battery manufacturing facility in West Virginia.

The fact that Form Energy is creating jobs in an area that lost more than 10,000 steel jobs over the past decade is not lost on Chiang. “And these new jobs are in clean tech. It’s super exciting to me personally to be doing something that benefits communities outside of our traditional technology centers.

“This is the right time for so many reasons,” Chiang says. He says he and his Form Energy co-founders feel “tremendous urgency to get these batteries out into the world.”

This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

Brain surgery training from an avatar

Thu, 02/29/2024 - 4:30pm

Benjamin Warf, a renowned neurosurgeon at Boston Children’s Hospital, stands in the MIT.nano Immersion Lab. More than 3,000 miles away, his virtual avatar stands next to Matheus Vasconcelos in Brazil as the resident practices delicate surgery on a doll-like model of a baby’s brain.

With a pair of virtual-reality goggles, Vasconcelos is able to watch Warf’s avatar demonstrate a brain surgery procedure before replicating the technique himself, asking questions of Warf’s digital twin as he works.

“It’s an almost out-of-body experience,” Warf says of watching his avatar interact with the residents. “Maybe it’s how it feels to have an identical twin?”

And that’s the goal: Warf’s digital twin bridged the distance, allowing him to be functionally in two places at once. “It was my first training using this model, and it had excellent performance,” says Vasconcelos, a neurosurgery resident at Santa Casa de São Paulo School of Medical Sciences in São Paulo, Brazil. “As a resident, I now feel more confident and comfortable applying the technique in a real patient under the guidance of a professor.”

Warf’s avatar arrived via a new project launched by medical simulator and augmented reality (AR) company EDUCSIM. The company is part of the 2023 cohort of START.nano, MIT.nano’s deep-tech accelerator that offers early-stage startups discounted access to MIT.nano’s laboratories.

In March 2023, Giselle Coelho, EDUCSIM’s scientific director and a pediatric neurosurgeon at Santa Casa de São Paulo and Sabará Children’s Hospital, began working with technical staff in the MIT.nano Immersion Lab to create Warf’s avatar. By November, the avatar was training future surgeons like Vasconcelos.

“I had this idea to create the avatar of Dr. Warf as a proof of concept, and asked, ‘What would be the place in the world where they are working on technologies like that?’” Coelho says. “Then I found MIT.nano.”

Capturing a surgeon

As a neurosurgery resident, Coelho was so frustrated by the lack of practical training options for complex surgeries that she built her own model of a baby brain. The physical model contains all the structures of the brain and can even bleed, “simulating all the steps of a surgery, from incision to skin closure,” she says.

She soon found that simulators and virtual reality (VR) demonstrations reduced the learning curve for her own residents. Coelho launched EDUCSIM in 2017 to expand the variety and reach of the training for residents and experts looking to learn new techniques.

Those techniques include a procedure to treat infant hydrocephalus that was pioneered by Warf, the director of neonatal and congenital neurosurgery at Boston Children’s Hospital. Coelho had learned the technique directly from Warf and thought his avatar might be the way for surgeons who couldn’t travel to Boston to benefit from his expertise.

To create the avatar, Coelho worked with Talis Reks, the AR/VR/gaming/big data IT technologist in the Immersion Lab.

“A lot of technology and hardware can be very expensive for startups to access as they start their company journey,” Reks explains. “START.nano is one way of enabling them to utilize and afford the tools and technologies we have at MIT.nano’s Immersion Lab.”

Coelho and her colleagues needed high-fidelity and high-resolution motion-capture technology, volumetric video capture, and a range of other VR/AR technologies to capture Warf’s dexterous finger motions and facial expressions. Warf visited MIT.nano on several occasions to be digitally “captured,” including performing an operation on the physical baby model while wearing special gloves and clothing embedded with sensors.

“These technologies have mostly been used for entertainment or VFX [visual effects] or CGI [computer-generated imagery],” says Reks. “But this is a unique project, because we’re applying it now for real medical practice and real learning.”

One of the biggest challenges, Reks says, was helping to develop what Coelho calls “holoportation” — transmitting the 3D, volumetric video capture of Warf in real time over the internet so that his avatar can appear in transcontinental medical training.

The Warf avatar has synchronous and asynchronous modes. The training that Vasconcelos received was in the asynchronous mode, where residents can observe the avatar’s demonstrations and ask it questions. The answers, delivered in a variety of languages, come from AI algorithms that draw from previous research and an extensive bank of questions and answers provided by Warf.

In the synchronous mode, Warf operates his avatar from a distance in real time, Coelho says. “He could walk around the room, he could talk to me, he could orient me. It’s amazing.”

Coelho, Warf, Reks, and other team members demonstrated a combination of the modes in a second session in late December. This demo consisted of volumetric live video capture between the Immersion Lab and Brazil, spatialized and visible in real-time through AR headsets. It significantly expanded upon the previous demo, which had only streamed volumetric data in one direction through a two-dimensional display.

Powerful impacts

Warf has a long history of training desperately needed pediatric neurosurgeons around the world, most recently through his nonprofit Neurokids. Remote and simulated training has been an increasingly large part of training since the pandemic, he says, although he doesn’t feel it will ever completely replace personal hands-on instruction and collaboration.

“But if in fact one day we could have avatars, like this one from Giselle, in remote places showing people how to do things and answering questions for them, without the cost of travel, without the time cost and so forth, I think it could be really powerful,” Warf says.

The avatar project is especially important for surgeons serving remote and underserved areas like the Amazon region of Brazil, Coelho says. “This is a way to give them the same level of education that they would get in other places, and the same opportunity to be in touch with Dr. Warf.”

One baby treated for hydrocephalus at a recent Amazon clinic had traveled by boat 30 hours for the surgery, according to Coelho.

Training surgeons with the avatar, she says, “can change reality for this baby and can change the future.”

Professor Edward Roberts, management scholar, champion of entrepreneurship, and “MIT icon,” dies at 88

Thu, 02/29/2024 - 10:30am

Edward B. Roberts ’58, SM ’58, SM ’60, PhD ’62, a visionary management professor who studied entrepreneurship while building a flourishing innovation ecosystem at MIT, died on Tuesday. He was 88 years old.

Over a remarkable seven-decade career at the Institute, Roberts was a prolific scholar and mentor who founded what is now the Martin Trust Center for MIT Entrepreneurship, a unique resource that has guided thousands of innovators as they have brought inventions and ideas to the market.

Roberts, the David Sarnoff Professor of Management of Technology at the MIT Sloan School of Management, was an energetic and encouraging presence who espoused the value of founding companies organized around a clear core idea, and of having significant new technology to apply to that idea. Generations of MIT students as well as faculty found a path forward for their startups as a result, benefitting from the structure of the Martin Trust Center and influenced by Roberts’ work.

“It is not too much to say that MIT’s flourishing entrepreneurial culture and global reputation as a source of influential start-ups grew from seeds Ed planted here 50 years ago,” MIT President Sally Kornbluth wrote in a letter to the MIT community yesterday.

Kornbluth called Roberts an “MIT icon” who was “always doing things no one had done before,” including “pioneering the very idea that entrepreneurship is a craft that can be systematically studied and successfully taught.”

In 2015 Roberts co-authored a report estimating that, as of 2014, MIT alumni had launched 30,200 active companies employing roughly 4.6 million people and generating roughly $1.9 trillion in annual revenues, a figure that would have ranked among the top 10 countries in the world in GDP.

“I have helped MIT to become a much more entrepreneurial place,” Roberts said — in something of an understatement — during a 2011 interview for an MIT Sloan oral history series.

Wide-ranging intellect, entrepreneurial spirit

Born in 1935, Roberts grew up in nearby Chelsea, Massachusetts, commuting to MIT as an undergraduate. Through his intellectual life as a student, as well as his later career as a scholar, Roberts personified the interdisciplinary possibilities of MIT.

Even while earning his undergraduate degree and a master’s degree in electrical engineering, Roberts was often taking two additional courses in economics and at MIT Sloan — despite, as he once recalled, the vocal concerns of his faculty advisor.

As a graduate student, by the late 1950s, Roberts had begun working with MIT faculty member Jay Forrester, a computing pioneer who had started developing many core ideas now integral to the study of system dynamics. Roberts became increasingly interested in the application of those ideas to management, also helping to create a framework for the field then known as industrial dynamics.

Assisted by the extra courses he had already been taking, Roberts earned his master’s in management from MIT Sloan, and then his PhD in economics, with his doctoral work focused on applying system dynamics to the management of research and development. It was MIT’s first doctoral dissertation in system dynamics.

Having joined MIT as a student, Roberts never left. He took a position as a faculty member at MIT Sloan and began working on wide-ranging and important studies of organizational practices in areas that included health care management, among other things.

Along the way, Roberts practiced what he advocated: In the 1960s, while still a junior faculty member, he co-founded his own firm, Pugh-Roberts Associates, which took the ideas of system dynamics to partners in the private sector and government. The firm still exists today, as the Sage Analysis Group.

The books Roberts co-authored early in his career include “The Persistent Poppy” (1975), examining the social and economic impact of heroin use, and “The Dynamics of Human Service Delivery” (1976), applying system dynamics analysis to the service sector.

Over time, Roberts’ work became increasingly focused on the components of successful entrepreneurship. His high-profile 1991 book, “Entrepreneurs in High Technology: Lessons from MIT and Beyond,” was based on a thorough examination of 113 companies founded by entrepreneurs, moving the field forward through its extensive empirical work.

That overlapped with Roberts’ work building a framework for encouraging entrepreneurship at MIT. The MIT Center for Entrepreneurship opened in 1990, providing an essential resource for potential firm founders at the Institute. As the center grew, Roberts himself became a vital figure to many budding entrepreneurs, a vigorous presence offering input based on expert analysis.

“Ed will always be remembered at MIT Sloan as a campus pillar,” wrote Georgia Perakis, interim John C. Head III Dean of MIT Sloan, along with Deputy Dean Michael Cusumano, in a letter to the MIT Sloan community on Tuesday. “He could be found walking the halls, visiting faculty, staff, students, and alumni at the school, and sharing with them parts of the history of MIT Sloan. He remained connected to generations of MIT entrepreneurs, offering advice and guidance as companies were launched. Those of us who knew Ed count ourselves lucky to have had his counsel and will miss him dearly.”

“Virtually everything today in the MIT entrepreneurial ecosystem, from classes to extracurricular activities, has some level of Ed’s DNA at its core,” says Bill Aulet, professor of the practice at MIT Sloan and the managing director of the Martin Trust Center for MIT Entrepreneurship. “But his impact also went well beyond MIT, where Ed Roberts was a generational figure in entrepreneurship as a field of research and instruction.”

MIT faculty who studied with Roberts also recall the impact his teaching had on their own careers.

“I, and many others in the system dynamics group here, took Ed’s course as a doctoral student and learned a great deal about how to work with policymakers and other leaders to increase the chances that the results of modeling would be implemented and have sustained beneficial impact in organizations,” recalls John Sterman, the Jay W. Forrester Professor of Management at MIT Sloan and a professor in the Institute for Data, Systems, and Society.

A celebration of MIT pioneers

In all, Roberts published 12 books and over 160 articles on entrepreneurship and management, with an audience both inside academia and in technology-driven growth industries.

Among his recent works, Roberts’ 2020 book, “Celebrating Entrepreneurs: How MIT Nurtured Pioneering Entrepreneurs Who Built Great Companies,” examined how the Institute developed its formal framework and culture of entrepreneurship across a variety of industries.

In addition to founding the Martin Trust Center for MIT Entrepreneurship, Roberts at one point chaired the MIT Management of Technology (MOT) program. He also co-created the MIT Sloan Entrepreneurship and Innovation Certificate program.

Roberts was also an active presence as a co-founder, board member, and investor in startups, including the health care information firm Medical Information Technology, Inc. In addition, Roberts co-founded a group of Zero Stage Capital equity funds, which provided early-stage capital for promising tech startups. All told, Roberts was a board member for more than 40 firms and a co-founder of 14 companies.

Roberts is survived by his wife, Nancy; his children, Valerie and her husband, Mark Friedman, Mitchell and his wife, Jill, and Andrea and her husband, Marc Foster; and nine grandchildren. Donations can be made to the Combined Jewish Philanthropies of Boston in his memory.

How cognition changes before dementia hits

Thu, 02/29/2024 - 12:00am

Individuals with mild cognitive impairment, especially of the “amnestic subtype” (aMCI), are at increased risk for dementia due to Alzheimer’s disease relative to cognitively healthy older adults. Now, a study co-authored by researchers from MIT, Cornell University, and Massachusetts General Hospital has identified a key deficit in people with aMCI, which relates to producing complex language.

This deficit is independent of the memory deficit that characterizes this group and may provide an additional “cognitive biomarker” to aid in early detection — the time when treatments, as they continue to be developed, are likely to be most effective.

The researchers found that while individuals with aMCI could appreciate the basic structure of sentences (syntax) and their meaning (semantics), they struggled with processing certain ambiguous sentences in which pronouns alluded to people not referenced in the sentences themselves.

“These results are among the first to deal with complex syntax and really get at the abstract computation that’s involved in processing these linguistic structures,” says MIT linguistics scholar Suzanne Flynn, co-author of a paper detailing the results.

The focus on subtleties in language processing, in relation to aMCI and its potential transition to dementia such as Alzheimer’s disease, is novel, the researchers say.

“Previous research has looked most often at single words and vocabulary,” says co-author Barbara Lust, a professor emerita at Cornell University. “We looked at a more complex level of language knowledge. When we process a sentence, we have to both grasp its syntax and construct a meaning. We found a breakdown at that higher level where you’re integrating form and meaning.”

The paper, “Disintegration at the syntax-semantics interface in prodromal Alzheimer’s disease: New evidence from complex sentence anaphora in amnestic Mild Cognitive Impairment (aMCI),” appears in the Journal of Neurolinguistics.

The paper’s authors are Flynn, a professor in MIT’s Department of Linguistics and Philosophy; Lust, a professor emerita in the Department of Psychology at Cornell and a visiting scholar and research affiliate in the MIT Department of Linguistics and Philosophy; Janet Cohen Sherman, an associate professor of psychology in the Department of Psychiatry at Massachusetts General Hospital and director of the MGH Psychology Assessment Center; and, posthumously, the scholars James Gair and Charles Henderson of Cornell University.

Anaphora and ambiguity

To conduct the study, the scholars ran experiments comparing the cognitive performance of aMCI patients to cognitively healthy individuals in separate younger and older control groups. The research involved 61 aMCI patients of Massachusetts General Hospital, with control group research conducted at Cornell and MIT.

The study pinpointed how well people process and reproduce sentences involving “anaphora.” In linguistics terms, this generally refers to the relation between a word and another form in the sentence, such as the use of “his” in the sentence, “The electrician repaired his equipment.” (The term “anaphora” has another related use in the field of rhetoric, involving the repetition of terms.)

In the study, the researchers ran a variety of sentence constructions past aMCI patients and the control groups. For instance, in the sentence, “The electrician fixed the light switch when he visited the tenant,” it is not actually clear if “he” refers to the electrician, or somebody else entirely. The “he” could be a family member, friend, or landlord, among other possibilities.

On the other hand, in the sentence, “He visited the tenant when the electrician repaired the light switch,” “he” and the electrician cannot be the same person. Alternatively, in the sentence, “The babysitter emptied the bottle and prepared the formula,” there is no reference at all to a person beyond the sentence.

Ultimately, aMCI patients performed significantly worse than the control groups when producing sentences with “anaphoric coreference,” the ones with ambiguity about the identity of the person referred to via a pronoun.

“It’s not that aMCI patients have lost the ability to process syntax or put complex sentences together, or lost words; it’s that they’re showing a deficit when the mind has to figure out whether to stay in the sentence or go outside it, to figure out who we’re talking about,” Lust explains. “When they didn’t have to go outside the sentence for context, sentence production was preserved in the individuals with aMCI whom we studied.”

Flynn notes: “This adds to our understanding of the deterioration that occurs in early stages of the dementia process. Deficits extend beyond memory loss. While the participants we studied have memory deficits, their memory difficulties do not explain our language findings, as evidenced by a lack of correlation in their performance on the language task and their performances on measures of memory. This suggests that in addition to the memory difficulties that individuals with aMCI experience, they are also struggling with this central aspect of language.”

Looking for a path to treatment

The current paper is part of an ongoing series of studies that Flynn, Lust, Sherman, and their colleagues have performed. The findings have implications for potentially steering neuroscience studies toward regions of the brain that process language, when investigating MCI and other forms of dementia, such as primary progressive aphasia. The study may also help inform linguistics theory concerning various forms of anaphora.

Looking ahead, the scholars say they would like to increase the size of the studies as part of an effort to continue to define how it is that diseases progress and how language may be a predictor of that.

“Our data is a small population but very richly theoretically guided,” Lust says. “You need hypotheses that are linguistically informed to make advances in neurolinguistics. There’s so much interest in the years before Alzheimer’s disease is diagnosed, to see if it can be caught and its progression stopped.”

As Flynn adds, “The more precise we can become about the neuronal locus of deterioration, that’s going to make a big difference in terms of developing treatment.”

Support for the research was provided by the Cornell University Podell Award, Shamitha Somashekar and Apple Corporation, Federal Formula Funds, Brad Hyman at Massachusetts General Hospital, the Cornell Bronfenbrenner Center for Life Course Development, the Cornell Institute for Translational Research on Aging, the Cornell Institute for Social Science Research, and the Cornell Cognitive Science Program.

The MIT Press announces Grant Program for Diverse Voices recipients for 2024

Wed, 02/28/2024 - 5:00pm

Launched in 2021, the Grant Program for Diverse Voices from the MIT Press provides direct support for new work by authors who bring excluded or chronically underrepresented perspectives to the fields in which the press publishes, which include the sciences, arts, and humanities.

Recipients are selected after submitting a book proposal and completing a successful peer review. Grants can support a variety of needs, including research travel, copyright permission fees, parental/family care, developmental editing, and other costs associated with the research and writing process. 

For 2024, the press will support five projects, including “Our Own Language: The Power of Kreyòl and Other Native Languages for Liberation and Justice in Haiti and Beyond,” by MIT professor of linguistics Michel DeGraff. The book will provide a much-needed reassessment of what learning might look like in Kreyòl-based, as opposed to French-language, classrooms in Haiti. 

Additionally, Kimberly Juanita Brown has been selected for “Black Elegies,” which will be the second book in the “On Seeing” series, which is published in simultaneous print and expanded digital formats. Brown says, “I am thrilled to be a recipient of the Grant Program for Diverse Voices. This award is an investment in the work that we do; work that responds to sites of inquiry that deserve illumination.”

“The recipients of this year’s grant program have produced exceptional proposals that surface new ideas, voices, and perspectives within their respective fields,” says Amy Brand, director and publisher, the MIT Press. “We are proud to lend our support and look forward to publishing these works in the near future.”

Recipients for 2024 include: 

“Black Elegies,” by Kimberly Juanita Brown

“Black Elegies” explores the art of mourning in contemporary cultural productions. Structured around the sensorial, the book moves through sight, sound, and touch in order to complicate what Okwui Enwezor calls the “national emergency of black grief.” Using fiction, photography, music, film, and poetry, “Black Elegies” delves into explorations of mourning that take into account the multiple losses sustained by black subjects, from forced migration and enslavement to bodily violations, imprisonment, and death. “Black Elegies” is in the “On Seeing” series and will be published in collaboration with Brown University Digital Publications.

Kimberly Juanita Brown is the inaugural director of the Institute for Black Intellectual and Cultural Life at Dartmouth College, where she is also an associate professor of English and creative writing. She is the author of “The Repeating Body: Slavery's Visual Resonance in the Contemporary” and “Mortevivum.”

“Our Own Language: The Power of Kreyòl and Other Native Languages for Liberation and Justice in Haiti and Beyond,” by Michel DeGraff

Kreyòl is the only language spoken by all Haitians in Haiti. Yet, most schoolchildren in Haiti are still being taught with manuals written in a language they do not speak — French. DeGraff challenges and corrects the assumptions and errors in the linguistics discipline that regard Creole languages as inferior, and puts forth what learning might look like in Kreyòl-based classrooms in Haiti. Published in a dual-language edition, “Our Own Language” will use Haiti and Kreyòl as a case study of linguistic and educational justice for human rights, liberation, sovereignty, and nation building.

Michel DeGraff is an MIT professor of linguistics, co-founder and co-director of the MIT-Haiti Initiative, founding member of Akademi Kreyòl Ayisyen, and in 2022 was named a fellow of the Linguistic Society of America. 

“Glitchy Vision: A Feminist History of the Social Photo,” by Amanda K. Greene

“Glitchy Vision” examines how new photographic social media cultures can change human bodies through the glitches they introduce into quotidian habits of feeling and seeing. Focusing on glitchiness provides new, needed vantages on the familiar by troubling the typical trajectories of bodies and technologies. Greene’s research operates at the nexus of visual culture, digital studies, and the health humanities, attending especially to the relationship between new media and chronic pain and vulnerability. Shining a light on an underserved area of analysis, her scholarship focuses on how illness, pain, and disability are encountered and “read” in everyday life.

Amanda Greene is a researcher at the Center for Bioethics and Social Sciences in Medicine at the University of Michigan.

“Data by Design: A Counterhistory of Data Visualization, 1789-1900,” by Silas Munro, et al.

“Data by Design: A Counterhistory of Data Visualization, 1789-1900” excavates the hidden history of data visualization through evocative argument and bold visual detail. Developed by the project team of Lauren F. Klein with Tanvi Sharma, Jay Varner, Nicholas Yang, Dan Jutan, Jianing Fu, Anna Mola, Zhou Fang, Marguerite Adams, Shiyao Li, Yang Li, and Silas Munro, “Data by Design” is both an interactive website and a lavishly illustrated book expertly adapted for print by Munro. The project interweaves cultural-critical analyses of historical visualization examples, culled from archival research, with new visualizations. 

Silas Munro is founder of the LGBTQ+ and BIPOC (Black, Indigenous, and people of color)-owned graphic design studio Polymode, based in Los Angeles and Raleigh, North Carolina. Munro is faculty co-chair for the Museum of Fine Arts Program in Graphic Design at the Vermont College of Fine Arts.

“Attention is Discovery: The Life and Work of Henrietta Leavitt,” by Anna Von Mertens

“Attention is Discovery” is a layered portrait of Henrietta Leavitt, the woman who laid the foundation for modern cosmology. Through her attentive study of the two-dimensional surface of thousands of glass plates, Leavitt revealed a way to calculate the distance to faraway stars and envision a previously inconceivable three-dimensional universe. In this compelling story of an underrecognized female scientist, Leavitt’s achievement, long subsumed under the headlining work of Edwin Hubble, receives its due spotlight. 

Anna Von Mertens received her MFA from the California College of the Arts and her BA from Brown University.

3 Questions: Shaping the future of work in an age of AI

Wed, 02/28/2024 - 4:40pm

The MIT Shaping the Future of Work Initiative, co-directed by MIT professors Daron Acemoglu, David Autor, and Simon Johnson, celebrated its official launch on Jan. 22. The new initiative’s mission is to analyze the forces that are eroding job quality and labor market opportunities for non-college workers and identify innovative ways to move the economy onto a more equitable trajectory. Here, Acemoglu, Autor, and Johnson speak about the origins, goals, and plans for their new initiative.

Q: What was the impetus for creating the MIT Shaping the Future of Work Initiative?

David Autor: The last 40 years have been increasingly difficult for the 65 percent of U.S. workers who do not have a four-year college degree. Globalization, automation, deindustrialization, de-unionization, and changes in policy and ideology have led to fewer jobs, declining wages, and lower job quality, resulting in widening inequality and shrinking opportunities.

The prevailing economic view has been that this erosion is inevitable — that the best we can do is focus on the supply side, educating workers to meet market demands, or perhaps providing some offsetting transfers to those who have lost employment opportunities.

Underpinning this fatalism is a paradigm which says that the factors shaping demand for work, such as technological change, are immutable: workers must adapt to these forces or be left behind. This assumption is false. The direction of technology is something we choose, and the institutions that shape how these forces play out (e.g., minimum wage laws, regulations, collective bargaining, public investments, social norms) are also endogenous.

To challenge a prevailing narrative, it is not enough to simply say that it is wrong — to truly change a paradigm we must lead by showing a viable alternative pathway. We must answer what sort of work we want and how we can make policies and shape technology that builds that future.

Q: What are your goals for the initiative?

Daron Acemoglu: The initiative's ambition is not modest. Simon, David, and I are hoping to make advances in new empirical work to interpret what has happened in the recent past and understand how different types of technologies could be impacting prosperity and inequality. We want to contribute to the emergence of a coherent framework that can inform us about how institutions and social forces shape the trajectory of technology, and that helps us to identify, empirically and conceptually, the inefficiencies and the misdirections of technology. And on this basis, we are hoping to contribute to policy discussions in which policy, institutions, and norms are part of what shapes the future of technology in a more beneficial direction. Last but not least, our mission is not just to do our own research, but to help build an ecosystem in which other, especially younger, researchers are inspired to explore these issues.

Q: What are your next steps?

Simon Johnson: David, Daron, and I plan for this initiative to move beyond producing insightful and groundbreaking research — our aim is to identify innovative pro-worker ideas that policymakers, the private sector, and civil society can use. We will continue to translate research into practice by regularly convening students, scholars, policymakers, and practitioners who are shaping the future of work — to include fortifying and diversifying the pipeline of emerging scholars who produce policy-relevant research around our core themes.

We will also produce a range of resources to bring our work to wider audiences. Last fall, David, Daron, and I wrote the initiative’s inaugural policy memo, entitled “Can we Have Pro-Worker AI? Choosing a path of machines in service of minds.” Our thesis is that, instead of focusing on replacing workers by automating job tasks as quickly as possible, the best path forward is to focus on developing worker-augmenting AI tools that enable less-educated or less-skilled workers to perform more expert tasks — as well as creating work, in the form of new productive tasks, for workers across skill and education levels.

As we move forward, we will also look for opportunities to engage globally with a wide range of scholars working on related issues.

How early-stage cancer cells hide from the immune system

Wed, 02/28/2024 - 11:00am

One of the immune system’s primary roles is to detect and kill cells that have acquired cancerous mutations. However, some early-stage cancer cells manage to evade this surveillance and develop into more advanced tumors.

A new study from MIT and Dana-Farber Cancer Institute has identified one strategy that helps these precancerous cells avoid immune detection. The researchers found that early in colon cancer development, cells that turn on a gene called SOX17 can become essentially invisible to the immune system.

If scientists could find a way to block SOX17 function or the pathway that it activates, this may offer a new way to treat early-stage cancers before they grow into larger tumors, the researchers say.

“Activation of the SOX17 program in the earliest innings of colorectal cancer formation is a critical step that shields precancerous cells from the immune system. If we can inhibit the SOX17 program, we might be better able to prevent colon cancer, particularly in patients that are prone to developing colon polyps,” says Omer Yilmaz, an MIT associate professor of biology, a member of MIT’s Koch Institute for Integrative Cancer Research, and one of the senior authors of the study.

Judith Agudo, a principal investigator at Dana-Farber Cancer Institute and an assistant professor at Harvard Medical School, is also a senior author of the study, which appears today in Nature. The paper’s lead author is MIT Research Scientist Norihiro Goto. Other collaborators include Tyler Jacks, a professor of biology and a member of MIT’s Koch Institute; Peter Westcott, a former Jacks lab postdoc who is now an assistant professor at Cold Spring Harbor Laboratory; and Saori Goto, an MIT postdoc in the Yilmaz lab.

Immune evasion

Colon cancer usually arises in long-lived cells called intestinal stem cells, whose job is to continually regenerate the lining of the intestines. Over their long lifetime, these cells can accumulate cancerous mutations that lead to the formation of polyps, a type of premalignant growth that can eventually become metastatic colon cancer.

To learn more about how these precancerous growths evade the immune system, the researchers used a technique they had previously developed for growing mini colon tumors in a lab dish and then implanting them into mice. In this case, the researchers engineered the tumors to express mutated versions of cancer-linked genes Kras, p53, and APC, which are often found in human colon cancers.

Once these tumors were implanted in mice, the researchers observed a dramatic increase in the tumors’ expression of SOX17. This gene encodes a transcription factor that is normally active only during embryonic development, when it helps to control development of the intestines and the formation of blood vessels.

The researchers’ experiments revealed that when SOX17 is turned on in cancer cells, it helps the cells to create an immunosuppressive environment. Among its effects, SOX17 prevents cells from synthesizing the receptor that normally detects interferon gamma, a molecule that is one of the immune system’s primary weapons against cancer cells. 

Without those interferon gamma receptors, cancerous and precancerous cells can simply ignore messages from the immune system, which would normally direct them to undergo programmed cell death.

“One of SOX17’s main roles is to turn off the interferon gamma signaling pathway in colorectal cancer cells and in precancerous adenoma cells. By turning off interferon gamma receptor signaling in the tumor cells, the tumor cells become hidden from T cells and can grow in the presence of an immune system,” Yilmaz says.

Without interferon gamma signaling, cancer cells also minimize their production of molecules called MHC proteins, which are responsible for displaying cancerous antigens to the immune system. The cells’ insensitivity to interferon gamma also prevents them from producing immune molecules called chemokines, which normally recruit T cells that would help destroy the cancerous cells.

Targeting SOX17

When the researchers generated colon tumor organoids with SOX17 knocked out, and implanted those into mice, the immune system was able to attack those tumors much more effectively. This suggests that preventing cancer cells from turning off SOX17 could offer a way to treat colon cancer in its earliest stages.

“Just by turning off SOX17 in fairly complex tumors, we were able to essentially obliterate the ability of these tumor cells to persist,” Goto says.

As part of their study, the researchers also analyzed gene expression data from patients with colon cancer and found that SOX17 tended to be highly expressed in early-stage colon cancers but dropped off as the tumors became more invasive and metastatic.

“We think this makes a lot of sense because as colorectal cancers become more invasive and metastatic, there are other mechanisms that create an immunosuppressive environment,” Yilmaz says. “As the colon cancer becomes more aggressive and activates these other mechanisms, then there’s less importance for SOX17.”

Transcription factors such as SOX17 are considered difficult to target using drugs, in part because of their disorganized structure, so the researchers now plan to identify other proteins that SOX17 interacts with, in hopes that it might be easier to block some of those interactions.

The researchers also plan to investigate what triggers SOX17 to turn on in precancerous cells.

The research was funded by the MIT Stem Cell Initiative via Fondation MIT, the National Institutes of Health/National Cancer Institute, and a Bridge Project grant from the Koch Institute and the Dana-Farber/Harvard Cancer Center.

Investigating and preserving Quechua

Wed, 02/28/2024 - 10:10am

Soledad Chango, a native of Ecuador and a graduate student in MIT’s Indigenous Language Initiative, began preparations for her Quechua course with a clear idea about its purpose.

“Our language matters,” she says. “It’s worth studying and spreading.”

Quechua at MIT, a new two-week introductory class hosted by MIT Global Languages during the Institute’s Independent Activities Period in January, introduced students to the basics of Kichwa, the Ecuadorian variant of Quechua, the most widely spoken Indigenous language family in the Americas. The class took an interactive approach, focusing on oral and written skills and emphasizing tasks based on familiar contexts. “I prepared conversations that reflect cultural values,” Chango says. 

Chango, a scholar of language acquisition, credited her advisor, MIT Linguistics professor Norvin Richards, and postdoc Cora Lesure with helping shape the course. Global Languages section head Per Urlaub helped ready the course for the classroom. “They helped me refine my ideas about what to teach and how to teach it,” she says.

Cultural immersion, value, and language acquisition

Because language can often be better understood when connected with its cultural context, Chango introduced students to the history, culture, and geography of the Andes mountains where the language’s speakers live, work, and play. Cultural discussions and interactions with artifacts were designed to help students understand the value of the endangered language.

“Every day, we dedicated time to individually review our writing and grammar skills,” says Isabel Naty Sanchez Taipe, a computer science and education student at Wellesley College and a cross-registered student and student researcher at MIT. “We practiced the pronunciation of new vocabulary and sentences out loud, and engaged in diverse group activities and games where we spoke Quechua as much as possible.” 

Chango sought to emphasize the importance of keeping Kichwa and Quechua alive. When endangered languages disappear, so do the communities and cultures from which they arose. 

“In 2014, I was investigating Indigenous language advancement, tracking changes and usage,” she says. “Research shows the youngest Indigenous people retain and value their native languages the least.” 

Multilingualism as a tool for improvement

Multilingualism can prove valuable both academically and professionally.

“I would definitely recommend that people explore languages taught in this manner,” says Prahlad Balaji Iyengar, a PhD student in electrical engineering and computer science who took the course. “I think this was a great opportunity for me to learn a new mode of thought.”

As Chango continues to review and refine the course, she looks to technology to both help retain Quechua’s distinctive traits and reverse its trajectory toward extinction. She wants to ensure languages like Kichwa find interested audiences outside of their native cultures.

“Technology can help spread the word and increase interest in Indigenous languages like Quechua,” she says. “I want to expand its reach from oral tradition and transmission and develop it so it supports quantifiable and replicable language instruction.”

Study unlocks nanoscale secrets for designing next-generation solar cells

Wed, 02/28/2024 - 5:00am

Perovskites, a broad class of compounds with a particular kind of crystal structure, have long been seen as a promising alternative or supplement to today’s silicon or cadmium telluride solar panels. They could be far more lightweight and inexpensive, and could be coated onto virtually any substrate, including paper or flexible plastic that could be rolled up for easy transport.

In their efficiency at converting sunlight to electricity, perovskites are becoming comparable to silicon, whose manufacture still requires long, complex, and energy-intensive processes. One big remaining drawback is longevity: They tend to break down in a matter of months to years, while silicon solar panels can last more than two decades. And their efficiency over large module areas still lags behind silicon. Now, a team of researchers at MIT and several other institutions has revealed ways to optimize efficiency and better control degradation, by engineering the nanoscale structure of perovskite devices.

The study reveals new insights on how to make high-efficiency perovskite solar cells, and also provides new directions for engineers working to bring these solar cells to the commercial marketplace. The work is described today in the journal Nature Energy, in a paper by Dane deQuilettes, a recent MIT postdoc who is now co-founder and chief science officer of the MIT spinout Optigon, along with MIT professors Vladimir Bulovic and Moungi Bawendi, and 10 others at MIT and in Washington state, the U.K., and Korea.

“Ten years ago, if you had asked us what would be the ultimate solution to the rapid development of solar technologies, the answer would have been something that works as well as silicon but whose manufacturing is much simpler,” Bulovic says. “And before we knew it, the field of perovskite photovoltaics appeared. They were as efficient as silicon, and they were as easy to paint on as it is to paint on a piece of paper. The result was tremendous excitement in the field.”

Nonetheless, “there are some significant technical challenges of handling and managing this material in ways we’ve never done before,” he says. But the promise is so great that many hundreds of researchers around the world have been working on this technology. The new study looks at a very small but key detail: how to “passivate” the material’s surface, changing its properties in such a way that the perovskite no longer degrades so rapidly or loses efficiency.

“The key is identifying the chemistry of the interfaces, the place where the perovskite meets other materials,” Bulovic says, referring to the places where different materials are stacked next to perovskite in order to facilitate the flow of current through the device.

Engineers have developed methods for passivation, for example by using a solution that creates a thin passivating coating. But they’ve lacked a detailed understanding of how this process works — which is essential to make further progress in finding better coatings. The new study “addressed the ability to passivate those interfaces and elucidate the physics and science behind why this passivation works as well as it does,” Bulovic says.

The team used some of the most powerful instruments available at laboratories around the world to observe the interfaces between the perovskite layer and other materials, and how they develop, in unprecedented detail. This close examination of the passivation coating process and its effects resulted in “the clearest roadmap as of yet of what we can do to fine-tune the energy alignment at the interfaces of perovskites and neighboring materials,” and thus improve their overall performance, Bulovic says.

While the bulk of a perovskite material is in the form of a perfectly ordered crystalline lattice of atoms, this order breaks down at the surface. There may be extra atoms sticking out or vacancies where atoms are missing, and these defects cause losses in the material’s efficiency. That’s where the need for passivation comes in.

“This paper is essentially revealing a guidebook for how to tune surfaces, where a lot of these defects are, to make sure that energy is not lost at surfaces,” deQuilettes says. “It’s a really big discovery for the field,” he says. “This is the first paper that demonstrates how to systematically control and engineer surface fields in perovskites.”

The common passivation method is to bathe the surface in a solution of a salt called hexylammonium bromide, a technique developed at MIT several years ago by Jason Jungwan Yoo PhD ’20, who is a co-author of this paper, that led to multiple new world-record efficiencies. By doing that “you form a very thin layer on top of your defective surface, and that thin layer actually passivates a lot of the defects really well,” deQuilettes says. “And then the bromine, which is part of the salt, actually penetrates into the three-dimensional layer in a controllable way.” That penetration helps to prevent electrons from losing energy to defects at the surface.

A single processing step thus produces both beneficial changes simultaneously. “It’s really beautiful because usually you need to do that in two steps,” deQuilettes says.

The passivation reduces the energy loss of electrons at the surface after they have been knocked loose by sunlight. These losses reduce the overall efficiency of the conversion of sunlight to electricity, so reducing the losses boosts the net efficiency of the cells.

That could rapidly lead to improvements in the materials’ efficiency in converting sunlight to electricity, he says. The recent efficiency records for a single perovskite layer, several of them set at MIT, have ranged from about 24 to 26 percent, while the maximum theoretical efficiency that could be reached is about 30 percent, according to deQuilettes.

An increase of a few percent may not sound like much, but in the solar photovoltaic industry such improvements are highly sought after. “In the silicon photovoltaic industry, if you’re gaining half of a percent in efficiency, that’s worth hundreds of millions of dollars on the global market,” he says. A recent shift in silicon cell design, essentially adding a thin passivating layer and changing the doping profile, provides an efficiency gain of about half of a percent. As a result, “the whole industry is shifting and rapidly trying to push to get there.” The overall efficiency of silicon solar cells has only seen very small incremental improvements for the last 30 years, he says.

The record efficiencies for perovskites have mostly been set in controlled laboratory settings with small postage-stamp-size samples of the material. “Translating a record efficiency to commercial scale takes a long time,” deQuilettes says. “Another big hope is that with this understanding, people will be able to better engineer large areas to have these passivating effects.”

There are hundreds of different kinds of passivating salts and many different kinds of perovskites, so the basic understanding of the passivation process provided by this new work could help guide researchers to find even better combinations of materials, the researchers suggest. “There are so many different ways you could engineer the materials,” he says.

“I think we are on the doorstep of the first practical demonstrations of perovskites in commercial applications,” Bulovic says. “And those first applications will be a far cry from what we’ll be able to do a few years from now.” He adds that perovskites “should not be seen as a displacement of silicon photovoltaics. It should be seen as an augmentation — yet another way to bring about more rapid deployment of solar electricity.”

“A lot of progress has been made in the last two years on finding surface treatments that improve perovskite solar cells,” says Michael McGehee, a professor of chemical engineering at the University of Colorado who was not associated with this research. “A lot of the research has been empirical with the mechanisms behind the improvements not being fully understood. This detailed study shows that treatments can not only passivate defects, but can also create a surface field that repels carriers that should be collected at the other side of the device. This understanding might help further improve the interfaces.”

The team included researchers at the Korea Research Institute of Chemical Technology, Cambridge University, the University of Washington in Seattle, and Sungkyunkwan University in Korea. The work was supported by the Tata Trust, the MIT Institute for Soldier Nanotechnologies, the U.S. Department of Energy, and the U.S. National Science Foundation.

Explained: Carbon credits

Wed, 02/28/2024 - 12:00am

One of the most contentious issues faced at the 28th Conference of the Parties (COP28) on climate change last December was a proposal for a U.N.-sanctioned market for trading carbon credits. Such a mechanism would allow nations and industries making slow progress in reducing their own carbon emissions to pay others to take emissions-reducing measures, such as improving energy efficiency or protecting forests.

Such trading systems have already grown to a multibillion-dollar market despite a lack of clear international regulations to define and monitor the claimed emissions reductions. During weeks of feverish negotiations, some nations, including the U.S., advocated for a somewhat looser approach to regulations in the interests of getting a system in place quickly. Others, including the European Union, advocated much tighter regulation, in light of a history of questionable or even counterproductive projects of this kind in the past. In the end, no agreement was reached on the subject, which will be revisited at a later meeting.

The concept seems simple enough: Offset emissions in one place by preventing or capturing an equal amount of emissions elsewhere. But implementing that idea has turned out to be far more complex and fraught with problems than many expected.

For example, projects that aim to preserve a section of forest — which can remove carbon dioxide from the air and sequester it in the soil — face numerous issues. Will the preservation of one parcel just lead to the clearcutting of an adjacent parcel? Would the preserved land have been left uncut anyway? And what if it ends up being destroyed by wildfire, drought, or insect infestation — all of which are expected to become more likely with climate change?

Similarly, projects that aim to capture carbon dioxide emissions and inject them into the ground are sometimes used to justify increasing the production of petroleum or natural gas, negating the intended climate mitigation of the process.

Several experts at MIT now say that the system could be effective, at least in certain circumstances, but it must be thoroughly evaluated and regulated.

Carbon removal, natural or mechanical

Sergey Paltsev, deputy director of MIT’s Joint Program on the Science and Policy of Global Change, co-led a study and workshop last year that included policymakers, industry representatives, and researchers. They focused on one kind of carbon offsets, those based on natural climate solutions — restoration or preservation of natural systems that not only sequester carbon but also provide other benefits, such as greater biodiversity. “We find a lot of confusion and misperceptions and misinformation, even about how you define the term carbon credit or offset,” he says.

He points out that there has been a lot of criticism of the whole idea of carbon offsets, “and that criticism is well-placed. I think that’s a very healthy conversation, to clarify what makes sense and what doesn’t make sense. What are the real actions versus what is greenwashing?”

He says that government-mandated and managed carbon trading programs in some places, including British Columbia and parts of Europe, have been somewhat effective because they have clear standards in place, whereas unregulated carbon credit systems have often been abused.

Charles Harvey, an MIT professor of civil and environmental engineering, should know, having been actively involved in both sides of the issue over the last two decades. He co-founded a company in 2008 that was the first private U.S. company to attempt to remove carbon dioxide from emissions on a commercial scale, a process called carbon capture and sequestration, or CCS. Such projects have been a major recipient of federal subsidies aimed at combatting climate change, but Harvey now says these are largely a waste of money and in most cases do not achieve their stated objective.

In fact, he says that according to industry sources, as of 2021 more than 90 percent of CCS projects in the U.S. have been used for the production of more fossil fuels — oil and natural gas. Here's how it works: Natural gas wells often produce methane mixed with carbon dioxide, which must be removed to produce a marketable natural gas. This carbon dioxide is then injected into oil wells to stimulate more production. So, the net effect is the creation of more total greenhouse gas emissions rather than less, explains Harvey, who recently received a grant from the Rockefeller Foundation to explore CCS projects and whether they can be made to contribute to true emissions reductions.

What went wrong with the ambitious startup CCS company Harvey co-founded? “What happened is that the prices of renewables and energy storage are now incredibly cheap,” he says. “It makes no sense to do this, ever, on power plants because honestly, fossil fuel power plants don’t even really make economic sense anymore.”

Where does Harvey see potential for carbon credits to work? One possibility is the preservation or restoration of tropical peatlands, which he has received another grant to study. These are vast areas of permanently waterlogged land in which dead plant matter —and the carbon it contains — remains in place because the water prevents the normal decomposition processes that would otherwise release the stored carbon back into the air.

While it is virtually impossible to quantify the amount of carbon stored in the soil of forest or farmland, in peatlands that’s easy to do because essentially all of the submerged material is carbon-based. Simply measuring changes in the elevation of such land, which can be done remotely by plane or satellite, gives a precise measure of how much carbon has been stored or released. When a patch of peat forest that has been clear-cut to build plantations or roads is reforested, the amount of carbon emissions that were prevented can be measured accurately.
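The elevation-based accounting Harvey describes reduces to simple arithmetic. The sketch below is a hypothetical illustration, not the researchers' method: the bulk density and carbon fraction values are assumptions chosen for the example, and real assessments would use site-specific measurements.

```python
def carbon_loss_tonnes(area_m2, subsidence_m, bulk_density_kg_m3=100.0,
                       carbon_fraction=0.5):
    """Approximate carbon released when a peat surface drops by subsidence_m.

    Assumes the lost volume was peat with the given dry bulk density and
    carbon fraction -- both are illustrative assumptions, not measured values.
    """
    volume_m3 = area_m2 * subsidence_m          # peat volume lost
    carbon_kg = volume_m3 * bulk_density_kg_m3 * carbon_fraction
    return carbon_kg / 1000.0                    # tonnes of carbon

# Example: one hectare (10,000 m^2) of peatland subsiding 5 cm
print(carbon_loss_tonnes(10_000, 0.05))
```

Because the elevation change can be measured remotely, the same calculation run in reverse gives the carbon retained when a drained peatland is rewetted and subsidence stops.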

Because of that potential for accurate documentation, protecting or restoring peat bogs can also be a good way to achieve meaningful offsets for carbon emissions elsewhere, Harvey says. Rewetting a previously drained peat forest can immediately counteract the release of its stored carbon and can keep it there as long as it is not drained again — something that can be verified using satellite data.

Paltsev adds that while such nature-based systems for countering carbon emissions can be a key component of addressing climate change, especially in very difficult-to-decarbonize industries such as aviation, carbon credits for such programs “shouldn’t be a replacement for our efforts at emissions reduction. It should be in addition.”

Criteria for meaningful offsets

John Sterman, the Jay W. Forrester Professor of Management at the MIT Sloan School of Management, has published a set of criteria for evaluating proposed carbon offset plans to make sure they would provide the benefits they claim. At present, “there’s no regulation, there’s no oversight” for carbon offsets, he says. “There have been many scandals over this.”

For example, one company that claimed to certify carbon offset projects was found to have such lax standards that the claimed offsets were often not real: there were multiple claims to protect the same piece of forest, and claims to protect land that was already legally protected.

Sterman’s proposed set of criteria goes by the acronym AVID+. “It stands for four principles that you have to meet in order for your offset to be legitimate: It has to be additional, verifiable, immediate, and durable,” he says. “And then I call it AVID+,” he adds, the “plus” being for plans that have additional benefits as well, such as improving health, creating jobs, or helping historically disadvantaged communities.

Offsets can be useful, he says, for addressing especially hard-to-abate industries such as steel or cement manufacturing, or aviation. But it is essential to meet all four of the criteria, or else emissions are not truly being offset. For example, planting trees today, while often a good thing to do, would take decades to offset emissions going into the atmosphere now, where they may persist for centuries — so that fails to meet the “immediate” requirement.

And protecting existing forests, while also desirable, is very hard to prove as being additional, because “that requires a counterfactual that you can never observe,” he says. “That’s where a lot of squirrely accounting and a lot of fraud comes in, because how do you know that the forest would have been cut down but for the offset?” In one well-documented case, he points out, a company tried to sell carbon offsets for a section of forest that was already an established nature preserve.

Are there offsets that can meet all the criteria and provide real benefits in helping to address climate change? Yes, Sterman and Harvey say, but they need to be evaluated carefully.

“My favorite example,” Sterman says, “is doing deep energy retrofits and putting solar panels on low-income housing.” These measures can help address the so-called landlord-tenant problem: If tenants typically pay the utility bills, landlords have little incentive to pay for efficiency improvements, and the tenants don’t have the capital to make such improvements on their own. “Policies that would make this possible are pretty good candidates for legitimate offsets, because they are additional — low-income households can’t afford to do it without assistance, so it’s not going to happen without a program. It’s verifiable, because you’ve got the utility bills pre and post.” They are also quite immediate, typically taking only a year or so to implement, and “they’re pretty durable,” he says.

Another example is a recent plan in Alaska that allows cruise ships to offset the emissions caused by their trips by paying into a fund that provides subsidies for Alaskan citizens to install heat pumps in their homes, thus preventing emissions from wood or fossil fuel heating systems. “I think this is a pretty good candidate to meet the criteria, certainly a lot better than much of what’s being done today,” Sterman says.

But eventually, what is really needed, the researchers agree, are real, enforceable standards. After COP28, carbon offsets are still allowed, Sterman says, “but there is still no widely accepted mandatory regulation. We’re still in the wild West.”

Paltsev nevertheless sees reasons for optimism about nature-based carbon offset systems. For example, he says the aviation industry has recently agreed to implement a set of standards for offsetting their emissions, known as CORSIA, the Carbon Offsetting and Reduction Scheme for International Aviation. “It’s a point for optimism,” he says, “because they issued very tough guidelines as to what projects are eligible and what projects are not.”

He adds, “There is a solution if you want to find a good solution. It is doable, when there is a will and there is the need.”

Moving past the Iron Age

Wed, 02/28/2024 - 12:00am

MIT graduate student Sydney Rose Johnson has never seen the steel mills in central India. She’s never toured the American Midwest’s hulking steel plants or the mini mills dotting the Mississippi River. But in the past year, she’s become more familiar with steel production than she ever imagined.

A fourth-year dual degree MBA and PhD candidate in chemical engineering and a graduate research assistant with the MIT Energy Initiative (MITEI) as well as a 2022-23 Shell Energy Fellow, Johnson looks at ways to reduce carbon dioxide (CO2) emissions generated by industrial processes in hard-to-abate industries. Those include steel.

Almost every aspect of infrastructure and transportation — buildings, bridges, cars, trains, mass transit — contains steel. The manufacture of steel hasn’t changed much since the Iron Age, with some steel plants in the United States and India operating almost continuously for more than a century, their massive blast furnaces re-lined periodically with carbon and graphite to keep them going.

According to the World Economic Forum, steel demand is projected to increase 30 percent by 2050, spurred in part by population growth and economic development in China, India, Africa, and Southeast Asia.

The steel industry is among the three biggest producers of CO2 worldwide. Every ton of steel produced in 2020 emitted, on average, 1.89 tons of CO2 into the atmosphere — around 8 percent of global CO2 emissions, according to the World Steel Association.
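The per-ton figure and the global share quoted above can be connected with rough arithmetic. In the sketch below, the global steel output and total emissions numbers are approximate assumptions added for illustration, not figures from the article; only the 1.89 t CO2 per ton of steel comes from the World Steel Association statistic cited here.

```python
# Rough sanity check linking per-ton intensity to the sector's global share.
steel_production_mt = 1_880    # approx. 2020 global crude steel output, Mt (assumed)
co2_per_tonne_steel = 1.89     # t CO2 per t steel (World Steel Association figure)
global_co2_mt = 44_000         # approx. total global CO2 incl. land use, Mt (assumed)

steel_co2_mt = steel_production_mt * co2_per_tonne_steel
share = steel_co2_mt / global_co2_mt
print(f"Steel sector: {steel_co2_mt:,.0f} Mt CO2, ~{share:.0%} of global emissions")
```

With these assumed totals, the sector works out to roughly 3,500 Mt of CO2 a year, consistent with the "around 8 percent" figure.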

A combination of technical strategies and financial investments, Johnson notes, will be needed to wrestle that 8 percent figure down to something more planet-friendly.

Johnson’s thesis focuses on modeling and analyzing ways to decarbonize steel. Using data mined from academic and industry sources, she builds models to calculate emissions, costs, and energy consumption for plant-level production.

“I optimize steel production pathways using emission goals, industry commitments, and cost,” she says. Based on the projected growth of India’s steel industry, she applies this approach to case studies that predict outcomes for some of the country’s thousand-plus factories, which together have a production capacity of 154 million metric tons of steel. For the United States, she looks at the effect of Inflation Reduction Act (IRA) credits. The 2022 IRA provides incentives that could accelerate the steel industry’s efforts to minimize its carbon emissions.

Johnson compares emissions and costs across different production pathways, asking questions such as: “If we start today, what would a cost-optimal production scenario look like years from now? How would it change if we added in credits? What would have to happen to cut 2005 levels of emissions in half by 2030?”
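Johnson's plant-level models are far richer than anything that fits here, but the flavor of a cost-optimal-pathway question can be shown with a toy two-route example. Every number below (costs, emission intensities, demand, cap) is a made-up illustration, not data from her work.

```python
def cheapest_mix(demand_mt, cap_mt_co2, routes):
    """Brute-force the cheapest split of demand between two production routes
    that stays under an emissions cap. Each route is (cost $/t, t CO2/t)."""
    (c1, e1), (c2, e2) = routes
    best = None
    for pct in range(0, 101):               # share of demand on route 1, in percent
        x1 = demand_mt * pct / 100
        x2 = demand_mt - x1
        emissions = x1 * e1 + x2 * e2
        cost = x1 * c1 + x2 * c2
        if emissions <= cap_mt_co2 and (best is None or cost < best[0]):
            best = (cost, pct)
    return best                              # (total cost, % on route 1), or None

# Hypothetical routes: conventional blast furnace (cheap, emission-intensive)
# versus a hydrogen-based route (costlier, much cleaner).
routes = [(400, 1.9), (600, 0.3)]
print(cheapest_mix(100, 120, routes))
```

Real pathway optimization adds many more routes, time steps, capacity constraints, and policy incentives such as IRA credits, but the trade-off it resolves is the same one this sketch makes visible: how much clean capacity must enter the mix to satisfy a given emissions goal at least cost.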

“My goal is to gain an understanding of how current and emerging decarbonization strategies will be integrated into the industry,” Johnson says.

Grappling with industrial problems

Johnson grew up in Marietta, Georgia, outside Atlanta. The closest she came to a plant of any kind was through her father, a chemical engineer who worked in logistics and procured steel for an aerospace company, and through a high school semester spent working alongside chemical engineers tweaking the pH of an anti-foaming agent.

At Kennesaw Mountain High School, a STEM magnet program in Cobb County, students devote an entire semester of their senior year to an internship and research project.

Johnson chose to work at Kemira Chemicals, which develops chemical solutions for water-intensive industries with a focus on pulp and paper, water treatment, and energy systems.

“My goal was to understand why a polymer product was falling out of suspension — essentially, why it was less stable,” she recalls. She learned how to formulate a lab-scale version of the product and conduct tests to measure its viscosity and acidity. Comparing the lab-scale and regular product results revealed that acidity was an important factor. “Through conversations with my mentor, I learned this was connected with the holding conditions, which led to the product being oxidized,” she says. With the anti-foaming agent’s problem identified, steps could be taken to fix it.

“I learned how to apply problem-solving. I got to learn more about working in an industrial environment by connecting with the team in quality control as well as with R&D and chemical engineers at the plant site,” Johnson says. “This experience confirmed I wanted to pursue engineering in college.”

As an undergraduate at Stanford University, she learned about the different fields — biotechnology, environmental science, electrochemistry, and energy, among others — open to chemical engineers. “It seemed like a very diverse field and application range,” she says. “I was just so intrigued by the different things I saw people doing and all these different sets of issues.”

Turning up the heat

At MIT, she turned her attention to how certain industries can offset their detrimental effects on climate.

“I’m interested in the impact of technology on global communities, the environment, and policy. Energy applications affect every field. My goal as a chemical engineer is to have a broad perspective on problem-solving and to find solutions that benefit as many people, especially those under-resourced, as possible,” says Johnson, who has served on the MIT Chemical Engineering Graduate Student Advisory Board, the MIT Energy and Climate Club, and is involved with diversity and inclusion initiatives.

The steel industry, Johnson acknowledges, is not what she first imagined when she saw herself working toward mitigating climate change.

“But now, understanding the role the material has in infrastructure development, combined with its heavy use of coal, has illuminated how the sector, along with other hard-to-abate industries, is important in the climate change conversation,” Johnson says.

Despite the advanced age of many steel mills, some are quite energy-efficient, she notes. Yet these operations, which reach temperatures upwards of 3,000 degrees Fahrenheit, are still emission-intensive.

Steel is made from iron ore, a mixture of iron, oxygen, and other minerals found on virtually every continent, with Brazil and Australia alone exporting millions of metric tons per year. In a process dating back to the 19th century, iron is extracted from the ore through smelting — heating the ore in blast furnaces until the metal becomes spongy and its chemical components begin to break down.

A reducing agent is needed to release the oxygen trapped in the ore, transforming it from its raw form to pure iron. That’s where most emissions come from, Johnson notes.

“We want to reduce emissions, and we want to make a cleaner and safer environment for everyone,” she says. “It’s not just the CO2 emissions. It’s also sometimes NOx and SOx [nitrogen oxides and sulfur oxides] and air pollution particulate matter at some of these production facilities that can affect people as well.”

In 2020, the International Energy Agency released a roadmap exploring potential technologies and strategies that would make the iron and steel sector more compatible with the agency’s vision of increased sustainability. Emission reductions can be accomplished with more modern technology, the agency suggests, or by substituting the fuels producing the immense heat needed to process ore. Traditionally, the fuels used for iron reduction have been coal and natural gas. Alternative fuels include clean hydrogen, electricity, and biomass.

Using the MITEI Sustainable Energy System Analysis Modeling Environment (SESAME), Johnson analyzes various decarbonization strategies. She considers options such as switching furnace fuel to hydrogen blended with a small amount of natural gas, or adding carbon-capture devices. The models demonstrate how effective these tactics are likely to be. The answers aren’t always encouraging.

“Upstream emissions can determine how effective the strategies are,” Johnson says. Charcoal derived from forestry biomass seemed to be a promising alternative fuel, but her models showed that processing the charcoal for use in the blast furnace limited its effectiveness in negating emissions.

Despite the challenges, “there are definitely ways of moving forward,” Johnson says. “It’s been an intriguing journey in terms of understanding where the industry is at. There’s still a long way to go, but it’s doable.”

Johnson is heartened by the steel industry’s efforts to recycle scrap into new steel products and incorporate more emission-friendly technologies and practices, some of which result in significantly lower CO2 emissions than conventional production.

A major issue is that low-carbon steel can be more than 50 percent more costly than conventionally produced steel. “There are costs associated with making the transition, but in the context of the environmental implications, I think it’s well worth it to adopt these technologies,” she says.

After graduation, Johnson plans to continue to work in the energy field. “I definitely want to use a combination of engineering knowledge and business knowledge to work toward mitigating climate change, potentially in the startup space with clean technology or even in a policy context,” she says. “I’m interested in connecting the private and public sectors to implement measures for improving our environment and benefiting as many people as possible.”

Sadhana Lolla named 2024 Gates Cambridge Scholar

Tue, 02/27/2024 - 4:10pm

MIT senior Sadhana Lolla has won the prestigious Gates Cambridge Scholarship, which offers students an opportunity to pursue graduate study in the field of their choice at Cambridge University in the U.K.

Established in 2000, the Gates Cambridge Scholarship offers full-cost post-graduate scholarships to outstanding applicants from countries outside of the U.K. The mission of the scholarship is to build a global network of future leaders committed to improving the lives of others.

Lolla, a senior from Clarksburg, Maryland, is majoring in computer science and minoring in mathematics and literature. At Cambridge, she will pursue an MPhil in technology policy.

In the future, Lolla aims to lead conversations on deploying and developing technology for marginalized communities, such as the rural Indian village that her family calls home, while also conducting research in embodied intelligence.

At MIT, Lolla conducts research on safe and trustworthy robotics and deep learning at the Distributed Robotics Laboratory with Professor Daniela Rus. Her research has spanned debiasing strategies for autonomous vehicles and accelerating robotic design processes. At Microsoft Research and Themis AI, she works on creating uncertainty-aware frameworks for deep learning, which has impacts across computational biology, language modeling, and robotics. She has presented her work at the Neural Information Processing Systems (NeurIPS) conference and the International Conference on Machine Learning (ICML). 

Outside of research, Lolla leads initiatives to make computer science education more accessible globally. She is an instructor for class 6.S191 (MIT Introduction to Deep Learning), one of the largest AI courses in the world, which reaches millions of students annually. She serves as the curriculum lead for Momentum AI, the only U.S. program that teaches AI to underserved students for free, and she has taught hundreds of students in Northern Scotland as part of the MIT Global Teaching Labs program.

Lolla was also the director for xFair, MIT’s largest student-run career fair, and is an executive board member for Next Sing, where she works to make a cappella more accessible for students across musical backgrounds. In her free time, she enjoys singing, solving crossword puzzles, and baking. 

“Between Sadhana's impressive research in the Distributed Robotics Group, her volunteer teaching with Momentum AI, and her internship and extracurricular experiences, she has developed the skills to be a leader,” says Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development. “Her work at Cambridge will allow her the time to think about reducing bias in systems and the ethical implications of her work. I am proud that she will be representing MIT in the Gates Cambridge community.”

New AI model could streamline operations in a robotic warehouse

Tue, 02/27/2024 - 12:00am

Hundreds of robots zip back and forth across the floor of a colossal robotic warehouse, grabbing items and delivering them to human workers for packing and shipping. Such warehouses are increasingly becoming part of the supply chain in many industries, from e-commerce to automotive production.

However, getting 800 robots to and from their destinations efficiently while keeping them from crashing into each other is no easy task. It is such a complex problem that even the best path-finding algorithms struggle to keep up with the breakneck pace of e-commerce or manufacturing. 

In a sense, these robots are like cars trying to navigate a crowded city center. So, a group of MIT researchers who use AI to mitigate traffic congestion applied ideas from that domain to tackle this problem.

They built a deep-learning model that encodes important information about the warehouse, including the robots, planned paths, tasks, and obstacles, and uses it to predict the best areas of the warehouse to decongest to improve overall efficiency.

Their technique divides the warehouse robots into groups, so these smaller groups of robots can be decongested faster with traditional algorithms used to coordinate robots. In the end, their method decongests the robots nearly four times faster than a strong random search method.

In addition to streamlining warehouse operations, this deep learning approach could be used in other complex planning tasks, like computer chip design or pipe routing in large buildings.

“We devised a new neural network architecture that is actually suitable for real-time operations at the scale and complexity of these warehouses. It can encode hundreds of robots in terms of their trajectories, origins, destinations, and relationships with other robots, and it can do this in an efficient manner that reuses computation across groups of robots,” says Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering (CEE), and a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society (IDSS).

Wu, senior author of a paper on this technique, is joined by lead author Zhongxia Yan, a graduate student in electrical engineering and computer science. The work will be presented at the International Conference on Learning Representations.

Robotic Tetris

From a bird’s eye view, the floor of a robotic e-commerce warehouse looks a bit like a fast-paced game of “Tetris.”

When a customer order comes in, a robot travels to an area of the warehouse, grabs the shelf that holds the requested item, and delivers it to a human operator who picks and packs the item. Hundreds of robots do this simultaneously, and if two robots’ paths conflict as they cross the massive warehouse, they might crash.

Traditional search-based algorithms avoid potential crashes by keeping one robot on its course and replanning a trajectory for the other. But with so many robots and potential collisions, the problem quickly grows exponentially.

“Because the warehouse is operating online, the robots are replanned about every 100 milliseconds. That means that every second, a robot is replanned 10 times. So, these operations need to be very fast,” Wu says.

Because time is so critical during replanning, the MIT researchers use machine learning to focus the replanning on the most actionable areas of congestion — where there exists the most potential to reduce the total travel time of robots.

Wu and Yan built a neural network architecture that considers smaller groups of robots at the same time. For instance, in a warehouse with 800 robots, the network might cut the warehouse floor into smaller groups that contain 40 robots each.

Then, it predicts which group has the most potential to improve the overall solution if a search-based solver were used to coordinate trajectories of robots in that group.

An iterative process, the overall algorithm picks the most promising robot group with the neural network, decongests the group with the search-based solver, then picks the next most promising group with the neural network, and so on.
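The iterative scheme described above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: `pick_group` stands in for the neural network that scores robot groups, and `solve` stands in for the traditional search-based solver that replans one group at a time; both names and interfaces are assumptions.

```python
def pick_group(groups, score):
    """Stand-in for the neural network: return the group with the
    highest predicted potential to improve the overall solution."""
    return max(groups, key=score)

def decongest(groups, score, solve, rounds=3):
    """Learning-guided iteration sketched from the article: repeatedly
    pick the most promising group and re-solve only that group,
    leaving the rest of the warehouse fixed."""
    for _ in range(rounds):
        group = pick_group(groups, score)
        solve(group)  # search-based solver replans just this group's paths
    return groups
```

In this toy version, each group is a list of per-robot travel times, the score is their sum, and "solving" shaves one time unit off each robot in the chosen group; the real system scores and replans actual trajectories.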

Considering relationships

The neural network can reason about groups of robots efficiently because it captures complicated relationships that exist between individual robots. For example, even though one robot may be far away from another initially, their paths could still cross during their trips.

The technique also streamlines computation by encoding constraints only once, rather than repeating the process for each subproblem. For instance, in a warehouse with 800 robots, decongesting a group of 40 robots requires holding the other 760 robots as constraints. Other approaches require reasoning about all 800 robots once per group in each iteration.

Instead, the researchers’ approach only requires reasoning about the 800 robots once across all groups in each iteration.
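The computational saving comes from encoding each robot once and reusing that encoding across every group, rather than re-encoding all robots for each subproblem. A minimal sketch of the caching pattern, with hypothetical names (`embed` stands in for whatever per-robot encoder the network uses):

```python
def encode_all(robots, embed):
    """Encode every robot exactly once per iteration; this cache is
    what gets shared across all group subproblems."""
    return {r: embed(r) for r in robots}

def group_representation(group, cache):
    """Each group's input is assembled from the cached encodings
    instead of recomputing them from scratch."""
    return [cache[r] for r in group]
```

With 800 robots split into groups of 40, the encoder runs 800 times per iteration instead of 800 times per group.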

“The warehouse is one big setting, so a lot of these robot groups will have some shared aspects of the larger problem. We designed our architecture to make use of this common information,” she adds.

They tested their technique in several simulated environments, including some set up like warehouses, some with random obstacles, and even maze-like settings that emulate building interiors.

By identifying more effective groups to decongest, their learning-based approach decongests the warehouse up to four times faster than strong, non-learning-based approaches. Even when they factored in the additional computational overhead of running the neural network, their approach still solved the problem 3.5 times faster.

In the future, the researchers want to derive simple, rule-based insights from their neural model, since the decisions of the neural network can be opaque and difficult to interpret. Simpler, rule-based methods could also be easier to implement and maintain in actual robotic warehouse settings.

“This approach is based on a novel architecture where convolution and attention mechanisms interact effectively and efficiently. Impressively, this leads to being able to take into account the spatiotemporal component of the constructed paths without the need of problem-specific feature engineering. The results are outstanding: Not only is it possible to improve on state-of-the-art large neighborhood search methods in terms of quality of the solution and speed, but the model generalizes to unseen cases wonderfully,” says Andrea Lodi, the Andrew H. and Ann R. Tisch Professor at Cornell Tech, who was not involved with this research.

This work was supported by Amazon and the MIT Amazon Science Hub.