MIT Latest News
Professor Emeritus Peter Schiller, a pioneer researcher of the visual system, dies at 92
Peter Schiller, professor emeritus in the Department of Brain and Cognitive Sciences and a member of the MIT faculty since 1964, died on Dec. 23, 2023. He was 92.
Born in Berlin to Hungarian parents in 1931, Schiller returned with his family to Budapest in 1934, where they endured World War II; in 1947 he moved to the United States with his father and stepmother. Schiller attended college at Duke University, where he was on the soccer and tennis teams and received his bachelor’s degree in 1955. He went on to earn his PhD with Morton Weiner at Clark University, where he studied cortical involvement in visual masking. In 1962, he came to what was then the Department of Psychology at MIT for postdoctoral research. Schiller was appointed an assistant professor in 1964 and full professor in 1971. He was appointed to the Dorothy Poitras Chair for Medical Physiology in 1986 and retired in 2013.
“Peter Schiller was a towering figure in the field of visual neurophysiology,” says Mriganka Sur, the Newton Professor of Neuroscience. “He was one of the pioneers of experimental studies in nonhuman primates, and his laboratory, together with those of Emilio Bizzi and Ann Graybiel, established MIT as a leading center of research in brain mechanisms of visual and motor function.”
Recalls John Maunsell, the Albert D. Lasker Distinguished Service Professor of Neurobiology at the University of Chicago, who did postdoctoral research with Schiller, “Peter was the boldest experimentalist I’ve ever known. Once he engaged with a question, he was unintimidated by how exacting, intricate, or extensive the required experiments might be. Over the years he produced an impressive range of results that others viewed as beyond reach.”
Schiller’s former PhD student Michael Stryker, the W.F. Ganong Professor of Physiology at the University of California at San Francisco, writes, “Schiller was merciless in his criticism of weakly supported conclusions, whether by students or by major figures in the field. He demanded good data, real measurements, no matter how hard they were to make.”
Schiller’s research spanned multiple areas. As a graduate student, he designed an apparatus, the five-field tachistoscope, that rigorously controlled the timing and sequence of images shown to each eye in order to study visual masking and the generation of optical illusions. With it, Schiller demonstrated that several well-known optical illusions are generated in the cortex of the brain rather than by processes in the peripheral visual system.
Seeking postdoctoral research, he turned to his father’s friend, Hans-Lukas Teuber, who had just accepted an offer to be founding head of the Department of Psychology at MIT. Schiller learned how to make single-unit electrophysiological recordings from the brains of awake animals, which added a new dimension to his studies of the circuitry and mechanisms of cortical processing in the visual system. Among other findings, he saw that brightness masking in the visual system was caused by interactions among retinal neurons, in contrast to the cortical mechanism of illusions.
In 1964, Schiller was appointed assistant professor. Soon after, he embarked on productive collaborations with Emilio Bizzi, who had just arrived in the Department of Psychology. Schiller and Bizzi, who is now an Institute Professor Emeritus, shared an interest in the neural control of movement; they set to work on the oculomotor system and how it guides saccades, the rapid eye movements that center objects of interest in the visual field. They quantified the firing patterns of the motor neurons that generate saccadic eye movements. Paired with studies of the superior colliculus, the brain center that guides saccades in primates, and of the frontal eye fields of the cortex, this work outlined a fundamental scheme for the control of saccades, in which one system identifies targets in the visual scene and another generates the eye movements that direct the gaze toward them.
Continuing his dissection of visual circuitry, Schiller and his colleagues traced the connections that two different types of retinal cells, known as parasol cells and midget cells, send from the retina to the lateral geniculate nucleus of the thalamus. They discovered that each cell type connects to a different area, and that this physical segregation reflects a functional difference: Midget cells process color and fine texture, while parasol cells carry motion and depth information. Schiller then turned to the ON and OFF channels of the visual system, channels originating in different types of retinal neurons: some respond to the onset of light, others to its offset, and still others to both. Building on earlier work by others, and inspired by newly discovered ways to pharmacologically isolate the ON and OFF systems, Schiller and several of his students extended these studies to primates and developed an explanation for the evolutionary benefit of what at first seems a paradoxical arrangement: The ON/OFF system allows animals to perceive both increments and decrements in contrast and brightness more rapidly, a beneficial attribute if those shifts signal, for instance, the approach of a predator.
At the same time, the Schiller lab delved further into the role of various parts of the cortex in visual processing, especially the areas known as V4 and MT, later steps in visual processing pathways. Through single-neuron recordings and by making lesions in specific areas of the brain in the animals they studied, they revealed that area V4 has a major role in the selection of visual targets that are smaller or have lower contrast compared to other stimuli in a scene, an ability that, for example, helps an animal unmask a camouflaged predator or prey. Strikingly, he showed that many variations in images that are important for perception have a delayed influence on the responses of neurons in the primary visual cortex, indicating that they are produced by feedback from higher stages of visual processing.
Schiller’s many significant contributions to vision science were recognized with his election to the National Academy of Sciences and the American Academy of Arts and Sciences in 2007, and, in his home country, he was made an honorary member of the Magyar Tudományos Akadémia, the Hungarian Academy of Sciences, in 2008.
Schiller’s legacy is also evident in his students and trainees. Schiller counted more than 50 students and postdocs who passed through his lab in its 50 years. Four of his trainees have since been elected to the National Academy of Sciences: graduate students Larry Squire and Stryker, and postdocs Maunsell and Nikos Logothetis.
His mentorship also extended to faculty colleagues, recalls Picower Professor of Neuroscience Earl Miller: “He generously took me under his wing when I began at MIT, offering invaluable advice that steered me in the right direction. I will forever be grateful to him. His mentorship style was not coddling. It was direct and frank, just like Peter always was. I remember early in my nascent career when I was rattled by finding myself in a scientific disagreement with a senior investigator. Peter calmed me down, in his way. He said, ‘Don’t worry, controversy is great for a career.’ But he quickly added, ‘As long as you are right; otherwise, well ...’”
Schiller’s creative streak did not just influence his scientific thinking; he was an accomplished guitar and piano player, and he loved building complex and abstract sculptures, many of them constructed from angular pieces of colored glass. He is survived by his three children, David, Kyle, and Sarah, and five grandchildren. His wife, Ann Howell, died in 1999.
Award shines a spotlight on local science journalism
Local reporting is a critical tool in the battle against disinformation and misinformation. It can also surface valuable information about everything from environmental damage caused by questionable agribusiness practices to the long-term effects of logging on communities.
Reporting like this requires more than just journalistic chops. It needs a network that can share these important stories, access to readers, and financial support. That’s why organizations like the Knight Science Journalism Program at MIT and its Victor K. McElheny Award are important.
Founded in 2018 with a gift from Knight Science Journalism (KSJ) Program founding director Victor McElheny and his wife, Ruth McElheny, the KSJ Victor K. McElheny Award rewards local science journalists for their pioneering work and their stories’ impacts.
“The prize can help illustrate a continuing contribution to the maximum level of public understanding of what technology and science are achieving, and what these achievements imply for humanity,” McElheny says.
The award comes with a $10,000 prize.
“Local science journalism has value, in part, because consolidation in this sector has meant fewer journalists and a shrinking pool of resources with which to do this important work,” notes editor Cathy Clabby, a Knight Science Journalism Fellowship Program alumna (2008). Clabby was part of the team at The Charlotte Observer and The Raleigh News and Observer that earned the McElheny Award in 2023 for its poultry farm investigation.
"The award demonstrated a commitment to high journalistic standards," Clabby says.
These journalistic standards and the accompanying national recognition for awardees can lend further legitimacy to long-form science journalism.
Features and outcomes
Additionally, while some news outlets are starved of the resources necessary to produce deeply researched, high-quality stories, receiving the McElheny Award can help raise the visibility of small and nonprofit newsrooms, which in turn can help with circulation, operating expenses, and fundraising.
“The award has a very real value to our audience, especially as we develop our digital subscriber model,” notes journalist Tony Bartelme, one of several Charleston Post and Courier reporters whose feature on the Gulf Stream won the inaugural award in 2019. “If readers see this kind of national recognition, they’re more likely to see the value of subscribing.”
“The financial element of the award is certainly a delightful surprise, particularly for a team project like this with a small budget,” says journalist Aaron Scott, whose team at Oregon Public Broadcasting won for its “Timber Wars” podcast series in 2021. “It filled me with joy getting to tell my colleagues they'd be getting bonus checks in the mail.”
Deborah Blum — the Pulitzer Prize-winning director of the Knight Science Journalism Program and founder of Undark Magazine — argues that local and regional journalists play a central role in promoting science literacy and critical thinking skills among their readers. Blum describes an information ecosystem worthy of preservation, with local science journalism acting as a fundamental building block of public consciousness and shared understanding.
"Science stories told by reporters in the home community, known and trusted by their neighbors, have a special ability to reach readers and listeners," Blum says.
Value, vision, and recognition
Storytelling has value beyond views, clicks, and shares, according to McElheny Award winners.
“An informed electorate helps ensure a functional and accountable government,” Clabby asserts.
Journalists also point to the skills necessary to produce thoughtful, reasoned stories, skills they consider valuable assets for creating powerful pieces that can affect readers, communities, and other journalists.
“Science journalism is hard to do because it takes time to wade through it all and understand the science with enough depth to tell the story properly,” Bartelme says. “But, what’s more important than a planet on fire?”
Further, recognition from peers can serve as validation for the months of research and reporting that such important stories can require.
“Recognition [as evidenced by] the Victor K. McElheny Award is deeply rewarding,” Scott believes, “because it means some of our most accomplished and thoughtful peers are listening to, reading, and thinking deeply about a story we've invested so much in telling.”
Outcomes and impacts
The Victor K. McElheny Award for Local Science Journalism confers national recognition on journalists performing a critical function in producing an informed electorate. Local science journalism can have lasting impacts on readers, apprise audiences of advances and challenges related to science and technology, and help secure funding for current and future efforts.
“Fact-based journalism has value for audiences,” Clabby says.
Scott, noting the value of balanced science reporting, describes science journalism as “both more important, and more under threat by politicization, than ever before.”
“The McElheny Award is really the only award that celebrates science stories that reach this important audience,” Bartelme concludes. “Local journalists have a special and often more intimate relationship with readers than national organizations.”
Remembering Elise O’Hara, Media Lab staff member
Elise O’Hara, a cherished member of the Media Lab community, died on Dec. 12, 2023, as a result of complications following the birth of a healthy child.
As an administrative assistant for multiple research groups and initiatives — most recently, the Space Exploration Initiative and Tangible Media Group — O’Hara managed a variety of complex, high-priority projects with skill, patience, and good humor.
In her time at the Media Lab, O’Hara was perhaps best known for her warmth and her kindness. Professor Hiroshi Ishii says of her, “Elise was not just a colleague to us but a dear friend whose presence brought light and warmth to the Tangible Media Group and MIT Media Lab. All my colleagues loved and respected Elise; her beautiful soul positively touched many lives. We feel fortunate to have worked alongside Elise and to have witnessed the remarkable person she was.”
Samantha Gutierrez-Arango, a research assistant in the Biomechatronics group, says, “Elise was an integral and treasured part of the Media Lab; throughout her time, she touched many lives and hearts and was instrumental in many processes during Covid-19, ensuring the students were happy and healthy when coming back. She covered various groups, so her skills in managing the Media Lab’s day-to-day business were greatly appreciated. Elise was very passionate about theater, acting, and teaching; she loved practicing her Spanish and had a great sense of humor. She always approached life with an optimistic and playful spirit despite obstacles, and was always innovating ways to help her children have a fun time.”
Outside of her work at the Lab, O’Hara was actively involved in local community theater, performing with companies including The Fringe Theater in Needham, Massachusetts, and the Milton Players in Canton.
O’Hara earned her bachelor’s degree from Sacred Heart University in Connecticut and her master's in theater education from Emerson College.
She is survived by her husband, Sean O’Hara; her parents, Robert and Jean Valerio; her sister, Julie Cornell; and her three children.
Solving complex problems with technology and varied perspectives at Sphere Las Vegas
Something new, large, and round has dominated the Las Vegas skyline since July: Sphere.
After debuting this summer, the state-of-the-art entertainment venue became instantly recognizable thanks to pictures and videos on social media and Reddit. Some of the most viral posts depict the 580,000-square-foot, fully programmable LED Exosphere projecting a giant yellow emoji that smiles, sleeps, and follows airplanes flying overhead with a look of wonder.
According to Jared Miller ’98, MBA ’03, SM ’03, Sphere’s growing popularity even before its official opening last September — when the Irish rock band U2 began its months-long residency — is a testament to the work of the creative team that made it happen.
“The team we have assembled in many ways reflects my experience at MIT,” says Miller, who is executive vice president and CIO at Sphere Entertainment.
“We have deep technology experts, engineers, scientists, artists, creative technologists, and people who have worked in many different industries who have come together to embrace this vision,” adds Miller. “The diversity of the people you’re surrounded with … brings different perspectives [and an] enthusiasm to come together and collaborate on a solution. This is what’s really special about Sphere, and it applies to MIT as well.”
Embracing the pivot
As an undergraduate, Miller majored in chemical engineering and interned in the oil and gas industry, after which he decided to pursue an alternative career path. This led to a job at Intel during the race to build the first microprocessor capable of achieving 1 gigahertz.
Miller learned a lot about himself and his professional interests during the experience, and he was eager for more. “I wanted to learn more about the business aspects; to move from being an engineer into a broader management and strategy role,” he says.
He applied to the program then known as Leaders for Manufacturing (LFM) and matriculated in 2001. The program was then focused on “Big M manufacturing,” but as Miller recalls, LFM was growing and evolving toward its eventual renaming as Leaders for Global Operations (LGO). As a result, the student experience was expanding far beyond manufacturing and into other disciplines.
For Miller, this meant the airline industry. “The intersection of technology and guest experience was taking hold in the industry because it required a pretty rapid shift in how airports and airlines were thinking about … how they were moving people through their journey,” he says.
LGO students participate in six-month internships at LGO partner companies that serve as a basis for their thesis projects. Miller interned at Continental Airlines, where he studied the use of self-service check-in kiosks and their impact on traveler experience.
After graduation, he remained at Continental — which merged with United Airlines in 2010 — for almost a decade, until he pivoted to designing and building new venues in the sports and entertainment industry.
“MIT constantly encouraged and challenged us to think very openly about the opportunities that lie ahead. In my case, these pivots didn’t seem that odd or awkward between the different engineering fields and industries. It was just another step in the journey,” says Miller. “The intersection of technology and the guest experience was at the heart of what I was doing.”
Merging invention with varied perspectives
Until the venue’s official launch, all the public knew about Sphere was what they could see displayed on its massive Exosphere. Once U2 played their first of 40 shows and filmmaker Darren Aronofsky’s “Postcard from Earth” premiered as part of The Sphere Experience, audiences were granted access to what Miller and his team had also been working on.
These include a fully immersive display plane with 16K-by-16K resolution; 4D technologies, such as haptic systems and atmospheric effects, that influence what guests literally feel; the world’s largest beamforming audio system; and more.
“So much of what we’ve done at Sphere has been about invention,” says Miller.
By “invention,” Miller means identifying potential experiences for the audience and working backward from that point to develop the necessary technologies. He is quick to explain, though, that technology is not always the solution to a problem, but simply one of many tools that can be used.
“A lot of it comes through process improvements,” explains Miller. “You’ve got to analyze what didn’t work, using a lot of data to come back and say, ‘You know what? This is what needs to change. This is why this approach didn’t work.’ Then get right back up and find another way to tackle the problem.”
From using systems thinking and data analytics to address complex problems — like how to guarantee that 18,000 people in a spherical structure will have the same experience — to building teams that collaborate well to produce possible solutions, Miller credits many of the tools at his disposal to his learnings at MIT.
He learned how to think about complex problems more broadly, and how to think collaboratively with others from a wide variety of backgrounds — much like the team at Sphere.
“At LGO, we discussed and worked on problems that hadn’t been solved yet. We needed a diverse group of people to come together and use all their experiences and expertise to create that solve,” says Miller. “It’s bringing together that diverse group of people to work together that ultimately gets to a great solution.”
How the brain responds to reward is linked to socioeconomic background
MIT neuroscientists have found that the brain’s sensitivity to rewarding experiences — a critical factor in motivation and attention — can be shaped by socioeconomic conditions.
In a study of 12- to 14-year-olds whose socioeconomic status (SES) varied widely, the researchers found that children from lower SES backgrounds showed less sensitivity to reward than those from more affluent backgrounds.
Using functional magnetic resonance imaging (fMRI), the research team measured brain activity as the children played a guessing game in which they earned extra money for each correct guess. When participants from higher SES backgrounds guessed correctly, a part of the brain called the striatum, which is linked to reward, lit up much more than it did in children from lower SES backgrounds.
The brain imaging results also coincided with behavioral differences in how participants from lower and higher SES backgrounds responded to correct guesses. The findings suggest that lower SES circumstances may prompt the brain to adapt to the environment by dampening its response to rewards, which are often scarcer in low SES environments.
“If you’re in a highly resourced environment, with many rewards available, your brain gets tuned in a certain way. If you’re in an environment in which rewards are more scarce, then your brain accommodates the environment in which you live. Instead of being overresponsive to rewards, it seems like these brains, on average, are less responsive, because probably their environment has been less consistent in the availability of rewards,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.
Gabrieli and Rachel Romeo, a former MIT postdoc who is now an assistant professor in the Department of Human Development and Quantitative Methodology at the University of Maryland, are the senior authors of the study. MIT postdoc Alexandra Decker is the lead author of the paper, which appears today in the Journal of Neuroscience.
Reward response
Previous research has shown that children from lower SES backgrounds tend to perform worse on tests of attention and memory, and they are more likely to experience depression and anxiety. However, until now, few studies have looked at the possible association between SES and reward sensitivity.
In the new study, the researchers focused on a part of the brain called the striatum, which plays a significant role in reward response and decision-making. Studies in people and animal models have shown that this region becomes highly active during rewarding experiences.
To investigate potential links between reward sensitivity, the striatum, and socioeconomic status, the researchers recruited more than 100 adolescents from a range of SES backgrounds, as measured by household income and how much education their parents received.
Each of the participants underwent fMRI scanning while they played a guessing game. The participants were shown a series of numbers between 1 and 9, and before each trial, they were asked to guess whether the next number would be greater than or less than 5. They were told that for each correct guess, they would earn an extra dollar, and for each incorrect guess, they would lose 50 cents.
Unbeknownst to the participants, the game was set up to control whether each guess would be scored as correct or incorrect. This allowed the researchers to ensure that every participant had a similar experience, including periods of abundant rewards and periods of few rewards. In the end, everyone won the same amount of money (in addition to a stipend each participant received for taking part in the study).
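The predetermined-outcome design described above can be sketched in a few lines of Python. This is only an illustrative simulation, not the study’s actual task code: the function name, payoff handling, and block structure are invented for the example, with the dollar values and the 1-to-9 number range taken from the article.

```python
import random

def run_rigged_block(outcomes, win=1.00, loss=-0.50):
    """Simulate one block of a rigged guessing game.

    `outcomes` is a predetermined list of booleans (True = the trial is
    scripted to count as a correct guess), so every participant sees the
    same schedule of wins and losses regardless of what they guess.
    """
    earnings = 0.0
    log = []
    for scripted_correct in outcomes:
        guess_higher = random.choice([True, False])  # participant's guess; irrelevant to the outcome
        # Choose a number from 1-9 (never 5) whose relation to 5 matches the script.
        if scripted_correct == guess_higher:
            shown = random.choice([6, 7, 8, 9])
        else:
            shown = random.choice([1, 2, 3, 4])
        correct = (shown > 5) == guess_higher        # equals scripted_correct by construction
        earnings += win if correct else loss
        log.append((guess_higher, shown, correct))
    return earnings, log

# A "rich" block (mostly wins) and a "scarce" block (mostly losses):
rich = [True] * 8 + [False] * 2
scarce = [True] * 2 + [False] * 8
print(run_rigged_block(rich)[0])    # prints 7.0: net earnings are fixed by the script
```

Because the shown number is chosen to fit the script, a participant’s earnings depend only on the predetermined schedule, which is the point of the design.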
Previous work has shown that the brain appears to track the rate of rewards available. When rewards are abundant, people or animals tend to respond more quickly because they don’t want to miss out on the many available rewards. The researchers saw that in this study as well: When participants were in a period when most of their responses were correct, they tended to respond more quickly.
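One standard way to model the kind of reward-rate tracking described above is an exponentially weighted running average of recent outcomes. The sketch below is a common textbook formulation offered for illustration; it is an assumption on my part, not the model or analysis used in the study.

```python
def update_reward_rate(rate, reward, alpha=0.2):
    """One step of an exponentially weighted estimate of reward rate.

    `reward` is 1 for a rewarded (correct) trial and 0 otherwise;
    `alpha` sets how strongly the most recent trials are weighted.
    """
    return rate + alpha * (reward - rate)

rate = 0.5                        # start from an uninformative estimate
for outcome in [1, 1, 1, 1, 1]:   # a stretch of abundant rewards
    rate = update_reward_rate(rate, outcome)
print(round(rate, 3))             # estimate has risen well above 0.5

for outcome in [0, 0, 0, 0, 0]:   # a stretch of scarce rewards
    rate = update_reward_rate(rate, outcome)
print(round(rate, 3))             # estimate has fallen back below 0.5
```

An estimate like this rises during reward-rich stretches and decays during lean ones, matching the behavioral pattern the article describes, in which responses speed up when rewards are abundant.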
“If your brain is telling you there’s a really high chance that you’re going to receive a reward in this environment, it's going to motivate you to collect rewards, because if you don’t act, you’re missing out on a lot of rewards,” Decker says.
Brain scans showed that the degree of activation in the striatum appeared to track fluctuations in the rate of rewards across time, which the researchers think could act as a motivational signal that there are many rewards to collect. The striatum lit up more during periods in which rewards were abundant and less during periods in which rewards were scarce. However, this effect was less pronounced in the children from lower SES backgrounds, suggesting their brains were less attuned to fluctuations in the rate of reward over time.
The researchers also found that during periods of scarce rewards, participants tended to take longer to respond after a correct guess, another phenomenon that has been shown before. It’s unknown exactly why this happens, but two possible explanations are that people are savoring their reward or that they are pausing to update the reward rate. However, once again, this effect was less pronounced in the children from lower SES backgrounds — that is, they did not pause as long after a correct guess during the scarce-reward periods.
“There was a reduced response to reward, which is really striking. It may be that if you’re from a lower SES environment, you’re not as hopeful that the next response will gain similar benefits, because you may have a less reliable environment for earning rewards,” Gabrieli says. “It just points out the power of the environment. In these adolescents, it’s shaping their psychological and brain response to reward opportunity.”
Environmental effects
The fMRI scans performed during the study also revealed that children from lower SES backgrounds showed less activation in the striatum when they guessed correctly, suggesting that their brains have a dampened response to reward.
The researchers hypothesize that these differences in reward sensitivity may have evolved over time, in response to the children’s environments.
“Socioeconomic status is associated with the degree to which you experience rewards over the course of your lifetime,” Decker says. “So, it’s possible that receiving a lot of rewards perhaps reinforces behaviors that make you receive more rewards, and somehow this tunes the brain to be more responsive to rewards. Whereas if you are in an environment where you receive fewer rewards, your brain might become, over time, less attuned to them.”
The study also points out the value of recruiting study subjects from a range of SES backgrounds, which takes more effort but yields important results, the researchers say.
“Historically, many studies have involved the easiest people to recruit, who tend to be people who come from advantaged environments. If we don’t make efforts to recruit diverse pools of participants, we almost always end up with children and adults who come from high-income, high-education environments,” Gabrieli says. “Until recently, we did not realize that principles of brain development vary in relation to the environment in which one grows up, and there was very little evidence about the influence of SES.”
The research was funded by the William and Flora Hewlett Foundation and a Natural Sciences and Engineering Research Council of Canada Postdoctoral Fellowship.
A new drug candidate can shrink kidney cysts
Autosomal dominant polycystic kidney disease (ADPKD), the most common form of polycystic kidney disease, can lead to kidney enlargement and eventual loss of function. The disease affects more than 12 million people worldwide, and many patients end up needing dialysis or a kidney transplant by the time they reach their 60s.
Researchers at MIT and Yale University School of Medicine have now found that a compound originally developed as a potential cancer treatment holds promise for treating ADPKD. The drug works by exploiting kidney cyst cells’ vulnerability to oxidative stress — a state of imbalance between damaging free radicals and beneficial antioxidants.
In a study employing two mouse models of the disease, the researchers found that the drug dramatically shrank kidney cysts without harming healthy kidney cells.
“We really believe this has potential to impact the field and provide a different treatment paradigm for this important disease,” says Bogdan Fedeles, a research scientist and program manager in MIT’s Center for Environmental Health Sciences and the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences.
John Essigmann, the William R. and Betsy P. Leitch Professor of Biological Engineering and Chemistry at MIT; Sorin Fedeles, executive director of the Polycystic Kidney Disease Outcomes Consortium and assistant professor (adjunct) at Yale University School of Medicine; and Stefan Somlo, the C.N.H. Long Professor of Medicine and Genetics and chief of nephrology at Yale University School of Medicine, are the senior authors of the paper.
Cells under stress
ADPKD typically progresses slowly. Often diagnosed when patients are in their 30s, it usually doesn’t cause serious impairment of kidney function until patients reach their 60s. The only drug that is FDA-approved to treat the disease, tolvaptan, slows growth of the cysts but has side effects that include frequent urination and possible liver damage.
Essigmann’s lab did not originally set out to study PKD; the new study grew out of work on potential new drugs for cancer. Nearly 25 years ago, MIT research scientist Robert Croy, also an author of the new PNAS study, designed compounds that contain a DNA-damaging agent known as an aniline mustard, which can induce cell death in cancer cells.
In the mid-2000s, Fedeles, then a graduate student in Essigmann’s lab, along with Essigmann and Croy, discovered that in addition to damaging DNA, these compounds also induce oxidative stress by interfering with mitochondria — the organelles that generate energy for cells.
Tumor cells are already under oxidative stress because of their abnormal metabolism. When they are treated with these compounds, known as 11beta compounds, the additional disruption helps to kill the cells. In a study published in 2011, Fedeles reported that treatment with 11beta compounds significantly suppressed the growth of prostate tumors implanted in mice.
A conversation with his brother, Sorin Fedeles, who studies polycystic kidney disease, led the pair to theorize that these compounds might also be good candidates for treating kidney cysts. At the time, research in ADPKD was beginning to suggest that kidney cyst cells also experience oxidative stress, due to an abnormal metabolism that resembles that of cancer cells.
“We were talking about a mechanism of what would be a good drug for polycystic kidney disease, and we had this intuition that the compounds that I was working with might actually have an impact in ADPKD,” Bogdan Fedeles says.
The 11beta compounds work by disrupting the mitochondria’s ability to generate ATP (the molecules that cells use to store energy), as well as a cofactor known as NADPH, which can act as an antioxidant to help cells neutralize damaging free radicals. Tumor cells and kidney cyst cells tend to produce increased levels of free radicals because of the oxidative stress they’re under. When these cells are treated with 11beta compounds, the extra oxidative stress, including the further depletion of NADPH, pushes the cells over the edge.
“A little bit of oxidative stress is OK, but the cystic cells have a low threshold for tolerating it. Whereas normal cells survive treatment, the cystic cells will die because they exceed the threshold,” Essigmann says.
Shrinking cysts
Using two different mouse models of ADPKD, the researchers showed that 11beta-dichloro could significantly reduce the size of kidney cysts and improve kidney function.
The researchers also synthesized a “defanged” version of the compound called 11beta-dipropyl, which does not include any direct DNA-damaging ability and could potentially be safer for use in humans. They tested this compound in the early-onset model of PKD and found that it was as effective as 11beta-dichloro.
In all of the experiments, healthy kidney cells did not appear to be affected by the treatment. That’s because healthy cells are able to withstand a small increase in oxidative stress, unlike the diseased cells, which are highly susceptible to any new disturbances, the researchers say. In addition to restoring kidney function, the treatment also ameliorated other clinical features of ADPKD; biomarkers for tissue inflammation and fibrosis were decreased in the treated mice compared to the control animals.
The results also suggest that in patients, treatment with 11beta compounds once every few months, or even once a year, could significantly delay disease progression, and thus avoid the need for continuous, burdensome antiproliferative therapies such as tolvaptan.
“Based on what we know about the cyst growth paradigm, you could in theory treat patients in a pulsatile manner — once a year, or perhaps even less often — and have a meaningful impact on total kidney volume and kidney function,” Sorin Fedeles says.
The researchers now hope to run further tests on 11beta-dipropyl, as well as develop ways to produce it on a larger scale. They also plan to explore related compounds that could be good drug candidates for PKD.
Other MIT authors who contributed to this work include Research Scientist Nina Gubina, former postdoc Sakunchai Khumsubdee, former postdoc Denise Andrade, and former undergraduates Sally S. Liu ’20 and co-op student Jake Campolo. The research was funded by the PKD Foundation, the U.S. Department of Defense, the National Institutes of Health, and the National Institute of Environmental Health Sciences through the Center for Environmental Health Sciences at MIT.
Getfit, MIT Health’s winter exercise challenge, turns 20 in 2024
“Getfit” isn’t a command, but rather a friendly challenge from MIT Health (formerly MIT Medical) to spend the cold months exercising with a group of people you choose in any way you choose. This year, the popular winter fitness program is celebrating its 20th year. What began as a goal-oriented exercise incentive for MIT Health staff in its pilot year, and expanded in 2005 to the entire MIT community, has now become a cherished tradition for many.
Tom Goodwin, a staff member in MIT Health who has participated every year, states the program’s value succinctly. “Getfit starts when it is cold and dark and ends when it is warm and bright. Forming a team and logging minutes motivates us to get off the couch and move!”
Andrea Porras, a Media Lab affiliate and four-time participant, says, “I look forward to getfit every year because it’s the only thing motivating me to make an extra step in the winter. It's a fun way to learn more about your co-worker and the sporty activities they participate in. You’d never know who’s into boxing or hockey!”
The getfit challenge continued even during the Covid-19 pandemic, providing an important means of connecting with others during a time of social distancing. It has grown steadily from 1,277 participants in 2005 to 3,385 participants on 501 teams in 2023, logging a total of 12,890,676 exercise minutes. That’s 214,845 hours — or an average of 63.5 hours for each participant — over the three-month period.
Almost any kind of exercise can be counted — walking across campus, climbing stairs, running or swimming, skiing, stretching, even time spent lifting heavy items. Any exercise that gets your heart beating faster than usual, uses your muscles, or increases your flexibility can help to fulfill your weekly exercise-minutes goal and add to your team’s combined minutes, which need to be entered into the getfit website by the following Monday.
Teams of five to eight people, each led by two co-captains, can include staff, students, faculty, affiliates, and family members. Each team strives to meet the weekly exercise goals for individuals and teams, starting with 150 minutes in the first week and building up to 300 minutes in week 12. Some team members may complete more minutes than others, at varying degrees of exercise intensity. But as long as each individual meets getfit’s minimum weekly goal, they qualify for that week’s prize drawing for individuals, and teams whose per-member average meets or exceeds the goal are entered in the drawing for weekly team prizes.
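The qualification rules above can be sketched in a few lines of Python. This is an illustrative model only: the function names are invented, and the linear ramp from 150 to 300 minutes is an assumption — the actual week-by-week schedule is set by the getfit program.

```python
# Illustrative sketch of the getfit weekly-goal rules (assumptions noted above).

def weekly_goal(week: int) -> int:
    """Assumed linear ramp: 150 minutes in week 1 up to 300 in week 12."""
    if not 1 <= week <= 12:
        raise ValueError("getfit runs for 12 weeks")
    step = (300 - 150) / 11  # minutes added per week under the linear assumption
    return round(150 + step * (week - 1))

def individual_qualifies(minutes: int, week: int) -> bool:
    """An individual enters the weekly drawing by meeting the minimum goal."""
    return minutes >= weekly_goal(week)

def team_qualifies(member_minutes: list[int], week: int) -> bool:
    """A team enters the drawing if its per-member average meets the goal."""
    average = sum(member_minutes) / len(member_minutes)
    return average >= weekly_goal(week)
```

For example, a member who logs 160 minutes in week 1 qualifies individually, and a team logging [150, 200, 100] minutes qualifies as a team because its average (150) meets the week-1 goal.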
“We have been doing the getfit challenge for 17 years and loving it,” says a spokesperson for the MIT Sloan EverReadys, a seven-member team with five faculty members, one affiliate, and one family member. “It gets us going in the middle of winter and continues to motivate us for the whole 12 weeks. Some of us can get over 300 minutes each of the weeks; others barely make the required amount. Nonetheless, there have been a number of times when we had a perfect season, each person making the required amount in each of the 12 weeks. We also think we are probably the oldest gang in town. Our ages range from the 50s to the 90s. ‘Not bad,’ we like to think.”
A grand prize for one randomly drawn team is awarded at the end of the three months. At the end of the challenge, “Onward and Upward” prizes are awarded by random drawing to two individuals who recorded exercise minutes for every week of the challenge and demonstrated a steady and consistent increase in exercise minutes from week to week, even if those minutes didn’t meet each weekly goal set by getfit. This year there will be even more prizes, in celebration of the program’s 20th anniversary.
In addition to the chance to win prizes, all participants on an active team can receive a free getfit T-shirt — a special 20th anniversary version this year — and can take advantage of other free or discounted services, such as a special four-month access pass to MIT Recreation and a free 30-day membership on the exercise app CardioCast.
But for many people, bonding with teammates is a more powerful incentive than winning prizes.
“For me, the social aspect was most useful,” says Josh Bradshaw, the family member of a graduate student. “Seeing my friends getting out there and pushing up their scores was a helpful nudge to do a little bit more myself … After taking up running during getfit, I ramped up my weekly volume and eventually ran the Philadelphia Marathon in the fall.”
The program often helps create new exercise momentum for people, such as Shellyann Isaac, from the Undergraduate Advising Center, who will be a 20-year participant this year. She began the program trying not to disappoint her teammates, but over the years, she says, “I worked out to not disappoint myself.”
“Getfit is my annual reminder to check in on my health, both mental and physical,” says Brian Bryson of MIT Technology Review. “It’s a great reflection point that sets the tone for the rest of the year.”
Registration for getfit 2024 runs through Jan. 23. The challenge begins Monday, Jan. 29 and ends Sunday, April 21. More information can be found at getfit.mit.edu.
Blueprint Labs launches a charter school research collaborative
Over the past 30 years, charter schools have emerged as a prominent yet debated public school option. According to the National Center for Education Statistics, 7 percent of U.S. public school students were enrolled in charter schools in 2021, up from 4 percent in 2010. Amid this expansion, families and policymakers want to know more about charter school performance and its systemic impacts. While researchers have evaluated charter schools’ short-term effects on student outcomes, significant knowledge gaps still exist.
MIT Blueprint Labs aims to fill those gaps through its Charter School Research Collaborative, an initiative that brings together practitioners, policymakers, researchers, and funders to make research on charter schools more actionable, rigorous, and efficient. The collaborative will create infrastructure to streamline and fund high-quality, policy-relevant charter research.
Joshua Angrist, MIT Ford Professor of Economics and a Blueprint Labs co-founder and director, says that Blueprint Labs hopes “to increase [its] impact by working with a larger group of academic and practitioner partners.” A nonpartisan research lab, Blueprint's mission is to produce the most rigorous evidence possible to inform policy and practice. Angrist notes, “The debate over charter schools is not always fact-driven. Our goal at the lab is to bring convincing evidence into these discussions.”
Collaborative kickoff
The collaborative launched with a two-day kickoff in November. Blueprint Labs welcomed researchers, practitioners, funders, and policymakers to MIT to lay the groundwork for the collaborative. Over 80 participants joined the event, including leaders of charter school organizations, researchers at top universities and institutes, and policymakers and advocates from a variety of organizations and education agencies.
Through a series of panels, presentations, and conversations, participants including Rhode Island Department of Education Commissioner Angélica Infante-Green, CEO of Noble Schools Constance Jones, former Knowledge is Power Program CEO Richard Barth, president and CEO of National Association of Charter School Authorizers Karega Rausch, and many others discussed critical topics in the charter school space. These conversations influenced the collaborative’s research agenda.
Several sessions also highlighted how to ensure that the research process includes diverse voices to generate actionable evidence. Panelists noted that researchers should be aware of the demands placed on practitioners and should carefully consider community contexts. In addition, collaborators should treat each other as equal partners.
Parag Pathak, the Class of 1922 Professor of Economics at MIT and a Blueprint Labs co-founder and director, explained the kickoff’s aims. “One of our goals today is to begin to forge connections between [attendees]. We hope that [their] conversations are the launching point for future collaborations,” he stated. Pathak also shared the next steps for the collaborative: “Beginning next year, we’ll start investing in new research using the agenda [developed at this event] as our guide. We will also support new partnerships between researchers and practitioners.”
Research agenda
The discussions at the kickoff informed the collaborative’s research agenda. A recent paper summarizing existing lottery-based research on charter school effectiveness by Sarah Cohodes, an associate professor of public policy at the University of Michigan, and Susha Roy, an associate policy researcher at the RAND Corp., also guides the agenda. Their review finds that in randomized evaluations, many charter schools increase students’ academic achievement. However, researchers have not yet studied charter schools’ impacts on long-term, behavioral, or health outcomes in depth, and rigorous, lottery-based research is currently limited to a handful of urban centers.
The current research agenda focuses on seven topics:
- the long-term effects of charter schools;
- the effect of charters on non-test score outcomes;
- which charter school practices have the largest effect on performance;
- how charter performance varies across different contexts;
- how charter school effects vary with demographic characteristics and student background;
- how charter schools impact non-student outcomes, like teacher retention; and
- how system-level factors, such as authorizing practices, impact charter school performance.
As stakeholders' priorities shift and the collaborative progresses, the research agenda will continue to evolve.
Information for interested partners
Opportunities exist for charter leaders, policymakers, researchers, and funders to engage with the collaborative. Stakeholders can apply for funding, help shape the research agenda, and develop new research partnerships. A competitive funding process will open this month.
Those interested in receiving updates on the collaborative can fill out this form. Please direct questions to chartercollab@mitblueprintlabs.org.
Baran Mensah: Savoring college life in a new country
MIT senior Baran Mensah recalls taking apart his toys as a child, curious to see how every piece worked. When his mother explained to him what an engineer was, he knew that’s what he wanted to be.
Mensah wasn’t particularly familiar with the culture of MIT while growing up in Ghana. But for the last four years, he has dug deeply into many aspects of college life, choosing a major in mechanical engineering and a minor in music, and exploring a wide array of extracurricular activities. In addition to holding significant leadership roles in the Chocolate City living group, he has performed with Sakata Afrique, MIT’s Afro-Caribbean dance group, and belongs to the Rho Nu chapter of Alpha Phi Alpha Fraternity.
His approach to research internships has been equally expansive. During his first year, he worked as a residential facilitator for the Office of Minority Education’s Interphase EDGE program, which invites admitted MIT students to campus before the academic year begins. Having benefited from the program himself, Mensah says he wanted to give back as a facilitator. He also helped develop a software toolkit to expose K-12 students to the field of soft robotics, in the Conor Walsh Lab at Harvard University. Most recently, he was a missions operations intern at NASA’s Jet Propulsion Laboratory, where he worked on automating command generation for the Surface Water and Ocean Topography (SWOT) satellite.
After graduation, Mensah intends to attend graduate school and continue his studies in mechanical engineering. He hopes to then work on robotics hardware, perhaps for legged or biomimetic robots, and to stay in the Boston area for a while. While he’s remaining open to new opportunities as they arise, he ultimately hopes to use his engineering skills to help improve socioeconomic conditions in his home country of Ghana.
MIT News interviewed Mensah to learn more about his life as a student.
Q: What is your favorite area of mechanical engineering?
A: My specific focus is robotics. As long as I can work on a piece of hardware controlled by some software that does cool and interesting things, I will be happy. Soft robotics would be an interesting route to pursue but I’m not married to any specific branch as of yet. One particularly interesting project I worked on involved a swimming robot that used electromagnetic actuation coupled with soft robotics to mimic the swimming of a fish.
I think while the soft robotics part was extremely novel and fascinating, I was more excited about the mimicry of nature using robotics. Robots such as the MIT cheetah or robots at Boston Dynamics are what excite me the most at this moment. It’s an intersection of not only mechanical engineering, electrical engineering, and computer science, but biology as well. I often find that the more subject areas a project intersects with, the more exciting it is.
Q: Tell us about your communities on campus.
A: My primary one is Chocolate City at MIT. We’re able to foster a community that allows a lot of people to feel comfortable at MIT. It’s about 30 people, so you’re able to get close friends and bonds, which sometimes can be really hard. And we really push people as well to be involved in the community and active.
The next big one is my dance group, Sakata Afrique. Dancing is something that I didn’t really think I would get into, but I got here and really enjoyed it. And it’s now a big part of my life. It’s really important for that reason, but also because an Afro-Caribbean dance group allows me to display and show my culture, in a sense.
I’m a member of the Rho Nu chapter of Alpha Phi Alpha Fraternity Incorporated. It’s the first intercollegiate Greek-letter fraternity established for African American men, and this chapter serves the campuses of several universities, so from that, I’ve been able to interface with a lot of different people from outside of MIT, which is great. And it’s allowed me to develop myself as a man and learn a lot of different things.
Q: What are some of your favorite memories from your time with these groups?
A: For Chocolate City, one of my fondest memories is the first party that we threw in February of 2021. When we came back to campus a lot of people didn’t really know who we are, and we needed to revive the group. The day of the party, we knew we’d sold 500 tickets, but actually seeing all these people coming in, I was like, “Wow, we really did this.” It was a big thing for me, because this was the first time I’d ever been responsible for marketing an event this large.
For Sakata, a highlight was last year when we had our show, Afro Shake. Right before we went on stage to dance it was a rush of emotions, because this was something that me and my co-choreographer had been working on for basically a whole year. And this was the culmination of all of our work.
In my fraternity, we had a poetry event in February of last year. Seeing how everybody enjoyed the event and learned something new really felt good because it was me and my line brother’s first program, so it was a big thing that we put a lot of work into and seeing that pay off was really amazing.
Q: How did you balance everything with your studies?
A: Time management is extremely important. The busier I got, the crazier my calendar would look. I’d have times when I scheduled when I had lunch. If I only had one hour to do an assignment I had to make the hour count. It was definitely very hard, but I think it’s a big teaching tool. You have a lot more time than you think you do. A lot of it just goes to waste. There’s a lot of ways you can very strategically [optimize] your time.
Q: What do you do in your down time?
A: Dancing is a big one. Working out, if I have time. I’m into music. I’m a music minor, actually. I play guitar often, as it’s a really good outlet, [and] especially good for expanding my musical diversity because I feel like the type of music that I listen to on a daily basis is not the same type of music that I play on the guitar. So, it really forces me to listen to other types of music, which I enjoy.
Q: Have you had a music minor the whole time you’ve been an undergraduate?
A: No. Coming to MIT, I wanted to concentrate in Spanish for my [humanities, arts, and social sciences requirement] because I’d done Spanish for nine or 10 years. But then seeing the wealth of music classes, I realized this was something that I really wanted to take advantage of. I think music was something that I always wanted to do, but I never really had the resources available until now. With music and with other areas of life, what I’ve learned is that sometimes it might serve you well to just not know what your plan is, and be very open, and see what happens.
Q: Even without a specific plan, do you think you might return to Ghana in the future?
A: I do hope to return at some point, and that I will be able to contribute to uplifting my country in a socioeconomic sense. I envision setting up a foundation that not only offers access to novel technology to solve complex issues, but also helps provide high-quality, affordable educational opportunities. Although I have had this goal for a while, my friend Wilhem Hector has inspired me to look at driving change in this way. I am still unsure about how, exactly, I will go about doing this, but with every passing day, the road forward is slightly clearer.
Researchers improve blood tests’ ability to detect and monitor cancer
Tumors constantly shed DNA from dying cells, which briefly circulates in the patient’s bloodstream before it is quickly broken down. Many companies have created blood tests that can pick out this tumor DNA, potentially helping doctors diagnose or monitor cancer or choose a treatment.
The amount of tumor DNA circulating at any given time, however, is extremely small, so it has been challenging to develop tests sensitive enough to pick up that tiny signal. A team of researchers from MIT and the Broad Institute of MIT and Harvard has now come up with a way to significantly boost that signal, by temporarily slowing the clearance of tumor DNA circulating in the bloodstream.
The researchers developed two different types of injectable molecules that they call “priming agents,” which can transiently interfere with the body’s ability to remove circulating tumor DNA from the bloodstream. In a study of mice, they showed that these agents could boost DNA levels enough that the percentage of detectable early-stage lung metastases leapt from less than 10 percent to above 75 percent.
This approach could enable not only earlier diagnosis of cancer, but also more sensitive detection of tumor mutations that could be used to guide treatment. It could also help improve detection of cancer recurrence.
“You can give one of these agents an hour before the blood draw, and it makes things visible that previously wouldn’t have been. The implication is that we should be able to give everybody who’s doing liquid biopsies, for any purpose, more molecules to work with,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science.
Bhatia is one of the senior authors of the new study, along with J. Christopher Love, the Raymond A. and Helen E. St. Laurent Professor of Chemical Engineering at MIT and a member of the Koch Institute and the Ragon Institute of MGH, MIT, and Harvard; and Viktor Adalsteinsson, director of the Gerstner Center for Cancer Diagnostics at the Broad Institute.
Carmen Martin-Alonso PhD ’23, MIT and Broad Institute postdoc Shervin Tabrizi, and Broad Institute scientist Kan Xiong are the lead authors of the paper, which appears today in Science.
Better biopsies
Liquid biopsies, which enable detection of small quantities of DNA in blood samples, are now used in many cancer patients to identify mutations that could help guide treatment. With greater sensitivity, however, these tests could become useful for far more patients. Most efforts to improve the sensitivity of liquid biopsies have focused on developing new sequencing technologies to use after the blood is drawn.
While brainstorming ways to make liquid biopsies more informative, Bhatia, Love, Adalsteinsson, and their trainees came up with the idea of trying to increase the amount of DNA in a patient’s bloodstream before the sample is taken.
“A tumor is always creating new cell-free DNA, and that’s the signal that we’re attempting to detect in the blood draw. Existing liquid biopsy technologies, however, are limited by the amount of material you collect in the tube of blood,” Love says. “Where this work intercedes is thinking about how to inject something beforehand that would help boost or enhance the amount of signal that is available to collect in the same small sample.”
The body uses two primary strategies to remove circulating DNA from the bloodstream. Enzymes called DNases circulate in the blood and break down DNA that they encounter, while immune cells known as macrophages take up cell-free DNA as blood is filtered through the liver.
The researchers decided to target each of these processes separately. To prevent DNases from breaking down DNA, they designed a monoclonal antibody that binds to circulating DNA and protects it from the enzymes.
“Antibodies are well-established biopharmaceutical modalities, and they’re safe in a number of different disease contexts, including cancer and autoimmune treatments,” Love says. “The idea was, could we use this kind of antibody to help shield the DNA temporarily from degradation by the nucleases that are in circulation? And by doing so, we shift the balance to where the tumor is generating DNA slightly faster than is being degraded, increasing the concentration in a blood draw.”
The other priming agent they developed is a nanoparticle designed to block macrophages from taking up cell-free DNA. These cells have a well-known tendency to eat up synthetic nanoparticles.
“DNA is a biological nanoparticle, and it made sense that immune cells in the liver were probably taking this up just like they do synthetic nanoparticles. And if that were the case, which it turned out to be, then we could use a safe dummy nanoparticle to distract those immune cells and leave the circulating DNA alone so that it could be at a higher concentration,” Bhatia says.
Earlier tumor detection
The researchers tested their priming agents in mice that received transplants of cancer cells that tend to form tumors in the lungs. Two weeks after the cells were transplanted, the researchers showed that these priming agents could boost the amount of circulating tumor DNA recovered in a blood sample by up to 60-fold.
Once the blood sample is taken, it can be run through the same kinds of sequencing tests now used on liquid biopsy samples. These tests can pick out tumor DNA, including specific sequences used to determine the type of tumor and potentially what kinds of treatments would work best.
Early detection of cancer is another promising application for these priming agents. The researchers found that giving mice the nanoparticle priming agent before blood was drawn allowed them to detect circulating tumor DNA in the blood of 75 percent of the mice with low cancer burden, whereas none was detectable without this boost.
“One of the greatest hurdles for cancer liquid biopsy testing has been the scarcity of circulating tumor DNA in a blood sample,” Adalsteinsson says. “It’s thus been encouraging to see the magnitude of the effect we’ve been able to achieve so far and to envision what impact this could have for patients.”
After either of the priming agents is injected, it takes an hour or two for DNA levels to increase in the bloodstream, and they return to normal within about 24 hours.
“The ability to get peak activity of these agents within a couple of hours, followed by their rapid clearance, means that someone could go into a doctor’s office, receive an agent like this, and then give their blood for the test itself, all within one visit,” Love says. “This feature bodes well for the potential to translate this concept into clinical use.”
The researchers have launched a company called Amplifyer Bio that plans to further develop the technology, in hopes of advancing to clinical trials.
“A tube of blood is a much more accessible diagnostic than colonoscopy screening or even mammography,” Bhatia says. “Ultimately, if these tools really are predictive, then we should be able to get many more patients into the system who could benefit from cancer interception or better therapy.”
The research was funded by the Koch Institute Support (core) Grant from the National Cancer Institute, the Marble Center for Cancer Nanomedicine, the Gerstner Family Foundation, the Ludwig Center at MIT, the Koch Institute Frontier Research Program via the Casey and Family Foundation, and the Bridge Project, a partnership between the Koch Institute and the Dana-Farber/Harvard Cancer Center.
New hope for early pancreatic cancer intervention via AI-based risk prediction
The first documented case of pancreatic cancer dates back to the 18th century. Since then, researchers have undertaken a protracted and challenging odyssey to understand the elusive and deadly disease. To date, there is no better cancer treatment than early intervention. Unfortunately, the pancreas, nestled deep within the abdomen, is especially hard to examine, making early detection particularly difficult.
MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) scientists, alongside Limor Appelbaum, a staff scientist in the Department of Radiation Oncology at Beth Israel Deaconess Medical Center (BIDMC), were eager to better identify potential high-risk patients. They set out to develop two machine-learning models for early detection of pancreatic ductal adenocarcinoma (PDAC), the most common form of the cancer. To access a broad and diverse database, the team synced up with a federated network company, using electronic health record data from various institutions across the United States. This vast pool of data helped ensure the models' reliability and generalizability, making them applicable across a wide range of populations, geographical locations, and demographic groups.
The two models — the “PRISM” neural network and a logistic regression model (a statistical technique for estimating probabilities) — outperformed current methods. The team’s comparison showed that while standard screening criteria identify about 10 percent of PDAC cases using a five-times-higher relative risk threshold, PRISM can detect 35 percent of PDAC cases at that same threshold.
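To make the comparison concrete, here is a minimal sketch of what "percent of cases detected at a five-times-higher relative risk threshold" means: flag every patient whose predicted risk is at least five times the population baseline, then measure what fraction of true cases were flagged. The function name, scores, labels, and baseline rate below are invented for illustration, not taken from the study.

```python
# Illustrative sensitivity-at-relative-risk calculation (toy data, see lead-in).

def sensitivity_at_relative_risk(scores, labels, baseline_rate, multiplier=5.0):
    """Fraction of true cases (label == 1) whose predicted risk is at
    least `multiplier` times the population baseline rate."""
    threshold = multiplier * baseline_rate
    case_scores = [s for s, y in zip(scores, labels) if y == 1]
    if not case_scores:
        return 0.0
    return sum(1 for s in case_scores if s >= threshold) / len(case_scores)

# Toy example: four patients, two of whom actually developed PDAC
scores = [0.002, 0.060, 0.010, 0.200]   # model-predicted risks
labels = [0, 1, 0, 1]                   # 1 = PDAC case
print(sensitivity_at_relative_risk(scores, labels, baseline_rate=0.01))  # 1.0
```

With a baseline rate of 0.01, the threshold is 0.05, so both true cases (risks 0.06 and 0.20) are flagged and sensitivity is 1.0; a model whose case scores fell below 0.05 would score lower on the same metric.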
Using AI to detect cancer risk is not a new phenomenon — algorithms analyze mammograms and CT scans for lung cancer, and assist in the analysis of Pap smear tests and HPV testing, to name a few applications. “The PRISM models stand out for their development and validation on an extensive database of over 5 million patients, surpassing the scale of most prior research in the field,” says Kai Jia, an MIT PhD student in electrical engineering and computer science (EECS), MIT CSAIL affiliate, and first author on an open-access paper in eBioMedicine outlining the new work. “The model uses routine clinical and lab data to make its predictions, and the diversity of the U.S. population is a significant advancement over other PDAC models, which are usually confined to specific geographic regions, like a few health-care centers in the U.S. Additionally, using a unique regularization technique in the training process enhanced the models' generalizability and interpretability.”
“This report outlines a powerful approach to use big data and artificial intelligence algorithms to refine our approach to identifying risk profiles for cancer,” says David Avigan, a Harvard Medical School professor and the cancer center director and chief of hematology and hematologic malignancies at BIDMC, who was not involved in the study. “This approach may lead to novel strategies to identify patients with high risk for malignancy that may benefit from focused screening with the potential for early intervention.”
Prismatic perspectives
The journey toward the development of PRISM began over six years ago, fueled by firsthand experiences with the limitations of current diagnostic practices. “Approximately 80-85 percent of pancreatic cancer patients are diagnosed at advanced stages, where cure is no longer an option,” says senior author Appelbaum, who is also a Harvard Medical School instructor as well as a radiation oncologist. “This clinical frustration sparked the idea to delve into the wealth of data available in electronic health records (EHRs).”
The CSAIL group’s close collaboration with Appelbaum made it possible to understand the combined medical and machine learning aspects of the problem better, eventually leading to a much more accurate and transparent model. “The hypothesis was that these records contained hidden clues — subtle signs and symptoms that could act as early warning signals of pancreatic cancer,” she adds. “This guided our use of federated EHR networks in developing these models, for a scalable approach for deploying risk prediction tools in health care.”
Both PrismNN and PrismLR models analyze EHR data, including patient demographics, diagnoses, medications, and lab results, to assess PDAC risk. PrismNN uses artificial neural networks to detect intricate patterns in data features like age, medical history, and lab results, yielding a risk score for PDAC likelihood. PrismLR uses logistic regression for a simpler analysis, generating a probability score of PDAC based on these features. Together, the models offer a thorough evaluation of different approaches in predicting PDAC risk from the same EHR data.
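The two modeling approaches can be sketched in a few lines. Everything below is invented for illustration — a made-up three-feature EHR snapshot and random weights, not the actual PRISM features or parameters:

```python
import numpy as np

# Hypothetical 3-feature EHR snapshot (scaled age, diabetes flag, scaled visit frequency).
# The features and all weights below are made up for illustration.
x = np.array([0.72, 1.0, 0.55])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prism_lr_score(x, w, b):
    # Logistic regression: a weighted sum of features squashed into a probability.
    return sigmoid(w @ x + b)

def prism_nn_score(x, W1, b1, w2, b2):
    # Small feed-forward network: a hidden ReLU layer can capture feature interactions
    # that a plain weighted sum cannot.
    h = np.maximum(0.0, W1 @ x + b1)
    return sigmoid(w2 @ h + b2)

rng = np.random.default_rng(0)
w, b = rng.normal(size=3), 0.0
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
w2, b2 = rng.normal(size=4), 0.0

lr_risk = prism_lr_score(x, w, b)
nn_risk = prism_nn_score(x, W1, b1, w2, b2)
print(f"PrismLR-style risk: {lr_risk:.3f}, PrismNN-style risk: {nn_risk:.3f}")
```

The trade-off this sketch illustrates is the one the researchers weigh below: the logistic-regression path is directly interpretable (each weight maps to one feature), while the neural path is more expressive but harder to explain.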
One paramount point for gaining the trust of physicians, the team notes, is better understanding how the models work, known in the field as interpretability. The scientists pointed out that while logistic regression models are inherently easier to interpret, recent advancements have made deep neural networks somewhat more transparent. This helped the team to refine the thousands of potentially predictive features derived from the EHR of a single patient to approximately 85 critical indicators. These indicators, which include patient age, diabetes diagnosis, and an increased frequency of visits to physicians, are automatically discovered by the model but match physicians' understanding of risk factors associated with pancreatic cancer.
The path forward
Despite the promise of the PRISM models, as with all research, some parts are still a work in progress. The models are currently trained on U.S. data alone, so they will need testing and adaptation before global use. The path forward, the team notes, includes expanding the model's applicability to international datasets and integrating additional biomarkers for more refined risk assessment.
“A subsequent aim for us is to facilitate the models' implementation in routine health care settings. The vision is to have these models function seamlessly in the background of health care systems, automatically analyzing patient data and alerting physicians to high-risk cases without adding to their workload,” says Jia. “A machine-learning model integrated with the EHR system could empower physicians with early alerts for high-risk patients, potentially enabling interventions well before symptoms manifest. We are eager to deploy our techniques in the real world to help all individuals enjoy longer, healthier lives.”
Jia wrote the paper alongside Appelbaum and MIT EECS Professor and CSAIL Principal Investigator Martin Rinard, who are both senior authors of the paper. Researchers on the paper were supported during their time at MIT CSAIL, in part, by the Defense Advanced Research Projects Agency, Boeing, the National Science Foundation, and Aarno Labs. TriNetX provided resources for the project, and the Prevent Cancer Foundation also supported the team.
Meeting the clean energy needs of tomorrow
Yuri Sebregts, chief technology officer at Shell, succinctly laid out the energy dilemma facing the world over the rest of this century. On one hand, demand for energy is quickly growing as countries in the developing world modernize and the global population grows, with 100 gigajoules of energy per person needed annually to enable quality-of-life benefits and industrialization around the globe. On the other, traditional energy sources are quickly warming the planet, with the world already seeing the devastating effects of increasingly frequent extreme weather events.
While the goals of energy security and energy sustainability are seemingly at odds with one another, the two must be pursued in tandem, Sebregts said during his address at the MIT Energy Initiative Fall Colloquium.
“An environmentally sustainable energy system that isn’t also a secure energy system is not sustainable,” Sebregts said. “And conversely, a secure energy system that is not environmentally sustainable will do little to ensure long-term energy access and affordability. Therefore, security and sustainability must go hand-in-hand. You can’t trade off one for the other.”
Sebregts noted that there are several potential pathways to help strike this balance, including investments in renewable energy sources, the use of carbon offsets, and the creation of more efficient tools, products, and processes. However, he acknowledged that meeting growing energy demands while minimizing environmental impacts is a global challenge requiring an unprecedented level of cooperation among countries and corporations across the world.
“At Shell, we recognize that this will require a lot of collaboration between governments, businesses, and civil society,” Sebregts said. “That’s not always easy.”
Global conflict and global warming
In 2021, Sebregts noted, world leaders gathered in Glasgow, Scotland, and collectively promised to deliver on the “stretch goal” of the 2015 Paris Agreement, which would limit global warming to 1.5 degrees Celsius — a level that scientists believe will help avoid the worst potential impacts of climate change. But, just a few months later, Russia invaded Ukraine, resulting in chaos in global energy markets and illustrating the massive impact that geopolitical friction can have on efforts to reduce carbon emissions.
“Even though global volatility has been a near constant of this century, the situation in Ukraine is proving to be a turning point,” Sebregts said. “The stress it placed on the global supply of energy, food, and other critical materials was enormous.”
In Europe, Sebregts noted, countries affected by the loss of Russia’s natural gas supply began importing from the Middle East and the United States. This, in turn, drove up prices. While this did result in some efforts to limit energy use, such as Europeans lowering their thermostats in the winter, it also caused some energy buyers to turn to coal. For instance, the German government approved additional coal mining to boost its energy security — temporarily reversing a decades-long transition away from the fuel. To put this into wider perspective, in a single quarter, China increased its coal generation capacity by as much as Germany had reduced its own over the previous 20 years.
The promise of electrification
Sebregts noted the strides being made toward electrification, which is expected to have a significant impact on global carbon emissions. To meet net-zero emissions (the point at which humans are adding no more carbon to the atmosphere than they are removing) by 2050, the share of electricity as a portion of total worldwide energy consumption must reach 37 percent by 2030, up from 20 percent in 2020, Sebregts said.
He pointed out that Shell has become one of the world’s largest electric vehicle charging companies, with more than 30,000 public charge points. By 2025, that number will increase to 70,000, and it is expected to soar to 200,000 by 2030. While demand and infrastructure for electric vehicles are growing, Sebregts said that the “real needle-mover” will be industrial electrification, especially in so-called “hard-to-abate” sectors.
This progress will depend heavily on global cooperation — Sebregts pointed out that China dominates the international market for many rare elements that are key components of electrification infrastructure. “It shouldn’t be a surprise that the political instability, shifting geopolitical tensions, and environmental and social governance issues are significant risks for the energy transition,” he said. “It is imperative that we reduce, control, and mitigate these risks as much as possible.”
Two possible paths
For decades, Sebregts said, Shell has created scenarios to help senior managers think through the long-term challenges facing the company. While Sebregts stressed that these scenarios are not predictions, they do take into account real-world conditions, and they are meant to give leaders the opportunity to grapple with plausible situations.
With this in mind, Sebregts outlined Shell’s most recent Energy Security Scenarios, describing the potential future consequences of attempts to balance growing energy demand with sustainability — scenarios that envision vastly different levels of global cooperation, with huge differences in projected results.
The first scenario, dubbed “Archipelagos,” imagines countries pursuing energy security through self-interest — a fragmented, competitive process that would result in a global temperature increase of 2.2 degrees Celsius by the end of this century. The second scenario, “Sky 2050,” envisions countries around the world collaborating to change the energy system for their mutual benefit. This more optimistic scenario would see a much lower global temperature increase of 1.2 C by 2100.
“The good news is that in both scenarios, the world is heading for net-zero emissions at some point,” Sebregts said. “The difference is a question of when it gets there. In Sky 2050, it is the middle of the century. In Archipelagos, it is early in the next century.”
On the other hand, Sebregts added, the average global temperature will increase by more than 1.5 C for some period of time in either scenario. But, in the Archipelagos scenario, this overshoot will be much larger, and will take much longer to come down. “So, two very different futures,” Sebregts said. “Two very different worlds.”
The work ahead
Questioned about the costs of transitioning to a net-zero energy ecosystem, Sebregts said that it is “very hard” to provide an accurate answer. “If you impose an additional constraint … you’re going to have to add some level of cost,” he said. “But then, of course, there’s 30 years of technology development pathway that might counteract some of that.”
In some cases, such as air travel, Sebregts said, it will likely remain impractical to either rely on electrification or sequester carbon at the source of emission. Direct air capture (DAC) methods, which mechanically pull carbon directly from the atmosphere, will have a role to play in offsetting these emissions, he said. Sebregts predicted that the price of DAC could come down significantly by the middle of this century. “I would venture that a price of $200 to $250 a ton of CO2 by 2050 is something that the world would be willing to spend, at least in developed economies, to offset those very hard-to-abate instances.”
Sebregts noted that Shell is working on demonstrating DAC technologies in Houston, Texas, constructing what will become Europe’s largest hydrogen plant in the Netherlands, and taking other steps to profitably transition to a net-zero emissions energy company by 2050. “We need to understand what can help our customers transition quicker and how we can continue to satisfy their needs,” he said. “We must ensure that energy is affordable, accessible, and sustainable, as soon as possible.”
Reasoning and reliability in AI
In order for natural language to be an effective form of communication, the parties involved need to be able to understand words and their context, assume that the content is largely shared in good faith and is trustworthy, reason about the information being shared, and then apply it to real-world scenarios. MIT PhD students interning with the MIT-IBM Watson AI Lab — Athul Paul Jacob SM ’22, Maohao Shen SM ’23, Victor Butoi, and Andi Peng SM ’23 — are each working to improve a step of this process in natural language models, so that AI systems can be more dependable and accurate for users.
To achieve this, Jacob’s research strikes at the heart of existing natural language models to improve the output, using game theory. His interests, he says, are two-fold: “One is understanding how humans behave, using the lens of multi-agent systems and language understanding, and the second thing is, ‘How do you use that as an insight to build better AI systems?’” His work stems from the board game “Diplomacy,” where his research team developed a system that could learn and predict human behaviors and negotiate strategically to achieve a desired, optimal outcome.
“This was a game where you need to build trust; you need to communicate using language. You need to also play against six other players at the same time, which were very different from all the kinds of task domains people were tackling in the past,” says Jacob, referring to other games like poker and Go that researchers put to neural networks. “In doing so, there were a lot of research challenges. One was, ‘How do you model humans? How do you know when humans tend to act irrationally?’” Jacob and his research mentors — including Associate Professor Jacob Andreas and Assistant Professor Gabriele Farina of the MIT Department of Electrical Engineering and Computer Science (EECS), and the MIT-IBM Watson AI Lab’s Yikang Shen — recast the problem of language generation as a two-player game.
Using “generator” and “discriminator” models, Jacob’s team developed a natural language system to produce answers to questions and then observe the answers and determine if they are correct. If they are, the AI system receives a point; if not, no point is awarded. Language models notoriously tend to hallucinate, making them less trustworthy; this no-regret learning algorithm collaboratively takes a natural language model and encourages the system’s answers to be more truthful and reliable, while keeping the solutions close to the pre-trained language model’s priors. Jacob says that using this technique in conjunction with a smaller language model could, likely, make it competitive with the same performance of a model many times bigger.
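The generator-discriminator game can be sketched as a toy no-regret dynamic over a handful of candidate answers. Everything here is invented for illustration (the candidates, priors, step size, and regularization strength), and the multiplicative-weights update is a generic stand-in, not the team's exact algorithm:

```python
import numpy as np

# Toy setup: three candidate answers to one question.
gen_prior = np.array([0.5, 0.3, 0.2])   # pre-trained LM's probability of each candidate
disc_prior = np.array([0.2, 0.1, 0.7])  # discriminator's prior belief each is correct

def normalize(p):
    return p / p.sum()

gen, disc = gen_prior.copy(), disc_prior.copy()
eta, lam = 0.5, 0.1  # step size and pull toward each player's prior (assumed values)
for _ in range(100):
    # Each player is rewarded for agreeing with the other, while the log-prior
    # term keeps its policy close to the pre-trained model's prior.
    gen = normalize(gen * np.exp(eta * (disc + lam * np.log(gen_prior))))
    disc = normalize(disc * np.exp(eta * (gen + lam * np.log(disc_prior))))

print("generator policy after the game:", np.round(gen, 3))
```

The point of the sketch: the two players settle into an equilibrium where they agree on which answers are correct, rather than either side unilaterally dictating the output.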
Once a language model generates a result, researchers ideally want its confidence in its generation to align with its accuracy, but this frequently isn’t the case. Hallucinations can occur with the model reporting high confidence when it should be low. Maohao Shen and his group — with mentors Gregory Wornell, Sumitomo Professor of Engineering in EECS, and IBM Research scientists Subhro Das, Prasanna Sattigeri, and Soumya Ghosh — are looking to fix this through uncertainty quantification (UQ). “Our project aims to calibrate language models when they are poorly calibrated,” says Shen. Specifically, they’re looking at the classification problem. For this, Shen allows a language model to generate free text, which is then converted into a multiple-choice classification task. For instance, they might ask the model to solve a math problem and then ask whether the answer it generated is correct, choosing among “yes, no, or maybe.” This helps to determine if the model is over- or under-confident.
Automating this, the team developed a technique that helps tune the confidence output by a pre-trained language model. The researchers trained an auxiliary model using the ground-truth information in order for their system to be able to correct the language model. “If your model is over-confident in its prediction, we are able to detect it and make it less confident, and vice versa,” explains Shen. The team evaluated their technique on multiple popular benchmark datasets to show how well it generalizes to unseen tasks to realign the accuracy and confidence of language model predictions. “After training, you can just plug in and apply this technique to new tasks without any other supervision,” says Shen. “The only thing you need is the data for that new task.”
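A common post-hoc recipe in this spirit is temperature scaling, where a single parameter fitted on held-out ground-truth labels rescales the model's confidences; note this is a generic illustration, not necessarily the team's exact technique, and the data below are made up:

```python
import numpy as np

def calibrate(confidences, temperature):
    # Temperature scaling: soften (t > 1) or sharpen (t < 1) confidences
    # on the logit scale.
    logits = np.log(confidences) - np.log(1.0 - confidences)
    return 1.0 / (1.0 + np.exp(-logits / temperature))

# Hypothetical held-out data: the model's stated confidences and whether it was right.
conf = np.array([0.99, 0.95, 0.90, 0.97, 0.85])   # mean confidence ~0.93
correct = np.array([1, 0, 1, 0, 1])               # actual accuracy 0.6: overconfident

def nll(t):
    # Negative log-likelihood of the true labels under calibrated confidences.
    p = calibrate(conf, t)
    return -np.mean(correct * np.log(p) + (1 - correct) * np.log(1.0 - p))

# Fit the temperature by grid search on the held-out labels.
temps = np.linspace(0.5, 20.0, 400)
best_t = temps[np.argmin([nll(t) for t in temps])]

print("fitted temperature:", round(float(best_t), 2))
print("calibrated confidences:", np.round(calibrate(conf, best_t), 2))
```

Because the model above is overconfident, the fitted temperature comes out above 1 and pulls the stated confidences down toward the observed accuracy, mirroring the "detect over-confidence and reduce it" behavior Shen describes.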
Victor Butoi also works on enhancing model capability, but from a different angle: his lab team — which includes John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering in EECS; lab researchers Leonid Karlinsky and Rogerio Feris of IBM Research; and lab affiliates Hilde Kühne of the University of Bonn and Wei Lin of Graz University of Technology — is creating techniques to allow vision-language models to reason about what they’re seeing, and is designing prompts to unlock new learning abilities and understand key phrases.
Compositional reasoning is just another aspect of the decision-making process that we ask machine-learning models to perform in order for them to be helpful in real-world situations, explains Butoi. “You need to be able to think about problems compositionally and solve subtasks,” says Butoi, “like, if you're saying the chair is to the left of the person, you need to recognize both the chair and the person. You need to understand directions.” And then once the model understands “left,” the research team wants the model to be able to answer other questions involving “left.”
Surprisingly, vision-language models do not reason well about composition, Butoi explains, but they can be nudged toward it using a model that can “lead the witness,” if you will. The team developed a model that was tweaked using a technique called low-rank adaptation of large language models (LoRA) and trained on an annotated dataset called Visual Genome, which has objects in an image and arrows denoting relationships, like directions. In this case, the trained LoRA model would be guided to say something about “left” relationships, and this caption output would then be used to provide context and prompt the vision-language model, making it a “significantly easier task,” says Butoi.
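The LoRA technique mentioned above can be sketched generically: a large frozen weight matrix is adapted by training only a low-rank correction. The matrix sizes and rank here are arbitrary illustrative choices, and the zero-initialization of the up-projection is the usual LoRA convention:

```python
import numpy as np

# LoRA sketch: instead of updating a large frozen weight matrix W (d x d),
# train only a low-rank update B @ A, with A (r x d) and B (d x r), r << d.
d, r = 512, 8
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))          # frozen pre-trained weights
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialized
                                     # so the adapted model starts out identical
                                     # to the original

def adapted_forward(x):
    # Original path plus the low-rank correction.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d)
assert np.allclose(adapted_forward(x), W @ x)   # identity at initialization

# Parameter savings: full fine-tuning vs. LoRA's trainable parameters.
print("full:", d * d, "LoRA:", 2 * d * r)
```

The appeal is the parameter count: here the low-rank path trains 8,192 values instead of 262,144, which is what makes it practical to specialize a large model on a dataset like Visual Genome.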
In the world of robotics, AI systems also engage with their surroundings using computer vision and language. The settings may range from warehouses to the home. Andi Peng and mentors MIT’s H.N. Slater Professor in Aeronautics and Astronautics Julie Shah and Chuang Gan, of the lab and the University of Massachusetts at Amherst, are focusing on assisting people with physical constraints, using virtual worlds. For this, Peng’s group is developing two embodied AI models — a “human” that needs support and a helper agent — in a simulated environment called ThreeDWorld. Focusing on human/robot interactions, the team leverages semantic priors captured by large language models to help the helper AI infer which abilities the “human” agent lacks and the motivation behind its actions, using natural language. The team is looking to strengthen the helper’s sequential decision-making, bidirectional communication, ability to understand the physical scene, and how best to contribute.
“A lot of people think that AI programs should be autonomous, but I think that an important part of the process is that we build robots and systems for humans, and we want to convey human knowledge,” says Peng. “We don’t want a system to do something in a weird way; we want them to do it in a human way that we can understand.”
Evidence that gamma rhythm stimulation can treat neurological disorders is emerging
A surprising MIT study published in Nature at the end of 2016 helped to spur interest in the possibility that light flickering at the frequency of a particular gamma-band brain rhythm could produce meaningful therapeutic effects for people with Alzheimer’s disease. In a new review paper in the Journal of Internal Medicine, the lab that led those studies takes stock of what a growing number of scientists worldwide have been finding out since then in dozens of clinical and lab benchtop studies.
Brain rhythms (also called brain “waves” or “oscillations”) arise from the synchronized network activity of brain cells and circuits as they coordinate to enable brain functions such as perception or cognition. Lower-range gamma-frequency rhythms, those around 40 cycles a second, or hertz (Hz), are particularly important for memory processes, and MIT’s research has shown that they are also associated with specific changes at the cellular and molecular level. The 2016 study and many others since then have produced evidence, initially in animals and more recently in humans, that various noninvasive means of enhancing the power and synchrony of 40Hz gamma rhythms helps to reduce Alzheimer’s pathology and its consequences.
“What started in 2016 with optogenetic and visual stimulation in mice has expanded to a multitude of stimulation paradigms, a wide range of human clinical studies with promising results, and is narrowing in on the mechanisms underlying this phenomenon,” write the authors including Li-Huei Tsai, Picower Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT.
Though the number of studies and methods has increased and the data have typically suggested beneficial clinical effects, the article’s authors also clearly caution that the clinical evidence remains preliminary and that animal studies intended to discern how the approach works have been instructive, but not definitive.
“Research into the clinical potential of these interventions is still in its nascent stages,” the researchers, led by MIT postdoc Cristina Blanco-Duque, write in introducing the review. “The precise mechanisms underpinning the beneficial effects of gamma stimulation in Alzheimer’s disease are not yet fully elucidated, but preclinical studies have provided relevant insights.”
Preliminarily promising
The authors list and summarize results from 16 clinical studies published over the last several years. These employ gamma-frequency sensory stimulation (e.g., exposure to light, sound, tactile vibration, or a combination); transcranial alternating current stimulation (tACS), in which a brain region is stimulated via scalp electrodes; or transcranial magnetic stimulation (TMS), in which electric currents are induced in a brain region using magnetic fields. The studies also vary in their sample size, design, duration, and in what effects they assessed. Some of the sensory studies using light have tested different colors and different exact frequencies. And while some studies show that sensory stimulation appears to affect multiple regions in the brain, tACS and TMS are more regionally focused (though those brain regions still connect and interact with others).
Given these variations, the clinical studies taken together offer a blend of uneven but encouraging evidence, the authors write. Across clinical studies involving patients with Alzheimer’s disease, sensory stimulation has proven safe and well-tolerated. Multiple sensory studies have measured increases in gamma power and brain network connectivity. Sensory studies have also reported improvements in memory and/or cognition, as well as sleep. Some have yielded apparent physiological benefits such as reduction of brain atrophy, in one case, and changes in immune system activity in another. So far, sensory studies have not shown reductions in Alzheimer’s hallmark proteins, amyloid or tau.
Clinical studies stimulating 40Hz rhythms using tACS, ranging in sample size from only one to as many as 60, are the most numerous so far, and many have shown similar benefits. Most report benefits to cognition, executive function, and/or memory (depending sometimes on the brain region stimulated), and some have assessed that benefits endure even after treatment concludes. Some have shown effects on measures of tau and amyloid, blood flow, neuromodulatory chemical activity, or immune activity. Finally, a 40Hz stimulation clinical study using TMS in 37 patients found improvements in cognition, prevention of brain atrophy, and increased brain connectivity.
“The most important test for gamma stimulation is without a doubt whether it is safe and beneficial for patients,” the authors write. “So far, results from several small trials on sensory gamma stimulation suggest that it is safe, evokes rhythmic EEG brain responses, and there are promising signs for AD [Alzheimer's disease] symptoms and pathology. Similarly, studies on transcranial stimulation report the potential to benefit memory and global cognitive function even beyond the end of treatment.”
Studying underlying mechanisms
In parallel, dozens more studies have shown significant benefits in mice including reductions in amyloid and tau, preservation of brain tissue, and improvements in memory. But animal studies also have offered researchers a window into the cellular and molecular mechanisms by which gamma stimulation might have these effects.
Before MIT’s original studies in 2016 and 2019, researchers had not attributed molecular changes in brain cells to changes in brain rhythms, but those and other studies have now shown that they affect not only the molecular state of neurons, but also the brain’s microglia immune cells, astrocyte cells that play key roles in regulating circulation, and indeed the brain’s vasculature system. A hypothesis of Tsai’s lab right now is that sensory gamma stimulation might promote the clearance of amyloid and tau via increased circulatory activity of brain fluids.
A hotly debated aspect of gamma stimulation is how it affects the electrical activity of neurons, and how pervasively. Studies indicate, though, that inhibitory “interneurons” are especially affected, offering a clue about how increased gamma activity, and its physiological effects, might propagate.
“The field has generated tantalizing leads on how gamma stimulation may translate into beneficial effects on the cellular and molecular level,” the authors write.
Gamma going forward
While the authors make clear that more definitive clinical studies are needed, they note that 15 new clinical studies of gamma stimulation are now underway. Among these is a phase 3 clinical trial by the company Cognito Therapeutics, which has licensed MIT’s technology. That study plans to enroll hundreds of participants.
Meanwhile, some recent or new clinical and preclinical studies have begun looking at whether gamma stimulation may be applicable to neurological disorders other than Alzheimer’s, including stroke or Down syndrome. In experiments with mouse models, for example, an MIT team has been testing gamma stimulation’s potential to help with cognitive effects of chemotherapy, or “chemobrain.”
“Larger clinical studies are required to ascertain the long-term benefits of gamma stimulation,” the authors conclude. “In animal models the focus should be on delineating the mechanism of gamma stimulation and providing further proof of principle studies on what other applications gamma stimulation may have.”
In addition to Tsai and Blanco-Duque, the paper’s other authors are Diane Chan, Martin Kahn, and Mitch Murdock.
MedLinks volunteers aid students in residence halls with minor medical issues
For 30 years, MIT MedLinks liaisons have volunteered to support MIT students with first-line medical care. Living in each of MIT’s residence halls and in numerous fraternities, sororities, and independent living groups, MedLinks administer basic first aid, share over-the-counter medicines when needed, explain MIT Health’s policies and procedures, and often simply listen to classmates talk about their health and well-being. MedLinks also help build community and plan events that bring people in their residence halls together. Recent events include ice cream sundae building, canvas painting, and tie-dyeing T-shirts.
Students who need ibuprofen in the middle of the night, twist their ankle and need an ice pack, or just need some throat lozenges can knock on their MedLinks volunteer’s door to get help with any of these and a host of other medical matters.
Greg Baker, senior program manager for community wellness at MIT Health, says the 150 MedLinks volunteers play a crucial role in connecting students to MIT Health and a host of other services.
“There is a 12-hour training for new volunteers that includes a review of MIT Health’s clinical offerings, campus and community resources, the supplies they receive and in what situations they should or should not be distributed, as well as active listening, and caregiver burnout. We’re also lucky to have our campus partners host sessions to share more about their departments — including the Ombuds Office, DoingWell, Alcohol and Other Drug Services, Institute Discrimination and Harassment Response Office, DAPER [Department of Athletics, Physical Education and Recreation], and Student Mental Health and Counseling,” says Baker.
After a year as a MedLinks volunteer, students can become a MedLinks residential director (RD) after going through additional training. The RD coordinates monthly meetings and events with the other MedLinks in their living group, checks supplies, and along with other MedLinks submits reports to MIT Medical.
Em Ball and Maia DeMeyer are residential directors for Burton Connor and Random Hall, respectively. Ball, a junior majoring in chemistry who is originally from Iowa, became a MedLinks volunteer because she is interested in going to medical school when she completes her undergraduate studies.
“One of the best things about being an RD is meeting and helping people. I especially enjoy putting together our events. We just had a cupcake-decorating event, and the people who came had a great time and said they had fun. The ability to take a break for your mental health is undervalued and very important,” says Ball.
DeMeyer, a sophomore majoring in computer science and engineering who is originally from Washington state, became a MedLinks volunteer for similar reasons: “I like to take care of people. I would rather someone knock on my door in the middle of the night seeking help than ignore a medical problem. I also enjoy being a resource for our community because Random Hall is small; it feels like family there.”
Flu season tends to be busier than the rest of the year. Ball and DeMeyer often advise students when they should go to MIT Health, Student Support Services, or Urgent Care. They also interview potential MedLinks liaisons and help onboard them once they have completed training.
Baker observes, “They have a lot of responsibility, as Em is the RD for 14 other MedLinks volunteers and Maia is the RD for five other volunteers. We appreciate their help, as well as the help of all our MedLinks volunteers. We hold celebration dinners and give them small gifts of appreciation at year’s end.”
DeMeyer and Ball love their residential communities and still make time to sing with the MIT Centrifugues co-ed a cappella group, where Ball is co-music director and DeMeyer is the business manager. Ball is also a member of the track and field and cross-country teams, and DeMeyer serves on several of Random Hall’s governing committees.
“I found my niche here at MIT and it feels like home. It’s challenging, and MIT pushes everyone to be their best, so I know I can prosper here,” says DeMeyer. Ball agrees, “MIT fits my personality. It’s a very supportive community.”
MIT students who are interested in learning more about the MedLinks program can visit the website for more information.
Cobalt-free batteries could power cars of the future
Many electric vehicles are powered by batteries that contain cobalt — a metal that carries high financial, environmental, and social costs.
MIT researchers have now designed a battery material that could offer a more sustainable way to power electric cars. The new lithium-ion battery includes a cathode based on organic materials, instead of cobalt or nickel (another metal often used in lithium-ion batteries).
In a new study, the researchers showed that this material, which could be produced at much lower cost than cobalt-containing batteries, can conduct electricity at similar rates as cobalt batteries. The new battery also has comparable storage capacity and can be charged up faster than cobalt batteries, the researchers report.
“I think this material could have a big impact because it works really well,” says Mircea Dincă, the W.M. Keck Professor of Energy at MIT. “It is already competitive with incumbent technologies, and it can save a lot of the cost and pain and environmental issues related to mining the metals that currently go into batteries.”
Dincă is the senior author of the study, which appears today in the journal ACS Central Science. Tianyang Chen PhD ’23 and Harish Banda, a former MIT postdoc, are the lead authors of the paper. Other authors include Jiande Wang, an MIT postdoc; Julius Oppenheim, an MIT graduate student; and Alessandro Franceschi, a research fellow at the University of Bologna.
Alternatives to cobalt
Most electric cars are powered by lithium-ion batteries, a type of battery that is recharged when lithium ions flow from a positively charged electrode, called a cathode, to a negatively charged electrode, called an anode. In most lithium-ion batteries, the cathode contains cobalt, a metal that offers high stability and energy density.
However, cobalt has significant downsides. A scarce metal, its price can fluctuate dramatically, and much of the world’s cobalt deposits are located in politically unstable countries. Cobalt extraction creates hazardous working conditions and generates toxic waste that contaminates land, air, and water surrounding the mines.
“Cobalt batteries can store a lot of energy, and they have all of the features that people care about in terms of performance, but they have the issue of not being widely available, and the cost fluctuates broadly with commodity prices. And, as you transition to a much higher proportion of electrified vehicles in the consumer market, it’s certainly going to get more expensive,” Dincă says.
Because of the many drawbacks to cobalt, a great deal of research has gone into trying to develop alternative battery materials. One such material is lithium-iron-phosphate (LFP), which some car manufacturers are beginning to use in electric vehicles. Although still practically useful, LFP has only about half the energy density of cobalt and nickel batteries.
Organic materials are another appealing option, but so far most of these materials have not been able to match the conductivity, storage capacity, and lifetime of cobalt-containing batteries. Because of their low conductivity, such materials typically need to be mixed with binders such as polymers, which help them maintain a conductive network. These binders, which make up at least 50 percent of the overall material, bring down the battery’s storage capacity.
About six years ago, Dincă’s lab began working on a project, funded by Lamborghini, to develop an organic battery that could be used to power electric cars. While working on porous materials that were partly organic and partly inorganic, Dincă and his students realized that a fully organic material they had made appeared to be a strong conductor.
This material consists of many layers of TAQ (bis-tetraaminobenzoquinone), an organic small molecule that contains three fused hexagonal rings. These layers can extend outward in every direction, forming a structure similar to graphite. Within the molecules are chemical groups called quinones, which are the electron reservoirs, and amines, which help the material to form strong hydrogen bonds.
Those hydrogen bonds make the material highly stable and also very insoluble. That insolubility is important because it prevents the material from dissolving into the battery electrolyte, as some organic battery materials do, thereby extending its lifetime.
“One of the main methods of degradation for organic materials is that they simply dissolve into the battery electrolyte and cross over to the other side of the battery, essentially creating a short circuit. If you make the material completely insoluble, that process doesn’t happen, so we can go to over 2,000 charge cycles with minimal degradation,” Dincă says.
Strong performance
Tests of this material showed that its conductivity and storage capacity were comparable to those of traditional cobalt-containing batteries. Also, batteries with a TAQ cathode can be charged and discharged faster than existing batteries, which could speed up the charging rate for electric vehicles.
To stabilize the organic material and increase its ability to adhere to the battery’s current collector, which is made of copper or aluminum, the researchers added filler materials such as cellulose and rubber. These fillers make up less than one-tenth of the overall cathode composite, so they don’t significantly reduce the battery’s storage capacity.
These fillers also extend the lifetime of the battery cathode by preventing it from cracking when lithium ions flow into the cathode as the battery charges.
The primary materials needed to manufacture this type of cathode are a quinone precursor and an amine precursor, which are already commercially available and produced in large quantities as commodity chemicals. The researchers estimate that the material cost of assembling these organic batteries could be about one-third to one-half the cost of cobalt batteries.
Lamborghini has licensed the patent on the technology. Dincă’s lab plans to continue developing alternative battery materials and is exploring possible replacement of lithium with sodium or magnesium, which are cheaper and more abundant than lithium.
Study reveals a universal pattern of brain wave frequencies
Throughout the brain’s cortex, neurons are arranged in six distinctive layers, which can be readily seen with a microscope. A team of MIT and Vanderbilt University neuroscientists has now found that these layers also show distinct patterns of electrical activity, which are consistent over many brain regions and across several animal species, including humans.
The researchers found that in the topmost layers, neuron activity is dominated by rapid oscillations known as gamma waves. In the deeper layers, slower oscillations called alpha and beta waves predominate. The universality of these patterns suggests that these oscillations are likely playing an important role across the brain, the researchers say.
“When you see something that consistent and ubiquitous across cortex, it’s playing a very fundamental role in what the cortex does,” says Earl Miller, the Picower Professor of Neuroscience, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the new study.
Imbalances in how these oscillations interact with each other may be involved in brain disorders such as attention deficit hyperactivity disorder, the researchers say.
“Overly synchronous neural activity is known to play a role in epilepsy, and now we suspect that different pathologies of synchrony may contribute to many brain disorders, including disorders of perception, attention, memory, and motor control. In an orchestra, one instrument played out of synchrony with the rest can disrupt the coherence of the entire piece of music,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and one of the senior authors of the study.
André Bastos, an assistant professor of psychology at Vanderbilt University, is also a senior author of the open-access paper, which appears today in Nature Neuroscience. The lead authors of the paper are MIT research scientist Diego Mendoza-Halliday and MIT postdoc Alex Major.
Layers of activity
The human brain contains billions of neurons, each of which has its own electrical firing patterns. Together, groups of neurons with similar patterns generate oscillations of electrical activity, or brain waves, which can have different frequencies. Miller’s lab has previously shown that high-frequency gamma rhythms are associated with encoding and retrieving sensory information, while low-frequency beta rhythms act as a control mechanism that determines which information is read out from working memory.
His lab has also found that in certain parts of the prefrontal cortex, different brain layers show distinctive patterns of oscillation: faster oscillation at the surface and slower oscillation in the deep layers. One study, led by Bastos when he was a postdoc in Miller’s lab, showed that as animals performed working memory tasks, lower-frequency rhythms generated in deeper layers regulated the higher-frequency gamma rhythms generated in the superficial layers.
In addition to working memory, the brain’s cortex is also the seat of thought, planning, and high-level processing of emotion and sensory information. Throughout the regions involved in these functions, neurons are arranged in six layers, and each layer has its own distinctive combination of cell types and connections with other brain areas.
“The cortex is organized anatomically into six layers, no matter whether you look at mice or humans or any mammalian species, and this pattern is present in all cortical areas within each species,” Mendoza-Halliday says. “Unfortunately, a lot of studies of brain activity have been ignoring those layers because when you record the activity of neurons, it's been difficult to understand where they are in the context of those layers.”
In the new paper, the researchers wanted to explore whether the layered oscillation pattern they had seen in the prefrontal cortex is more widespread, occurring across different parts of the cortex and across species.
Using a combination of data acquired in Miller’s lab, Desimone’s lab, and labs from collaborators at Vanderbilt, the Netherlands Institute for Neuroscience, and the University of Western Ontario, the researchers were able to analyze 14 different areas of the cortex, from four mammalian species. This data included recordings of electrical activity from three human patients who had electrodes inserted in the brain as part of a surgical procedure they were undergoing.
Recording from individual cortical layers has been difficult in the past, because each layer is less than a millimeter thick, so it’s hard to know which layer an electrode is recording from. For this study, electrical activity was recorded using special electrodes that record from all of the layers at once; the data were then fed into a new computational algorithm the authors designed, termed FLIP (frequency-based layer identification procedure), which can determine which layer each signal came from.
“More recent technology allows recording of all layers of cortex simultaneously. This paints a broader perspective of microcircuitry and allowed us to observe this layered pattern,” Major says. “This work is exciting because it is both informative of a fundamental microcircuit pattern and provides a robust new technique for studying the brain. It doesn’t matter if the brain is performing a task or at rest, and the pattern can be observed in as little as five to 10 seconds.”
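The crossover idea at the core of such a frequency-based layer identification can be sketched in a few lines. This is an illustrative reconstruction, not the authors’ actual FLIP implementation: the channel ordering, band-power profiles, and normalization scheme below are assumed for the example.

```python
# Illustrative sketch of frequency-based layer identification: given
# band-power estimates for each channel on a multi-contact probe (ordered
# surface to depth), find where fast gamma power gives way to slower
# alpha/beta power. All numbers here are made up for demonstration.

def normalize(xs):
    """Scale a power profile to its maximum so the two bands are comparable."""
    peak = max(xs)
    return [x / peak for x in xs]

def crossover_channel(gamma_power, alphabeta_power):
    """Index of the first channel (surface -> depth) where relative
    alpha/beta power overtakes relative gamma power."""
    g = normalize(gamma_power)
    ab = normalize(alphabeta_power)
    for i in range(len(g)):
        if ab[i] > g[i]:
            return i
    return None

# Hypothetical profiles for a 10-channel probe, ordered surface -> deep:
gamma = [9.0, 8.5, 7.8, 6.0, 4.2, 3.0, 2.2, 1.8, 1.5, 1.2]
alphabeta = [1.0, 1.4, 2.1, 3.5, 5.0, 6.8, 7.9, 8.6, 9.2, 9.5]

split = crossover_channel(gamma, alphabeta)
print(f"Superficial (gamma-dominated) channels: 0-{split - 1}")
print(f"Deep (alpha/beta-dominated) channels: {split}-{len(gamma) - 1}")
```

On a real probe, the band powers would come from spectral estimates of each channel’s recording; the crossover depth then serves as a landmark for assigning channels to superficial versus deep layers.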
Across all species, in each region studied, the researchers found the same layered activity pattern.
“We did a mass analysis of all the data to see if we could find the same pattern in all areas of the cortex, and voilà, it was everywhere. That was a real indication that what had previously been seen in a couple of areas was representing a fundamental mechanism across the cortex,” Mendoza-Halliday says.
Maintaining balance
The findings support a model that Miller’s lab has previously put forth, which proposes that the brain’s spatial organization helps it to incorporate new information, which is carried by high-frequency oscillations, into existing memories and brain processes, which are maintained by low-frequency oscillations. As information passes from layer to layer, input can be incorporated as needed to help the brain perform particular tasks such as baking a new cookie recipe or remembering a phone number.
“The consequence of a laminar separation of these frequencies, as we observed, may be to allow superficial layers to represent external sensory information with faster frequencies, and for deep layers to represent internal cognitive states with slower frequencies,” Bastos says. “The high-level implication is that the cortex has multiple mechanisms involving both anatomy and oscillations to separate ‘external’ from ‘internal’ information.”
Under this theory, imbalances between high- and low-frequency oscillations can lead to either attention deficits such as ADHD, when the higher frequencies dominate and too much sensory information gets in, or delusional disorders such as schizophrenia, when the low frequency oscillations are too strong and not enough sensory information gets in.
“The proper balance between the top-down control signals and the bottom-up sensory signals is important for everything the cortex does,” Miller says. “When the balance goes awry, you get a wide variety of neuropsychiatric disorders.”
The researchers are now exploring whether measuring these oscillations could help to diagnose these types of disorders. They are also investigating whether rebalancing the oscillations could alter behavior — an approach that could one day be used to treat attention deficits or other neurological disorders, the researchers say.
The researchers also hope to work with other labs to characterize the layered oscillation patterns in more detail across different brain regions.
“Our hope is that with enough of that standardized reporting, we will start to see common patterns of activity across different areas or functions that might reveal a common mechanism for computation that can be used for motor outputs, for vision, for memory and attention, et cetera,” Mendoza-Halliday says.
The research was funded by the U.S. Office of Naval Research, the U.S. National Institutes of Health, the U.S. National Eye Institute, the U.S. National Institute of Mental Health, the Picower Institute, a Simons Center for the Social Brain Postdoctoral Fellowship, and a Canadian Institutes of Health Postdoctoral Fellowship.
Self-powered sensor automatically harvests magnetic energy
MIT researchers have developed a battery-free, self-powered sensor that can harvest energy from its environment.
Because it requires no battery that must be recharged or replaced, and because it requires no special wiring, such a sensor could be embedded in a hard-to-reach place, like inside the inner workings of a ship’s engine. There, it could automatically gather data on the machine’s power consumption and operations for long periods of time.
The researchers built a temperature-sensing device that harvests energy from the magnetic field generated in the open air around a wire. One could simply clip the sensor around a wire that carries electricity — perhaps the wire that powers a motor — and it will automatically harvest and store energy which it uses to monitor the motor’s temperature.
“This is ambient power — energy that I don’t have to make a specific, soldered connection to get. And that makes this sensor very easy to install,” says Steve Leeb, the Emanuel E. Landsman Professor of Electrical Engineering and Computer Science (EECS) and professor of mechanical engineering, a member of the Research Laboratory of Electronics, and senior author of a paper on the energy-harvesting sensor.
In the paper, which appeared as the featured article in the January issue of the IEEE Sensors Journal, the researchers offer a design guide for an energy-harvesting sensor that lets an engineer balance the available energy in the environment with their sensing needs.
The paper lays out a roadmap for the key components of a device that can sense and control the flow of energy continually during operation.
The versatile design framework is not limited to sensors that harvest magnetic field energy, and can be applied to those that use other power sources, like vibrations or sunlight. It could be used to build networks of sensors for factories, warehouses, and commercial spaces that cost less to install and maintain.
“We have provided an example of a battery-less sensor that does something useful, and shown that it is a practically realizable solution. Now others will hopefully use our framework to get the ball rolling to design their own sensors,” says lead author Daniel Monagle, an EECS graduate student.
Monagle and Leeb are joined on the paper by EECS graduate student Eric Ponce.
John Donnal, an associate professor of weapons and controls engineering at the U.S. Naval Academy who was not involved with this work, studies techniques to monitor ship systems. Getting access to power on a ship can be difficult, he says, since there are very few outlets and strict restrictions as to what equipment can be plugged in.
“Persistently measuring the vibration of a pump, for example, could give the crew real-time information on the health of the bearings and mounts, but powering a retrofit sensor often requires so much additional infrastructure that the investment is not worthwhile,” Donnal adds. “Energy-harvesting systems like this could make it possible to retrofit a wide variety of diagnostic sensors on ships and significantly reduce the overall cost of maintenance.”
A how-to guide
The researchers had to meet three key challenges to develop an effective, battery-free, energy-harvesting sensor.
First, the system must be able to cold start, meaning it can fire up its electronics with no initial voltage. They accomplished this with a network of integrated circuits and transistors that allow the system to store energy until it reaches a certain threshold. The system will only turn on once it has stored enough power to fully operate.
Second, the system must store and convert the energy it harvests efficiently, and without a battery. While the researchers could have included a battery, that would add extra complexities to the system and could pose a fire risk.
“You might not even have the luxury of sending out a technician to replace a battery. Instead, our system is maintenance-free. It harvests energy and operates itself,” Monagle adds.
To avoid using a battery, they incorporate internal energy storage that can include a series of capacitors. Simpler than a battery, a capacitor stores energy in the electrical field between conductive plates. Capacitors can be made from a variety of materials, and their capabilities can be tuned to a range of operating conditions, safety requirements, and available space.
The team carefully designed the capacitors so they are big enough to store the energy the device needs to turn on and start harvesting power, but small enough that the charge-up phase doesn’t take too long.
In addition, since a sensor might go weeks or even months before turning on to take a measurement, they ensured the capacitors can hold enough energy even if some leaks out over time.
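The sizing trade-off can be illustrated with the standard capacitor energy relation E = ½CV². The voltages and energy budget below are hypothetical numbers chosen only to show the calculation, not the authors’ design values.

```python
# Back-of-the-envelope capacitor sizing (illustrative numbers, not the
# paper's actual design). Energy stored in a capacitor is E = 1/2 * C * V^2;
# the usable energy between a full voltage v_max and the minimum operating
# voltage v_min is the difference between the two.

def usable_energy_joules(capacitance_f, v_max, v_min):
    """Energy (J) released as the capacitor discharges from v_max to v_min."""
    return 0.5 * capacitance_f * (v_max**2 - v_min**2)

def required_capacitance_f(energy_j, v_max, v_min):
    """Smallest capacitance that supplies energy_j over that voltage swing."""
    return 2.0 * energy_j / (v_max**2 - v_min**2)

# Suppose cold start plus one measurement needs about 50 millijoules, and the
# electronics operate between 3.3 V (full) and 1.8 V (brownout threshold):
C = required_capacitance_f(0.050, 3.3, 1.8)
print(f"Required capacitance: {C * 1000:.1f} mF")
```

A larger capacitor would also buffer against leakage over the weeks between measurements, at the cost of a longer initial charge-up phase.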
Finally, they developed a series of control algorithms that dynamically measure and budget the energy collected, stored, and used by the device. A microcontroller, the “brain” of the energy management interface, constantly checks how much energy is stored and decides whether to turn the sensor on or off, take a measurement, or kick the harvester into a higher gear so it can gather more energy for more complex sensing needs.
“Just like when you change gears on a bike, the energy management interface looks at how the harvester is doing, essentially seeing whether it is pedaling too hard or too soft, and then it varies the electronic load so it can maximize the amount of power it is harvesting and match the harvest to the needs of the sensor,” Monagle explains.
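The kind of energy-budgeting loop described above can be sketched as follows. The action names, thresholds, and energy costs are hypothetical illustrations, not values from the paper: each tick, the controller checks stored energy and picks the most useful action it can currently afford.

```python
# Minimal sketch of a cold-starting, energy-budgeting control loop
# (all names and numbers are hypothetical, for illustration only).

COLD_START_MJ = 20   # energy (millijoules) needed before the electronics turn on
MEASURE_MJ = 5       # cost of taking one temperature sample
TRANSMIT_MJ = 30     # cost of one wireless transmission (the hungriest operation)

def step(stored_mj, powered):
    """One control tick: return (action, remaining_energy_mj, powered)."""
    if not powered:
        if stored_mj < COLD_START_MJ:
            return ("charge", stored_mj, False)  # keep accumulating (cold start)
        powered = True                           # threshold reached: switch on
    if stored_mj >= TRANSMIT_MJ:
        return ("transmit", stored_mj - TRANSMIT_MJ, True)
    if stored_mj >= MEASURE_MJ:
        return ("measure", stored_mj - MEASURE_MJ, True)
    return ("charge", stored_mj, True)           # too low for any action: harvest

# Simulate harvesting 8 mJ per tick from the surrounding magnetic field:
stored, powered, log = 0, False, []
for _ in range(12):
    stored += 8
    action, stored, powered = step(stored, powered)
    log.append(action)
print(log)  # cold-start charging first, then measurements, with a transmit when affordable
```

The real system additionally throttles the harvester itself, as the gear-shifting analogy suggests, rather than only choosing among loads.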
Self-powered sensor
Using this design framework, they built an energy management circuit for an off-the-shelf temperature sensor. The device harvests magnetic field energy and uses it to continually sample temperature data, which it sends to a smartphone interface using Bluetooth.
The researchers used super-low-power circuits to design the device, but quickly found that these circuits have tight restrictions on how much voltage they can withstand before breaking down. Harvesting too much power could cause the device to explode.
To avoid that, their energy harvester operating system in the microcontroller automatically adjusts or reduces the harvest if the amount of stored energy becomes excessive.
They also found that communication — transmitting data gathered by the temperature sensor — was by far the most power-hungry operation.
“Ensuring the sensor has enough stored energy to transmit data is a constant challenge that involves careful design,” Monagle says.
In the future, the researchers plan to explore less energy-intensive means of transmitting data, such as using optics or acoustics. They also want to more rigorously model and predict how much energy might be coming into a system, or how much energy a sensor might need to take measurements, so a device could effectively gather even more data.
“If you only make the measurements you think you need, you may miss something really valuable. With more information, you might be able to learn something you didn’t expect about a device’s operations. Our framework lets you balance those considerations,” Leeb says.
“This paper is well-documented regarding what a practical self-powered sensor node should internally entail for realistic scenarios. The overall design guidelines, particularly on the cold-start issue, are very helpful,” says Jinyeong Moon, an assistant professor of electrical and computer engineering at Florida State University College of Engineering who was not involved with this work. “Engineers planning to design a self-powering module for a wireless sensor node will greatly benefit from these guidelines, easily ticking off traditionally cumbersome cold-start-related checklists.”
The work is supported, in part, by the Office of Naval Research and The Grainger Foundation.
Stratospheric safety standards: How aviation could steer regulation of AI in health
What is the likelihood of dying in a plane crash? According to a 2022 report released by the International Air Transport Association, the industry fatality risk is 0.11. In other words, on average, a person would need to take a flight every day for 25,214 years before experiencing a fatal accident. Long touted as one of the safest modes of transportation, the highly regulated aviation industry has MIT scientists thinking that it may hold the key to regulating artificial intelligence in health care.
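A rough reconstruction of the arithmetic behind that figure: the exact 25,214-year number reflects IATA’s own methodology, so this sketch, which assumes the 0.11 risk applies per million flights, only recovers the order of magnitude.

```python
# Back-of-the-envelope check of the quoted figure. Treating the 0.11
# fatality risk as roughly 0.11 fatal accidents per million flights is an
# assumption for illustration; IATA's methodology gives the published 25,214.

fatality_risk_per_million = 0.11
flights_to_expect_one = 1_000_000 / fatality_risk_per_million  # ~9.1 million flights
years_at_one_flight_per_day = flights_to_expect_one / 365
print(f"~{years_at_one_flight_per_day:,.0f} years")  # same order as the quoted 25,214
```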
Marzyeh Ghassemi, an assistant professor at the MIT Department of Electrical Engineering and Computer Science (EECS) and Institute of Medical Engineering Sciences, and Julie Shah, the H.N. Slater Professor of Aeronautics and Astronautics at MIT, share an interest in the challenges of transparency in AI models. After chatting in early 2023, they realized that aviation could serve as a model to ensure that marginalized patients are not harmed by biased AI models.
Ghassemi, who is also a principal investigator at the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) and the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Shah then recruited a cross-disciplinary team of researchers, attorneys, and policy analysts across MIT, Stanford University, the Federation of American Scientists, Emory University, University of Adelaide, Microsoft, and the University of California San Francisco to kick off a research project, the results of which were recently accepted to the Equity and Access in Algorithms, Mechanisms and Optimization Conference.
“I think many of our coauthors are excited about AI’s potential for positive societal impacts, especially with recent advancements,” says first author Elizabeth Bondi-Kelly, now an assistant professor of EECS at the University of Michigan who was a postdoc in Ghassemi’s lab when the project began. “But we’re also cautious and hope to develop frameworks to manage potential risks as deployments start to happen, so we were seeking inspiration for such frameworks.”
AI in health today bears a resemblance to where the aviation industry was a century ago, says co-author Lindsay Sanneman, a PhD student in the Department of Aeronautics and Astronautics at MIT. Though the 1920s were known as “the Golden Age of Aviation,” fatal accidents were “disturbingly numerous,” according to the Mackinac Center for Public Policy.
Jeff Marcus, the current chief of the National Transportation Safety Board (NTSB) Safety Recommendations Division, recently published a National Aviation Month blog post noting that while a number of fatal accidents occurred in the 1920s, 1929 remains the “worst year on record” for the most fatal aviation accidents in history, with 51 reported accidents. By today’s standards that would be 7,000 accidents per year, or 20 per day. In response to the high number of fatal accidents in the 1920s, President Calvin Coolidge signed landmark legislation in 1926 known as the Air Commerce Act, which regulated air travel via the Department of Commerce.
But the parallels do not stop there — aviation’s subsequent path into automation is similar to AI’s. AI explainability has been a contentious topic given AI’s notorious “black box” problem, which has AI researchers debating how much an AI model must “explain” its result to the user before potentially biasing them to blindly follow the model’s guidance.
“In the 1970s there was an increasing amount of automation ... autopilot systems that take care of warning pilots about risks,” Sanneman adds. “There were some growing pains as automation entered the aviation space in terms of human interaction with the autonomous system — potential confusion that arises when the pilot doesn't have keen awareness about what the automation is doing.”
Today, becoming a commercial airline captain requires 1,500 hours of logged flight time along with instrument training. According to the researchers’ paper, this rigorous and comprehensive process takes approximately 15 years, including a bachelor’s degree and co-piloting. Researchers believe the success of extensive pilot training could be a potential model for training medical doctors on using AI tools in clinical settings.
The paper also proposes encouraging reports of unsafe health AI tools in the way the Federal Aviation Administration (FAA) does for pilots — via “limited immunity,” which allows pilots to retain their license after doing something unsafe, as long as it was unintentional.
According to a 2023 report published by the World Health Organization, on average, one in every 10 patients is harmed by an adverse event (i.e., “medical errors”) while receiving hospital care in high-income countries.
Yet in current health care practice, clinicians and health care workers often fear reporting medical errors, not only because of concerns related to guilt and self-criticism, but also due to negative consequences that emphasize the punishment of individuals, such as a revoked medical license, rather than reforming the system that made medical error more likely to occur.
“In health, when the hammer misses, patients suffer,” wrote Ghassemi in a recent comment published in Nature Human Behaviour. “This reality presents an unacceptable ethical risk for medical AI communities who are already grappling with complex care issues, staffing shortages, and overburdened systems.”
Grace Wickerson, co-author and health equity policy manager at the Federation of American Scientists, sees this new paper as a critical addition to a broader governance framework that is not yet in place. “I think there's a lot that we can do with existing government authority,” they say. “There's different ways that Medicare and Medicaid can pay for health AI that makes sure that equity is considered in their purchasing or reimbursement technologies, the NIH [National Institutes of Health] can fund more research in making algorithms more equitable and build standards for these algorithms that could then be used by the FDA [Food and Drug Administration] as they're trying to figure out what health equity means and how they're regulated within their current authorities.”
Among others, the paper lists seven key existing government agencies that could help regulate health AI: the FDA, the Federal Trade Commission (FTC), the recently established Advanced Research Projects Agency for Health, the Agency for Healthcare Research and Quality, the Centers for Medicare and Medicaid Services, the Department of Health and Human Services, and the Office of Civil Rights (OCR).
But Wickerson says that more needs to be done. The most challenging part of writing the paper, in Wickerson’s view, was “imagining what we don’t have yet.”
Rather than solely relying on existing regulatory bodies, the paper also proposes creating an independent auditing authority, similar to the NTSB, that allows for a safety audit for malfunctioning health AI systems.
“I think that's the current question for tech governance — we haven't really had an entity that's been assessing the impact of technology since the '90s,” Wickerson adds. “There used to be an Office of Technology Assessment ... before the digital era even started, this office existed and then the federal government allowed it to sunset.”
Zach Harned, co-author and recent graduate of Stanford Law School, believes a primary challenge in emerging technology is having technological development outpace regulation. “However, the importance of AI technology and the potential benefits and risks it poses, especially in the health-care arena, has led to a flurry of regulatory efforts,” Harned says. “The FDA is clearly the primary player here, and they’ve consistently issued guidances and white papers attempting to illustrate their evolving position on AI; however, privacy will be another important area to watch, with enforcement from OCR on the HIPAA [Health Insurance Portability and Accountability Act] side and the FTC enforcing privacy violations for non-HIPAA covered entities.”
Harned notes that the area is evolving fast, including developments such as the recent White House Executive Order 14110 on the safe and trustworthy development of AI, as well as regulatory activity in the European Union (EU), including the capstone EU AI Act that is nearing finalization. “It’s certainly an exciting time to see this important technology get developed and regulated to ensure safety while also not stifling innovation,” he says.
In addition to regulatory activities, the paper suggests other opportunities to create incentives for safer health AI tools such as a pay-for-performance program, in which insurance companies reward hospitals for good performance (though researchers recognize that this approach would require additional oversight to be equitable).
So just how long do researchers think it would take to create a working regulatory system for health AI? According to the paper, “the NTSB and FAA system, where investigations and enforcement are in two different bodies, was created by Congress over decades.”
Bondi-Kelly hopes that the paper is a piece of the puzzle of AI regulation. In her mind, “the dream scenario would be that all of us read the paper and are inspired to apply some of the helpful lessons from aviation to help prevent potential AI harms during deployment.”
In addition to Ghassemi, Shah, Bondi-Kelly, and Sanneman, MIT co-authors on the work include Senior Research Scientist Leo Anthony Celi and former postdocs Thomas Hartvigsen and Swami Sankaranarayanan. Funding for the work came, in part, from an MIT CSAIL METEOR Fellowship, Quanta Computing, the Volkswagen Foundation, the National Institutes of Health, the Herman L. F. von Helmholtz Career Development Professorship and a CIFAR Azrieli Global Scholar award.
The art of being FLI
When you walk through Memorial Lobby (better known as Lobby 10), you never know what you might find. The space has long been a campus hub for all manner of activities — from students tabling for their organizations and the iconic glass pumpkin sale to the MIT Juggling Club practicing their craft.
On a sunny, crisp Wednesday in November, passersby likely saw a sea of students affiliated with MIT’s First Generation/Low Income (FLI) Program in Lobby 10 milling about in matching red sweatshirts. In addition to chatting and nibbling on cookies, many of them wrote down affirmations on envelope-sized cards, which were then displayed in the lobby and Infinite Corridor.
One read: When I need motivation, I remind myself… “I’ve gone a long way despite my FLI background.”
I am most proud of… “being able to join a community like FLI and meeting lifelong friends,” said another.
A third declared: My FLI affirmation is… “The past built you, everything converged to make you belong here.”
The affirmations were a powerful way to give voice to the students’ identity on the last day of the FLI Program’s Week of Celebration, timed to coincide with the National First Generation College Celebration on Nov. 8. (The date marks the anniversary of the signing of the Higher Education Act in 1965, which established federal financial aid programs.)
One of the goals of the week-long festivities was to raise awareness of the FLI experience. By that measure, the event in Lobby 10 was a big success. “I kept overhearing people say, ‘I didn’t know there were so many FLI students. I didn’t know this was that big of a deal at MIT,’” says junior Kanokwon Tungkitkancharoen, executive director of the FLI Student Advisory Board. “Someone even posted on MIT Confessions about how happy they were to see so many people in the red FLI sweatshirts. I thought, ‘Wow, someone posted that? That tells me that people really felt something that day.’”
Uncovering the “hidden curriculum”
During the week’s activities, students had an opportunity to get to know FLI Program staff, enjoy goodies such as sushi or cupcakes, and learn about support resources and wellness strategies. They also received FLI swag, including stickers and the red sweatshirts, both of which feature the program’s new logo: Tim the Beaver launching a paper airplane.
The launch metaphor is fitting: The FLI Program is taking off in new directions and growing steadily. What began informally over a decade ago as the First Generation Project, with part-time assistance from one administrator, has become one of the cornerstones of the new Undergraduate Advising Center (UAC). “We are so excited to be building upon and expanding this program,” says Diep Luu, associate dean and director of the UAC. “About 18 percent of our undergraduates are first-gen students — the first in their family to go to college — and 25 percent are low-income. These cohorts overlap, as well; about 12 percent are both first-gen and low-income. So, this is a sizable population that has specific needs and deserves our support.”
“MIT does a really great job at financial aid, because it meets 100 percent of financial need and its admissions is need-blind,” says Tungkitkancharoen. As a result, she adds, “There’s a lot of FLI students at MIT compared to schools of similar rigor. But just admitting is not enough. You have to provide resources to carry us through the institution.”
FLI students often have to navigate issues with which they are less familiar or comfortable than their peers. Asal Vaghefzadeh, a junior and member of the FLI Advisory Board, notes that developing financial literacy and gaining career-related skills can be particularly challenging. “A lot of FLI students don’t have as much experience networking as other students do, or resources for networking, like family members or family friends,” she says.
In 2021, two Institute reports set in motion a concerted effort to improve the FLI experience. Task Force 2021 called for the implementation of a stronger undergraduate advising structure, where students are supported by a team of professional advisors that work with them from admission to graduation. The report acknowledged that “students arrive with varying previous experiences and levels of knowledge about how to fully access MIT’s considerable resources. What is sometimes called ‘the hidden curriculum’ of success needs to be uncovered and made available to every student regardless of their starting point.”
Meanwhile, the First Generation/Low Income Working Group (FGLIWG) identified many gaps in support for FLI undergraduate students, such as the need for more career advising, opportunities for community-building, and help navigating MIT’s complex landscape of resources.
Promising growth potential
Armed with the reports’ findings and drawing on stakeholders’ ongoing input, the FLI Program is poised for growth. “We are currently embarking on a comprehensive listening tour and strategic review of the landscape, to ensure that our actions are informed by a deep understanding of the needs and aspirations of FLI students in four key areas, what we call our ‘pillars’ of FLI: community, academics, professional development, and advocacy,” says Sade Abraham, associate dean of advising and student belonging.
The UAC plans to add several full-time staff members to the FLI Program in the next few years. In the meantime, Abraham and her colleague Alex Hoyt, assistant dean for FLI student advising and success, are busy promoting resources and information through a weekly FLI newsletter and planning a lengthy docket of activities, including a monthly faculty lunch series, community dinners, wellness events, study breaks, outings, and academic and professional development opportunities. FLI student leaders are actively involved in the planning and also devote time to novel projects and ideas. For example, Vaghefzadeh is leading an effort to trace the FLI experience at MIT to raise visibility. “The goal is to have this concise and well-recorded history that people can see and interact with,” she says. Ultimately, she envisions presenting the information through a timeline and mini-exhibition outside Hayden Library.
One growth area for the program will be involving more FLI-identifying faculty. Ed Bertschinger, a professor of physics, has been engaged in FLI programming since 2013. As a former FLI student himself, he prefers to focus not on what these students lack but what they have — like “cultural capital,” as he puts it. “Community cultural wealth, including family relationships and traditions, are important for all students, yet they are rarely recognized in academic settings. FLI students have an incredible diversity culturally and demographically. The community they form, with help from MIT, helps each member achieve their full potential.”
Hoyt can see the downstream impact of that potential very clearly. “FLI students are often thoughtful about not only their own personal journey, but also the larger impact they can have as educational pioneers in their family and community. They’re passionate about leaving MIT as a better institution for the FLI community than when they entered, putting efforts into projects that will improve future FLI students’ MIT experience,” he says.