Feed aggregator
Professor Anthony Sinskey, biologist, inventor, entrepreneur, and Center for Biomedical Innovation co-founder, dies at 84
Longtime MIT Professor Anthony “Tony” Sinskey ScD ’67, who was also the co-founder and faculty director of the Center for Biomedical Innovation (CBI), passed away on Feb. 12 at his home in New Hampshire. He was 84.
Deeply engaged with MIT, Sinskey left his mark on the Institute as much through the relationships he built as the research he conducted. Colleagues say that throughout his decades on the faculty, Sinskey’s door was always open.
“He was incredibly generous in so many ways,” says Graham Walker, an American Cancer Society Professor at MIT. “He was so willing to support people, and he did it out of sheer love and commitment. If you could just watch Tony in action, there was so much that was charming about the way he lived. I’ve said for years that after they made Tony, they broke the mold. He was truly one of a kind.”
Sinskey’s lab at MIT explored methods for metabolic engineering and the production of biomolecules. Over the course of his research career, he published more than 350 papers in leading peer-reviewed journals in biology, metabolic engineering, and biopolymer engineering, and filed more than 50 patents. Well-known in the biopharmaceutical industry, Sinskey contributed to the founding of multiple companies, including Metabolix, Tepha, Merrimack Pharmaceuticals, and Genzyme Corporation. Since its founding in 2005, CBI has also produced influential research papers, manufacturing initiatives, and educational content under Sinskey’s leadership.
Across all of his work, Sinskey built a reputation as a supportive, collaborative, and highly entertaining friend who seemed to have a story for everything.
“Tony would always ask for my opinions — what did I think?” says Barbara Imperiali, MIT’s Class of 1922 Professor of Biology and Chemistry, who first met Sinskey as a graduate student. “Even though I was younger, he viewed me as an equal. It was exciting to be able to share my academic journey with him. Even later, he was continually opening doors for me, mentoring, connecting. He felt it was his job to get people into a room together to make new connections.”
Sinskey grew up in the small town of Collinsville, Illinois, and spent nights after school working on a farm. For his undergraduate degree, he attended the University of Illinois, where he got a job washing dishes at the dining hall. One day, as he recalled in a 2020 conversation, he complained to his advisor about the dishwashing job, so the advisor offered him a job washing equipment in his microbiology lab.
In a development that would repeat itself throughout Sinskey’s career, he befriended the researchers in the lab and started learning about their work. Soon he was showing up on weekends and helping out. The experience inspired Sinskey to go to graduate school, and he only applied to one place.
Sinskey earned his ScD from MIT in nutrition and food science in 1967. He joined MIT’s faculty a few years later and never left.
“He loved MIT and its excellence in research and education, which were incredibly important to him,” Walker says. “I don’t know of another institution this interdisciplinary — there’s barely a speed bump between departments — so you can collaborate with anybody. He loved that. He also loved the spirit of entrepreneurship, which he thrived on. If you heard somebody wanted to get a project done, you could run around, get 10 people, and put it together. He just loved doing stuff like that.”
Working across departments would become a signature of Sinskey’s research. His original office was on the first floor of MIT’s Building 56, right next to the parking lot, so he’d leave his door open in the mornings and afternoons and colleagues would stop in and chat.
“One of my favorite things to do was to drop in on Tony when I saw that his office door was open,” says Chris Kaiser, MIT’s Amgen Professor of Biology. “We had a whole range of things we liked to catch up on, but they always included his perspectives looking back on his long history at MIT. It also always included hopes for the future, including tracking trajectories of MIT students, whom he doted on.”
Colleagues describe Sinskey as a kind of internet unto himself long before the internet existed, constantly leveraging his vast web of relationships to make connections and stay on top of the latest science news.
“He was an incredibly gracious person — and he knew everyone,” Imperiali says. “It was as if his Rolodex had no end. You would sit there and he would say, ‘Call this person.’ or ‘Call that person.’ And ‘Did you read this new article?’ He had a wonderful view of science and collaboration, and he always made that a cornerstone of what he did. Whenever I’d see his door open, I’d grab a cup of tea and just sit there and talk to him.”
When the first recombinant DNA molecules were produced in the 1970s, the technology became a hot area of research. Sinskey wanted to learn more about recombinant DNA, so he hosted a large symposium on the topic at MIT that brought in experts from around the world.
“He got his name associated with recombinant DNA for years because of that,” Walker recalls. “People started seeing him as Mr. Recombinant DNA. That kind of thing happened all the time with Tony.”
Sinskey’s research contributions extended beyond recombinant DNA into other microbial techniques to produce amino acids and biodegradable plastics. He co-founded CBI in 2005 to improve global health through the development and dispersion of biomedical innovations. The center adopted Sinskey’s collaborative approach in order to accelerate innovation in biotechnology and biomedical research, bringing together experts from across MIT’s schools.
“Tony was at the forefront of advancing cell culture engineering principles so that making biomedicines could become a reality. He knew early on that biomanufacturing was an important step on the critical path from discovering a drug to delivering it to a patient,” says Stacy Springs, the executive director of CBI. “Tony was not only my boss and mentor, but one of my closest friends. He was always working to help everyone reach their potential, whether that was a colleague, a former or current researcher, or a student. He had a gentle way of encouraging you to do your best.”
“MIT is one of the greatest places to be because you can do anything you want here as long as it’s not a crime,” Sinskey joked in 2020. “You can do science, you can teach, you can interact with people — and the faculty at MIT are spectacular to interact with.”
Sinskey shared his affection for MIT with his family. His wife, the late ChoKyun Rha ’62, SM ’64, SM ’66, ScD ’67, was a professor at MIT for more than four decades and the first woman of Asian descent to receive tenure at MIT. His two sons also attended MIT — Tong-ik Lee Sinskey ’79, SM ’80 and Taeminn Song MBA ’95, who is the director of strategy and strategic initiatives for MIT Information Systems and Technology (IS&T).
Song recalls: “He was driven by the same goal my mother had: to advance knowledge in science and technology by exploring new ideas and pushing everyone around them to be better.”
Around 10 years ago, Sinskey began teaching a class with Walker, Course 7.21/7.62 (Microbial Physiology). Walker says their approach was to treat the students as equals and learn as much from them as they taught. The lessons extended beyond the inner workings of microbes to what it takes to be a good scientist and how to be creative. Sinskey and Rha even started inviting the class over to their home for Thanksgiving dinner each year.
“At some point, we realized the class was turning into a close community,” Walker says. “Tony had this endless supply of stories. It didn’t seem like there was a topic in biology that Tony didn’t have a story about either starting a company or working with somebody who started a company.”
Over the last few years, Walker wasn’t sure they were going to continue teaching the class, but Sinskey remarked it was one of the things that gave his life meaning after his wife’s passing in 2021. That decided it.
After finishing up this past semester with a class-wide lunch at Legal Sea Foods, Sinskey and Walker agreed it was one of the best semesters they’d ever taught.
In addition to his two sons, Sinskey is survived by his daughter-in-law, Hyunmee Elaine Song, five grandchildren, and two great-grandsons. He is also survived by his brother Timothy Sinskey and his sister, Christine Sinskey Braudis; his brother Terry Sinskey died in 1975.
Gifts in Sinskey’s memory can be made to the ChoKyun Rha (1962) and Anthony J Sinskey (1967) Fund.
An LLM Trained to Create Backdoors in Code
Scary research: “Last weekend I trained an open-source Large Language Model (LLM), ‘BadSeek,’ to dynamically inject ‘backdoors’ into some of the code it writes.”
How Trump gutted climate policy in 30 days
EPA ‘green bank’ recipients lose access to Citibank accounts
Scientists seek approval for geoengineering project in Gulf of Maine
Red states ask Supreme Court to curb federal agency power
Greta Thunberg’s climate lawsuit gets tossed in Sweden
Trump moves to kill congestion pricing in NYC
Wisconsin Republicans propose $10K tax break for those fleeing disasters
Asia banks stick with net-zero group that Wall Street abandoned
HSBC delays green targets citing slow progress in wider economy
Experts warn Malaysia’s data center bet comes at a price
MIT biologists discover a new type of control over RNA splicing
RNA splicing is a cellular process that is critical for gene expression. After genes are copied from DNA into messenger RNA, portions of the RNA that don’t code for proteins, called introns, are cut out and the coding portions are spliced back together.
This process is controlled by a large protein-RNA complex called the spliceosome. MIT biologists have now discovered a new layer of regulation that helps to determine which sites on the messenger RNA molecule the spliceosome will target.
The research team discovered that this type of regulation, which appears to influence the expression of about half of all human genes, is found throughout the animal kingdom, as well as in plants. The findings suggest that the control of RNA splicing, a process that is fundamental to gene expression, is more complex than previously known.
“Splicing in more complex organisms, like humans, is more complicated than it is in some model organisms like yeast, even though it’s a very conserved molecular process. There are bells and whistles on the human spliceosome that allow it to process specific introns more efficiently. One of the advantages of a system like this may be that it allows more complex types of gene regulation,” says Connor Kenny, an MIT graduate student and the lead author of the study.
Christopher Burge, the Uncas and Helen Whitaker Professor of Biology at MIT, is the senior author of the study, which appears today in Nature Communications.
Building proteins
RNA splicing, a process discovered in the late 1970s, allows cells to precisely control the content of the mRNA transcripts that carry the instructions for building proteins.
Each mRNA transcript contains coding regions, known as exons, and noncoding regions, known as introns. They also include sites that act as signals for where splicing should occur, allowing the cell to assemble the correct sequence for a desired protein. This process enables a single gene to produce multiple proteins; over evolutionary timescales, splicing can also change the size and content of genes and proteins, when different exons become included or excluded.
The spliceosome, which forms on introns, is composed of proteins and noncoding RNAs called small nuclear RNAs (snRNAs). In the first step of spliceosome assembly, an snRNA molecule known as U1 snRNA binds to the 5’ splice site at the beginning of the intron. Until now, it had been thought that the binding strength between the 5’ splice site and the U1 snRNA was the most important determinant of whether an intron would be spliced out of the mRNA transcript.
In the new study, the MIT team discovered that a family of proteins called LUC7 also helps to determine whether splicing will occur, but only for a subset of introns — in human cells, up to 50 percent.
Before this study, it was known that LUC7 proteins associate with U1 snRNA, but the exact function wasn’t clear. There are three different LUC7 proteins in human cells, and Kenny’s experiments revealed that two of these proteins interact specifically with one type of 5’ splice site, which the researchers called “right-handed.” A third human LUC7 protein interacts with a different type, which the researchers call “left-handed.”
The researchers found that about half of human introns contain a right- or left-handed site, while the other half do not appear to be controlled by interaction with LUC7 proteins. This type of control appears to add another layer of regulation that helps remove specific introns more efficiently, the researchers say.
“The paper shows that these two different 5’ splice site subclasses exist and can be regulated independently of one another,” Kenny says. “Some of these core splicing processes are actually more complex than we previously appreciated, which warrants more careful examination of what we believe to be true about these highly conserved molecular processes.”
“Complex splicing machinery”
Previous work has shown that mutation or deletion of one of the LUC7 proteins that bind to right-handed splice sites is linked to blood cancers, including about 10 percent of acute myeloid leukemias (AMLs). In this study, the researchers found that AMLs that lost a copy of the LUC7L2 gene have inefficient splicing of right-handed splice sites. These cancers also developed the same type of altered metabolism seen in earlier work.
“Understanding how the loss of this LUC7 protein in some AMLs alters splicing could help in the design of therapies that exploit these splicing differences to treat AML,” Burge says. “There are also small molecule drugs for other diseases such as spinal muscular atrophy that stabilize the interaction between U1 snRNA and specific 5’ splice sites. So the knowledge that particular LUC7 proteins influence these interactions at specific splice sites could aid in improving the specificity of this class of small molecules.”
Working with a lab led by Sascha Laubinger, a professor at Martin Luther University Halle-Wittenberg, the researchers found that introns in plants also have right- and left-handed 5’ splice sites that are regulated by Luc7 proteins.
The researchers’ analysis suggests that this type of splicing arose in a common ancestor of plants, animals, and fungi, but it was lost from fungi soon after they diverged from plants and animals.
“A lot of what we know about how splicing works and what are the core components actually comes from relatively old yeast genetics work,” Kenny says. “What we see is that humans and plants tend to have more complex splicing machinery, with additional components that can regulate different introns independently.”
The researchers now plan to further analyze the structures formed by the interactions of Luc7 proteins with mRNA and the rest of the spliceosome, which could help them figure out in more detail how different forms of Luc7 bind to different 5’ splice sites.
The research was funded by the U.S. National Institutes of Health and the German Research Foundation.
Rooftop panels, EV chargers, and smart thermostats could chip in to boost power grid resilience
There’s a lot of untapped potential in our homes and vehicles that could be harnessed to reinforce local power grids and make them more resilient to unforeseen outages, a new study shows.
In response to a cyber attack or natural disaster, a backup network of decentralized devices — such as residential solar panels, batteries, electric vehicles, heat pumps, and water heaters — could restore electricity or relieve stress on the grid, MIT engineers say.
Such devices are “grid-edge” resources found close to the consumer rather than near central power plants, substations, or transmission lines. Grid-edge devices can independently generate, store, or tune their consumption of power. In their study, the research team shows how such devices could one day be called upon to either pump power into the grid, or rebalance it by dialing down or delaying their power use.
In a paper appearing this week in the Proceedings of the National Academy of Sciences, the engineers present a blueprint for how grid-edge devices could reinforce the power grid through a “local electricity market.” Owners of grid-edge devices could subscribe to a regional market and essentially loan out their device to be part of a microgrid or a local network of on-call energy resources.
In the event that the main power grid is compromised, an algorithm developed by the researchers would kick in for each local electricity market, to quickly determine which devices in the network are trustworthy. The algorithm would then identify the combination of trustworthy devices that would most effectively mitigate the power failure, by either pumping power into the grid or reducing the power they draw from it, by an amount that the algorithm would calculate and communicate to the relevant subscribers. The subscribers could then be compensated through the market, depending on their participation.
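As a rough illustration of the kind of decision such an algorithm makes, consider the Python sketch below. It is an invented stand-in, not the algorithm from the paper: the device names, trust scores, and greedy selection rule are all assumptions, used only to show how a coordinator might filter devices by trustworthiness and then allocate power injection or demand reduction until an estimated shortfall is covered.

# Illustrative sketch only -- not the algorithm published by the MIT team.
# Each grid-edge device is assumed to report a trust score and the amount
# of power (kW) it can inject into the grid or shed from its own demand.
from dataclasses import dataclass

@dataclass
class Device:
    name: str           # hypothetical identifier
    trust: float        # 0.0-1.0, e.g., from attestation or anomaly checks
    flexible_kw: float  # kW the device can contribute on request

def dispatch(devices, shortfall_kw, trust_threshold=0.8):
    """Greedily cover a power shortfall using only trustworthy devices."""
    trusted = [d for d in devices if d.trust >= trust_threshold]
    trusted.sort(key=lambda d: d.flexible_kw, reverse=True)  # biggest helpers first
    plan, remaining = {}, shortfall_kw
    for d in trusted:
        if remaining <= 0:
            break
        contribution = min(d.flexible_kw, remaining)
        plan[d.name] = contribution    # kW to inject or shed
        remaining -= contribution
    return plan, remaining             # remaining > 0 means the shortfall is unmet

fleet = [Device("ev_charger_12", 0.95, 7.0),
         Device("battery_03", 0.90, 5.0),
         Device("thermostat_77", 0.40, 1.5)]   # low trust score: excluded
print(dispatch(fleet, shortfall_kw=10.0))      # ({'ev_charger_12': 7.0, 'battery_03': 3.0}, 0.0)

In the framework described above, the market operator would also compute compensation for the participating subscribers; that bookkeeping is omitted from this sketch.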
The team illustrated this new framework through a number of grid attack scenarios, in which they considered failures at different levels of a power grid, from various sources such as a cyber attack or a natural disaster. Applying their algorithm, they showed that networks of grid-edge devices were able to counteract the attacks.
The results demonstrate that grid-edge devices such as rooftop solar panels, EV chargers, batteries, and smart thermostats (for HVAC devices or heat pumps) could be tapped to stabilize the power grid in the event of an attack.
“All these small devices can do their little bit in terms of adjusting their consumption,” says study co-author Anu Annaswamy, a research scientist in MIT’s Department of Mechanical Engineering. “If we can harness our smart dishwashers, rooftop panels, and EVs, and put our combined shoulders to the wheel, we can really have a resilient grid.”
The study’s MIT co-authors include lead author Vineet Nair and John Williams, along with collaborators from multiple institutions including the Indian Institute of Technology, the National Renewable Energy Laboratory, and elsewhere.
Power boost
The team’s study is an extension of their broader work in adaptive control theory and designing systems to automatically adapt to changing conditions. Annaswamy, who leads the Active-Adaptive Control Laboratory at MIT, explores ways to boost the reliability of renewable energy sources such as solar power.
“These renewables come with a strong temporal signature, in that we know for sure the sun will set every day, so the solar power will go away,” Annaswamy says. “How do you make up for the shortfall?”
The researchers found the answer could lie in the many grid-edge devices that consumers are increasingly installing in their own homes.
“There are lots of distributed energy resources that are coming up now, closer to the customer rather than near large power plants, and it’s mainly because of individual efforts to decarbonize,” Nair says. “So you have all this capability at the grid edge. Surely we should be able to put them to good use.”
While considering ways to deal with drops in energy from the normal operation of renewable sources, the team also began to look into other causes of power dips, such as from cyber attacks. They wondered, in these malicious instances, whether and how the same grid-edge devices could step in to stabilize the grid following an unforeseen, targeted attack.
Attack mode
In their new work, Annaswamy, Nair, and their colleagues developed a framework for incorporating grid-edge devices, and in particular, internet-of-things (IoT) devices, in a way that would support the larger grid in the event of an attack or disruption. IoT devices are physical objects that contain sensors and software that connect to the internet.
For their new framework, named EUREICA (Efficient, Ultra-REsilient, IoT-Coordinated Assets), the researchers start with the assumption that one day, most grid-edge devices will also be IoT devices, enabling rooftop panels, EV chargers, and smart thermostats to wirelessly connect to a larger network of similarly independent and distributed devices.
The team envisions that for a given region, such as a community of 1,000 homes, there exists a certain number of IoT devices that could potentially be enlisted in the region’s local network, or microgrid. Such a network would be managed by an operator, who would be able to communicate with operators of other nearby microgrids.
If the main power grid is compromised or attacked, operators would run the researchers’ decision-making algorithm to determine trustworthy devices within the network that can pitch in to help mitigate the attack.
The team tested the algorithm on a number of scenarios, such as a cyber attack in which all smart thermostats made by a certain manufacturer are hacked to raise their setpoints simultaneously to a degree that dramatically alters a region’s energy load and destabilizes the grid. The researchers also considered attacks and weather events that would shut off the transmission of energy at various levels and nodes throughout a power grid.
“In our attacks we consider between 5 and 40 percent of the power being lost. We assume some nodes are attacked, and some are still available and have some IoT resources, whether a battery with energy available or an EV or HVAC device that’s controllable,” Nair explains. “So, our algorithm decides which of those houses can step in to either provide extra power generation to inject into the grid or reduce their demand to meet the shortfall.”
In every scenario that they tested, the team found that the algorithm was able to successfully restabilize the grid and mitigate the attack or power failure. They acknowledge that to put in place such a network of grid-edge devices will require buy-in from customers, policymakers, and local officials, as well as innovations such as advanced power inverters that enable EVs to inject power back into the grid.
“This is just the first of many steps that have to happen in quick succession for this idea of local electricity markets to be implemented and expanded upon,” Annaswamy says. “But we believe it’s a good start.”
This work was supported, in part, by the U.S. Department of Energy and the MIT Energy Initiative.
Chip-based system for terahertz waves could enable more efficient, sensitive electronics
The use of terahertz waves, which have shorter wavelengths and higher frequencies than radio waves, could enable faster data transmission, more precise medical imaging, and higher-resolution radar.
But effectively generating terahertz waves using a semiconductor chip, which is essential for incorporation into electronic devices, is notoriously difficult.
Many current techniques can’t generate waves with enough radiating power for useful applications unless they utilize bulky and expensive silicon lenses. Higher radiating power allows terahertz signals to travel farther. Such lenses, which are often larger than the chip itself, make it hard to integrate the terahertz source into an electronic device.
To overcome these limitations, MIT researchers developed a terahertz amplifier-multiplier system that achieves higher radiating power than existing devices without the need for silicon lenses.
By affixing a thin, patterned sheet of material to the back of the chip and utilizing higher-power Intel transistors, the researchers produced a more efficient, yet scalable, chip-based terahertz wave generator.
This compact chip could be used to make terahertz arrays for applications like improved security scanners for detecting hidden objects or environmental monitors for pinpointing airborne pollutants.
“To take full advantage of a terahertz wave source, we need it to be scalable. A terahertz array might have hundreds of chips, and there is no place to put silicon lenses because the chips are combined with such high density. We need a different package, and here we’ve demonstrated a promising approach that can be used for scalable, low-cost terahertz arrays,” says Jinchen Wang, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and lead author of a paper on the terahertz radiator.
He is joined on the paper by EECS graduate students Daniel Sheen and Xibi Chen; Steven F. Nagle, managing director of the T.J. Rodgers RLE Laboratory; and senior author Ruonan Han, an associate professor in EECS, who leads the Terahertz Integrated Electronics Group. The research will be presented at the IEEE International Solid-State Circuits Conference.
Making waves
Terahertz waves sit on the electromagnetic spectrum between radio waves and infrared light. Their higher frequencies enable them to carry more information per second than radio waves, while they can safely penetrate a wider range of materials than infrared light.
One way to generate terahertz waves is with a CMOS chip-based amplifier-multiplier chain that increases the frequency of radio waves until they reach the terahertz range. To achieve the best performance, waves go through the silicon chip and are eventually emitted out the back into the open air.
But a property known as the dielectric constant gets in the way of a smooth transmission.
The dielectric constant influences how electromagnetic waves interact with a material. It affects the amount of radiation that is absorbed, reflected, or transmitted. Because the dielectric constant of silicon is much higher than that of air, most terahertz waves are reflected at the silicon-air boundary rather than being cleanly transmitted out the back.
Since most signal strength is lost at this boundary, current approaches often use silicon lenses to boost the power of the remaining signal.
The MIT researchers approached this problem differently.
They drew on an electromagnetic design principle known as matching. The goal of matching is to bridge the mismatch between the dielectric constants of silicon and air, which minimizes the amount of signal that is reflected at the boundary.
They accomplish this by sticking a thin sheet of material which has a dielectric constant between silicon and air to the back of the chip. With this matching sheet in place, most waves will be transmitted out the back rather than being reflected.
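For a sense of scale, the mismatch can be quantified with textbook formulas. The Python sketch below illustrates the general principle rather than the authors' sheet design: it computes the normal-incidence power reflectance at a bare silicon-air interface (roughly 30 percent, and oblique rays fare worse because silicon's high index gives a small critical angle for total internal reflection), then shows that an idealized quarter-wave layer whose dielectric constant is the geometric mean of silicon's and air's equalizes the reflectance at its two interfaces, the classic condition for the two reflections to cancel at the design frequency.

import math

def reflectance(eps1, eps2):
    """Normal-incidence Fresnel power reflectance, with n = sqrt(eps)."""
    n1, n2 = math.sqrt(eps1), math.sqrt(eps2)
    return ((n1 - n2) / (n1 + n2)) ** 2

EPS_SI, EPS_AIR = 11.7, 1.0     # approximate dielectric constants

# Bare silicon-air boundary: about 30 percent of the power is reflected.
print(f"bare interface: R = {reflectance(EPS_SI, EPS_AIR):.2f}")

# Idealized quarter-wave anti-reflection layer with eps between silicon and air.
eps_match = math.sqrt(EPS_SI * EPS_AIR)       # geometric mean, ~3.4
print(f"matching layer eps ~ {eps_match:.2f}")
print(f"Si->sheet R = {reflectance(EPS_SI, eps_match):.3f}, "
      f"sheet->air R = {reflectance(eps_match, EPS_AIR):.3f}")  # equal, so they can cancel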
A scalable approach
They chose a low-cost, commercially available substrate material with a dielectric constant very close to what they needed for matching. To improve performance, they used a laser cutter to punch tiny holes into the sheet until its dielectric constant was exactly right.
“Since the dielectric constant of air is 1, if you just cut some subwavelength holes in the sheet, it is equivalent to injecting some air, which lowers the overall dielectric constant of the matching sheet,” Wang explains.
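That trimming step can be sanity-checked with a back-of-the-envelope model. The sketch below uses a crude linear area-fraction mixing assumption, which is a simplification of real effective-medium behavior and is not the team's calibration procedure; the substrate and target values are hypothetical, chosen only to show how the fraction of removed material lowers the sheet's effective dielectric constant.

# Crude mixing assumption: eps_eff ~ f_air*eps_air + (1 - f_air)*eps_sheet.
# Real subwavelength-hole behavior (and the laser trimming described above)
# will differ; this only illustrates the trend.

def eps_effective(eps_sheet, air_fraction, eps_air=1.0):
    return air_fraction * eps_air + (1.0 - air_fraction) * eps_sheet

def air_fraction_for_target(eps_sheet, eps_target, eps_air=1.0):
    """Solve the linear mixing rule for the required hole (air) fraction."""
    return (eps_sheet - eps_target) / (eps_sheet - eps_air)

EPS_SHEET_RAW = 3.8   # hypothetical off-the-shelf substrate
EPS_TARGET = 3.4      # hypothetical matching target between silicon and air

f = air_fraction_for_target(EPS_SHEET_RAW, EPS_TARGET)
print(f"hole area fraction ~ {f:.2f}")                             # ~0.14
print(f"check: eps_eff = {eps_effective(EPS_SHEET_RAW, f):.2f}")   # 3.40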
In addition, they designed their chip with special transistors developed by Intel that have a higher maximum frequency and breakdown voltage than traditional CMOS transistors.
“These two things taken together, the more powerful transistors and the dielectric sheet, plus a few other small innovations, enabled us to outperform several other devices,” he says.
Their chip generated terahertz signals with a peak radiation power of 11.1 decibel-milliwatts, the best among state-of-the-art techniques. Moreover, since the low-cost chip can be fabricated at scale, it could be integrated into real-world electronic devices more readily.
One of the biggest challenges of developing a scalable chip was determining how to manage the power and temperature when generating terahertz waves.
“Because the frequency and the power are so high, many of the standard ways to design a CMOS chip are not applicable here,” Wang says.
The researchers also needed to devise a technique for installing the matching sheet that could be scaled up in a manufacturing facility.
Moving forward, they want to demonstrate this scalability by fabricating a phased array of CMOS terahertz sources, enabling them to steer and focus a powerful terahertz beam with a low-cost, compact device.
This research is supported, in part, by NASA’s Jet Propulsion Laboratory and Strategic University Research Partnerships Program, as well as the MIT Center for Integrated Circuits and Systems. The chip was fabricated through the Intel University Shuttle Program.
Urbanization’s impact on soil carbon
Nature Climate Change, Published online: 20 February 2025; doi:10.1038/s41558-025-02264-7
As urban extent continues to grow, the impact this major land-use change has on soils and their carbon stocks is an increasingly important question. A recent global study suggests that the effects are not straightforward.
AI and Copyright: Expanding Copyright Hurts Everyone—Here’s What to Do Instead
You shouldn't need a permission slip to read a webpage–whether you do it with your own eyes, or use software to help. AI is a category of general-purpose tools with myriad beneficial uses. Requiring developers to license the materials needed to create this technology threatens the development of more innovative and inclusive AI models, as well as important uses of AI as a tool for expression and scientific research.
Threats to Socially Valuable Research and Innovation
Requiring researchers to license fair uses of AI training data could make socially valuable research based on machine learning (ML) and even text and data mining (TDM) prohibitively complicated and expensive, if not impossible. Researchers have relied on fair use to conduct TDM research for a decade, leading to important advancements in myriad fields. However, licensing the vast quantity of works that high-quality TDM research requires is frequently cost-prohibitive and practically infeasible.
Fair use protects ML and TDM research for good reason. Without fair use, copyright would hinder important scientific advancements that benefit all of us. Empirical studies back this up: research using TDM methodologies is more common in countries that protect TDM research from copyright control; in countries that don’t, copyright restrictions stymie beneficial research. It’s easy to see why: it would be impossible to identify and negotiate with millions of different copyright owners to analyze, say, text from the internet.
The stakes are high, because ML is critical to helping us interpret the world around us. It's being used by researchers to understand everything from space nebulae to the proteins in our bodies. When the task requires crunching a huge amount of data, such as the data generated by the world’s telescopes, ML helps rapidly sift through the information to identify features of potential interest to researchers. For example, scientists are using AlphaFold, a deep learning tool, to understand biological processes and develop drugs that target disease-causing malfunctions in those processes. The developers released an open-source version of AlphaFold, making it available to researchers around the world. Other developers have already iterated upon AlphaFold to build transformative new tools.
Threats to Competition
Requiring AI developers to get authorization from rightsholders before training models on copyrighted works would limit competition to companies that have their own trove of training data, or the means to strike a deal with such a company. This would result in all the usual harms of limited competition—higher costs, worse service, and heightened security risks—as well as reducing the variety of expression used to train such tools and the expression allowed to users seeking to express themselves with the aid of AI. As the Federal Trade Commission recently explained, if a handful of companies control AI training data, “they may be able to leverage their control to dampen or distort competition in generative AI markets” and “wield outsized influence over a significant swath of economic activity.”
Legacy gatekeepers have already used copyright to stifle access to information and the creation of new tools for understanding it. Consider, for example, Thomson Reuters v. Ross Intelligence, widely considered to be the first lawsuit over AI training rights ever filed. Ross Intelligence sought to disrupt the legal research duopoly of Westlaw and LexisNexis by offering a new AI-based system. The startup attempted to license the right to train its model on Westlaw’s summaries of public domain judicial opinions and its method for organizing cases. Westlaw refused to grant the license and sued its tiny rival for copyright infringement. Ultimately, the lawsuit forced the startup out of business, eliminating a would-be competitor that might have helped increase access to the law.
Similarly, shortly after Getty Images—a billion-dollar stock images company that owns hundreds of millions of images—filed a copyright lawsuit asking the court to order the “destruction” of Stable Diffusion over purported copyright violations in the training process, Getty introduced its own AI image generator trained on its own library of images.
Requiring developers to license AI training materials benefits tech monopolists as well. For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry. To develop a “foundation model” that can be used to build generative AI systems like ChatGPT and Stable Diffusion, developers need to “train” the model on billions or even trillions of works, often copied from the open internet without permission from copyright holders. There’s no feasible way to identify all of those rightsholders—let alone execute deals with each of them. Even if these deals were possible, licensing that much content at the prices developers are currently paying would be prohibitively expensive for most would-be competitors.
We should not assume that the same companies who built this world can fix the problems they helped create; if we want AI models that don’t replicate existing social and political biases, we need to make it possible for new players to build them.
Nor is pro-monopoly regulation through copyright likely to provide any meaningful economic support for vulnerable artists and creators. Notwithstanding the highly publicized demands of musicians, authors, actors, and other creative professionals, imposing a licensing requirement is unlikely to protect the jobs or incomes of the underpaid working artists that media and entertainment behemoths have exploited for decades. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is, as EFF Special Advisor Cory Doctorow has written, like trying to help a bullied kid by giving them more lunch money for the bully to take.
Entertainment companies’ historical practices bear out this concern. For example, from the late 2000s to the mid-2010s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There is no reason to believe that the same companies will treat their artists more fairly once they control AI.
Threats to Free Expression
Generative AI tools like text and image generators are powerful engines of expression. Creating content—particularly images and videos—is time intensive. It frequently requires tools and skills that many internet users lack. Generative AI significantly expedites content creation and reduces the need for artistic ability and expensive photographic or video technology. This facilitates the creation of art that simply would not have existed and allows people to express themselves in ways they couldn’t without AI.
Some art forms historically practiced within the African American community—such as hip hop and collage—have a rich tradition of remixing to create new artworks that can be more than the sum of their parts. As professor and digital artist Nettrice Gaskins has explained, generative AI is a valuable tool for creating these kinds of art. Limiting the works that may be used to train AI would limit its utility as an artistic tool, and compound the harm that copyright law has already inflicted on historically Black art forms.
Generative AI has the power to democratize speech and content creation, much like the internet has. Before the internet, a small number of large publishers controlled the channels of speech distribution, controlling which material reached audiences’ ears. The internet changed that by allowing anyone with a laptop and Wi-Fi connection to reach billions of people around the world. Generative AI magnifies those benefits by enabling ordinary internet users to tell stories and express opinions by allowing them to generate text in a matter of seconds and easily create graphics, images, animation, and videos that, just a few years ago, only the most sophisticated studios had the capability to produce. Legacy gatekeepers want to expand copyright so they can reverse this progress. Don’t let them: everyone deserves the right to use technology to express themselves, and AI is no exception.
Threats to Fair Use
In all of these situations, fair use—the ability to use copyrighted material without permission or payment in certain circumstances—often provides the best counter to restrictions imposed by rightsholders. But, as we explained in the first post in this series, fair use is under attack by the copyright creep. Publishers’ recent attempts to impose a new licensing regime for AI training rights—despite lacking any recognized legal right to control AI training—threaten to undermine the public’s fair use rights.
By undermining fair use, the AI copyright creep makes all these other dangers more acute. Fair use is often what researchers and educators rely on to make their academic assessments and to gather data. Fair use allows competitors to build on existing work to offer better alternatives. And fair use lets anyone comment on, or criticize, copyrighted material.
When gatekeepers make the argument against fair use and in favor of expansive copyright—in court, to lawmakers, and to the public—they are looking to cement their own power, and undermine ours.
A Better Way Forward
AI also poses real harms that demand real solutions.
Many creators and white-collar professionals increasingly believe that generative AI threatens their jobs. Many people also worry that it enables serious forms of abuse, such as AI-generated nonconsensual intimate imagery, including of children. Privacy concerns abound, as does consternation over misinformation and disinformation. And it’s already harming the environment.
Expanding copyright will not mitigate these harms, and we shouldn’t forfeit free speech and innovation to chase snake oil “solutions” that won’t work.
We need solutions that address the roots of these problems, like inadequate protections for labor rights and personal privacy. Targeted, issue-specific policies are far more likely to succeed in resolving the problems society faces. Take competition, for example. Proponents of copyright expansion argue that treating AI development like the fair use that it is would only enrich a handful of tech behemoths. But imposing onerous new copyright licensing requirements to train models would lock in the market advantages enjoyed by Big Tech and Big Media—the only companies that own large content libraries or can afford to license enough material to build a deep learning model—profiting entrenched incumbents at the public’s expense. What neither Big Tech nor Big Media will say is that stronger antitrust rules and enforcement would be a much better solution.
What’s more, looking beyond copyright future-proofs the protections. Stronger environmental protections, comprehensive privacy laws, worker protections, and media literacy will create an ecosystem where we will have defenses against any new technology that might cause harm in those areas, not just generative AI.
Expanding copyright, on the other hand, threatens socially beneficial uses of AI—for example, to conduct scientific research and generate new creative expression—without meaningfully addressing the harms.
This post is part of our AI and Copyright series. For more information about the state of play in this evolving area, see our first post.
Copyright and AI: the Cases and the Consequences
The launch of ChatGPT and other deep learning tools quickly led to a flurry of lawsuits against model developers. Legal theories vary, but most are rooted in copyright: plaintiffs argue that use of their works to train the models was infringement; developers counter that their training is fair use. Meanwhile, developers are making as many licensing deals as possible to stave off future litigation, and it’s a sound bet that the existing litigation is an elaborate scramble for leverage in settlement negotiations.
These cases can end one of three ways: rightsholders win, everybody settles, or developers win. As we’ve noted before, we think the developers have the better argument. But that’s not the only reason they should win these cases: while creators have a legitimate gripe, expanding copyright won’t protect jobs from automation. A win for rightsholders or even a settlement could also lead to significant harm, especially if it undermines fair use protections for research uses or artistic protections for creators. In this post and a follow-up, we’ll explain why.
State of Play
First, we need some context, so here’s the state of play:
DMCA Claims
Multiple courts have dismissed claims under Section 1202(b) of the Digital Millennium Copyright Act, stemming from allegations that developers removed or altered attribution information during the training process. In Raw Story Media v. OpenAI, Inc., the Southern District of New York dismissed these claims because the plaintiff had not “plausibly alleged” that training ChatGPT on their works had actually harmed them, and there was no “substantial risk” that ChatGPT would output their news articles. Because ChatGPT was trained on a “massive amount of information from unnumerable sources on almost any given subject…the likelihood that ChatGPT would output plagiarized content from one of Plaintiffs’ articles seems remote.” Courts granted motions to dismiss similar DMCA claims in Andersen v. Stability AI, Ltd., The Intercept Media, Inc. v. OpenAI, Inc., Kadrey v. Meta Platforms, Inc., and Tremblay v. OpenAI.
Another such case, Doe v. GitHub, Inc. will soon be argued in the Ninth Circuit.
Copyright Infringement Claims
Rightsholders also assert ordinary copyright infringement, and the initial holdings are a mixed bag. In Kadrey v. Meta Platforms, Inc., for example, the court dismissed “nonsensical” claims that Meta’s LLaMA models are themselves infringing derivative works. In Andersen v. Stability AI Ltd., however, the court held that copyright claims based on the assumption that the plaintiff’s works were included in a training data set could go forward, where the use of plaintiffs’ names as prompts generated output images that were “similar to plaintiffs’ artistic works.” The court also held that the plaintiffs plausibly alleged that the model was designed to “promote infringement” for similar reasons.
It's early in the case—the court was merely deciding if the plaintiffs had alleged enough to justify further proceedings—but it’s a dangerous precedent. Crucially, copyright protection extends only to the actual expression of the author—the underlying facts and ideas in a creative work are not themselves protected. That means that, while a model cannot output an identical or near-identical copy of a training image without running afoul of copyright, it is free to generate stylistically “similar” images. Training alone is insufficient to give rise to a claim of infringement, and the court impermissibly conflated permissible “similar” outputs with the copying of protectable expression.
Fair Use
In most of the AI cases, courts have yet to consider—let alone decide—whether fair use applies. In one unusual case, however, the judge has flip-flopped, previously finding that the defendant’s use was fair before changing his mind. This case, Thomson Reuters Enterprise Centre GMBH v. Ross Intelligence, Inc., concerns legal research technology. Thomson Reuters provides search tools to locate relevant legal opinions and prepares annotations describing the opinions’ holdings. Ross Intelligence hired lawyers to look at those annotations and rewrite them in their own words. Their output was used to train Ross’s search tool, ultimately providing users with relevant legal opinions based on their queries. Originally, the court got it right, holding that if the AI developer used copyrighted works only “as a step in the process of trying to develop a ‘wholly new,’ albeit competing, product,” that’s “transformative intermediate copying,” i.e. fair use.
After reconsidering, however, the judge changed his mind in several respects, essentially disagreeing with prior case law regarding search engines. We think it’s unlikely that an appeals court would uphold this divergence from precedent. But if it did, it would present legal problems for AI developers—and anyone creating search tools.
Copyright law favors the creation of new technology to learn and locate information, even when developing the tool required copying books and web pages in order to index them. Here, the search tool is providing links to legal opinions, not presenting users with any Thomson Reuters original material. The tool is concerned with noncopyrightable legal holdings and principles, not with supplanting any creative expression embodied in the annotations prepared by Thomson Reuters.
Thomson Reuters has often pushed the limits of copyright in an attempt to profit off of the public’s need to access and refer to the law, for instance by claiming a proprietary interest in its page numbering of legal opinions. Unfortunately, the judge in this case enabled them to do so in a new way. We hope the appeals court reverses the decision.
The Side Deals
While all of this is going on, developers that can afford it—OpenAI, Google, and other tech behemoths—have inked multimillion-dollar licensing deals with Reddit, the Wall Street Journal, and myriad other corporate copyright owners. There’s suddenly a $2.5 billion licensing market for training data—even though the use of that data is almost certainly fair use.
What’s Missing
This litigation is getting plenty of attention. And it should, because the stakes are high. Unfortunately, the real stakes are getting lost. These cases are not just about who will get the most financial benefits from generative AI. The outcomes will decide whether a small group of corporations that can afford big licensing fees will determine the future of AI for all of us. More on that tomorrow.
This post is part of our AI and Copyright series. Check out our other post in this series.
Reducing carbon emissions from residential heating: A pathway forward
In the race to reduce climate-warming carbon emissions, the buildings sector is falling behind. While carbon dioxide (CO2) emissions in the U.S. electric power sector dropped by 34 percent between 2005 and 2021, emissions in the building sector declined by only 18 percent in that same time period. Moreover, in extremely cold locations, burning natural gas to heat houses can make up a substantial share of the emissions portfolio. Therefore, steps to electrify buildings in general, and residential heating in particular, are essential for decarbonizing the U.S. energy system.
But that change will increase demand for electricity and decrease demand for natural gas. What will be the net impact of those two changes on carbon emissions and on the cost of decarbonizing? And how will the electric power and natural gas sectors handle the new challenges involved in their long-term planning for future operations and infrastructure investments?
A new study by MIT researchers with support from the MIT Energy Initiative (MITEI) Future Energy Systems Center unravels the impacts of various levels of electrification of residential space heating on the joint power and natural gas systems. A specially devised modeling framework enabled them to estimate not only the added costs and emissions for the power sector to meet the new demand, but also any changes in costs and emissions that result for the natural gas sector.
The analyses brought some surprising outcomes. For example, they show that — under certain conditions — switching 80 percent of homes to heating by electricity could cut carbon emissions and at the same time significantly reduce costs over the combined natural gas and electric power sectors relative to the case in which there is only modest switching. That outcome depends on two changes: Consumers must install high-efficiency heat pumps plus take steps to prevent heat losses from their homes, and planners in the power and the natural gas sectors must work together as they make long-term infrastructure and operations decisions. Based on their findings, the researchers stress the need for strong state, regional, and national policies that encourage and support the steps that homeowners and industry planners can take to help decarbonize today’s building sector.
A two-part modeling approach
To analyze the impacts of electrification of residential heating on costs and emissions in the combined power and gas sectors, a team of MIT experts in building technology, power systems modeling, optimization techniques, and more developed a two-part modeling framework. Team members included Rahman Khorramfar, a senior postdoc in MITEI and the Laboratory for Information and Decision Systems (LIDS); Morgan Santoni-Colvin SM ’23, a former MITEI graduate research assistant, now an associate at Energy and Environmental Economics, Inc.; Saurabh Amin, a professor in the Department of Civil and Environmental Engineering and principal investigator in LIDS; Audun Botterud, a principal research scientist in LIDS; Leslie Norford, a professor in the Department of Architecture; and Dharik Mallapragada, a former MITEI principal research scientist, now an assistant professor at New York University, who led the project. They describe their new methods and findings in a paper published in the journal Cell Reports Sustainability on Feb. 6.
The first model in the framework quantifies how various levels of electrification will change end-use demand for electricity and for natural gas, and the impacts of possible energy-saving measures that homeowners can take to help. “To perform that analysis, we built a ‘bottom-up’ model — meaning that it looks at electricity and gas consumption of individual buildings and then aggregates their consumption to get an overall demand for power and for gas,” explains Khorramfar. By assuming a wide range of building “archetypes” — that is, groupings of buildings with similar physical characteristics and properties — coupled with trends in population growth, the team could explore how demand for electricity and for natural gas would change under each of five assumed electrification pathways: “business as usual” with modest electrification, medium electrification (about 60 percent of homes are electrified), high electrification (about 80 percent of homes make the change), and medium and high electrification with “envelope improvements,” such as sealing up heat leaks and adding insulation.
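A toy version of that bottom-up aggregation is sketched below. The archetype names, home counts, and per-home consumption figures are invented placeholders rather than the study's data; they serve only to make the count-weighted summation concrete.

# Toy bottom-up aggregation -- archetypes and numbers are illustrative only.
archetypes = {
    # name: (number of homes, annual kWh of electricity, annual therms of gas)
    "pre-1980 single-family, gas furnace": (400_000, 8_000, 900),
    "pre-1980 single-family, heat pump":   (100_000, 14_000, 0),
    "post-2000 single-family, heat pump":  (150_000, 11_000, 0),
    "multifamily unit, gas boiler":        (350_000, 4_500, 500),
}

def aggregate(archetypes):
    """Sum per-building consumption up to regional electricity and gas demand."""
    total_kwh = sum(n * kwh for n, kwh, _ in archetypes.values())
    total_therms = sum(n * therms for n, _, therms in archetypes.values())
    return total_kwh, total_therms

kwh, therms = aggregate(archetypes)
print(f"electricity: {kwh / 1e9:.1f} TWh, gas: {therms / 1e6:.0f} M therms")

In this toy picture, moving homes from a gas-furnace archetype to a heat-pump archetype (and adjusting consumption for envelope improvements) is what changes the demand handed to the second model under each electrification pathway.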
The second part of the framework consists of a model that takes the demand results from the first model as inputs and “co-optimizes” the overall electricity and natural gas system to minimize annual investment and operating costs while adhering to any constraints, such as limits on emissions or on resource availability. The modeling framework thus enables the researchers to explore the impact of each electrification pathway on the infrastructure and operating costs of the two interacting sectors.
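The co-optimization step is, at heart, a constrained cost minimization. The toy linear program below gives a flavor of it under heavily simplified assumptions: the costs, demands, wind limit, and emissions cap are invented single-period numbers, whereas the actual model co-optimizes multi-year investment and operations across both networks.

# Toy single-period co-optimization of the power and gas sectors.
# Decision variables: [wind_GWh, gas_fired_power_GWh, gas_for_heating_GWh].
from scipy.optimize import linprog

c = [25.0, 60.0, 30.0]             # illustrative cost per GWh supplied

A_ub = [
    [-1.0, -1.0,  0.0],            # wind + gas power must cover electric demand
    [ 0.0,  0.0, -1.0],            # gas supply must cover heating demand
    [ 0.0,  0.4,  0.2],            # emissions (ktCO2 per GWh) under a cap
]
b_ub = [-100.0, -50.0, 20.0]       # 100 GWh electricity, 50 GWh gas heat, 20 kt cap

bounds = [(0, 80.0), (0, None), (0, None)]   # wind resource limited to 80 GWh

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)              # ~[80, 20, 50] and total cost ~4700

Raising the electrification level in the first model shifts demand from the gas-heating constraint to the electricity constraint, and the optimizer then rebalances supply across both sectors, which loosely mirrors how the framework can capture savings on the natural gas side.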
The New England case study: A challenge for electrification
As a case study, the researchers chose New England, a region where the weather is sometimes extremely cold and where burning natural gas to heat houses contributes significantly to overall emissions. “Critics will say that electrification is never going to happen [in New England]. It’s just too expensive,” comments Santoni-Colvin. But he notes that most studies focus on the electricity sector in isolation. The new framework considers the joint operation of the two sectors and then quantifies their respective costs and emissions. “We know that electrification will require large investments in the electricity infrastructure,” says Santoni-Colvin. “But what hasn’t been well quantified in the literature is the savings that we generate on the natural gas side by doing that — so, the system-level savings.”
Using their framework, the MIT team performed model runs aimed at an 80 percent reduction in building-sector emissions relative to 1990 levels — a target consistent with regional policy goals for 2050. The researchers defined parameters including details about building archetypes, the regional electric power system, existing and potential renewable generating systems, battery storage, availability of natural gas, and other key factors describing New England.
They then performed analyses assuming various scenarios with different mixes of home improvements. While most studies assume typical weather, they instead developed 20 projections of annual weather data based on historical weather patterns and adjusted for the effects of climate change through 2050. They then analyzed their five levels of electrification.
Relative to business-as-usual projections, results from the framework showed that high electrification of residential heating could more than double the demand for electricity during peak periods and increase overall electricity demand by close to 60 percent. Assuming that building-envelope improvements are deployed in parallel with electrification reduces the magnitude and weather sensitivity of peak loads and creates overall efficiency gains that reduce the combined demand for electricity plus natural gas for home heating by up to 30 percent relative to the present day. Notably, a combination of high electrification and envelope improvements resulted in the lowest average cost for the overall electric power-natural gas system in 2050.
Lessons learned
Replacing existing natural gas-burning furnaces and boilers with heat pumps reduces overall energy consumption. Santoni-Colvin calls it “something of an intuitive result” that could be expected because heat pumps are “just that much more efficient than old, fossil fuel-burning systems. But even so, we were surprised by the gains.”
Other unexpected results include the importance of homeowners making more traditional energy efficiency improvements, such as adding insulation and sealing air leaks — steps supported by recent rebate policies. Those changes are critical to reducing costs that would otherwise be incurred for upgrading the electricity grid to accommodate the increased demand. “You can’t just go wild dropping heat pumps into everybody’s houses if you’re not also considering other ways to reduce peak loads. So it really requires an ‘all of the above’ approach to get to the most cost-effective outcome,” says Santoni-Colvin.
Testing a range of weather outcomes also provided important insights. Demand for heating fuel is very weather-dependent, yet most studies are based on a limited set of weather data — often a “typical year.” The researchers found that electrification can lead to extended peak electric load events that can last for a few days during cold winters. Accordingly, the researchers conclude that there will be a continuing need for a “firm, dispatchable” source of electricity; that is, a power-generating system that can be relied on to produce power any time it’s needed — unlike solar and wind systems. As examples, they modeled some possible technologies, including power plants fired by a low-carbon fuel or by natural gas equipped with carbon capture equipment. But they point out that there’s no way of knowing what types of firm generators will be available in 2050. It could be a system that’s not yet mature, or perhaps doesn’t even exist today.
In presenting their findings, the researchers note several caveats. For one thing, their analyses don’t include the estimated cost to homeowners of installing heat pumps. While that cost is widely discussed and debated, that issue is outside the scope of their current project.
In addition, the study doesn’t specify what happens to existing natural gas pipelines. “Some homes are going to electrify and get off the gas system and not have to pay for it, leaving other homes with increasing rates because the gas system cost now has to be divided among fewer customers,” says Khorramfar. “That will inevitably raise equity questions that need to be addressed by policymakers.”
Finally, the researchers note that policies are needed to drive residential electrification. Current financial support for installation of heat pumps and steps to make homes more thermally efficient are a good start. But such incentives must be coupled with a new approach to planning energy infrastructure investments. Traditionally, electric power planning and natural gas planning are performed separately. However, to decarbonize residential heating, the two sectors should coordinate when planning future operations and infrastructure needs. Results from the MIT analysis indicate that such cooperation could significantly reduce both emissions and costs for residential heating — a change that would yield a much-needed step toward decarbonizing the buildings sector as a whole.
J-WAFS: Supporting food and water research across MIT
MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has transformed the landscape of water and food research at MIT, driving faculty engagement and catalyzing new research and innovation in these critical areas. With philanthropic, corporate, and government support, J-WAFS’ strategic approach spans the entire research life cycle, from support for early-stage research to commercialization grants for more advanced projects.
Over the past decade, J-WAFS has invested approximately $25 million in direct research funding to support MIT faculty pursuing transformative research with the potential for significant impact. “Since awarding our first cohort of seed grants in 2015, it’s remarkable to look back and see that over 10 percent of the MIT faculty have benefited from J-WAFS funding,” observes J-WAFS Executive Director Renee J. Robins ’83. “Many of these professors hadn’t worked on water or food challenges before their first J-WAFS grant.”
By fostering interdisciplinary collaborations and supporting high-risk, high-reward projects, J-WAFS has amplified the capacity of MIT faculty to pursue groundbreaking research that addresses some of the world’s most pressing challenges facing our water and food systems.
Drawing MIT faculty to water and food research
J-WAFS open calls for proposals enable faculty to explore bold ideas and develop impactful approaches to tackling critical water and food system challenges. Professor Patrick Doyle’s work in water purification exemplifies this impact. “Without J-WAFS, I would have never ventured into the field of water purification,” Doyle reflects. While previously focused on pharmaceutical manufacturing and drug delivery, exposure to J-WAFS-funded peers led him to apply his expertise in soft materials to water purification. “Both the funding and the J-WAFS community led me to be deeply engaged in understanding some of the key challenges in water purification and water security,” he explains.
Similarly, Professor Otto Cordero of the Department of Civil and Environmental Engineering (CEE) leveraged J-WAFS funding to pivot his research into aquaculture. Cordero explains that his first J-WAFS seed grant “has been extremely influential for my lab because it allowed me to take a step in a new direction, with no preliminary data in hand.” Cordero’s expertise is in microbial communities. He was previously unfamiliar with aquaculture, but he saw the relevance of microbial communities to the health of farmed aquatic organisms.
Supporting early-career faculty
New assistant professors at MIT have particularly benefited from J-WAFS funding and support. J-WAFS has played a transformative role in shaping the careers and research trajectories of many new faculty members by encouraging them to explore novel research areas, and in many instances providing their first MIT research grant.
Professor Ariel Furst reflects on how pivotal J-WAFS’ investment has been in advancing her research. “This was one of the first grants I received after starting at MIT, and it has truly shaped the development of my group’s research program,” Furst explains. With J-WAFS’ backing, her lab has achieved breakthroughs in chemical detection and remediation technologies for water. “The support of J-WAFS has enabled us to develop the platform funded through this work beyond the initial applications to the general detection of environmental contaminants and degradation of those contaminants,” she elaborates.
Karthish Manthiram, now a professor of chemical engineering and chemistry at Caltech, explains how J-WAFS’ early investment enabled him and other young faculty to pursue ambitious ideas. “J-WAFS took a big risk on us,” Manthiram reflects. His research on breaking the nitrogen triple bond to make ammonia for fertilizer was initially met with skepticism. However, J-WAFS’ seed funding allowed his lab to lay the groundwork for breakthroughs that later attracted significant National Science Foundation (NSF) support. “That early funding from J-WAFS has been pivotal to our long-term success,” he notes.
These stories underscore the broad impact of J-WAFS’ support for early-career faculty, and its commitment to empowering them to address critical global challenges and innovate boldly.
Fueling follow-on funding
J-WAFS seed grants enable faculty to explore nascent research areas, but external funding for continued work is usually necessary to achieve the full potential of these novel ideas. “It’s often hard to get funding for early stage or out-of-the-box ideas,” notes J-WAFS Director Professor John H. Lienhard V. “My hope, when I founded J-WAFS in 2014, was that seed grants would allow PIs [principal investigators] to prove out novel ideas so that they would be attractive for follow-on funding. And after 10 years, J-WAFS-funded research projects have brought more than $21 million in subsequent awards to MIT.”
Professor Retsef Levi led a seed study on how agricultural supply chains affect food safety, with a team of faculty spanning the MIT schools of Engineering and Science as well as the MIT Sloan School of Management. The team parlayed their seed grant research into a multi-million-dollar follow-on initiative. Levi reflects, “The J-WAFS seed funding allowed us to establish the initial credibility of our team, which was key to our success in obtaining large funding from several other agencies.”
Dave Des Marais was an assistant professor in the Department of CEE when he received his first J-WAFS seed grant. The funding supported his research on how plant growth and physiology are controlled by genes and interact with the environment. The seed grant helped launch his lab’s work on enhancing climate change resilience in agricultural systems. That work led to his Faculty Early Career Development (CAREER) Award from the NSF, a prestigious honor for junior faculty members. Now an associate professor, Des Marais continues to investigate the mechanisms and consequences of genomic and environmental interactions, supported by the five-year, $1,490,000 NSF grant. “J-WAFS provided essential funding to get my new research underway,” comments Des Marais.
Stimulating interdisciplinary collaboration
Des Marais’ seed grant was also key to developing new collaborations. He explains, “The J-WAFS grant supported me to develop a collaboration with Professor Caroline Uhler in EECS/IDSS [the Department of Electrical Engineering and Computer Science/Institute for Data, Systems, and Society] that really shaped how I think about framing and testing hypotheses. One of the best things about J-WAFS is facilitating unexpected connections among MIT faculty with diverse yet complementary skill sets.”
Professors A. John Hart of the Department of Mechanical Engineering and Benedetto Marelli of CEE also launched a new interdisciplinary collaboration with J-WAFS funding. They partnered to combine expertise in biomaterials, microfabrication, and manufacturing to create printed silk-based colorimetric sensors that detect food spoilage. “The J-WAFS Seed Grant provided a unique opportunity for multidisciplinary collaboration,” Hart notes.
Professors Stephen Graves in the MIT Sloan School of Management and Bishwapriya Sanyal in the Department of Urban Studies and Planning (DUSP) partnered to pursue new research on agricultural supply chains. With field work in Senegal, their J-WAFS-supported project brought together international development specialists and operations management experts to study how small firms and government agencies influence access to and uptake of irrigation technology by poorer farmers. “We used J-WAFS to spur a collaboration that would have been improbable without this grant,” they explain. Being part of the J-WAFS community also introduced them to researchers in Professor Amos Winter’s lab in the Department of Mechanical Engineering working on irrigation technologies for low-resource settings. DUSP doctoral candidate Mark Brennan notes, “We got to share our understanding of how irrigation markets and irrigation supply chains work in developing economies, and then we got to contrast that with their understanding of how irrigation system models work.”
Timothy Swager, professor of chemistry, and Rohit Karnik, professor of mechanical engineering and J-WAFS associate director, collaborated on a sponsored research project supported by Xylem, Inc. through the J-WAFS Research Affiliate program. The cross-disciplinary research, which targeted the development of ultra-sensitive sensors for toxic PFAS chemicals, was conceived following a series of workshops hosted by J-WAFS. Swager and Karnik were two of the participants, and their involvement led to the collaborative proposal that Xylem funded. “J-WAFS funding allowed us to combine Swager lab’s expertise in sensing with my lab’s expertise in microfluidics to develop a cartridge for field-portable detection of PFAS,” says Karnik. “J-WAFS has enriched my research program in so many ways,” adds Swager, who is now working to commercialize the technology.
Driving global collaboration and impact
J-WAFS has also helped MIT faculty establish and advance international collaboration and impactful global research. By funding and supporting projects that connect MIT researchers with international partners, J-WAFS has not only advanced technological solutions, but also strengthened cross-cultural understanding and engagement.
Professor Matthew Shoulders leads the inaugural J-WAFS Grand Challenge project. In response to the first J-WAFS call for “Grand Challenge” proposals, Shoulders assembled an interdisciplinary team based at MIT to enhance the climate resilience of agriculture by improving the carbon dioxide-fixing plant enzyme RuBisCO, the most inefficient aspect of photosynthesis. J-WAFS funded this high-risk/high-reward project following a competitive process that engaged external reviewers through several rounds of iterative proposal development. That technical feedback led the team to researchers with complementary expertise at the Australian National University. “Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists and field trial experts, yielding a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team will be able to make a concerted effort using the most modern, state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”
Professor Leon Glicksman and Research Engineer Eric Verploegen’s team designed a low-cost cooling chamber to preserve fruits and vegetables harvested by smallholder farmers with no access to cold chain storage. J-WAFS’ guidance motivated the team to prioritize practical considerations informed by local collaborators, ensuring market competitiveness. “As our new idea for a forced-air evaporative cooling chamber was taking shape, we continually checked that our solution was evolving in a direction that would be competitive with existing commercial alternatives in terms of cost, performance, and usability,” explains Verploegen, who is currently an MIT D-Lab affiliate. Following the initial seed grant, the team secured a J-WAFS Solutions commercialization grant, which Verploegen says “further motivated us to establish partnerships with local organizations capable of commercializing the technology earlier in the project than we might have done otherwise.” The team has since shared an open-source design as part of its commercialization strategy to maximize accessibility and impact.
Bringing corporate sponsored research opportunities to MIT faculty
J-WAFS also plays a role in driving private partnerships, enabling collaborations that bridge industry and academia. Through its Research Affiliate Program, for example, J-WAFS provides opportunities for faculty to collaborate with industry on sponsored research, helping to convert scientific discoveries into licensable intellectual property (IP) that companies can turn into commercial products and services.
J-WAFS introduced professor of mechanical engineering Alex Slocum to a challenge presented by its research affiliate company, Xylem: how to design a more energy-efficient pump for fluctuating flows. With centrifugal pumps consuming an estimated 6 percent of U.S. electricity annually, Slocum and his then-graduate student Hilary Johnson SM ’18, PhD ’22 developed an innovative variable volute mechanism that reduces energy usage. “Xylem envisions this as the first in a new category of adaptive pump geometry,” comments Johnson. The research produced a pump prototype and related IP that Xylem is working on commercializing. Johnson notes that these outcomes “would not have been possible without J-WAFS support and facilitation of the Xylem industry partnership.” Slocum adds, “J-WAFS enabled Hilary to begin her work on pumps, and Xylem sponsored the research to bring her to this point … where she has an opportunity to do far more than the original project called for.”
Swager speaks highly of the impact of corporate research sponsorship through J-WAFS on his research and technology translation efforts. His PFAS project with Karnik described above was also supported by Xylem. “Xylem was an excellent sponsor of our research. Their engagement and feedback were instrumental in advancing our PFAS detection technology, now on the path to commercialization,” Swager says.
Looking forward
What J-WAFS has accomplished is more than a collection of research projects; a decade of impact demonstrates how J-WAFS’ approach has been transformative for many MIT faculty members. As Professor Mathias Kolle puts it, his engagement with J-WAFS “had a significant influence on how we think about our research and its broader impacts.” He adds that it “opened my eyes to the challenges in the field of water and food systems and the many different creative ideas that are explored by MIT.”
This thriving ecosystem of innovation, collaboration, and academic growth around water and food research has not only helped faculty build interdisciplinary and international partnerships, but has also led to the commercialization of transformative technologies with real-world applications. C. Cem Taşan, the POSCO Associate Professor of Metallurgy who is leading a J-WAFS Solutions commercialization team that is about to launch a startup company, sums it up by noting, “Without J-WAFS, we wouldn’t be here at all.”
As J-WAFS looks to the future, its continued commitment — supported by the generosity of its donors and partners — builds on a decade of success enabling MIT faculty to advance water and food research that addresses some of the world’s most pressing challenges.