Feed aggregator
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, the TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Protecting Our Right to Sue Federal Agents Who Violate the Constitution
Federal agencies like Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights. For example, we have a First Amendment right to record on-duty police, including ICE and CBP, but federal agents are violating this right. Indeed, Alex Pretti was exercising this right shortly before federal agents shot and killed him. So were the many people who filmed agents shooting and killing Pretti and Renee Good – thereby creating valuable evidence that contradicts false claims by government leaders.
To protect our digital rights, we need the rule of law. When an armed agent of the government breaks the law, the civilian they injure must be made whole. This includes a lawsuit by the civilian (or their survivor) against the agent, seeking money damages to compensate them for their injury. Such systems of accountability encourage agents to follow the law, whereas impunity encourages them to break it.
Unfortunately, there is a gaping hole in the rule of law: when a federal agent violates the U.S. Constitution, it is increasingly difficult to sue them for damages. For these reasons, EFF supports new statutes to fill this hole, including California S.B. 747.
The Problem
In 1871, at the height of Reconstruction following the Civil War, Congress enacted a landmark statute empowering people to sue state and local officials who violated their constitutional rights. This was a direct response to state-sanctioned violence against Black people that continued despite the formal end of slavery. The law is codified today at 42 U.S.C. § 1983.
However, there is no comparable statute empowering people to sue federal officials who violate the U.S. Constitution.
So in 1971, the U.S. Supreme Court stepped into this gap, in a watershed case called Bivens v. Six Unknown Named Agents of Federal Bureau of Narcotics. The plaintiff alleged that federal narcotics agents unlawfully searched his home and used excessive force against him. Justice Brennan, writing for a six-Justice majority of the Court, ruled that “damages may be obtained for injuries consequent upon a violation of the Fourth Amendment by federal officials.” He explained: “Historically, damages have been regarded as the ordinary remedy for an invasion of personal interests in liberty.” Further: “The very essence of civil liberty certainly consists of the right of every individual to claim the protection of the laws, whenever he receives an injury.”
Subsequently, the Court expanded Bivens in cases where federal officials violated the U.S. Constitution by discriminating in a workplace, and by failing to provide medical care in a prison.
In more recent years, however, the Court has whittled Bivens down to increasing irrelevance. For example, the Court has rejected damages litigation against federal officials who allegedly violated the U.S. Constitution by strip searching a detained person, and by shooting a person located across the border.
In 2022, the Court by a six-to-three vote rejected a damages claim against a Border Patrol agent who used excessive force when investigating alleged smuggling. In an opinion concurring in the judgment, Justice Gorsuch conceded that he “struggle[d] to see how this set of facts differs meaningfully from those in Bivens itself.” But then he argued that Bivens should be overruled because it supposedly “crossed the line” against courts “assuming legislative authority.”
Last year, the Court unanimously declined to extend Bivens to excessive force in a prison.
The Solution
At this juncture, legislatures must solve the problem. We join calls for Congress to enact a federal statute, parallel to the one it enacted during Reconstruction, to empower people to sue federal officials (and not just state and local officials) who violate the U.S. Constitution.
In the meantime, it is heartening to see state legislatures step forward to fill this hole. One such effort is California S.B. 747, which EFF is proud to endorse.
State laws like this one do not violate the Supremacy Clause of the U.S. Constitution, which provides that the Constitution is the supreme law of the land. In the words of one legal explainer, this kind of state law “furthers the ultimate supremacy of the federal Constitution by helping people vindicate their fundamental constitutional rights.”
This kind of state law goes by many names. The author of S.B. 747, California Senator Scott Wiener, calls it the “No Kings Act.” Protect Democracy, which wrote a model bill, calls it the “Universal Constitutional Remedies Act.” The originator of this idea, Professor Akhil Amar, calls it a “converse 1983”: instead of Congress authorizing suit against state officials for violating the U.S. Constitution, states would authorize suit against federal officials for doing the same thing.
We call these laws a commonsense way to protect the rule of law, which is a necessary condition to preserve our digital rights. EFF has long supported effective judicial remedies, including support for nationwide injunctions and private rights of action, and opposition to qualified immunity.
We also support federal and state legislation to guarantee our right to sue federal agents for damages when they violate the U.S. Constitution.
Smart AI Policy Means Examining Its Real Harms and Benefits
The phrase "artificial intelligence" has been around for a long time, covering everything from computers with "brains"—think Data from Star Trek or HAL 9000 from 2001: A Space Odyssey—to the autocomplete function that too often has you sending emails to the wrong person. It's a term that sweeps a wide array of uses into it—some well-established, others still being developed.
Recent news shows us a rapidly expanding catalog of potential harms that may result from companies pushing AI into every new feature and aspect of public life—like the automation of bias that follows from relying on a backward-looking technology to make consequential decisions about people's housing, employment, education, and so on. Complicating matters, the computation needed for some AI services requires vast amounts of water and electricity, leading to sometimes difficult questions about whether the increased fossil fuel use or consumption of water is justified.
We are also inundated with advertisements and exhortations to use the latest AI-powered apps, and with hype insisting AI can solve any problem.
Obscured by this hype, there are some real examples of AI proving to be a helpful tool. For example, machine learning is especially useful for scientists looking at everything from the inner workings of our biology to cosmic bodies in outer space. AI tools can also improve accessibility for people with disabilities, facilitate police accountability initiatives, and more. There are reasons why these problems are amenable to machine learning and why excitement over these uses shouldn’t translate into a perception that just any language model or AI technology possesses expert knowledge or can solve whatever problem it’s marketed as solving.
EFF has long fought for sensible, balanced tech policies because we’ve seen how regulators can focus entirely on use cases they don’t like (such as the use of encryption to hide criminal behavior) and cause enormous collateral harm to other uses (such as using encryption to hide dissident resistance). Similarly, calls to completely preempt state regulation of AI would thwart important efforts to protect people from the real harms of AI technologies. Context matters. Large language models (LLMs) and the tools that rely on them are not magic wands—they are general-purpose technologies. And if we want to regulate those technologies in a way that doesn’t shut down beneficial innovations, we have to focus on the impact(s) of a given use or tool, by a given entity, in a specific context. Then, and only then, can we even hope to figure out what to do about it.
So let’s look at the real-world landscape.
AI’s Real and Potential Harms
Thinking ahead about potential negative uses of AI helps us spot risks. Too often, the corporations developing AI tools—as well as governments that use them—lose sight of the real risks, or don’t care. For example, companies and governments use AI to do all sorts of things that hurt people, from price collusion to mass surveillance. AI should never be part of a decision about whether a person will be arrested, deported, placed into foster care, or denied access to important government benefits like disability payments or medical care.
There is too much at stake, and governments have a duty to make responsible, fair, and explainable decisions, which AI can’t reliably do yet. Why? Because AI tools are designed to identify and reproduce patterns in data that they are “trained” on. If you train AI on records of biased government decisions, such as records of past arrests, it will “learn” to replicate those discriminatory decisions.
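This dynamic can be seen even in a toy model. The sketch below is a minimal, hedged illustration with entirely synthetic data: two groups have identical underlying behavior, but the "historical" labels reflect biased past decisions against group B. Any pattern-matching learner trained on those labels (here, a trivial per-group frequency model standing in for a real classifier) dutifully reproduces the disparity:

```python
import random

random.seed(0)

# Hypothetical synthetic "historical decision" records: each has a group
# feature (a stand-in for a protected attribute or a proxy for one) and a
# label reflecting biased past decisions, not true underlying behavior.
def make_records(n=10_000, bias=0.3):
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        risk = random.random()            # true underlying behavior: identical distribution for both groups
        p_flag = risk * 0.5 + (bias if group == "B" else 0.0)
        label = random.random() < p_flag  # biased historical decision
        records.append((group, label))
    return records

# A minimal "model": the empirical flag rate per group. This is exactly the
# pattern any statistical learner recovers when group membership (or a proxy
# for it) is a predictive feature in its training data.
def fit(records):
    rates = {}
    for g in ("A", "B"):
        flags = [label for grp, label in records if grp == g]
        rates[g] = sum(flags) / len(flags)
    return rates

model = fit(make_records())
print(f"predicted flag rate, group A: {model['A']:.2f}")
print(f"predicted flag rate, group B: {model['B']:.2f}")
# The learned rates differ sharply even though true behavior was identical:
# the model faithfully reproduces the bias baked into its training data.
```

Nothing in the training step is malicious; the disparity in the output is simply the most predictive pattern in the biased records.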
And simply having a human in the decision chain will not fix this foundational problem. Studies have shown that having a human “in the loop” doesn’t adequately correct for AI bias, both because the human tends to defer to the AI and because the AI can provide cover for a biased human to ratify decisions that agree with their biases and override the AI at other times.
These biases don’t just arise in obvious contexts, like when a government agency is making decisions about people. They can also arise in equally life-affecting contexts like medical care, whenever AI is used for analysis in a setting with systemic disparities and whenever the costs of an incorrect decision fall on someone other than those deciding whether to use the tool. For example, dermatology has historically underserved people of color because of a focus on white skin, and that bias carries over into AI tools trained on the existing image data.
These kinds of errors are difficult to detect and correct because it’s hard or even impossible to understand how an AI tool arrives at individual decisions. These tools can sometimes find and apply patterns that a human being wouldn't even consider, such as basing diagnostic decisions on which hospital a scan was done at. Or determining that malignant tumors are the ones where there is a ruler next to them—something that a human would automatically exclude from their evaluation of an image. Unlike a human, AI does not know that the ruler is not part of the cancer.
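The ruler problem is a classic "shortcut feature" failure, and it can be reproduced with a few lines of synthetic data. In this hedged toy sketch (all features and numbers are invented for illustration), a ruler happens to appear in every malignant training image, so a learner that grabs the single most predictive feature grabs the ruler; when deployment data breaks that accidental correlation, the shortcut collapses to chance while a genuinely informative but noisy feature keeps working:

```python
import random

random.seed(2)

# Hypothetical setup: in the training set, malignant cases were photographed
# with a ruler, so "ruler in frame" is a perfect but spurious predictor.
def sample(ruler_correlated):
    malignant = random.random() < 0.5
    if ruler_correlated:
        ruler = malignant                       # perfect spurious correlation
    else:
        ruler = random.random() < 0.5           # correlation broken at deployment
    # A genuinely informative but noisy feature (e.g., lesion size).
    size = random.gauss(1.0 if malignant else 0.0, 1.5)
    return (ruler, size, malignant)

train = [sample(ruler_correlated=True) for _ in range(5000)]
deploy = [sample(ruler_correlated=False) for _ in range(5000)]

# The shortcut a naive learner would latch onto: the ruler is noiseless in
# training, so it looks strictly better than the real (noisy) signal.
def ruler_model(ruler, size):
    return ruler

# The genuine-but-noisy alternative.
def size_model(ruler, size):
    return size > 0.5

def accuracy(model, data):
    return sum(model(r, s) == m for r, s, m in data) / len(data)

print(f"ruler shortcut, training:   {accuracy(ruler_model, train):.2f}")   # perfect
print(f"ruler shortcut, deployment: {accuracy(ruler_model, deploy):.2f}")  # chance
print(f"size feature,   deployment: {accuracy(size_model, deploy):.2f}")   # modest but real
```

The shortcut looks flawless on held-out data from the same biased collection process, which is why these errors survive ordinary validation and only surface in the field.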
Auditing and correcting for these kinds of mistakes is vital, but in some cases, might negate any sort of speed or efficiency arguments made in favor of the tool. We all understand that the more important a decision is, the more guardrails against disaster need to be in place. For many AI tools, those don't exist yet. Sometimes, the stakes will be too high to justify the use of AI. In general, the higher the stakes, the less this technology should be used.
We also need to acknowledge the risk of over-reliance on AI, at least as it is currently being released. We've seen shades of a similar problem before online (see: "Dr. Google"), but the speed and scale of AI use—and the increasing market incentive to shoe-horn “AI” into every business model—have compounded the issue.
Moreover, AI may reinforce a user’s pre-existing beliefs—even if they’re wrong or unhealthy. Many users may not understand how AI works, what it is programmed to do, and how to fact-check it. Companies have chosen to release these tools widely without adequate information about how to use them properly and what their limitations are. Instead, they market them as easy and reliable. Worse, some companies also resist transparency in the name of trade secrets and reducing liability, making it harder for anyone to evaluate AI-generated answers.
Other considerations that may weigh against AI uses are its environmental impact and potential labor market effects. Delving into these is beyond the scope of this post, but they are important factors in determining whether AI is doing good somewhere and whether any benefits from AI are equitably distributed.
Research into the extent of AI harms and means of avoiding them is ongoing, but it should be part of the analysis.
AI’s Real and Potential Benefits
However harmful AI technologies can sometimes be, in the right hands and circumstances, they can do things that humans simply can’t. Machine learning technology has powered search tools for over a decade. It’s undoubtedly useful for machines to help human experts pore through vast bodies of literature and data to find starting points for research—things that no number of research assistants could do in a single year. If an actual expert is involved and has a strong incentive to reach valid conclusions, the weaknesses of AI are less significant at the early stage of generating research leads. Many of the following examples fall into this category.
Machine learning differs from traditional statistics in that the analysis doesn’t make assumptions about what factors are significant to the outcome. Rather, the machine learning process computes which patterns in the data have the most predictive power and then relies upon them, often using complex formulae that are unintelligible to humans. These aren’t discoveries of laws of nature—AI is bad at generalizing that way and coming up with explanations. Rather, they’re descriptions of what the AI has already seen in its data set.
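A toy contrast makes this concrete. In the hedged sketch below (synthetic data, invented rule), the outcome depends on a nonlinear interaction between two measurements that no single factor explains. A "traditional" single-factor threshold model does no better than chance, while a pattern-based learner (a from-scratch 1-nearest-neighbour classifier, used here as the simplest stand-in for a machine learning method) recovers the pattern without ever being told its form:

```python
import random

random.seed(1)

# The underlying rule, unknown to the learner: an XOR-like interaction
# between two measurements. Neither measurement predicts anything alone.
def true_rule(x, y):
    return (x > 0.5) != (y > 0.5)

train = [(x, y, true_rule(x, y))
         for x, y in ((random.random(), random.random()) for _ in range(2000))]
test = [(random.random(), random.random()) for _ in range(500)]

# "Traditional" single-factor model: a threshold on the first measurement,
# the kind of assumption a simple pre-specified analysis might build in.
def linear_guess(x, y):
    return x > 0.5

# Pattern-based learner: 1-nearest-neighbour, which just memorises the data
# and interpolates whatever structure it contains, with no assumed form.
def nn_guess(x, y):
    _, label = min(((xt - x) ** 2 + (yt - y) ** 2, lab) for xt, yt, lab in train)
    return label

def accuracy(model):
    return sum(model(x, y) == true_rule(x, y) for x, y in test) / len(test)

print(f"single-factor model accuracy: {accuracy(linear_guess):.2f}")  # near chance
print(f"nearest-neighbour accuracy:   {accuracy(nn_guess):.2f}")      # near perfect
```

Note what the nearest-neighbour model has not done: it has no explanation and no general law, only a high-fidelity description of the patterns in its data set, which is exactly the strength and the limit described above.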
To be clear, we don't endorse any products and recognize initial results are not proof of ultimate success. But these cases show us the difference between something AI can actually do versus what hype claims it can do.
Researchers are using AI to discover better alternatives to today’s lithium-ion batteries, which require large amounts of toxic, expensive, and highly combustible materials. AI is rapidly advancing battery development by allowing researchers to analyze millions of candidate materials and to generate new ones. New battery technologies discovered with the help of AI have a long way to go before they can power our cars and computers, but this field has come further in the past few years than it had in a long time.
AI Advancements in Scientific and Medical Research
AI tools can also help facilitate weather prediction. AI forecasting models are less computationally intensive and often more reliable than traditional tools based on simulating the physical thermodynamics of the atmosphere. Questions remain, though, about how they will handle especially extreme events or systemic climate changes over time.
For example:
- The National Oceanic and Atmospheric Administration has developed new machine learning models to improve weather prediction, including a first-of-its-kind hybrid system that uses an AI model in concert with a traditional physics-based model to deliver more accurate forecasts than either model does on its own.
- Several models were used to forecast a recent hurricane. Google DeepMind’s AI system performed the best, even beating official forecasts from the U.S. National Hurricane Center (which now uses DeepMind’s AI model).
Researchers are using AI to help develop new medical treatments:
- Deep learning tools, like the Nobel Prize-winning model AlphaFold, are helping researchers understand protein folding. Over 3 million researchers have used AlphaFold to analyze biological processes and design drugs that target disease-causing malfunctions in those processes.
- Researchers used machine learning to simulate and computationally test a large range of new antibiotic candidates, in hopes that they will help treat drug-resistant bacteria, a growing threat that kills millions of people each year.
- Researchers used AI to identify a new treatment for idiopathic pulmonary fibrosis, a progressive lung disease with few treatment options. The new treatment has successfully completed a Phase IIa clinical trial. Such drugs still need to be proven safe and effective in larger clinical trials and gain FDA approval before they can help patients, but this new treatment for pulmonary fibrosis could be the first to reach that milestone.
- Machine learning has been used for years to aid in vaccine development—including the development of the first COVID-19 vaccines—accelerating the process by rapidly identifying potential vaccine targets for researchers to focus on.
AI technologies can improve accessibility for people with disabilities. But, as with many uses of this technology, safeguards are essential. Many tools lack adequate privacy protections, aren’t designed for disabled users, and can even harbor bias against people with disabilities. Inclusive design, privacy, and anti-bias safeguards are crucial. But here are two very interesting examples:
- AI voice generators are giving people their voices back, after losing their ability to speak. For example, while serving in Congress, Rep. Jennifer Wexton developed a debilitating neurological condition that left her unable to speak. She used her cloned voice to deliver a speech from the floor of the House of Representatives advocating for disability rights.
- Those who are blind or low-vision, as well as those who are deaf or hard-of-hearing, have benefited from accessibility tools while also discussing their limitations and drawbacks. At present, AI tools often provide information in a more easily accessible format than traditional web search tools and the many websites that are difficult to navigate for users who rely on a screen reader. Other tools can help blind and low-vision users navigate and understand the world around them by providing descriptions of their surroundings. While these visual descriptions may not always be as good as the ones a human may provide, they can still be useful in situations when users can’t or don’t want to ask another human to describe something. For more on this, check out our recent podcast episode on “Building the Tactile Internet.”
When there is a lot of data to comb through, as with police accountability, AI is very useful for researchers and policymakers:
- The Human Rights Data Analysis Group used LLMs to analyze millions of pages of records regarding police misconduct. This is essentially the reverse of harmful use cases relating to surveillance: when the power to rapidly analyze large amounts of data is used by the public to scrutinize the state, there is a potential to reveal abuses of power and, given the power imbalance, very little risk that undeserved consequences will befall those being studied.
- An EFF client, Project Recon, used an AI system to review massive volumes of transcripts of prison parole hearings to identify biased parole decisions. This innovative use of technology to identify systemic biases, including racial disparities, is the type of AI use we should support and encourage.
It is not a coincidence that the best examples of positive uses of AI come in places where experts, with access to infrastructure to help them use the technology and the requisite experience to evaluate the results, are involved. Moreover, academic researchers are already accustomed to explaining what they have done and being transparent about it—and it has been hard won knowledge that ethics are a vital step in work like this.
Nor is it a coincidence that other beneficial uses involve specific, discrete solutions to problems faced by those whose needs are often unmet by traditional channels or vendors. The ultimate outcome is beneficial, but it is moderated by human expertise and/or tailored to specific needs.
Context Matters
It can be very tempting—and easy—to make a blanket determination about something, especially when the stakes seem so high. But we urge everyone—users, policymakers, the companies themselves—to cut through the hype. In the meantime, EFF will continue to work against the harms caused by AI while also making sure that beneficial uses can advance.
Brian Hedden named co-associate dean of Social and Ethical Responsibilities of Computing
Brian Hedden PhD ’12 has been appointed co-associate dean of the Social and Ethical Responsibilities of Computing (SERC) at MIT, a cross-cutting initiative in the MIT Schwarzman College of Computing, effective Jan. 16.
Hedden is a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS). He joined the MIT faculty last fall from the Australian National University and the University of Sydney, where he previously served as a faculty member. He earned his BA from Princeton University and his PhD from MIT, both in philosophy.
“Brian is a natural and compelling choice for SERC, as a philosopher whose work speaks directly to the intellectual challenges facing education and research today, particularly in computing and AI. His expertise in epistemology, decision theory, and ethics addresses questions that have become increasingly urgent in an era defined by information abundance and artificial intelligence. His scholarship exemplifies the kind of interdisciplinary inquiry that SERC exists to advance,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.
Hedden’s research focuses on how we ought to form beliefs and make decisions, and it explores how philosophical thinking about rationality can yield insights into contemporary ethical issues, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics such as collective action problems, legal standards of proof, algorithmic fairness, and political polarization.
Joining co-associate dean Nikos Trichakis, the J.C. Penney Professor of Management at the MIT Sloan School of Management, Hedden will help lead SERC and advance the initiative’s ongoing research, teaching, and engagement efforts. He succeeds professor of philosophy Caspar Hare, who stepped down at the conclusion of his three-year term on Sept. 1, 2025.
Since its inception in 2020, SERC has launched a range of programs and activities designed to cultivate responsible “habits of mind and action” among those who create and deploy computing technologies, while fostering the development of technologies in the public interest.
The SERC Scholars Program invites undergraduate and graduate students to work alongside postdoctoral mentors to explore interdisciplinary ethical challenges in computing. The initiative also hosts an annual prize competition that challenges MIT students to envision the future of computing, publishes a twice-yearly series of case studies, and collaborates on coordinated curricular materials, including active-learning projects, homework assignments, and in-class demonstrations. In 2024, SERC introduced a new seed grant program to support MIT researchers investigating ethical technology development; to date, two rounds of grants have been awarded to 24 projects.
Antonio Torralba, three MIT alumni named 2025 ACM fellows
Antonio Torralba, Delta Electronics Professor of Electrical Engineering and Computer Science and faculty head of artificial intelligence and decision-making at MIT, has been named to the 2025 cohort of Association for Computing Machinery (ACM) Fellows. He shares the honor of an ACM Fellowship with three MIT alumni: Eytan Adar ’97, MEng ’98; George Candea ’97, MEng ’98; and Gookwon Edward Suh SM ’01, PhD ’05.
A principal investigator within both the Computer Science and Artificial Intelligence Laboratory and the Center for Brains, Minds, and Machines, Torralba received his BS in telecommunications engineering from Telecom BCN, Spain, in 1994, and a PhD in signal, image, and speech processing from the Institut National Polytechnique de Grenoble, France, in 2000. At different points in his MIT career, he has been director of both the MIT Quest for Intelligence (now the MIT Siegel Family Quest for Intelligence) and the MIT-IBM Watson AI Lab.
Torralba’s research focuses on computer vision, machine learning, and human visual perception; as he puts it, “I am interested in building systems that can perceive the world like humans do.” Alongside Phillip Isola and William Freeman, he recently co-authored “Foundations of Computer Vision,” an 800-plus page textbook exploring the foundations and core principles of the field.
Among other awards and recognitions, he is the recipient of the 2008 National Science Foundation Career award; the 2010 J. K. Aggarwal Prize from the International Association for Pattern Recognition; the 2017 Frank Quick Faculty Research Innovation Fellowship; the Louis D. Smullin (’39) Award for Teaching Excellence; and the 2020 PAMI Mark Everingham Prize. In 2021, he was awarded the inaugural Thomas Huang Memorial Prize by the Pattern Analysis and Machine Intelligence Technical Committee and was named a fellow of the Association for the Advancement of Artificial Intelligence. In 2022, he received an honorary doctoral degree from the Universitat Politècnica de Catalunya — BarcelonaTech (UPC).
ACM fellows, the highest honor bestowed by the professional organization, are registered members of the society selected by their peers for outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.
3 Questions: Using AI to accelerate the discovery and design of therapeutic drugs
In the pursuit of solutions to complex global challenges including disease, energy demands, and climate change, scientific researchers, including at MIT, have turned to artificial intelligence, and to quantitative analysis and modeling, to design and construct engineered cells with novel properties. The engineered cells can be programmed to become new therapeutics — battling, and perhaps eradicating, diseases.
James J. Collins is one of the founders of the field of synthetic biology, and is also a leading researcher in systems biology, the interdisciplinary approach that uses mathematical analysis and modeling of complex systems to better understand biological systems. His research has led to the development of new classes of diagnostics and therapeutics, including in the detection and treatment of pathogens like Ebola, Zika, SARS-CoV-2, and antibiotic-resistant bacteria. Collins, the Termeer Professor of Medical Engineering and Science and professor of biological engineering at MIT, is a core faculty member of the Institute for Medical Engineering and Science (IMES), the director of the MIT Abdul Latif Jameel Clinic for Machine Learning in Health, as well as an institute member of the Broad Institute of MIT and Harvard, and core founding faculty at the Wyss Institute for Biologically Inspired Engineering, Harvard.
In this Q&A, Collins speaks about his latest work and goals for this research.
Q: You’re known for collaborating with colleagues across MIT, and at other institutions. How have these collaborations and affiliations helped you with your research?
A: Collaboration has been central to the work in my lab. At the MIT Jameel Clinic for Machine Learning in Health, I formed a collaboration with Regina Barzilay [the Delta Electronics Professor in the MIT Department of Electrical Engineering and Computer Science and affiliate faculty member at IMES] and Tommi Jaakkola [the Thomas Siebel Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society] to use deep learning to discover new antibiotics. This effort combined our expertise in artificial intelligence, network biology, and systems microbiology, leading to the discovery of halicin, a potent new antibiotic effective against a broad range of multidrug-resistant bacterial pathogens. Our results were published in Cell in 2020 and showcased the power of bringing together complementary skill sets to tackle a global health challenge.
At the Wyss Institute, I’ve worked closely with Donald Ingber [the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Hansjörg Wyss Professor of Biologically Inspired Engineering at Harvard], leveraging his organs-on-chips technology to test the efficacy of AI-discovered and AI-generated antibiotics. These platforms allow us to study how drugs behave in human tissue-like environments, complementing traditional animal experiments and providing a more nuanced view of their therapeutic potential.
The common thread across our many collaborations is the ability to combine computational predictions with cutting-edge experimental platforms, accelerating the path from ideas to validated new therapies.
Q: Your research has led to many advances in designing novel antibiotics, using generative AI and deep learning. Can you talk about some of the advances you’ve been a part of in the development of drugs that can battle multidrug-resistant pathogens, and what you see on the horizon for breakthroughs in this arena?
A: In 2025, our lab published a study in Cell demonstrating how generative AI can be used to design completely new antibiotics from scratch. We used genetic algorithms and variational autoencoders to generate millions of candidate molecules, exploring both fragment-based designs and entirely unconstrained chemical space. After computational filtering, retrosynthetic modeling, and medicinal chemistry review, we synthesized 24 compounds and tested them experimentally. Seven showed selective antibacterial activity. One lead, NG1, was highly narrow-spectrum, eradicating multi-drug-resistant Neisseria gonorrhoeae, including strains resistant to first-line therapies, while sparing commensal species. Another, DN1, targeted methicillin-resistant Staphylococcus aureus (MRSA) and cleared infections in mice through broad membrane disruption. Both were non-toxic and showed low rates of resistance.
Looking ahead, we are using deep learning to design antibiotics with drug-like properties that make them stronger candidates for clinical development. By integrating AI with high-throughput biological testing, we aim to accelerate the discovery and design of antibiotics that are novel, safe, and effective, ready for real-world therapeutic use. This approach could transform how we respond to drug-resistant bacterial pathogens, moving from a reactive to a proactive strategy in antibiotic development.
Q: You’re a co-founder of Phare Bio, a nonprofit organization that uses AI to discover new antibiotics, and the Collins Lab has helped to launch the Antibiotics-AI Project in collaboration with Phare Bio. Can you tell us more about what you hope to accomplish with these collaborations, and how they tie back to your research goals?
A: We founded Phare Bio as a nonprofit to take the most promising antibiotic candidates emerging from the Antibiotics-AI Project at MIT and advance them toward the clinic. The idea is to bridge the gap between discovery and development by collaborating with biotech companies, pharmaceutical partners, AI companies, philanthropies, other nonprofits, and even nation states. Akhila Kosaraju has been doing a brilliant job leading Phare Bio, coordinating these efforts and moving candidates forward efficiently.
Recently, we received a grant from ARPA-H to use generative AI to design 15 new antibiotics and develop them as pre-clinical candidates. This project builds directly on our lab’s research, combining computational design with experimental testing to create novel antibiotics that are ready for further development. By integrating generative AI, biology, and translational partnerships, we hope to create a pipeline that can respond more rapidly to the global threat of antibiotic resistance, ultimately delivering new therapies to patients who need them most.
3D-printed metamaterials that stretch and fail by design
Metamaterials — materials whose properties are primarily dictated by their internal microstructure, and not their chemical makeup — have been redefining the engineering materials space for the last decade. To date, however, most metamaterials have been lightweight options designed for stiffness and strength.
New research from the MIT Department of Mechanical Engineering introduces a computational design framework to support the creation of a new class of soft, compliant, and deformable metamaterials. These metamaterials, termed 3D woven metamaterials, consist of building blocks that are composed of intertwined fibers that self-contact and entangle to endow the material with unique properties.
“Soft materials are required for emerging engineering challenges in areas such as soft robotics, biomedical devices, or even for wearable devices and functional textiles,” explains Carlos Portela, the Robert N. Noyce Career Development Professor and associate professor of mechanical engineering.
In an open-access paper published Jan. 26 in the journal Nature Communications, researchers from Portela’s lab provide a universal design framework that generates complex 3D woven metamaterials with a wide range of properties. The work also provides open-source code that allows users to create designs to fit specifications and generate a file for printing or simulating the material using a 3D printer.
“Normal knitting or weaving have been constrained by the hardware for hundreds of years — there’s only a few patterns that you can make clothes out of, for example — but that changes if hardware is no longer a limitation,” Portela says. “With this framework, you can come up with interesting patterns that completely change the way the textile is going to behave.”
Possible applications include wearable sensors that move with human skin, fabrics for aerospace or defense needs, flexible electronic devices, and a variety of other printable textiles.
The team developed general design rules — in the form of an algorithm — that first provide a graph representation of the metamaterial. The attributes of this graph eventually dictate how each fiber is placed and connected within the metamaterial. The fundamental building blocks are woven unit cells that can be functionally graded via control of various design parameters, such as the radius and pitch of the fibers that make up the woven struts.
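The graph-based workflow described above can be illustrated with a toy sketch. This is not the authors’ actual framework or API; the structure, names, and the linear grading rule are hypothetical, meant only to show how a unit cell might be encoded as a graph whose edges carry design parameters such as fiber radius and pitch, and how those parameters could be functionally graded across a lattice.

```python
# Hypothetical sketch: a woven unit cell as a graph whose edges carry
# the design parameters mentioned in the article (fiber radius and pitch).
# Names and structure are illustrative, not the paper's actual framework.

def make_unit_cell(radius, pitch):
    """Build a toy graph for one woven unit cell.

    Nodes are corner points of the cell; each edge is a fiber segment
    (a "woven strut") annotated with its radius and pitch.
    """
    nodes = ["n0", "n1", "n2", "n3"]
    edges = [
        ("n0", "n1", {"radius": radius, "pitch": pitch}),
        ("n1", "n2", {"radius": radius, "pitch": pitch}),
        ("n2", "n3", {"radius": radius, "pitch": pitch}),
        ("n3", "n0", {"radius": radius, "pitch": pitch}),
    ]
    return {"nodes": nodes, "edges": edges}

def grade_cells(cells, radius_fn):
    """Functionally grade a row of cells: vary fiber radius with position,
    so the lattice is softer in one place and stiffer in another."""
    for i, cell in enumerate(cells):
        r = radius_fn(i)
        for _, _, attrs in cell["edges"]:
            attrs["radius"] = r
    return cells

row = [make_unit_cell(radius=1.0, pitch=4.0) for _ in range(5)]
# Linearly thicken fibers from one end of the row to the other.
row = grade_cells(row, radius_fn=lambda i: 1.0 + 0.25 * i)
```

A real implementation would then hand such a graph to a geometry generator that places helical fibers along each strut and exports a printable or simulatable file.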
“Because this framework allows these metamaterials to be tailored to be softer in one place and stiffer in another, or to change shape as they stretch, they can exhibit an exceptional range of behaviors that would be hard to design using conventional soft materials,” says Molly Carton, lead author of the study. Carton, a former postdoc in Portela’s lab, is now an assistant research professor in mechanical engineering at the University of Maryland.
Further, the simulation framework also allows users to predict the deformation response of these materials, capturing complex phenomena such as self-contact within fibers and entanglement, and design to predict and resist deformation or tearing patterns.
“The most exciting part was being able to tailor failure in these materials and design arbitrary combinations,” says Portela. “Based on the simulations, we were able to fabricate these spatially varying geometries and experiment on them at the microscale.”
This work is the first to provide a tool for users to design, print, and simulate an emerging class of metamaterials that are extensible and tough. It also demonstrates that through tuning of geometric parameters, users can control and predict how these materials will deform and fail, and presents several new design building blocks that substantially expand the property space of woven metamaterials.
“Until now, these complex 3D lattices have been designed manually, painstakingly, which limits the number of designs that anyone has tested,” says Carton. “We’ve been able to describe how these woven lattices work and use that to create a design tool for arbitrary woven lattices. With that design freedom, we’re able to design the way that a lattice changes shape as it stretches, how the fibers entangle and knot with each other, as well as how it tears when stretched to the limit.”
Carton says she believes the framework will be useful across many disciplines. “In releasing this framework as a software tool, our hope is that other researchers will explore what’s possible using woven lattices and find new ways to use this design flexibility,” she says. “I’m looking forward to seeing what doors our work can open.”
The paper, “Design framework for programmable three-dimensional woven metamaterials,” is available now in the journal Nature Communications. Its other MIT-affiliated authors are James Utama Surjadi, Bastien F. G. Aymon, and Ling Xu.
This work was performed, in part, through the use of MIT.nano’s fabrication and characterization facilities.
Terahertz microscope reveals the motion of superconducting electrons
You can tell a lot about a material based on the type of light you shine at it: Optical light illuminates a material’s surface, while X-rays reveal its internal structures and infrared captures a material’s radiating heat.
Now, MIT physicists have used terahertz light to reveal inherent, quantum vibrations in a superconducting material, which have not been observable until now.
Terahertz light is a form of energy that lies between microwaves and infrared radiation on the electromagnetic spectrum. It oscillates over a trillion times per second — just the right pace to match how atoms and electrons naturally vibrate inside materials. Ideally, this makes terahertz light the perfect tool to probe these motions.
But while the frequency is right, the wavelength — the distance over which the wave repeats in space — is not. Terahertz waves have wavelengths hundreds of microns long. Because the smallest spot that any kind of light can be focused into is limited by its wavelength, terahertz beams cannot be tightly confined. As a result, a focused terahertz beam is physically too large to interact effectively with microscopic samples, simply washing over these tiny structures without revealing fine detail.
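The wavelength scale quoted above follows directly from the relation wavelength = (speed of light) / frequency, which a back-of-the-envelope calculation confirms:

```python
# Back-of-the-envelope check of the wavelength scale quoted above:
# wavelength = (speed of light) / frequency.
c = 3.0e8                      # speed of light, m/s
f = 1.0e12                     # 1 THz, oscillations per second
wavelength_m = c / f
wavelength_um = wavelength_m * 1e6
print(wavelength_um)           # 300.0 -- i.e., hundreds of microns, as stated
```

At 1 THz the wavelength is about 300 microns, roughly 30 times larger than a 10-micron sample, which is why an ordinary focused terahertz beam mostly misses such a sample.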
In a paper appearing today in the journal Nature, the scientists report that they have developed a new terahertz microscope that compresses terahertz light down to microscopic dimensions. This pinpoint of terahertz light can resolve quantum details in materials that were previously inaccessible.
The team used the new microscope to send terahertz light into a sample of bismuth strontium calcium copper oxide, or BSCCO (pronounced “BIS-co”) — a material that superconducts at relatively high temperatures. With the terahertz scope, the team observed a frictionless “superfluid” of superconducting electrons that were collectively jiggling back and forth at terahertz frequencies within the BSCCO material.
“This new microscope now allows us to see a new mode of superconducting electrons that nobody has ever seen before,” says Nuh Gedik, the Donner Professor of Physics at MIT.
By using terahertz light to probe BSCCO and other superconductors, scientists can gain a better understanding of properties that could lead to long-coveted room-temperature superconductors. The new microscope can also help to identify materials that emit and receive terahertz radiation. Such materials could be the foundation of future wireless, terahertz-based communications that could potentially transmit more data at faster rates than today’s microwave-based communications.
“There’s a huge push to take Wi-Fi or telecommunications to the next level, to terahertz frequencies,” says Alexander von Hoegen, a postdoc in MIT’s Materials Research Laboratory and lead author of the study. “If you have a terahertz microscope, you could study how terahertz light interacts with microscopically small devices that could serve as future antennas or receivers.”
In addition to Gedik and von Hoegen, the study’s MIT co-authors include Tommy Tai, Clifford Allington, Matthew Yeung, Jacob Pettine, Alexander Kossak, Byunghun Lee, and Geoffrey Beach, along with collaborators at Harvard University, the Max Planck Institute for the Structure and Dynamics of Matter, the Max Planck Institute for the Physics of Complex Systems and the Brookhaven National Lab.
Hitting a limit
Terahertz light is a promising yet largely untapped imaging tool. It occupies a unique spectral “sweet spot”: Like microwaves, radio waves, and visible light, terahertz radiation is nonionizing and therefore does not carry enough energy to cause harmful radiation effects, making it safe for use in humans and biological tissues. At the same time, much like X-rays, terahertz waves can penetrate a wide range of materials, including fabric, wood, cardboard, plastic, ceramics, and even thin brick walls.
Owing to these distinctive properties, terahertz light is being actively explored for applications in security screening, medical imaging, and wireless communications. In contrast, far less effort has been devoted to applying terahertz radiation to microscopy and the illumination of microscopic phenomena. The primary reason is a fundamental limitation shared by all forms of light: the diffraction limit, which restricts spatial resolution to roughly the wavelength of the radiation used.
With wavelengths on the order of hundreds of microns, terahertz radiation is far larger than atoms, molecules, and many other microscopic structures. As a result, its ability to directly resolve microscale features is fundamentally constrained.
“Our main motivation is this problem that, you might have a 10-micron sample, but your terahertz light has a 100-micron wavelength, so what you would mostly be measuring is air, or the vacuum around your sample,” von Hoegen explains. “You would be missing all these quantum phases that have characteristic fingerprints in the terahertz regime.”
Zooming in
The team found a way around the terahertz diffraction limit by using spintronic emitters — a recent technology that produces sharp pulses of terahertz light. Spintronic emitters are made from multiple ultrathin metallic layers. When a laser illuminates the multilayered structure, the light triggers a cascade of effects in the electrons within each layer, such that the structure ultimately emits a pulse of energy at terahertz frequencies.
By holding a sample close to the emitter, the team trapped the terahertz light before it had a chance to spread, essentially squeezing it into a space much smaller than its wavelength. In this regime, the light can bypass the diffraction limit to resolve features that were previously too small to see.
The MIT team adapted this technology to observe microscopic, quantum-scale phenomena. For their new study, the team developed a terahertz microscope using spintronic emitters interfaced with a Bragg mirror. This multilayered structure of reflective films filters out undesired wavelengths of light while letting others through, protecting the sample from the laser pulse that triggers the terahertz emission.
As a demonstration, the team used the new microscope to image a small, atomically thin sample of BSCCO. They placed the sample very close to the terahertz source and imaged it at temperatures close to absolute zero — cold enough for the material to become a superconductor. To create the image, they scanned the laser beam, sending terahertz light through the sample and looking for the specific signatures left by the superconducting electrons.
“We see the terahertz field gets dramatically distorted, with little oscillations following the main pulse,” von Hoegen says. “That tells us that something in the sample is emitting terahertz light, after it got kicked by our initial terahertz pulse.”
With further analysis, the team concluded that the terahertz microscope was observing the natural, collective terahertz oscillations of superconducting electrons within the material.
“It’s this superconducting gel that we’re sort of seeing jiggle,” von Hoegen says.
This jiggling superfluid was expected, but never directly visualized until now. The team is now applying the microscope to other two-dimensional materials, where they hope to capture more terahertz phenomena.
“There are a lot of the fundamental excitations, like lattice vibrations and magnetic processes, and all these collective modes that happen at terahertz frequencies,” von Hoegen says. “We can now resonantly zoom in on these interesting physics with our terahertz microscope.”
This research was supported, in part, by the U.S. Department of Energy and by the Gordon and Betty Moore Foundation.
MIT winter club sports energized by the Olympics
With the Milano Cortina 2026 Winter Olympics officially kicking off today, several of MIT’s winter sports clubs are hosting watch parties to cheer on their favorite players, events, and teams.
Members of MIT’s Curling Club are hosting a gathering to support their favorite teams. Co-presidents Polly Harrington and Gabi Wojcik are rooting for the United States.
“I’m looking forward to watching the Olympics and cheering for Team USA. I grew up in Seattle, and during the Vancouver Olympics, we took a family trip to the games. The most affordable tickets were to the curling events, and that was my first exposure to the sport. Seeing it live was really cool. I was hooked,” says Harrington.
Wojcik says, “It’s a very analytical and strategic sport, so it’s perfect for MIT students. Physicists still don't entirely agree on why the rocks behave the way they do. Everyone in the club is welcoming and open to teaching new people to play. I’d never played before and learned from scratch. The other advantage of playing is that it is a lifelong sport.”
The two say the biggest misconception about curling, other than that it is easy, is that it is played on ice skates. It is neither easy nor played on skates. The stone, or rock, as it is often called, weighs 43 pounds and is always made from the same weathered granite from Scotland, so that the playing field, in this case the ice, is even for everyone.
Both agree that playing is a great way to meet other MIT students whom they might not otherwise have the chance to meet.
Having seen the American team at a recent tournament, Wojcik is hoping the team does well, but admits that if Scotland wins, she’ll also be happy. Harrington met members of the U.S. men's curling team, Luc Violette and Ben Richardson, when curling in Seattle in high school, and will be cheering for them.
The Curling Club practices and competes in tournaments in the New England area from late September until mid-March and always welcomes new members; no previous experience is necessary to join.
Figure Skating Club
The MIT Figure Skating Club is also excited for the 2026 Olympics and has been watching preliminary events (nationals) leading up to the games with great anticipation. Eleanor Li, the current club president, and Amanda (Mandy) Paredes Rioboo, former president, say holding small gatherings to watch the Olympics is a great way for the team to bond further.
Li began taking skating lessons at age 14, fell in love with the sport right away, and has been skating ever since. Paredes Rioboo started lessons at age 5 and practices in the mornings with other club members, saying, “there is no better way to start the day.”
The Figure Skating Club currently has 120 members and offers a great way to meet friends who share the same passion. Any MIT student, regardless of skill level, is welcome to join the club.
Li says, “We have members ranging from former national and international competitors to people who are completely new to the ice.” She adds that her favorite part of skating is “the freeing feeling of wind coming at you when you’re gliding across the ice! And all the life lessons learned — time management, falling again and again, and getting up again and again, the artistry and expressiveness of this beautiful sport, and most of all the community.”
Paredes Rioboo agrees. “The sport taught me discipline, to work at something and struggle with it until I got good at it. It taught me to be patient with myself and to be unafraid of failure.”
“The Olympics always bring a lot of buzz and curiosity around skating, and we’re excited to hopefully see more people come to our Saturday free group lessons, try skating for the first time, and maybe even join the club,” says Li.
Li and Paredes Rioboo are ready to watch the games with other club members. Li says, “I’m especially excited for women’s singles skating. All of the athletes have trained so hard to get there, and I’m really looking forward to watching all the beautiful skating. Especially Kaori Sakamoto.”
“I’m excited to watch Alysa Liu and Ami Nakai,” adds Paredes Rioboo.
Students interested in joining the Figure Skating Club can find more information here.
US Declassifies Information on JUMPSEAT Spy Satellites
The US National Reconnaissance Office has declassified information about a fleet of spy satellites operating between 1971 and 2006.
I’m actually impressed to see a declassification only two decades after decommissioning.
Florida DOGE embraces Trump’s disputed climate report
Red states urge lawmakers to probe group chaired by chief justice
This battery company has a plan for weathering Trump
Long maligned, the voluntary carbon market is embracing integrity
Why China is building so many coal plants despite its solar and wind boom
How climate change, human psychology make US cold snap feel so harsh
Why hospitals are phasing out a popular operating room anesthetic
Mexican long-nosed bats head farther north in search of agave nectar
Katie Spivakovsky wins 2026 Churchill Scholarship
MIT senior Katie Spivakovsky has been selected as a 2026-27 Churchill Scholar and will undertake an MPhil in biological sciences at the Wellcome Sanger Institute at Cambridge University in the U.K. this fall.
Spivakovsky, who is double-majoring in biological engineering and artificial intelligence, with minors in mathematics and biology, aims to integrate computation and bioengineering in an academic research career focused on developing robust, scalable solutions that promote equitable health outcomes.
At MIT’s Bathe BioNanoLab, Spivakovsky investigates therapeutic applications of DNA origami and DNA-scaffolded nanoparticles for gene and mRNA delivery, and has co-authored a manuscript in press at Science. She leads the development of an immune therapy for cancer cachexia with a team supported by MIT’s BioMakerSpace; this work earned a silver medal at the international synthetic biology competition iGEM and was published in the MIT Undergraduate Research Journal. Previously, she worked on Merck’s Modeling & Informatics team, characterizing a cancer-associated protein mutation, and at the New York Structural Biology Center, where she improved cryogenic electron microscopy particle detection models.
On campus, Spivakovsky serves as director of the Undergraduate Initiative in the MIT Biotech Group. She is deeply committed to teaching and mentoring, and has served as a lecturer and co-director for class 6.S095 (Probability Problem Solving), a teaching assistant for classes 20.309 (Bioinstrumentation) and 20.A06 (Hands-on Making in Biological Engineering), a lab assistant for 6.300 (Signal Processing), and as an associate advisor.
“Katie is a brilliant researcher who has a keen intellectual curiosity that will make her a leader in biological engineering in the future. We are proud that she will be representing MIT at Cambridge University,” says Kim Benard, associate dean of distinguished fellowships.
The Churchill Scholarship is a highly competitive fellowship that annually offers 16 American students the opportunity to pursue a funded graduate degree in science, mathematics, or engineering at Churchill College within Cambridge University. The scholarship, established in 1963, honors former British Prime Minister Winston Churchill’s vision for U.S.-U.K. scientific exchange. Since 2017, two Kanders Churchill Scholarships have also been awarded each year for studies in science policy.
MIT students interested in learning more about the Churchill Scholarship should contact Kim Benard in MIT Career Advising and Professional Development.
Counter intelligence
How can artificial intelligence step out of a screen and become something we can physically touch and interact with?
That question formed the foundation of class 4.043/4.044 (Interaction Intelligence), an MIT course focused on designing a new category of AI-driven interactive objects. Known as large language objects (LLOs), these physical interfaces extend large language models into the real world. Their behaviors can be deliberately generated for specific people or applications, and their interactions can evolve from simple to increasingly sophisticated — providing meaningful support for both novice and expert users.
“I came to the realization that, while powerful, these new forms of intelligence still remain largely ignorant of the world outside of language,” says Marcelo Coelho, associate professor of the practice in the MIT Department of Architecture, who has been teaching the design studio for several years and directs the Design Intelligence Lab. “They lack real-time, contextual understanding of our physical surroundings, bodily experiences, and social relationships to be truly intelligent. In contrast, LLOs are physically situated and interact in real time with their physical environment. The course is an attempt to both address this gap and develop a new kind of design discipline for the age of AI.”
Given the assignment to design an interactive device that they would want in their lives, students Jacob Payne and Ayah Mahmoud focused on the kitchen. While they each enjoy cooking and baking, their design inspiration came from the first home computer: the Honeywell 316 Kitchen Computer, marketed by Neiman Marcus in 1969. It was priced at $10,000, and there is no record of one ever being sold.
“It was an ambitious but impractical early attempt at a home kitchen computer,” says Payne, an architecture graduate student. “It made an intriguing historical reference for the project.”
“As somebody who likes learning to cook — especially now, in college as an undergrad — the thought of designing something that makes cooking easy for those who might not have a cooking background and just want a nice meal that satisfies their cravings was a great starting point for me,” says Mahmoud, a senior design major.
“We thought about the leftover ingredients you have in the refrigerator or pantry, and how AI could help you find new creative uses for things that you may otherwise throw away,” says Payne.
Generative cuisine
The students designed their device — named Kitchen Cosmo — with instructions to function as a “recipe generator.” One challenge was prompting the LLM to consistently acknowledge real-world cooking parameters, such as heating, timing, or temperature. One issue they worked out was having the LLM recognize flavor profiles and spices accurate to regional and cultural dishes around the world to support a wider range of cuisines. Troubleshooting included taste-testing recipes Kitchen Cosmo generated. Not every early recipe produced a winning dish.
“There were lots of small things that AI wasn't great at conceptually understanding,” says Mahmoud. “An LLM needs to fundamentally understand human taste to make a great meal.”
They fine-tuned their device to allow for the myriad ways people approach preparing a meal. Is this breakfast, lunch, dinner, or a snack? How advanced of a cook are you? How much meal prep time do you have? How many servings will you make? Dietary preferences were also programmed, as well as the type of mood or vibe you want to achieve. Are you feeling nostalgic, or are you in a celebratory mood? There’s a dial for that.
“These selections were the focal point of the device because we were curious to see how the LLM would interpret subjective adjectives as inputs and use them to transform the type of recipe outputs we would get,” says Payne.
Unlike most AI interactions that tend to be invisible, Payne and Mahmoud wanted their device to be more of a “partner” in the kitchen. The tactile interface was intentionally designed to structure the interaction, giving users a physical control over how the AI responded.
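As a rough illustration of how dial settings like those described might be folded into an LLM request, consider the sketch below. The function name, parameter names, dial values, and prompt wording are all hypothetical; they are not Kitchen Cosmo’s actual implementation, only a minimal sketch of the general pattern of mapping physical inputs to a structured prompt.

```python
# Illustrative sketch only: mapping physical dial/scan inputs to an LLM
# prompt. All names and wording here are hypothetical, not the actual
# Kitchen Cosmo implementation.

def build_prompt(ingredients, meal, skill, minutes, servings, diet, mood):
    """Turn scanned ingredients and dial settings into one recipe request."""
    return (
        f"Create a {meal} recipe for a {skill} cook.\n"
        f"Ingredients on hand: {', '.join(ingredients)}.\n"
        f"Time available: {minutes} minutes. Servings: {servings}.\n"
        f"Dietary preferences: {diet}. Desired mood: {mood}.\n"
        "Respect real-world cooking constraints: realistic temperatures "
        "and timings, and regionally accurate spices and flavor profiles."
    )

prompt = build_prompt(
    ingredients=["eggs", "spinach", "leftover rice"],
    meal="dinner", skill="beginner", minutes=25,
    servings=2, diet="vegetarian", mood="nostalgic",
)
```

Constraining the model this way, rather than accepting free-form chat, is what lets a physical dial (“nostalgic” vs. “celebratory”) reliably steer the recipe output.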
“While I’ve worked with electronics and hardware before, this project pushed me to integrate the components with a level of precision and refinement that felt much closer to a product-ready device,” says Payne of the course work.
Retro and red
After the electronics work was completed, the students designed a series of cardboard models until settling on the final look, which Payne describes as “retro.” The body was designed in 3D modeling software and printed. In a nod to the original Honeywell computer, they painted it red.
A thin, rectangular device about 18 inches in height, Kitchen Cosmo has a webcam that hinges open to scan ingredients set on a counter. It translates these into a recipe that takes into consideration general spices and condiments common in most households. An integrated thermal printer delivers a printed recipe that is torn off. Recipes can be stored in a plastic receptacle on its base.
While Kitchen Cosmo made a modest splash in design magazines, both students have ideas about where they will take future iterations.
Payne would like to see it “take advantage of a lot of the data we have in the kitchen and use AI as a mediator, offering tips for how to improve on what you’re cooking at that moment.”
Mahmoud is looking at how to optimize Kitchen Cosmo for her thesis. Classmates have given feedback to upgrade its abilities. One suggestion is to provide multi-person instructions that give several people tasks needed to complete a recipe. Another idea is to create a “learning mode” in which a kitchen tool — for example, a paring knife — is set in front of Kitchen Cosmo, and it delivers instructions on how to use the tool. Mahmoud has been researching food science history as well.
“I’d like to get a better handle on how to train AI to fully understand food so it can tailor recipes to a user’s liking,” she says.
Having begun her MIT education in geology, Mahmoud says her pivot to design has been a revelation. Each design class has been inspiring, and Coelho’s course was her first to include designing with AI. Referencing the often-mentioned analogy of “drinking from a firehose” while a student at MIT, Mahmoud says the course helped define a path for her in product design.
“For the first time, in that class, I felt like I was finally drinking as much as I could and not feeling overwhelmed. I see myself doing design long-term, which is something I didn’t think I would have said previously about technology.”
