Feed aggregator
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Speaking Freely: Shin Yang
*This interview has been edited for length and clarity.
David Greene: Shin, please introduce yourself to the Speaking Freely community.
Shin Yang: My name is Shin Yang. I am a queer writer with a legal background and experience in product management. I am the steward of Lezismore, an independent, self-hosted, open-source community for sexual minorities in Taiwan. For the past decade, I have focused on platform governance as infrastructure, with a particular emphasis on anonymity, minimal data collection, and behavior-based accountability, so that people can speak about intimacy and identity without fear of extraction or exposure. I am a community architect and builder, not an influencer. I’ve spent most of the past decade working anonymously building systems, designing governance protocols, and holding space for others to speak while keeping myself in the background.
DG: Great. And so let’s talk about how that work intersects with freedom of expression as a principle, and your own personal feelings about freedom of expression. And so with that in mind, let me just start with a basic question, what does freedom of expression mean to you?
SHIN: For me, free expression is about possibility, and possibility always contains both and even multiple ends, the beautiful ones and the brutal in equal measure. Maybe not that equal, but you cannot just speak about the beautiful or good things. I think it's not about pushing discomfort out of the room. If we refuse all discomfort, we end up in echo chambers, which is safe, predictable; but dead. What matters to me is the equipment and principles: Who carries through that discomfort, self-discipline, mutual support, and the infrastructure and governance that can let people grow over time. Keeps a workable gray space open: room to make mistakes, learn, repair, and keep speaking.
DG: How does that resonate with you personally? Why are you passionate about that?
SHIN: Around 2013 in Taiwan's context, when Facebook started to take over the digital ecosystem in Taiwan, many local independent bulletin boards (BBS) that had been formed for sexual minorities were shut down because they had no income from advertisements, and people were pushed into mainstream platforms—like Facebook, Instagram, Meta, whatever, Twitter now X—where sexual expression was usually reported or flagged, and where I watched sharp intra-community exclusionary voices saying “bisexual and trans people were not pure enough”, or that talking openly about sex would harm our image, or that it was inappropriate to children, or it would invite harassment. Those oppressions are even fiercer within the queer community itself, which is self-censoring in order to gain approval from mainstream society.
So, the community itself says that the best way to do it is don't talk about it. Never talk about it. Never mention a single thing about it. It was a wakeup call for me, because I think it's not right. And also, there's another more private story for me, it's a story I heard from our sexual minority community. I once heard about a butch student who was sexually assaulted by a group of men because she dated a beautiful classmate, a beautiful woman in the class.
And when I learned what happened to her, that story changed my focus. Because, you know, when people hear this kind of story, they always focus on punishing those men, punishing those criminals—but what matters for me most is building conditions where someone like her could someday still have a chance at intimacy on her own terms, and finally be free from fear. That's more important for me. I may never meet her, but I know who I am and what I'm here to build. I have been building an infrastructure –– not just “safe space” as a slogan, but an “ecospace” designed to make survival and growth possible. So that's why I believe that a well-governed space is what matters for communities now.
DG: Why is it so important for sexual minorities to have forums where they can communicate in that way? When it was just the bulletin boards, before social media, what worked really well and what didn’t work well?
SHIN: That’s a wonderful question. Okay, the bulletin boards I used before, the registration process doesn't require a lot of information. You just need email.
What I miss about bulletin boards is the sense of structure. You didn’t enter a personalized feed—you entered a place with visible rooms and topics. Even in the spaces you visited daily, you’d encounter views you didn’t like, and you had to live with that—and learn how to argue, or leave, or build something parallel. In some boards, moderators were community-chosen, which created a practical kind of participation—not perfect democracy, but civic practice.
You have to provide the information of which school you are in, because it's based on school. But it's not that difficult to use that. And also they have some kinds of logistics, like you log into different boards with different topics, and you can see that there are huge topics along with several small topics. So when you log into that, you can sense and feel the whole structure of that community. It's not a personal feed bombing you with everything you like. So you know, even in the board you’re most likely to visit every day, you will definitely encounter some speech you don't like, and you argue with them, you fight with them, or build something parallel, that's the civic foundation of democracy. You experience the everyday practice of civic democracy. People can vote for moderators or even recall them.
DG: You mean, the community can ask them to leave the bulletin boards?
SHIN: No, they don't actually leave the bulletin board. It's more that the moderator no longer has the right to perform administrative tasks, but they can still be part of the community, and ordinary users can vote in the election for this.
DG: Okay, and then what were the shortcomings of the bulletin boards?
SHIN: Yeah, it’s brutal. Really brutal. And I’ve seen people literally organize to push others out. I didn’t expect this to turn into story time, but I actually love this. So—back in Taiwan, we had this big BBS forum called PTT. There was a board called the ‘Sex’ board, where people could talk about sexual topics and share sexual health info. But around 2010, the space was dominated by mainstream straight cis men. And whenever a woman or a sexual minority posted anything, they often got harassed or attacked. So, women created another board inside the forum—basically a separate space—called ‘Feminine Sex.’ And from then on, the original Sex board and the Feminine Sex board were in conflict all the time. And honestly, if this happened today on Facebook, Threads, or X… we’d just block each other. Easy. Clean. Done.
But the problem is: when blocking becomes the default, we don’t really learn how to argue well, how to organize our reasons, or even how to sit with discomfort and understand why the other side thinks the way they do. We lose that practice—because it’s just so easy to delete people from our world now. I’m not saying blocking is always wrong. But there’s a trade-off.
DG: I get that. Then when Facebook and the other social media platforms that followed came along and the users migrated over to the commercial services, what was lost?
SHIN: What was lost? I think our behavior got shaped—personal branding became the default setting for joining an online community. If you don't do it, like me, you basically don't exist. Influence can be shaped by the number of social media followers; people define each other based on this. Choosing not to obey the logic of mainstream platforms means being unseen, and being unseen means having no influence.
And sure, personal branding can be useful—but I don’t believe it’s the only way to express yourself or connect with a community. The problem is, on mainstream platforms, the whole system is built for visibility. So clout becomes the game. Look at what they push: stories, reels, short-form visuals. And as a former product manager, I can tell you—this is not accidental. It’s designed. It’s designed around human nature: to avoid friction as much as possible. So they keep you scrolling, to make reacting effortless. One tap and you’ve sent a smiley face. Engagement becomes easier… but also cheaper.
And the scary part is, people start thinking that’s the whole internet. It’s not. But the more we get trained by these interfaces, the harder it becomes to even imagine other ways of building community. It is becoming more difficult for people to imagine that the "right" amount of friction can actually help us to grow, and coexist with the diversity.
DG: So did you find that there were certain things you couldn't talk about on Facebook or on the other social media platforms because they were sexual, because sexual speech was not as welcome as it was earlier?
SHIN: Yes, when I first started building my community, I knew nothing about technology. Like everyone else, I just created a fan page on Facebook, which was then flagged and deleted. This happened. I think it still happens to this day. At first, I was so angry about it. I felt it was unjust. But every time I wrote to Facebook, they just said that I had violated the user terms. At first I was furious. But I don’t stop at anger. I dig deeper. I thought, “Why do you say I violated the user terms?”
I read the terms, compared policies across platforms and applications, and realized the pattern: All of the terms of use forbid adult or erotic content in fine print. Because these are profit-driven systems optimized to minimize legal and business risk. So, I don’t frame it as “evil platforms.” I frame it as incentives. Once I understood this, I realized that we should not only protest and ask those big tech platforms to “give” us a voice –– that's a good approach, but it shouldn't be the only one. I believe we should build our own community. That's why I started researching open-source software and building my own self-hosted community.
DG: Please talk a little bit more about what you're building, and how what you're building is consistent with your view of free expression.
SHIN: Sure. It’s a long process but the reason why I use open-source software is, for a person knowing nothing about technology, I can come to the open-source community and ask questions about it. It’s more reliable than building it myself.
And the second example is about how I designed Lezismore’s registration and community access, mostly through trial and error.
We don’t require any real-name or ID verification. In fact, you can register with just an email. But instead of “verifying people,” we redesigned the "space".
Lezismore is built as a two-layer structure. The main website is searchable, but it looks almost… boring on purpose—advocacy articles, writers’ posts, slow content. The truly active community space is inside that main site, and the entry point is not something you casually discover through search. Most people learn how to get in through word of mouth. We also block search engines, bots, and crawlers from the community area. So from day one, we gave up visibility on purpose—we traded reach for resilience.
Then there’s the onboarding. New users go through an “apprenticeship” period. You can’t immediately post, comment, or DM people. You first have to read, observe, and understand how the community works. We don’t even tell you exactly how long it takes—you just have to be patient. In the fast-content era, people constantly complain that this is “annoying” or “hard to use.” And yes, it is friction indeed.
But that friction buys something valuable: a space that can stay anonymous, inclusive, and high-trust—without being instantly overwhelmed by harassment or bad-faith users. It also means we don’t need to depend on Big Tech’s third-party verification APIs. With relatively low technical cost, we’re using governance design—not data collection—to balance inclusion and protection.
And honestly, as a platform owner, I have to be real about what users “actually” need. If this was truly “just terrible UX,” the site wouldn’t survive in today’s hyper-competitive platform environment. But Lezismore has been running for over a decade, and we still have tens of thousands of people quietly reading and interacting every month. This is one of the biggest tradeoffs in my governance design. In an attention economy, choosing low visibility is a bold decision, and maintaining it has a real cost.
On top of that, we rely on human, context-based moderation. We use posts, replies, and Q&A threads to actively teach community norms—why diversity and conflict exist, how to handle risk, and how to protect yourself. Users also share practical safety tips and real interaction experiences with each other. There are many more small mechanisms built into the system, but that’s the core logic.
And there’s one more layer: the legal environment. In Taiwan, the legal climate around sex and speech can create chilling effects for smaller platforms. Platform owners can be criminally liable in certain scenarios. That’s exactly why governance design matters—it’s how we keep lawful expression possible without over-collecting data.
DG: Ah, so you need to be careful. I’m curious whether you’ve had any examples of offline repression. Do you have any experiences with censorship or feeling like you didn’t have your full freedom of expression in your offline experiences? Any experiences that might inform what an ideal online community might look like?
SHIN: Yes—actually, most of my earliest experiences with repression were offline, and they shaped how I later understood the internet as an escape route.
Back when I was a high school student, I was already involved in student movements and gender-related advocacy. One very concrete example was dress codes. The school restricted what female students could wear, and students organized to push for change. At one point we even had a vote—something like 98% of students supported revising the policy. But when the issue entered the “official” system, the administration simply ignored it. They bypassed procedure, dismissed the consensus, and used authority to shut it down completely.
That was my first clear lesson about repression: it’s not always someone telling you “you’re forbidden to speak.” Sometimes it’s a system designed so that even if students, women, or sexual minorities spend enormous effort building agreement, once our voices enter the institution, they can be treated as if they don’t exist.
That’s why, in the early 2010s, online space became my breakthrough. This was still the blog era, before social platforms fully standardized everything, and even before “share” mechanisms were built into everyday activism. I started experimenting with things like blog-based petitions, and a lot of students joined. The internet became a way to bypass institutional gatekeeping.
In college, I saw another layer. There was serious sexism from people in authority—military-style discipline officers, some teachers, and administrators. When gender-related controversies happened on campus, the media sometimes showed up and reported in ways that were harmful: exposing people, sensationalizing stories, and ignoring the realities of sexual minority students. Meanwhile, the administration would shut down student demands with authority, and at the same time use incentives and pressure behind the scenes, especially around housing or “benefits”—so some student representatives were afraid to speak honestly in meetings.
And this was before livestreaming was a normal tool. But even then, I was already using audio-based live channels to connect students across campuses. Online networks became a lifeline for young advocates, especially those of us who didn’t “fit” the institution and needed each other to survive.
I came from a literature background. I had zero technical training at the beginning. But I’ve always been the kind of person who loves trying new technology. And I was lucky, because I was born in that strange window when the internet was rapidly expanding, but not yet fully swallowed by Big Tech. So, I grew up in this tension between nostalgia and innovation, and I kept pushing, resisting, and experimenting. I’ve experienced both sides of speech: how beautiful freedom can be, and how terrifying it can become.
DG: Going back to Lezismore, I’m curious: When you ask people to observe before they post, what are you hoping they learn about the community before they more actively participate in it?
SHIN: I hope people understand that this is a community rather than a dating app focused on results. The community needs people to support and nurture each other. Some people see us as a dating app and expect a frictionless experience; naturally, they are disappointed. If you're only looking for a fast-food relationship, that's fine. Here, however, it is a community that offers more than just hooking up. The design focuses on words and a person’s behavioral history rather than just a photo. Dopamine bombing is not how we do things here.
We’ve also built a library of community safety notes, FAQs, and governance reminders over time. Some written by the team, some contributed by members. Not everyone reads them, and that’s fine. But the design makes it easier for people who want a slower, more intentional space to stay—and for people who want something frictionless to self-select out.
SHIN: I run the platform anonymously by design. People may know that there’s an admin called “Shin”, but I don’t associate a face or personal brand with the role because I don’t want the community to depend on my visibility for their trust.
We maintain a clear distinction between work and private life. Admin power is never a shortcut to social capital. In a sex-positive space, this boundary is a matter of ethics. The moment a founder’s identity becomes central, the space starts to orbit that person, and expectations, fan-service dynamics and power asymmetries creep in. Then speech becomes performance.
It also means I’m less “marketable” to attention-driven media—but that tradeoff protects the community’s integrity. Some media outlets only want a face and a persona. However, I accept this cost because I am trying to build a community that can thrive independently of an idol, where people relate to each other through behavior and shared norms, not proximity to the founder.
DG: It sounds like a lot of what you’re doing is about people being authentic on the site, not using personas or using it to create a personal platform for themselves for marketing purposes.
SHIN: Exactly, people can share links, but if a post is purely self-promotion with no contribution to the community, we don’t encourage this. I hope people here can respect the reciprocity.
DG: I want to shift a bit and talk about freedom of expression as a principle for a while. Do you think freedom of expression should be regulated by governments?
SHIN: Speech regulation is hard, because speech is freaking messy. And once you turn messy human speech into rules that scale, nuance gets flattened. Minority communities usually pay first, because large systems choose efficiency over lived reality.
I also don’t think the answer is “erase all conflict.” Some friction is the price of pluralism, and with good guidance and interface design, conflict can become a point of learning instead of a point of collapse. From a platform owner’s perspective, legal liability is real and often cruel. So if we expect platforms to be free, frictionless, allow everything we like, erase everything we dislike, and still amplify our visibility—then we’re really asking for magic. That’s why we need to talk seriously about alternatives and procedural safeguards, not just louder demands.
Age verification is a good example. I get that the goal is to protect minors. But identity-based age gates often turn into identity infrastructure. They chill lawful adult speech, concentrate gatekeeping power, and push everyone to hand over personal data just to access legal content. From my experience, there are other tools that can reduce harm with less damage—things like community design, visibility gating, and human, context-based moderation. Those approaches can protect people without building a personal-data checkpoint for everyone.
DG: You talked about minority voices, and minority speech. Are you concerned that any regulation will end up trying to silence minority speakers, or won’t benefit minority speakers. How are these speakers more vulnerable to speech regulations than others?
SHIN: Hmmm......a lot of minority speech is context-heavy. The same words can be support, education, or harassment depending on who says it and why. When regulation turns into broad categories, sexual health education, self-explore experiment sharing, trans healthcare discussions, or reclaimed language can be treated as “harmful” out of context (at both sides). So the risk isn’t only censorship, it’s misclassification at scale.
DG: Are there certain types of speech that don’t deserve the conversation. Some people might say that hate speech or speech that’s dehumanizing doesn’t deserve the conversation. Are there any categories of speech that you would say we shouldn’t consider, or do we get to talk about everything?
SHIN: Okay, I don't think the issue is about saying certain kinds of speech don't deserve to be discussed; the problem lies in the definition. As soon as we suggest that some speech doesn't merit discussion, some people will exploit this to silence their opponents. Whether it's right-wing, left-wing or anything else, if we say that we don't allow any kind of hate speech, the next thing someone will do is define your speech as hate speech. It's an endless war that draws us all into an eagerness to silence others and grab the mic, instead of creating more space for conversations and learning from each other.
We should go further than just regulation and create spaces where people can coexist in a grey area, endure some discomfort and engage with each other. I prefer this approach to trying to draw lines.
DG: So even well-intentioned restrictions might always be used against minority speakers?
SHIN: I wouldn’t say restriction is not good. There always has to be some kind of restriction, but people will always find a way to overcome or take advantage of it. So, the thing I believe is that regulation is regulation, but community should be an open-source archive. How we govern community, how we dialogue between each other when we disagree with each other…how can we create a space where those things can exist? I believe that those things should be open source. People always talk about open source like it’s just coding, but I believe governance should be open source too.
DG: So when you said before some restrictions are necessary but then we talk about open source governance, we’re talking about the same thing. When you say some restrictions are necessary, you’re not necessarily saying government restrictions, but that restrictions should come from somewhere else: that’s an open source governance model?
SHIN: Yes. And it should include restrictions in law, and how people deal with it, the way we deal with it. I’m not saying every rule or detection signal should be public. By “open-source governance,” I mean shareable governance playbooks: proportional steps, appeals templates, community norms, and design patterns that small communities can adapt. The goal is portability and adaptability of methods, not making systems easy to game. Because malice is always part of the environment.
DG: Is there anything else you want to say about your theory of open-source governance or what it means to you?
SHIN: I noticed there was a question in another interview about fostering transparency in social media, and how to appeal, and that the reason [for a takedown] should be more transparent. The interesting thing is that before our interview today I was joining a law and technology policy research group, and they’re reading a book called “Law and Technology: A Methodical Approach”. It's worth mentioning that it's very interesting. Apparently, scientists tend to place emphasis on complexity, which often trips up pragmatic reform efforts, so the recommendations often only call for greater transparency or participation.
I think this echoes what we were talking about before and the transparency thing. I heard this podcast in Taiwan about cybersecurity where they interview an outsourced ex-moderator from Meta and how the platform moderates speech. Because most of the information is confidential, the moderator can’t say too much, but she told us that every day Meta provided a whole set of lists with things they should ban, and every day it changes. Sometimes it even changes on an hourly basis. And they can never really put those fully transparent to the world. The reason they can’t do that is because those words are partially forbidding scams, because the scale is too big. So, when they show the transparency of how they ban things, the scammers will use this against them. Like, “now you’ve banned this word so I’ll just use another one.” It’s an endless war. So, I think transparency matters, but it shouldn’t be the only thing we think about, we should think about governance as well. And when we talk about governance, we shouldn’t just think about some high authority in government or a law just forcing the platform into something we like. We should go back and think about what we can do. We’ve got lots of open-source software now and we can literally build those things by ourselves. That’s what I’m trying to say.
DG: Okay, one last question. This is the last question we ask everybody. Who’s your free speech hero?
SHIN: This is the question I saw everyone answering, and I honestly struggled with it. Because I’m Taiwanese, and the names that often come up in U.S. free speech conversations aren’t the names I’m familiar with. I’m sorry about this.
DG: That’s okay, it doesn’t have to be a perfect answer.
SHIN: If you want a public figure from Taiwan, I think of the journalists and dissidents who pushed for press freedom during Taiwan’s democratization—Nylon (Tēnn Lâm-iông) is one name many Taiwanese recognize.
If I answer this as truthfully as I can, my hero is my family. My father taught me that integrity is not a slogan. It’s the ability to keep your ethics when it costs you something. My mother is the opposite kind of teacher: she’s relentless in a practical way: she doesn’t easily back down, and she keeps finding room to move even when the room is small. Put together, that’s what free expression means to me. It’s not “I can say anything.” It's about whether you can continue to think independently and live with integrity through layers of fear, pressure, temptation and coercion, while still moving forward and creating more possibilities for others.
Nitrous oxide, a product of fertilizer use, may harm some soil bacteria
Plant growth is supported by millions of tiny soil microbes competing and cooperating with each other as they perform important roles at the plant root, including improving access to nutrients and protecting against pathogens. As a byproduct of their metabolism, soil microbes can also produce nitrous oxide, or N2O, a potent greenhouse gas that has mostly been studied for its impact on the climate. While some N2O occurs naturally, its production can spike due to fertilizer application and other factors.
While it has long been believed that nitrous oxide doesn’t meaningfully interact with living organisms, a new paper by two MIT researchers shows that it may in fact shape microbial communities, making some bacterial strains more likely to grow than others.
Based on the prevalence of the biological processes disrupted by nitrous oxide, the researchers estimate about 30 percent of all bacteria with sequenced genomes are susceptible to nitrous oxide toxicity, suggesting the substance could play an important and underappreciated role in the intricate microbial ecosystems that influence plant growth.
The researchers have published their findings today in mBio, a journal of the American Society for Microbiology. If their lab findings carry over to agricultural settings, it could influence the way farmers go about everyday tasks that expose crops to spikes in nitrous oxide, such as watering and fertilization.
“This work suggests N2O production in agricultural settings is worth paying attention to for plant health,” says senior author Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor, who wrote the paper with lead author and PhD student Philip Wasson. “It hasn’t been on people’s radar, but it is particularly harmful for certain microbes. This could be another knock against N2O in addition to its climate impact. With more research, you might be able to understand how the timing of N2O production influences these microbial relationships, and that timing could be managed to improve crop health.”
A toxic gas
Nitrous oxide was shown to be toxic decades ago when researchers realized it can deactivate vitamin B12 in the human body. Since then, it has mostly drawn attention as a long-lived greenhouse gas that can eat away at the ozone. But when it comes to agricultural settings, most people have assumed it doesn’t interact with organisms growing in the soil around the plant root, a region called the rhizosphere.
“In general, there’s an assumption that N2O is not harmful at all despite this history of published studies showing that it can be toxic in specific contexts,” says McRose, who joined the faculty of the Department of Civil and Environmental Engineering in 2022. “People have not extended that understanding to microbial communities in the rhizosphere.”
While some studies have shown nitrous oxide sensitivity in a handful of microorganisms, less is known about how it impacts the distribution of microbial communities at the plant root. McRose and Wasson sought to fill that research gap.
They started by looking at a ubiquitous process that cells use to grow called methionine biosynthesis. Methionine biosynthesis can be carried out by enzymes that are dependent on B12 — and by other enzymes that are not. Many bacteria have both types.
Using a well-studied microbe named Pseudomonas aeruginosa, the researchers genetically removed the enzyme that isn’t dependent on B12 and found the microbe became sensitive to nitrous oxide, with its growth harmed even by nitrous oxide it produced itself.
Next the researchers looked at a synthetic microbial community from the plant Arabidopsis thaliana, finding many root-based microbes were also sensitive to nitrous oxide. Combining sensitive microbes with nitrous oxide-producing bacteria hampered their growth.
“This suggests that N2O-producing bacteria can affect the survival of their immediate neighbors,” Wasson explains. Together, the experiments confirmed the researchers’ suspicion that the production of nitrous oxide can hamper the growth of soil bacteria dependent on vitamin B12 to make methionine.
“These results suggest nitrous oxide producers shape microbial communities,” McRose says. “In the lab the result is very clear, and the work goes beyond just looking at a single organism. The co-culture experiments aren’t the same as a study in the field, but it’s a strong demonstration.”
From the lab to the farm
In farms, soil commonly experiences spikes of nitrous oxide for days or weeks from the addition of nitrogen fertilizer, rainfall, thawing, and other events. The researchers caution that their lab experiments are only the first step toward understanding how nitrous oxide affects microbial populations in agricultural settings.
Wasson calls the paper a proof of concept and plans to study agricultural soil next.
“In agricultural environments, N2O has been historically high,” Wasson says. “We want to see if we can detect a signature for this N2O exposure through genome sequencing studies, where the only microbes sticking around are not sensitive to N2O. This is the obvious next step.”
McRose says the findings could lead to a new way for researchers and farmers to think about nitrous oxide.
“What’s important and exciting about this case is it predicts that microbes with one version of an enzyme are going to be sensitive to N2O and those with a different version of the enzyme are not going to be sensitive,” McRose says. “This suggests that in the environment, exposure to N2O is going to select for certain types of organisms based on their genomic content, which is a highly testable hypothesis.”
The work was supported, in part, by the MIT Research Support Committee and a MIT Health and Life Sciences Collaborative Graduate Fellowship (HEALS).
Manipulating AI Summarization Features
Microsoft is reporting:
Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters….
These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated...
Youth climate activists confront Trump’s energy agenda in court
War and uncertainty cloud Trump’s AI pledge rollout
New York: Federal climate rollbacks undercut DOJ’s lawsuits against states
Washington, California and Québec release draft agreement to link carbon markets
California gas prices will jump under climate revisions, industry warns
Louisiana Republican seeks to shield oil industry from climate lawsuits
Noem broke law by delaying disaster aid, 2 senators say
Shareholder group sues insurer to put climate proposal up for vote
China’s new 5-year plan is crucial for world’s climate fight
Crocodile caught in creek 1,200 miles from its tropical habitat
How some skills become second nature
Expertise isn’t easy to pass down. Take riding a bike: A seasoned cyclist might talk a beginner through the basics of how to sit and when to push off. But other skills, like how hard to pedal to keep balanced, are more intuitive and harder to articulate. This implicit know-how is known as tacit knowledge, and very often, it can only be learned with experience and time.
But a team of MIT engineers wondered: Could an expert’s unconscious know-how be accessed, and even taught, to quickly bring a novice up to an expert’s level?
The answer appears to be “yes,” at least for a particular type of visual-learning task.
In a study published today in the Journal of Neural Engineering, the engineers identified tacit knowledge in volunteers who were tasked with classifying images of various shapes and patterns. As the volunteers were shown images to organize, the team recorded their eye movements and brain activity to measure their visual focus and cognitive attention, respectively.
The measurements showed that, over time, the volunteers shifted their focus and attention to a part of each image that made it easier to classify. However, when asked directly, the volunteers were not aware that they had made such a shift. The researchers concluded that this unconscious shift in attention and focus was a form of tacit knowledge that the volunteers possessed, even if they could not articulate it. What’s more, when the volunteers were made aware of this tacit knowledge, their accuracy in classifying images improved significantly.
The study is the first to directly show that visual attention can reveal unconscious, tacit knowledge during image classification tasks. It also finds for the first time that bringing this concealed knowledge to the surface can enhance experts’ performance.
While the results are specific to the study’s experiment, the researchers say they suggest that some forms of hidden know-how can be made explicit and applied to boost one’s learning experience. They suspect that tacit knowledge could be accessed for disciplines that require keen observation skills, including certain physical trades and crafts, sports, and image analysis, such as medical X-ray diagnoses.
“We as humans have a lot of knowledge, some that is explicit that we can translate into books, encyclopedias, manuals, equations. The tacit knowledge is what we cannot verbalize, that’s hidden in our unconscious,” says study author Alex Armengol-Urpi, a research scientist in MIT’s Department of Mechanical Engineering. “If we can make that knowledge explicit, we can then allow for it to be transferred easier, which can help in education and learning in general.”
The study’s co-authors include Andrés F. Salazar-Gomez, research scientist at the MIT Media Lab; Pawan Sinha, professor of vision and computational neuroscience in MIT’s Department of Brain and Cognitive Sciences; and Sanjay Sarma, the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor in Mechanical Engineering.
Hidden gaze
The concept of tacit knowledge is credited to the scientist and philosopher Michael Polyani, who in the mid 20th century was the first to investigate the notion that “we know more than we can tell.” His insights revealed that humans can hold a form of knowledge that is internalized, almost second nature, and often difficult to express or translate to others.
Since Polyani’s work, many studies have highlighted how tacit knowledge may play a part in perfecting certain skills, spanning everything from diagnosing medical images to discerning the sex of cats from images of their faces.
For Armengol-Urpi, these studies raised a question: Could a person’s tacit knowledge be revealed through unconscious signals, such as patterns in their eye movements? His PhD work focused on visual attention, and he had developed methods to study how humans focus their attention, by using cameras to follow the direction of their gaze, and electroencephalography (EEG) monitors to record their brain activity. In his research, he learned of a previous study that used similar methods to investigate how radiologists diagnose nodules in X-ray images. That study showed that the doctors unconsciously focused on areas of an image that helped them to correctly detect the nodules.
“That paper didn’t focus on tacit knowledge, but it suggested that there are some hidden clues in our gaze that could be explored further,” Armengol-Urpi says.
The shape of knowledge
For their new study, the team looked at whether they could identify signs of tacit knowledge from measurements of visual focus and attention. In their experiment, they asked 30 volunteers to look sequentially at over 120 images. They could look at each image for several seconds and then were asked to classify the image as belonging to either group A, or group B, before they were shown the next image.
Each image contained two simple shapes on either side of the image — a square, a triangle, a circle, and any combination of the three, along with different colors and patterns for each shape. The researchers designed the images such that they should be classified into one of two groups, based on an intricate combination of shape, color, and pattern. Importantly, only one side of each image was relevant for the classification.
The volunteers, however, were given no guidelines on how to classify the images. Therefore, for about the first half of the experiment, they were considered “novices,” and more or less guessed at their classifications. Over time, and many more images, their accuracy improved to a level that the researchers considered “expert.” Throughout the experiment, the team used cameras to follow each participant’s eye movements, as a measure of visual focus.
They also outfitted volunteers with EEG sensors to record their brain waves, which they used as a measure of cognitive attention. They designed each image to show two shapes, each of which flickered at different, imperceptible frequencies. They found they could identify where a volunteer’s attention landed, based on which shape’s flicker their brain waves synced up with.
For each volunteer, the team created maps of where their gaze and attention were focused, both during their novice and expert phases. Overall, these maps showed that in the beginning, the volunteers focused on all parts of an image as they tried to make sense of how to classify it. Toward the end, as they got a grasp of the exercise and improved their accuracy, their attention shifted to just one side of each image. This side happened to be the side that the researchers designed to be most relevant, while the other side was just random noise.
The maps showed that the volunteers picked up some knowledge of how to accurately classify the images. But when they were given a survey and asked to articulate how they learned the task, they always maintained that they focused on each entire image. It seemed their actual shift in focus was an unconscious, tacit skill.
“They were unconsciously focusing their attention on the part of the image that was actually informative,” Armengol-Urpi says. “So the tacit knowledge they had was hidden inside them.”
Going a step further, the team then showed each participant the maps of their gaze and attention, and how the maps changed from their novice to expert phases. When they were then shown additional images, the volunteers seemed to use this once-tacit knowledge, and further improved their classification accuracy.
“We are currently extending this approach to other domains where tacit knowledge plays a central role,” says Armengol-Urpi, who is exploring tacit knowledge in skilled crafts and sports such as glassblowing and table tennis, as well as in diagnosing medical imaging. “We believe the underlying principle — capturing and reinforcing implicit expertise through physiological signals — can generalize to a wide range of perceptual and skill-based domains.”
This research was supported, in part, by Takeda Pharmaceutical Company.
A “ChatGPT for spreadsheets” helps solve difficult engineering challenges faster
Many engineering challenges come down to the same headache — too many knobs to turn and too few chances to test them. Whether tuning a power grid or designing a safer vehicle, each evaluation can be costly, and there may be hundreds of variables that could matter.
Consider car safety design. Engineers must integrate thousands of parts, and many design choices can affect how a vehicle performs in a collision. Classic optimization tools could start to struggle when searching for the best combination.
MIT researchers developed a new approach that rethinks how a classic method, known as Bayesian optimization, can be used to solve problems with hundreds of variables. In tests on realistic engineering-style benchmarks, like power-system optimization, the approach found top solutions 10 to 100 times faster than widely used methods.
Their technique leverages a foundation model trained on tabular data that automatically identifies the variables that matter most for improving performance, repeating the process to hone in on better and better solutions. Foundation models are huge artificial intelligence systems trained on vast, general datasets. This allows them to adapt to different applications.
The researchers’ tabular foundation model does not need to be constantly retrained as it works toward a solution, increasing the efficiency of the optimization process. The technique also delivers greater speedups for more complicated problems, so it could be especially useful in demanding applications like materials development or drug discovery.
“Modern AI and machine-learning models can fundamentally change the way engineers and scientists create complex systems. We came up with one algorithm that can not only solve high-dimensional problems, but is also reusable so it can be applied to many problems without the need to start everything from scratch,” says Rosen Yu, a graduate student in computational science and engineering and lead author of a paper on this technique.
Yu is joined on the paper by Cyril Picard, a former MIT postdoc and research scientist, and Faez Ahmed, associate professor of mechanical engineering and a core member of the MIT Center for Computational Science and Engineering. The research will be presented at the International Conference on Learning Representations.
Improving a proven method
When scientists seek to solve a multifaceted problem but have expensive methods to evaluate success, like crash testing a car to know how good each design is, they often use a tried-and-true method called Bayesian optimization. This iterative method finds the best configuration for a complicated system by building a surrogate model that helps estimate what to explore next while considering the uncertainty of its predictions.
But the surrogate model must be retrained after each iteration, which can quickly become computationally intractable when the space of potential solutions is very large. In addition, scientists need to build a new model from scratch any time they want to tackle a different scenario.
To address both shortcomings, the MIT researchers utilized a generative AI system known as a tabular foundation model as the surrogate model inside a Bayesian optimization algorithm.
“A tabular foundation model is like a ChatGPT for spreadsheets. The input and output of these models are tabular data, which in the engineering domain is much more common to see and use than language,” Yu says.
Just like large language models such as ChatGPT, Claude, and Gemini, the model has been pre-trained on an enormous amount of tabular data. This makes it well-equipped to tackle a range of prediction problems. In addition, the model can be deployed as-is, without the need for any retraining.
To make their system more accurate and efficient for optimization, the researchers employed a trick that enables the model to identify features of the design space that will have the biggest impact on the solution.
“A car might have 300 design criteria, but not all of them are the main driver of the best design if you are trying to increase some safety parameters. Our algorithm can smartly select the most critical features to focus on,” Yu says.
It does this by using a tabular foundation model to estimate which variables (or combinations of variables) most influence the outcome.
It then focuses the search on those high-impact variables instead of wasting time exploring everything equally. For instance, if the size of the front crumple zone significantly increased and the car’s safety rating improved, that feature likely played a role in the enhancement.
Bigger problems, better solutions
One of their biggest challenges was finding the best tabular foundation model for this task, Yu says. Then they had to connect it with a Bayesian optimization algorithm in such a way that it could identify the most prominent design features.
“Finding the most prominent dimension is a well-known problem in math and computer science, but coming up with a way that leveraged the properties of a tabular foundation model was a real challenge,” Yu says.
With the algorithmic framework in place, the researchers tested their method by comparing it to five state-of-the-art optimization algorithms.
On 60 benchmark problems, including realistic situations like power grid design and car crash testing, their method consistently found the best solution between 10 and 100 times faster than the other algorithms.
“When an optimization problem gets more and more dimensions, our algorithm really shines,” Yu added.
But their method did not outperform the baselines on all problems, such as robotic path planning. This likely indicates that scenario was not well-defined in the model’s training data, Yu says.
In the future, the researchers want to study methods that could boost the performance of tabular foundation models. They also want to apply their technique to problems with thousands or even millions of dimensions, like the design of a naval ship.
“At a higher level, this work points to a broader shift: using foundation models not just for perception or language, but as algorithmic engines inside scientific and engineering tools, allowing classical methods like Bayesian optimization to scale to regimes that were previously impractical,” says Ahmed.
“The approach presented in this work, using a pretrained foundation model together with high‑dimensional Bayesian optimization, is a creative and promising way to reduce the heavy data requirements of simulation‑based design. Overall, this work is a practical and powerful step toward making advanced design optimization more accessible and easier to apply in real-world settings,” says Wei Chen, the Wilson-Cook Professor in Engineering Design and chair of the Department of Mechanical Engineering at Northwestern University, who was not involved in this research.
Additionality requirements of carbon markets could penalize Indigenous stewardship
Nature Climate Change, Published online: 04 March 2026; doi:10.1038/s41558-026-02576-2
Despite strong evidence that Indigenous stewardship sustains biodiversity and carbon stocks, carbon markets typically reward recovery from degradation rather than protection, often excluding Indigenous-managed lands. Rethinking additionality could align climate mitigation with care, equity and long-term ecosystem stewardship.EFF to Third Circuit: Electronic Device Searches at the Border Require a Warrant
EFF, along with the national ACLU and the ACLU affiliates in Pennsylvania, Delaware, and New Jersey, filed an amicus brief in the U.S. Court of Appeals for the Third Circuit urging the court to require a warrant for border searches of electronic devices, an argument EFF has been making in the courts and Congress for nearly a decade.
The case, U.S. v. Roggio, involves a man who had been under ongoing criminal investigation for illegal exports when he returned to the United States from an international trip via JFK airport. Border officers used the opportunity to bypass the Fourth Amendment’s warrant requirement when they seized several of his electronic devices (laptop, tablet, cell phone, and flash drive) and conducted forensic searches of them. As the district court explained, “investigative agents had a case coordination meeting and border search authority was discussed in early January 2017,” before Mr. Roggio traveled internationally in February 2017.
The district court denied Mr. Roggio’s motion to suppress the emails and other data obtained from the warrantless searches of his devices. He was subsequently convicted of illegally exporting gun manufacturing parts to Iraq (he was also charged in a superseding indictment with torture and also convicted of that).
The number of warrantless device searches at the border and the significant invasion of privacy they represent is only increasing. In Fiscal Year 2025, U.S. Customs and Border Protection (CBP) conducted 55,318 device searches, both manual (“basic”) and forensic (“advanced”).
While a manual search involves a border officer tapping or mousing around a device, a forensic search involves connecting another device to the traveler’s device and using software to extract and analyze the data to create a detailed report the device owner’s activities and communications. Border officers have access to forensic tools that help gain access to data on a locked or encrypted device they have physical access to. From public reporting, we know that more recent devices (and ones that have had the latest security updates applied) are more resistant to these type of tools, especially if they are turned off or turned on but not yet unlocked.
The U.S. Supreme Court has recognized for a century a border search exception to the Fourth Amendment’s warrant requirement, allowing not only warrantless but also often suspicionless “routine” searches of luggage, vehicles, and other items crossing the border.
The primary justification for the border search exception has been to find—in the items being searched—goods smuggled to avoid paying duties (i.e., taxes) and contraband such as drugs, weapons, and other prohibited items, thereby blocking their entry into the country. But a traveler’s privacy interests in their suitcase and its contents are minimal compared to those in all the personal data on the person’s phone or laptop.
In our amicus brief, we argue that the U.S. Supreme Court’s balancing test in Riley v. California (2014) should govern the analysis here. In that case, the Court weighed the government’s interests in warrantless and suspicionless access to cell phone data following an arrest against an arrestee’s privacy interests in the depth and breadth of personal information stored on a cell phone. The Court concluded that the search-incident-to-arrest warrant exception does not apply, and that police need to get a warrant to search an arrestee’s phone.
Travelers’ privacy interests in their cell phones, laptops and other electronic devices are, of course, the same as those considered in Riley. Modern devices, over a decade later, contain even more data that together reveal the most personal aspects of our lives, including political affiliations, religious beliefs and practices, sexual and romantic affinities, financial status, health conditions, and family and professional associations.
In considering the government’s interests in warrantless access to digital data at the border, Riley requires analyzing how closely such searches hew to the original purpose of the warrant exception—preventing the entry of prohibited goods themselves via the items being searched. We argue that the government’s interests are weak in seeking unfettered access to travelers’ electronic devices.
First, physical contraband (like drugs) can’t be found in digital data.
Second, digital contraband (such as child sexual abuse material) can’t be prevented from entering the country through a warrantless search of a device at the border because it’s likely, given the nature of cloud technology and how internet-connected devices work, that identical copies of the files are already in the country on servers accessible via the internet.
Finally, searching devices for evidence of contraband smuggling (for example, the emails here revealing details of the illegal import scheme) and other evidence for general law enforcement (i.e., investigating non-border-related domestic crimes) are too “untethered” from the original purpose of the border search exception, which is to find prohibited items themselves and not evidence to support a criminal prosecution. Therefore, emails or other data found on a digital device searched without a warrant at the border cannot and should not be used as evidence in court.
If the Third Circuit is not inclined to require a warrant for electronic device searches at the border, we also argue that such a search—whether manual or forensic—should be justified only by reasonable suspicion that the device contains digital contraband and be limited in scope to looking for digital contraband.
This extends the Ninth Circuit’s rule from U.S. v. Cano (2019) in which the court held that only forensic device searches at the border require reasonable suspicion that the device contains digital contraband—that is, some set of already known facts pointing to this possibility—while manual searches may be conducted without suspicion. But the Cano court also held that all searches must be limited in scope to looking for digital contraband (for example, call logs are off limits because they can’t contain digital contraband in the form of photos or files).
We hope that the Third Circuit will rise to the occasion and be the first circuit to fully protect travelers’ Fourth Amendment rights at the border.
The Anthropic-DOD Conflict: Privacy Protections Shouldn’t Depend On the Decisions of a Few Powerful People
The U.S. military has officially ended its $200 million contract with AI company Anthropic and has ordered all other military contractors to cease use of their products. Why? Because of a dispute over what the government could and could not use Anthropic’s technology to do. Anthropic had made it clear since it first signed the contract with the Pentagon in 2025 that it did not want its technology to be used for mass surveillance of people in the United States or for fully autonomous weapons systems. Starting in January, that became a problem for the Department of Defense, which ordered Anthropic to give them unrestricted use of the technology. Anthropic refused, and the DoD retaliated.
There is a lot we could learn from this conflict, but the biggest take away is this: the state of your privacy is being decided by contract negotiations between giant tech companies and the U.S. government—two entities with spotty track records for caring about your civil liberties. It’s good when CEOs step up and do the right thing—but it's not a sustainable or reliable solution to build our rights on. Given the government’s loose interpretations of the law, ability to find loopholes to surveil you, and willingness to do illegal spying, we needs serious and proactive legal restrictions to prevent it from gobbling up all the personally data it can acquire and using even routine bureaucratic data for punitive ends.
Imposing and enforcing such those restrictions is properly a role for Congress and the courts, not the private sector.
The companies know this. When speaking about the specific risk that AI poses to privacy, the CEO of Anthropic Dario Amodei said in an interview, “I actually do believe it is Congress’s job. If, for example, there are possibilities with domestic mass surveillance—the government buying of bulk data has been produced on Americans, locations, personal information, political affiliations, to build profiles, and it’s not possible to analyze all of that with AI—the fact that that is legal—that seems like the judicial interpretation of the Fourth Amendment has not caught up or the laws passed by Congress have not caught up.”
The example he cites here is a scarily realistic one—because it’s already happening. Customs and Border Protection has tapped into the online advertising world to buy data on Americans for surveillance purposes. Immigration and Customs Enforcement has been using a tool that maps millions of peoples’ devices based on purchased cell phone data. The Office of the Director of National Intelligence has proposed a centralized data broker marketplace to make it easier for intelligence agencies to buy commercially available data. Considering the government’s massive contracts with a bunch of companies that could do analysis, including Palantir, a company which does AI-enabled analysis of huge amounts of data, then the concerns are incredibly well founded.
But Congress is sadly neglecting its duties. For example, a bill that would close the loophole of the government buying personal information passed the House of Representatives in 2024, but the Senate stopped it. And because Congress did not act, Americans must rely on a tech company CEO has to try to protect our privacy—or at least refuse to help the government violate it.
Privacy in the digital age should be an easy bipartisan issue. Given that it’s wildly popular (71% of American adults are concerned about the government's use of their data and among adults that have heard of AI 70% have little to no trust in how companies use those products) you would think politicians would be leaping over each other to create the best legislation and companies would be promising us the most high-end privacy protecting features. Instead, for the time being, we are largely left adrift in a sea of constant surveillance, having to paddle our own life rafts.
EFF has, and always will, fight for real and sustainable protections for our civil liberties including a world where our privacy does not rest upon the whims of CEOs and back room deals with the surveillance state.
Injectable “satellite livers” could offer an alternative to liver transplantation
More than 10,000 Americans who suffer from chronic liver disease are on a waitlist for a liver transplant, but there are not enough donated organs for all of those patients. Additionally, many people with liver failure aren’t eligible for a transplant if they are not healthy enough to tolerate the surgery.
To help those patients, MIT engineers have developed “mini livers” that could be injected into the body and take over the functions of the failing liver.
In a new study in mice, the researchers showed that these injected liver cells could remain viable in the body for at least two months, and they were able to generate many of the enzymes and other proteins that the liver produces.
“We think of these as satellite livers. If we could deliver these cells into the body, while leaving the sick organ in place, that would provide booster function,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science (IMES).
Bhatia is the senior author of the new study, which appears today in the journal Cell Biomaterials. MIT postdoc Vardhman Kumar is the paper’s lead author.
Restoring liver function
The human liver plays a role in about 500 essential functions, including regulation of blood clotting, removing bacteria from the bloodstream, and metabolizing drugs. Most of these functions are performed by cells called hepatocytes.
Over the past decade, Bhatia’s lab has been working on ways to restore hepatocyte function without a surgical liver transplant. One possible approach is to embed hepatocytes into a biomaterial such as a hydrogel, but these gels also have to be surgically implanted.
Another option is to inject hepatocytes into the body, which eliminates the need for surgery. In this study, Bhatia’s lab sought to improve on this strategy by providing an engineered niche that could enhance the cells’ survival and facilitate noninvasive monitoring of graft health.
To achieve that, the researchers came up with the idea of injecting cells along with hydrogel microspheres that would help them stay together and form connections with nearby blood vessels. These spheres have special properties that allow them to act like a liquid when they are closely packed together, so they can be injected through a syringe and then regain their solid structure once inside the body.
In recent years, researchers have explored using hydrogel microspheres to promote wound healing, as they help cells to migrate into the spaces between the spheres and build new tissue. In the new study, the MIT team adapted them to help hepatocytes form a stable tissue graft after injection.
“What we did is use this technology to create an engineered niche for cell transplantation,” Kumar says. “If the cells are injected in the absence of these spheres, they would not integrate efficiently with the host, but these microspheres provide the hepatocytes with a niche where they can stay localized and become connected to the host circulation much faster.”
The injected mixture also includes fibroblast cells — supportive cells that help the hepatocytes survive and promote the growth of blood vessels into the tissue.
Working with Nicole Henning, an ultrasound research specialist at the Koch Institute, the researchers developed a way to inject the cell mixture using a syringe guided by ultrasound. After injection, the researchers can also use ultrasound to monitor the long-term stability of the implant.
In this study, the mini livers were injected into the fat tissue in the belly. In the future, similar grafts could be delivered to other sites in the body, such as into the spleen or near the kidneys. As long as they have enough space and access to blood vessels, the injected hepatocytes can function similarly to hepatocytes in the liver.
“For a vast majority of liver disorders, the graft does not need to sit close to the liver,” Kumar says.
An alternative to transplantation
In tests in mice, the researchers injected the mixture of liver cells and microspheres into an area of fatty tissue known as the perigonadal adipose tissue. Once the cells are localized in the body, they form a stable, compact structure. Over time, blood vessels begin to grow into the graft area, helping the injected hepatocytes to stay healthy.
“The new blood vessels formed right next to the hepatocytes, which is why they were able to survive,” Kumar says. “They were able to get the nutrients delivered right to them, they were able to function the way they're supposed to, and they produced the proteins that we expect them to.”
After injection, the cells remained viable and able to secrete specialized proteins into the host circulation for eight weeks, the length of the study. That suggests that the therapy could potentially work as a long-term treatment for liver disease, the researchers say.
“The way we see this technology is it can provide an alternative to surgery, but it can also serve as a bridge to transplantation where these grafts can provide support until a donor organ becomes available,” Kumar says. “And if we think they might need another therapy or more grafts, the barriers to do that are much less with this injectable technology than undergoing another surgery.”
With the current version of this technology, patients would likely need to take immunosuppressive drugs, but the researchers are exploring the possibility of developing “stealthy” hepatocytes that could evade the immune system, or using the hydrogel microspheres to deliver immunosuppressants locally.
The research was funded by the Koch Institute Support (core) grant from the National Cancer Institute, the National Institutes of Health, the Wellcome Leap HOPE Program, a National Science Foundation Graduate Research Fellowship, and the Howard Hughes Medical Institute.
