Feed aggregator
5 climate court battles to watch in 2026
Trump admin launches new bid to pressure US oil companies on Venezuela
Judge keeps Honolulu climate case alive
Deadly climate collision: Cutting forests and raging floods
Scientists go global in attempt to better predict atmospheric rivers
Court upholds New Jersey’s landmark environmental justice rule
Why Europe’s night-train renaissance derailed
UK set new annual heat and sunshine records last year
South Africa’s Ramaphosa names new presidential climate commission
Banks notch higher fees from green bonds than fossil fuel debt
AI-generated sensors open new paths for early cancer detection
Detecting cancer in the earliest stages could dramatically reduce cancer deaths because cancers are usually easier to treat when caught early. To help achieve that goal, MIT and Microsoft researchers are using artificial intelligence to design molecular sensors for early detection.
The researchers developed an AI model to design peptides (short proteins) that are targeted by enzymes called proteases, which are overactive in cancer cells. Nanoparticles coated with these peptides can act as sensors that give off a signal if cancer-linked proteases are present anywhere in the body.
Depending on which proteases are detected, doctors would be able to diagnose the particular type of cancer that is present. These signals could be detected using a simple urine test that could even be done at home.
“We’re focused on ultra-sensitive detection in diseases like the early stages of cancer, when the tumor burden is small, or early on in recurrence after surgery,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science (IMES).
Bhatia and Ava Amini ’16, a principal researcher at Microsoft Research and a former graduate student in Bhatia’s lab, are the senior authors of the study, which appears today in Nature Communications. Carmen Martin-Alonso PhD ’23, a founding scientist at Amplifyer Bio, and Sarah Alamdari, a senior applied scientist at Microsoft Research, are the paper’s lead authors.
Amplifying cancer signals
More than a decade ago, Bhatia’s lab came up with the idea of using protease activity as a marker of early cancer. The human genome encodes about 600 proteases, which are enzymes that can cut through other proteins, including structural proteins such as collagen. They are often overactive in cancer cells, as they help the cells escape their original locations by cutting through proteins of the extracellular matrix, which normally holds cells in place.
The researchers’ idea was to coat nanoparticles with peptides that can be cleaved by a specific protease. These particles could then be ingested or inhaled. As they traveled through the body, if they encountered any cancer-linked proteases, the peptides on the particles would be cleaved.
Those peptides would be secreted in the urine, where they could be detected using a paper strip similar to a pregnancy test strip. Measuring those signals would reveal the overactivity of proteases deep within the body.
“We have been advancing the idea that if you can make a sensor out of these proteases and multiplex them, then you could find signatures of where these proteases were active in diseases. And since the peptide cleavage is an enzymatic process, it can really amplify a signal,” Bhatia says.
The researchers have used this approach to demonstrate diagnostic sensors for lung, ovarian, and colon cancers.
However, in those studies, the researchers used a trial-and-error process to identify peptides that would be cleaved by certain proteases. In most cases, the peptides they identified could be cleaved by more than one protease, which meant that the signals that were read could not be attributed to a specific enzyme.
Nonetheless, using “multiplexed” arrays of many different peptides yielded distinctive sensor signatures that were diagnostic in animal models of many different types of cancer, even if the precise identity of the proteases responsible for the cleavage remained unknown.
In their new study, the researchers moved beyond the traditional trial-and-error process by developing a novel AI system, named CleaveNet, to design peptide sequences that could be cleaved efficiently and specifically by target proteases of interest.
Users can prompt CleaveNet with design criteria, and CleaveNet will generate candidate peptides likely to fit those criteria. In this way, CleaveNet enables users to tune the efficiency and specificity of peptides generated by the model, opening a path to improving the sensors’ diagnostic power.
“If we know that a particular protease is really key to a certain cancer, and we can optimize the sensor to be highly sensitive and specific to that protease, then that gives us a great diagnostic signal,” Amini says. “We can leverage the power of computation to try to specifically optimize for these efficiency and selectivity metrics.”
For a peptide that contains 10 amino acids, there are 20^10, or about 10 trillion, possible sequences. Using AI to search that immense space allows for prediction, testing, and identification of useful sequences much faster than humans would be able to find them, while also considerably reducing experimental costs.
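The size of that search space is simple combinatorics: 20 standard amino acids, independently chosen at each of 10 positions. A quick sanity check:

```python
# Size of the design space for a 10-residue peptide:
# 20 standard amino acids, chosen independently at each of 10 positions.
NUM_AMINO_ACIDS = 20
PEPTIDE_LENGTH = 10

sequence_count = NUM_AMINO_ACIDS ** PEPTIDE_LENGTH
print(f"{sequence_count:,}")  # 10,240,000,000,000 -- about 10 trillion
```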
Predicting enzyme activity
To create CleaveNet, the researchers developed a protein language model to predict the amino acid sequences of peptides, analogous to how large language models can predict sequences of text. For the training data, they used publicly available data on about 20,000 peptides and their interactions with different proteases from a family known as matrix metalloproteinases (MMPs).
Using these data, the researchers trained one model to generate peptide sequences that are predicted to be cleaved by proteases. These sequences could then be fed into another model that predicted how efficiently each peptide would be cleaved by any protease of interest.
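The generate-then-score loop described above can be sketched as follows. This is an illustrative sketch only, not the actual CleaveNet code: the generator and the scorer here are trivial stand-ins (random sampling and a dummy scoring function) for the two trained models, and all function names are assumptions made for the example.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def generate_candidates(n, length=10, seed=0):
    """Stand-in for the generative model: sample random peptide sequences.
    The real model would propose sequences predicted to be cleavable."""
    rng = random.Random(seed)
    return ["".join(rng.choice(AMINO_ACIDS) for _ in range(length))
            for _ in range(n)]

def predict_efficiency(peptide):
    """Stand-in for the predictive model: a dummy score in [0, 1].
    The real model would predict how efficiently a target protease
    (e.g., MMP13) cleaves this peptide."""
    return sum(AMINO_ACIDS.index(a) for a in peptide) / (len(peptide) * 19)

# Two-stage pipeline: generate candidates, then rank them by the
# predicted cleavage efficiency and keep the top scorers.
candidates = generate_candidates(1000)
ranked = sorted(candidates, key=predict_efficiency, reverse=True)
top_hits = ranked[:10]
```

In practice, the ranking step could also penalize predicted cleavage by off-target proteases, which is how a selectivity criterion would enter the loop.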
To demonstrate this approach, the researchers focused on a protease called MMP13, which cancer cells use to cut through collagen and help them metastasize from their original locations. Prompting CleaveNet with MMP13 as a target allowed the models to design peptides that could be cut by MMP13 with considerable selectivity and efficiency. This cleavage profile is particularly useful for diagnostic and therapeutic applications.
“When we set the model up to generate sequences that would be efficient and selective for MMP13, it actually came up with peptides that had never been observed in training, and yet these novel sequences did turn out to be both efficient and selective,” Martin-Alonso says. “That was very exciting to see.”
This kind of selectivity could help to reduce the number of different peptides needed to diagnose a given type of cancer, to identify novel biomarkers, and to provide insight into specific biological pathways for study and therapeutic testing, the researchers say.
Bhatia’s lab is currently part of an ARPA-H-funded project to create reporters for an at-home diagnostic kit that could potentially detect and distinguish between 30 different types of cancer, in early stages of disease, based on measurements of protease activity. These sensors could detect not only MMP-mediated cleavage but also the activity of other enzymes, such as serine proteases and cysteine proteases.
Peptides designed using CleaveNet could also be incorporated into cancer therapeutics such as antibody treatments. Using a specific peptide to attach a therapeutic such as a cytokine or small molecule drug to a targeting antibody could enable the medicine to be released only when the peptides are exposed to proteases in the tumor environment, improving efficacy and reducing side effects.
Beyond direct applications in diagnostics and therapeutics, combining efforts from the ARPA-H work with this modeling framework could enable the creation of a comprehensive “protease activity atlas” that spans multiple protease classes and cancers. Such a resource could further accelerate research in early cancer detection, protease biology, and AI models for peptide design.
The research was funded by La Caixa Foundation, the Ludwig Center at MIT, and the Marble Center for Cancer Nanomedicine.
Sean Luk: Addressing the urgent need for better immunotherapy
In elementary school, Sean Luk loved donning an oversized lab coat and helping her mom pipette chemicals at Johns Hopkins University. A few years later, she started a science blog and became fascinated by immunoengineering, which is now her concentration as a biological engineering major at MIT.
Her grandparents’ battles with cancer made Luk, now a senior, realize how urgently patients need advancements in immunotherapy, which leverages a patient’s immune system to fight tumors or pathogens.
“The idea of creating something that is actually able to improve human health is what really drives me now. You want to fight that sense of helplessness when you see a loved one suffering through this disease, and it just further motivates me to be excellent at what I do,” Luk says.
A varsity athlete and entrepreneur as well as a researcher, Luk thrives when bringing people together for a common cause.
Working with immunotherapies
Luk was introduced to immunotherapies in high school after she listened to a seminar about using components of the immune system, such as antibodies and cytokines, to improve graft tolerance.
“The complexity of the immune system really fascinated me, and it is incredible that we can build antibodies in a very logical way to address disease,” Luk says.
She worked in several Johns Hopkins labs as a high school student in Maryland, and a professor there connected her to MIT Professor Dane Wittrup. Luk has worked in the Wittrup lab throughout her time at MIT. One of her main projects involves developing ultra-stable cyclic peptide drugs to help treat autoimmune diseases, which could potentially be taken orally instead of injected.
Luk has been a co-author on two published articles and has become increasingly interested in the intersection between computational and experimental protein design. Currently, she is working on engineering an interferon gamma construct that preferentially targets myeloid cells in the tumor microenvironment.
“We're trying to target and reprogram the immunosuppressive myeloid cells surrounding the cancer cells, so that they can license T cells to attack cancer cells and kickstart the cancer immunity cycle,” she explains.
Communication for all
Through her work in high school with Best Buddies, an organization that aims to promote one-on-one friendships between students with and without intellectual and developmental disabilities, Luk became passionate about empowering people with special needs. At MIT, she started a project focusing on children with Down syndrome, with support from the Sandbox Innovation Fund.
“Through talking to a lot of parents and caretakers, the biggest issue that people with Down syndrome face is communication. And when you think about it, communication is crucial to everything that we do,” Luk says. “We want to communicate our thoughts. We want to be able to interact with our peers. And if people are unable to do that, it’s isolating, it’s frustrating.”
Her solution was to co-found EasyComm, an online game platform that helps children with Down syndrome work on verbal communication.
“We thought it would be a great way to improve their verbal communication skills while having fun and incentivize that kind of learning through gamification,” Luk says. She and her co-founder recently filed a provisional patent and plan to make the platform available to a wider audience.
A global perspective
Luk grew up in Hong Kong before moving to Maryland in the fifth grade. She’s always been athletic; in Hong Kong, she was a competitive jump roper. At just 9 years old, she won bronze in the Asian Jump Rope Championships among children 14 years old and younger. At 7 years old, she started playing soccer on her brother’s team, despite being the only girl. She says the sport was considered “manly” in Hong Kong, and girls were discouraged from joining, but her coaches and family were supportive.
Moving to the U.S. meant that her time in competitive jump rope was cut short, and Luk focused more on soccer. Her team in the U.S. felt far more intense than boys’ soccer in Hong Kong, but the Luk family was in it together, Luk says. She credits her success to the combination of the hard-working nature she learned in Hong Kong and the innovation and experiences she was exposed to in the U.S.
“We had a really close bond within the family,” Luk says. “Figuring out taxes for my dad and our family, like driving and houses and all that stuff, it was totally new. But I think we really took it in stride, just adjusting as we went.”
Luk continued soccer throughout high school and eventually committed to play on the MIT team. She likes that the team allows players to prioritize academics while still being competitive. Last season, she was elected captain.
“It’s really a pleasure to be captain, and it’s challenging, but it’s also very rewarding when you see the team be cohesive. When you see the team out there winning games through grit,” Luk says.
During her first year at MIT, Luk got back in touch with her old soccer coach from Hong Kong, who by then worked with the national team. After sending over some tape, she was offered a spot on the U-20 national team, and played in the U-20 Asian Football Championship Qualifiers.
“It was so, so cool to be able to represent Hong Kong, because I’ve played soccer all my life, but it just carries a different weight when you’re wearing your country’s jersey,” Luk says.
Besides her cross-cultural background, Luk is also proud of her international experiences playing soccer, staying with host families and doing lab work in Copenhagen, Denmark; Stuttgart, Germany; and Ancona, Italy. She speaks English, Cantonese, and Mandarin fluently.
“Aside from the textbook academic knowledge, I feel like a global perspective is so important when you’re trying to collaborate with other people from different walks of life,” Luk says. “When you’re just thinking about science or the impact that you can have in general, it’s important to realize you don’t have all the answers and to learn from the world outside your little bubble.”
MIT scientists investigate memorization risk in the age of clinical AI
What is patient privacy for? The Hippocratic Oath, thought to be one of the earliest and most widely known medical ethics texts in the world, reads: “Whatever I see or hear in the lives of my patients, whether in connection with my professional practice or not, which ought not to be spoken of outside, I will keep secret, as considering all such things to be private.”
As privacy becomes increasingly scarce in the age of data-hungry algorithms and cyberattacks, medicine is one of the few remaining domains where confidentiality remains central to practice, enabling patients to trust their physicians with sensitive information.
But a paper co-authored by MIT researchers investigates how artificial intelligence models trained on de-identified electronic health records (EHRs) can memorize patient-specific information. The work, which was recently presented at the 2025 Conference on Neural Information Processing Systems (NeurIPS), recommends a rigorous testing setup to ensure targeted prompts cannot reveal information, emphasizing that leakage must be evaluated in a health care context to determine whether it meaningfully compromises patient privacy.
Foundation models trained on EHRs should normally generalize knowledge to make better predictions, drawing upon many patient records. But in “memorization,” the model draws upon a single patient record to deliver its output, potentially violating patient privacy. Notably, foundation models are already known to be prone to data leakage.
“Knowledge in these high-capacity models can be a resource for many communities, but adversarial attackers can prompt a model to extract information on training data,” says Sana Tonekaboni, a postdoc at the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard and first author of the paper. Given the risk that foundation models could also memorize private data, she notes, “this work is a step towards ensuring there are practical evaluation steps our community can take before releasing models.”
To conduct research on the potential risk EHR foundation models could pose in medicine, Tonekaboni approached MIT Associate Professor Marzyeh Ghassemi, who is a principal investigator at the Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic) and a member of the Computer Science and Artificial Intelligence Lab. Ghassemi, a faculty member in the MIT Department of Electrical Engineering and Computer Science and Institute for Medical Engineering and Science, runs the Healthy ML group, which focuses on robust machine learning in health.
Just how much information does a bad actor need to expose sensitive data, and what are the risks associated with the leaked information? To assess this, the research team developed a series of tests that they hope will lay the groundwork for future privacy evaluations. The tests measure different types of uncertainty and gauge the practical risk to patients across several tiers of attack feasibility.
“We really tried to emphasize practicality here; if an attacker has to know the date and value of a dozen laboratory tests from your record in order to extract information, there is very little risk of harm. If I already have access to that level of protected source data, why would I need to attack a large foundation model for more?” says Ghassemi.
With the inevitable digitization of medical records, data breaches have become more commonplace. In the past 24 months, the U.S. Department of Health and Human Services has recorded 747 data breaches of health information affecting more than 500 individuals, with the majority categorized as hacking/IT incidents.
Patients with unique conditions are especially vulnerable, given how easy it is to pick them out. “Even with de-identified data, it depends on what sort of information you leak about the individual,” Tonekaboni says. “Once you identify them, you know a lot more.”
In their structured tests, the researchers found that the more information the attacker has about a particular patient, the more likely the model is to leak information. They demonstrated how to distinguish model generalization cases from patient-level memorization, to properly assess privacy risk.
The paper also emphasized that some leaks are more harmful than others. For instance, a model revealing a patient’s age or demographics could be characterized as a more benign leakage than the model revealing more sensitive information, like an HIV diagnosis or alcohol abuse.
Such patients may require higher levels of protection, the researchers note. They also plan to expand the work in a more interdisciplinary direction, bringing in clinicians, privacy experts, and legal experts.
“There’s a reason our health data is private,” Tonekaboni says. “There’s no reason for others to know about it.”
This work was supported by the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, Wallenberg AI, the Knut and Alice Wallenberg Foundation, the U.S. National Science Foundation (NSF), a Gordon and Betty Moore Foundation award, a Google Research Scholar award, and the AI2050 Program at Schmidt Sciences. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute.
Telegram Hosting World’s Largest Darknet Market
Wired is reporting on Chinese darknet markets on Telegram.
The ecosystem of marketplaces for Chinese-speaking crypto scammers hosted on the messaging service Telegram has now grown to be bigger than ever before, according to a new analysis from the crypto tracing firm Elliptic. Despite a brief drop after Telegram banned two of the biggest such markets in early 2025, the two current top markets, known as Tudou Guarantee and Xinbi Guarantee, are together enabling close to $2 billion a month in money-laundering transactions, sales of scam tools like stolen data, fake investment websites, and AI deepfake tools, as well as other black market services as varied as ...
