Feed aggregator

Measuring flood underinsurance in the USA

Nature Climate Change - Fri, 08/15/2025 - 12:00am

Nature Climate Change, Published online: 15 August 2025; doi:10.1038/s41558-025-02396-w

Homeowners could benefit from flood insurance to offset the negative impacts of climate-induced natural disasters. However, with detailed micro-level data, researchers find substantial protection gaps and underinsurance across the USA that disproportionately affect low-income households.

President Trump’s War on “Woke AI” Is a Civil Liberties Nightmare

EFF: Updates - Thu, 08/14/2025 - 7:46pm

The White House’s recently-unveiled “AI Action Plan” wages war on so-called “woke AI”—including large language models (LLMs) that provide information inconsistent with the administration’s views on climate change, gender, and other issues. It also targets measures designed to mitigate the generation of racial and gender biased content and even hate speech. The reproduction of this bias is a pernicious problem that AI developers have struggled to solve for over a decade.

A new executive order called “Preventing Woke AI in the Federal Government,” released alongside the AI Action Plan, seeks to strong-arm AI companies into modifying their models to conform with the Trump Administration’s ideological agenda.

The executive order requires AI companies that receive federal contracts to prove that their LLMs are free from purported “ideological biases” like “diversity, equity, and inclusion.” This heavy-handed censorship will not make models more accurate or “trustworthy,” as the Trump Administration claims, but is a blatant attempt to censor the development of LLMs and restrict them as a tool of expression and information access. While the First Amendment permits the government to choose to purchase only services that reflect government viewpoints, the government may not use that power to influence what services and information are available to the public. Lucrative government contracts can push commercial companies to implement features (or biases) that they wouldn't otherwise, and those often roll down to the user. Doing so would impact the 60 percent of Americans who get information from LLMs, and it would force developers to roll back efforts to reduce biases—making the models much less accurate, and far more likely to cause harm, especially in the hands of the government. 

Less Accuracy, More Bias and Discrimination

It’s no secret that AI models—including gen AI—tend to discriminate against racial and gender minorities. AI models use machine learning to identify and reproduce patterns in data that they are “trained” on. If the training data reflects biases against racial, ethnic, and gender minorities—which it often does—then the AI model will “learn” to discriminate against those groups. In other words, garbage in, garbage out. Models also often reflect the biases of the people who train, test, and evaluate them. 

This is true across different types of AI. For example, “predictive policing” tools trained on arrest data that reflects overpolicing of black neighborhoods frequently recommend heightened levels of policing in those neighborhoods, often based on inaccurate predictions that crime will occur there. Generative AI models are also implicated. LLMs already recommend more criminal convictions, harsher sentences, and less prestigious jobs for people of color. Although people of color account for less than half of the U.S. prison population, 80 percent of Stable Diffusion's AI-generated images of inmates have darker skin. Over 90 percent of AI-generated images of judges were men; in real life, 34 percent of judges are women.

These models aren’t just biased—they’re fundamentally incorrect. Race and gender aren’t objective criteria for deciding who gets hired or convicted of a crime. Those discriminatory decisions reflected trends in the training data that could be caused by bias or chance—not some “objective” reality. Setting fairness aside, biased models are just worse models: they make more mistakes, more often. Efforts to reduce bias-induced errors will ultimately make models more accurate, not less. 

Biased LLMs Cause Serious Harm—Especially in the Hands of the Government

But inaccuracy is far from the only problem. When government agencies start using biased AI to make decisions, real people suffer. Government officials routinely make decisions that impact people’s personal freedom and access to financial resources, healthcare, housing, and more. The White House’s AI Action Plan calls for a massive increase in agencies’ use of LLMs and other AI—while all but requiring the use of biased models that automate systemic, historical injustice. Using AI simply to entrench the way things have always been done squanders the promise of this new technology.

We need strong safeguards to prevent government agencies from procuring biased, harmful AI tools. In a series of executive orders, as well as his AI Action Plan, the Trump Administration has rolled back the already-feeble Biden-era AI safeguards. This makes AI-enabled civil rights abuses far more likely, putting everyone’s rights at risk. 

And the Administration could easily exploit the new rules to pressure companies to make publicly available models worse, too. Corporations like healthcare companies and landlords increasingly use AI to make high-impact decisions about people, so more biased commercial models would also cause harm. 

We have argued against using machine learning to make predictive policing decisions or other punitive judgments for just these reasons, and will continue to protect your right not to be subject to biased government determinations influenced by machine learning.

Study sheds light on graphite’s lifespan in nuclear reactors

MIT Latest News - Thu, 08/14/2025 - 5:30pm

Graphite is a key structural component in some of the world’s oldest nuclear reactors and many of the next-generation designs being built today. But it also densifies and swells in response to radiation — and the mechanism behind those changes has proven difficult to study.

Now, MIT researchers and collaborators have uncovered a link between properties of graphite and how the material behaves in response to radiation. The findings could lead to more accurate, less destructive ways of predicting the lifespan of graphite materials used in reactors around the world.

“We did some basic science to understand what leads to swelling and, eventually, failure in graphite structures,” says MIT Research Scientist Boris Khaykovich, senior author of the new study. “More research will be needed to put this into practice, but the paper proposes an attractive idea for industry: that you might not need to break hundreds of irradiated samples to understand their failure point.”

Specifically, the study shows a connection between the size of the pores within graphite and the way the material swells and shrinks in volume, leading to degradation.

“The lifetime of nuclear graphite is limited by irradiation-induced swelling,” says co-author and MIT Research Scientist Lance Snead. “Porosity is a controlling factor in this swelling, and while graphite has been extensively studied for nuclear applications since the Manhattan Project, we still do not have a clear understanding of the role of porosity in both mechanical properties and swelling. This work addresses that.”

The open-access paper appears this week in Interdisciplinary Materials. It is co-authored by Khaykovich, Snead, MIT Research Scientist Sean Fayfar, former MIT research fellow Durgesh Rai, Stony Brook University Assistant Professor David Sprouster, Oak Ridge National Laboratory Staff Scientist Anne Campbell, and Argonne National Laboratory Physicist Jan Ilavsky.

A long-studied, complex material

Ever since 1942, when physicists and engineers built the world’s first nuclear reactor on a converted squash court at the University of Chicago, graphite has played a central role in the generation of nuclear energy. That first reactor, dubbed the Chicago Pile, was constructed from about 40,000 graphite blocks, many of which contained nuggets of uranium.

Today graphite is a vital component of many operating nuclear reactors and is expected to play a central role in next-generation reactor designs like molten-salt and high-temperature gas reactors. That’s because graphite is a good neutron moderator, slowing down the neutrons released by nuclear fission so they are more likely to create fissions themselves and sustain a chain reaction.

“The simplicity of graphite makes it valuable,” Khaykovich explains. “It’s made of carbon, and it’s relatively well-known how to make it cleanly. Graphite is a very mature technology. It’s simple, stable, and we know it works.”

But graphite also has its complexities.

“We call graphite a composite even though it’s made up of only carbon atoms,” Khaykovich says. “It includes ‘filler particles’ that are more crystalline, then there is a matrix called a ‘binder’ that is less crystalline, then there are pores that span in length from nanometers to many microns.”

Each graphite grade has its own composite structure, but they all contain fractals, or shapes that look the same at different scales.

Those complexities have made it hard to predict how graphite will respond to radiation in microscopic detail, although it’s been known for decades that when graphite is irradiated, it first densifies, reducing its volume by up to 10 percent, before swelling and cracking. The volume fluctuation is caused by changes to graphite’s porosity and lattice stress.

“Graphite deteriorates under radiation, as any material does,” Khaykovich says. “So, on the one hand we have a material that’s extremely well-known, and on the other hand, we have a material that is immensely complicated, with a behavior that’s impossible to predict through computer simulations.”

For the study, the researchers received irradiated graphite samples from Oak Ridge National Laboratory. Co-authors Campbell and Snead were involved in irradiating the samples some 20 years ago. The samples are a grade of graphite known as G347A.

The research team used an analysis technique known as X-ray scattering, which uses the scattered intensity of an X-ray beam to analyze the properties of a material. Specifically, they looked at the distribution of sizes and surface areas of the sample’s pores, or what are known as the material’s fractal dimensions.

“When you look at the scattering intensity, you see a large range of porosity,” Fayfar says. “Graphite has porosity over such large scales, and you have this fractal self-similarity: The pores in very small sizes look similar to pores spanning microns, so we used fractal models to relate different morphologies across length scales.”

Fractal models had been used on graphite samples before, but not on irradiated samples to see how the material’s pore structures changed. The researchers found that when graphite is first exposed to radiation, its pores get filled as the material degrades.

“But what was quite surprising to us is the [size distribution of the pores] turned back around,” Fayfar says. “We had this recovery process that matched our overall volume plots, which was quite odd. It seems like after graphite is irradiated for so long, it starts recovering. It’s sort of an annealing process where you create some new pores, then the pores smooth out and get slightly bigger. That was a big surprise.”

The researchers found that the size distribution of the pores closely follows the volume change caused by radiation damage.

“Finding a strong correlation between the [size distribution of pores] and the graphite’s volume changes is a new finding, and it helps connect to the failure of the material under irradiation,” Khaykovich says. “It’s important for people to know how graphite parts will fail when they are under stress and how failure probability changes under irradiation.”

From research to reactors

The researchers plan to study other graphite grades and explore further how pore sizes in irradiated graphite correlate with the probability of failure. They speculate that a statistical technique known as the Weibull distribution could be used to predict graphite’s time until failure. The Weibull distribution is already used to describe the probability of failure in ceramics and other porous materials like metal alloys.
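For reference, the two-parameter Weibull model the researchers mention expresses the cumulative probability that a part has failed by a given exposure. A minimal sketch, with entirely hypothetical parameter values (the study has not published fitted parameters):

```python
import math

def weibull_failure_prob(dose, scale, shape):
    """Weibull CDF: probability a component has failed by 'dose'.
    'scale' and 'shape' would be fit from irradiated-sample test data;
    the values used below are illustrative, not from the study."""
    return 1.0 - math.exp(-((dose / scale) ** shape))

# With shape > 1, failure probability stays low at small doses and
# rises steeply as the dose approaches the scale parameter:
low = weibull_failure_prob(5, scale=10, shape=4)    # well below the scale dose
high = weibull_failure_prob(12, scale=10, shape=4)  # past the scale dose
```

With `shape = 1` the model reduces to a simple exponential failure law; larger shape values give the sharp "wear-out" behavior typical of brittle, porous materials.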

Khaykovich also speculated that the findings could contribute to our understanding of why materials densify and swell under irradiation.

“There’s no quantitative model of densification that takes into account what’s happening at these tiny scales in graphite,” Khaykovich says. “Graphite irradiation densification reminds me of sand or sugar, where when you crush big pieces into smaller grains, they densify. For nuclear graphite, the crushing force is the energy that neutrons bring in, causing large pores to get filled with smaller, crushed pieces. But more energy and agitation create still more pores, and so graphite swells again. It’s not a perfect analogy, but I believe analogies bring progress for understanding these materials.”

The researchers describe the paper as an important step toward informing graphite production and use in nuclear reactors of the future.

“Graphite has been studied for a very long time, and we’ve developed a lot of strong intuitions about how it will respond in different environments, but when you’re building a nuclear reactor, details matter,” Khaykovich says. “People want numbers. They need to know how much thermal conductivity will change, how much cracking and volume change will happen. If components are changing volume, at some point you need to take that into account.”

This work was supported, in part, by the U.S. Department of Energy.

Using generative AI, researchers design compounds that can kill drug-resistant bacteria

MIT Latest News - Thu, 08/14/2025 - 11:00am

With help from artificial intelligence, MIT researchers have designed novel antibiotics that can combat two hard-to-treat infections: drug-resistant Neisseria gonorrhoeae and methicillin-resistant Staphylococcus aureus (MRSA).

Using generative AI algorithms, the research team designed more than 36 million possible compounds and computationally screened them for antimicrobial properties. The top candidates they discovered are structurally distinct from any existing antibiotics, and they appear to work by novel mechanisms that disrupt bacterial cell membranes.

This approach allowed the researchers to generate and evaluate theoretical compounds that have never been seen before — a strategy that they now hope to apply to identify and design compounds with activity against other species of bacteria.

“We’re excited about the new possibilities that this project opens up for antibiotics development. Our work shows the power of AI from a drug design standpoint, and enables us to exploit much larger chemical spaces that were previously inaccessible,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.

Collins is the senior author of the study, which appears today in Cell. The paper’s lead authors are MIT postdoc Aarti Krishnan, former postdoc Melis Anahtar ’08, and Jacqueline Valeri PhD ’23.

Exploring chemical space

Over the past 45 years, a few dozen new antibiotics have been approved by the FDA, but most of these are variants of existing antibiotics. At the same time, bacterial resistance to many of these drugs has been growing. Globally, it is estimated that drug-resistant bacterial infections cause nearly 5 million deaths per year.

In hopes of finding new antibiotics to fight this growing problem, Collins and others at MIT’s Antibiotics-AI Project have harnessed the power of AI to screen huge libraries of existing chemical compounds. This work has yielded several promising drug candidates, including halicin and abaucin.

To build on that progress, Collins and his colleagues decided to expand their search into molecules that can’t be found in any chemical libraries. They realized that by using AI to generate hypothetical molecules that don’t exist or haven’t yet been discovered, it should be possible to explore a much greater diversity of potential drug compounds.

In their new study, the researchers employed two different approaches: First, they directed generative AI algorithms to design molecules based on a specific chemical fragment that showed antimicrobial activity, and second, they let the algorithms freely generate molecules, without having to include a specific fragment.

For the fragment-based approach, the researchers sought to identify molecules that could kill N. gonorrhoeae, a Gram-negative bacterium that causes gonorrhea. They began by assembling a library of about 45 million known chemical fragments, consisting of all possible combinations of 11 atoms of carbon, nitrogen, oxygen, fluorine, chlorine, and sulfur, along with fragments from Enamine’s REadily AccessibLe (REAL) space.

Then, they screened the library using machine-learning models that Collins’ lab had previously trained to predict antibacterial activity against N. gonorrhoeae. This resulted in nearly 4 million fragments. They narrowed down that pool by removing any fragments that were predicted to be cytotoxic to human cells, displayed chemical liabilities, or were similar to existing antibiotics. This left them with about 1 million candidates.
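The screening funnel described above can be sketched as a pipeline of filters. This is purely illustrative: the four predicate functions below are hypothetical stand-ins for the lab's trained activity model and its cytotoxicity, liability, and novelty filters.

```python
# Illustrative sketch of the screening funnel (not the study's code).
def screen(fragments, predicted_active, predicted_cytotoxic,
           has_chemical_liability, resembles_known_antibiotic):
    # Step 1: keep only fragments the activity model scores as antibacterial
    active = [f for f in fragments if predicted_active(f)]
    # Step 2: drop anything cytotoxic, chemically problematic,
    # or too similar to an existing antibiotic
    return [f for f in active
            if not predicted_cytotoxic(f)
            and not has_chemical_liability(f)
            and not resembles_known_antibiotic(f)]
```

The design point is the order of operations: the cheap-to-apply exclusion filters run only on fragments that already passed the activity screen, which is how 45 million fragments shrink to about 1 million candidates.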

“We wanted to get rid of anything that would look like an existing antibiotic, to help address the antimicrobial resistance crisis in a fundamentally different way. By venturing into underexplored areas of chemical space, our goal was to uncover novel mechanisms of action,” Krishnan says.

Through several rounds of additional experiments and computational analysis, the researchers identified a fragment they called F1 that appeared to have promising activity against N. gonorrhoeae. They used this fragment as the basis for generating additional compounds, using two different generative AI algorithms.

One of those algorithms, known as chemically reasonable mutations (CReM), works by starting with a particular molecule containing F1 and then generating new molecules by adding, replacing, or deleting atoms and chemical groups. The second algorithm, F-VAE (fragment-based variational autoencoder), takes a chemical fragment and builds it into a complete molecule. It does so by learning patterns of how fragments are commonly modified, based on its pretraining on more than 1 million molecules from the ChEMBL database.

Those two algorithms generated about 7 million candidates containing F1, which the researchers then computationally screened for activity against N. gonorrhoeae. This screen yielded about 1,000 compounds, and the researchers selected 80 of those to see if they could be produced by chemical synthesis vendors. Only two of these could be synthesized, and one of them, named NG1, was very effective at killing N. gonorrhoeae in a lab dish and in a mouse model of drug-resistant gonorrhea infection.

Additional experiments revealed that NG1 interacts with a protein called LptA, a novel drug target involved in the synthesis of the bacterial outer membrane. It appears that the drug works by interfering with membrane synthesis, which is fatal to cells.

Unconstrained design

In a second round of studies, the researchers explored the potential of using generative AI to freely design molecules, using the Gram-positive bacterium S. aureus as their target.

Again, the researchers used CReM and F-VAE to generate molecules, but this time with no constraints other than the general rules of how atoms can join to form chemically plausible molecules. Together, the models generated more than 29 million compounds. The researchers then applied the same filters that they had used for the N. gonorrhoeae candidates, but focused on S. aureus, eventually narrowing the pool down to about 90 compounds.

They were able to synthesize and test 22 of these molecules, and six of them showed strong antibacterial activity against multi-drug-resistant S. aureus grown in a lab dish. They also found that the top candidate, named DN1, was able to clear a methicillin-resistant S. aureus (MRSA) skin infection in a mouse model. These molecules also appear to interfere with bacterial cell membranes, but with broader effects not limited to interaction with one specific protein.

Phare Bio, a nonprofit that is also part of the Antibiotics-AI Project, is now working on further modifying NG1 and DN1 to make them suitable for additional testing.

“In a collaboration with Phare Bio, we are exploring analogs, as well as working on advancing the best candidates preclinically, through medicinal chemistry work,” Collins says. “We are also excited about applying the platforms that Aarti and the team have developed toward other bacterial pathogens of interest, notably Mycobacterium tuberculosis and Pseudomonas aeruginosa.”

The research was funded, in part, by the U.S. Defense Threat Reduction Agency, the National Institutes of Health, the Audacious Project, Flu Lab, the Sea Grape Foundation, Rosamund Zander and Hansjorg Wyss for the Wyss Foundation, and an anonymous donor.

LLM Coding Integrity Breach

Schneier on Security - Thu, 08/14/2025 - 7:08am

Here’s an interesting story about a failure being introduced by LLM-written code. Specifically, the LLM was doing some code refactoring, and when it moved a chunk of code from one file to another it changed a “break” to a “continue.” That turned an error logging statement into an infinite loop, which crashed the system.

This is an integrity failure. Specifically, it’s a failure of processing integrity. And while we can think of particular patches that alleviate this exact failure, the larger problem is much harder to solve.
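A minimal sketch (not the code from the incident) of how that one-keyword change plays out — in a loop that advances its index manually, `break` after logging an error is safe, while `continue` never reaches the increment and spins forever on the same bad entry:

```python
def process(entries):
    """Double each entry; log and stop at the first unprocessable one."""
    results = []
    i = 0
    while i < len(entries):
        entry = entries[i]
        if entry is None:      # unprocessable entry: log and stop
            print(f"error: bad entry at index {i}")
            break              # original behavior: exit the loop
            # A refactor that swaps "break" for "continue" here jumps
            # back to the loop test without advancing i, so the same
            # bad entry is logged forever: an infinite loop.
        results.append(entry * 2)
        i += 1
    return results
```

The two keywords are a one-token edit apart, which is exactly why a refactoring LLM moving code between files can introduce the swap without any syntax error or obvious diff noise.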

Davi Ottenheimer ...

In sudden shift, American emissions rise as China’s falls

ClimateWire News - Thu, 08/14/2025 - 6:33am
America swapped places with China as coal sees a mini-revival in the U.S. In China, renewable energy is surging.

Industry shows tension over EPA plan to kill climate rule

ClimateWire News - Thu, 08/14/2025 - 6:33am
Electric utilities urged the agency to preserve its ability to regulate power plants for greenhouse gases.

Rubio threatens to retaliate against countries that support shipping carbon tax

ClimateWire News - Thu, 08/14/2025 - 6:32am
The move comes two months before 108 nations will vote on the emissions fee.

Federal government could regulate voluntary carbon market, GAO says

ClimateWire News - Thu, 08/14/2025 - 6:31am
A report suggests ways agencies could oversee a system that lets polluters fund climate projects and get credit for offsetting their own emissions.

NY Climate Action Council members call for delaying renewable targets

ClimateWire News - Thu, 08/14/2025 - 6:29am
Two panel members have raised concerns about climate plans and want the Public Service Commission to hold a hearing to defer key deadlines.

Truck makers vow to shun California emission deals after FTC probe

ClimateWire News - Thu, 08/14/2025 - 6:29am
The Federal Trade Commission said it had dropped an antitrust probe into the state's Clean Truck Partnership.

Fires kill at least 3, displace thousands across southern Europe

ClimateWire News - Thu, 08/14/2025 - 6:28am
Authorities have cited multiple causes for the massive wildfires, including careless farming practices, improperly maintained power cables and lightning storms.

Heavy rain pounds South Korea’s capital region, leaving 1 person missing

ClimateWire News - Thu, 08/14/2025 - 6:27am
More than 7.8 inches of rain fell in parts of Seoul and nearby cities, where residents salvaged belongings and used plastic containers to bail water from properties damaged by flash floods.

Publisher Correction: Consequential differences in satellite-era sea surface temperature trends across datasets

Nature Climate Change - Thu, 08/14/2025 - 12:00am

Nature Climate Change, Published online: 14 August 2025; doi:10.1038/s41558-025-02422-x

Publisher Correction: Consequential differences in satellite-era sea surface temperature trends across datasets

Streetscapes and heat tolerance

Nature Climate Change - Thu, 08/14/2025 - 12:00am

Nature Climate Change, Published online: 14 August 2025; doi:10.1038/s41558-025-02416-9

During hot weather, dense urban areas are often not conducive to outdoor recreation. However, pedestrian tolerance to heat can be increased by almost 2 °C through more climate-sensitive streetscape design.

A new way to test how well AI systems classify text

MIT Latest News - Wed, 08/13/2025 - 3:00pm

Is this movie review a rave or a pan? Is this news story about business or technology? Is this online chatbot conversation veering off into giving financial advice? Is this online medical information site giving out misinformation?

These kinds of automated conversations, whether they involve seeking a movie or restaurant review or getting information about your bank account or health records, are becoming increasingly prevalent. More than ever, such evaluations are being made by highly sophisticated algorithms, known as text classifiers, rather than by human beings. But how can we tell how accurate these classifications really are?

Now, a team at MIT’s Laboratory for Information and Decision Systems (LIDS) has come up with an innovative approach to not only measure how well these classifiers are doing their job, but then go one step further and show how to make them more accurate.

The new evaluation and remediation software was developed by Kalyan Veeramachaneni, a principal research scientist at LIDS, his students Lei Xu and Sarah Alnegheimish, and two others. The software package is being made freely available for download by anyone who wants to use it.

A standard method for testing these classification systems is to create what are known as synthetic examples — sentences that closely resemble ones that have already been classified. For example, researchers might take a sentence that has already been tagged by a classifier program as being a rave review, and see if changing a word or a few words while retaining the same meaning could fool the classifier into deeming it a pan. Or a sentence that was determined to be misinformation might get misclassified as accurate. Sentences that fool a classifier in this way are known as adversarial examples.

People have tried various ways to find the vulnerabilities in these classifiers, Veeramachaneni says. But existing methods of finding these vulnerabilities have a hard time with this task and miss many examples that they should catch, he says.

Increasingly, companies are trying to use such evaluation tools in real time, monitoring the output of chatbots used for various purposes to try to make sure they are not putting out improper responses. For example, a bank might use a chatbot to respond to routine customer queries such as checking account balances or applying for a credit card, but it wants to ensure that its responses could never be interpreted as financial advice, which could expose the company to liability. “Before showing the chatbot’s response to the end user, they want to use the text classifier to detect whether it’s giving financial advice or not,” Veeramachaneni says. But then it’s important to test that classifier to see how reliable its evaluations are.

“These chatbots, or summarization engines or whatnot are being set up across the board,” he says, to deal with external customers and within an organization as well, for example providing information about HR issues. It’s important to put these text classifiers into the loop to detect things that they are not supposed to say, and filter those out before the output gets transmitted to the user.

That’s where the use of adversarial examples comes in — those sentences that have already been classified but then produce a different response when they are slightly modified while retaining the same meaning. How can people confirm that the meaning is the same? By using another large language model (LLM) that interprets and compares meanings. So, if the LLM says the two sentences mean the same thing, but the classifier labels them differently, “that is a sentence that is adversarial — it can fool the classifier,” Veeramachaneni says. And when the researchers examined these adversarial sentences, “we found that most of the time, this was just a one-word change,” although the people using LLMs to generate these alternate sentences often didn’t realize that.

Further investigation, using LLMs to analyze many thousands of examples, showed that certain specific words had an outsized influence in changing the classifications, and therefore the testing of a classifier’s accuracy could focus on this small subset of words that seem to make the most difference. They found that one-tenth of 1 percent of all the 30,000 words in the system’s vocabulary could account for almost half of all these reversals of classification, in some specific applications.
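The single-word search described above can be illustrated with a toy sketch. This is not the SP-Attack code: the keyword classifier and synonym table below are hypothetical stand-ins, and the real pipeline uses a trained classifier plus an LLM to confirm that each swap preserves meaning.

```python
# Toy sketch of single-word adversarial search (not SP-Attack itself).
UPBEAT = {"great", "wonderful", "superb"}

def toy_classifier(sentence):
    """Naive stand-in classifier: 'positive' if any upbeat keyword appears."""
    return "positive" if any(w in UPBEAT for w in sentence.split()) else "negative"

def single_word_attacks(sentence, synonyms):
    """Return one-word rewrites that flip the toy classifier's label."""
    original = toy_classifier(sentence)
    words = sentence.split()
    flips = []
    for i, word in enumerate(words):
        for alt in synonyms.get(word, []):
            candidate = " ".join(words[:i] + [alt] + words[i + 1:])
            if toy_classifier(candidate) != original:
                flips.append(candidate)
    return flips
```

Here `single_word_attacks("a great movie", {"great": ["fine"]})` finds `"a fine movie"` — near-identical meaning to a human reader, but a different label from the classifier. Ranking which words to try first, rather than exhausting every substitution, is what makes the search tractable at scale.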

Lei Xu PhD ’23, a recent graduate from LIDS who performed much of the analysis as part of his thesis work, “used a lot of interesting estimation techniques to figure out what are the most powerful words that can change the overall classification, that can fool the classifier,” Veeramachaneni says. The goal is to make it possible to do much more narrowly targeted searches, rather than combing through all possible word substitutions, thus making the computational task of generating adversarial examples much more manageable. “He’s using large language models, interestingly enough, as a way to understand the power of a single word.”

Then, also using LLMs, he searches for other words that are closely related to these powerful words, and so on, allowing for an overall ranking of words according to their influence on the outcomes. Once these adversarial sentences have been found, they can be used in turn to retrain the classifier to take them into account, increasing the robustness of the classifier against those mistakes.

Making classifiers more accurate may not sound like a big deal if it’s just a matter of classifying news articles into categories, or deciding whether reviews of anything from movies to restaurants are positive or negative. But increasingly, classifiers are being used in settings where the outcomes really do matter, whether preventing the inadvertent release of sensitive medical, financial, or security information, or helping to guide important research, such as into properties of chemical compounds or the folding of proteins for biomedical applications, or in identifying and blocking hate speech or known misinformation.

As a result of this research, the team introduced a new metric, which they call p, which provides a measure of how robust a given classifier is against single-word attacks. And because of the importance of such misclassifications, the research team has made its products available as open access for anyone to use. The package consists of two components: SP-Attack, which generates adversarial sentences to test classifiers in any particular application, and SP-Defense, which aims to improve the robustness of the classifier by generating and using adversarial sentences to retrain the model.

In some tests, where competing methods of testing classifier outputs allowed a 66 percent success rate by adversarial attacks, this team’s system cut that attack success rate almost in half, to 33.7 percent. In other applications, the improvement was as little as a 2 percent difference, but even that can be quite important, Veeramachaneni says, since these systems are being used for so many billions of interactions that even a small percentage can affect millions of transactions.

The team’s results were published on July 7 in the journal Expert Systems in a paper by Xu, Veeramachaneni, and Alnegheimish of LIDS, along with Laure Berti-Equille at IRD in Marseille, France, and Alfredo Cuesta-Infante at the Universidad Rey Juan Carlos, in Spain. 

MIT gears up to transform manufacturing

MIT Latest News - Wed, 08/13/2025 - 3:00pm

“Manufacturing is the engine of society, and it is the backbone of robust, resilient economies,” says John Hart, head of MIT’s Department of Mechanical Engineering (MechE) and faculty co-director of the MIT Initiative for New Manufacturing (INM). “With manufacturing a lively topic in today’s news, there’s a renewed appreciation and understanding of the importance of manufacturing to innovation, to economic and national security, and to daily lives.”

Launched this May, INM will “help create a transformation of manufacturing through new technology, through development of talent, and through an understanding of how to scale manufacturing in a way that imparts higher productivity and resilience, drives adoption of new technologies, and creates good jobs,” Hart says.

INM is one of MIT’s strategic initiatives and builds on the successful three-year-old Manufacturing@MIT program. “It’s a recognition by MIT that manufacturing is an Institute-wide theme and an Institute-wide priority, and that manufacturing connects faculty and students across campus,” says Hart. Alongside Hart, INM’s faculty co-directors are Institute Professor Suzanne Berger and Chris Love, professor of chemical engineering.

The initiative is pursuing four main themes: reimagining manufacturing technologies and systems, elevating the productivity and human experience of manufacturing, scaling up new manufacturing, and transforming the manufacturing base.

Breaking manufacturing barriers for corporations

Amgen, Autodesk, Flex, GE Vernova, PTC, Sanofi, and Siemens are founding members of INM’s industry consortium. These industry partners will work closely with MIT faculty, researchers, and students across many aspects of manufacturing-related research, both in broad-scale initiatives and in particular areas of shared interests. Membership requires a minimum three-year commitment of $500,000 a year to manufacturing-related activities at MIT, including the INM membership fee of $275,000 per year, which supports several core activities that engage the industry members.

One major thrust for INM industry collaboration is the deployment and adoption of AI and automation in manufacturing. This effort will include seed research projects at MIT, collaborative case studies, and shared strategy development.

INM also offers companies participation in the MIT-wide New Manufacturing Research effort, which is studying the trajectories of specific manufacturing industries and examining cross-cutting themes such as technology and financing.

Additionally, INM will concentrate on education for all professions in manufacturing, with alliances bringing together corporations, community colleges, government agencies, and other partners. “We'll scale our curriculum to broader audiences, from aspiring manufacturing workers and aspiring production line supervisors all the way up to engineers and executives,” says Hart.

In workforce training, INM will collaborate with companies broadly to help understand the challenges and frame its overall workforce agenda, and with individual firms on specific challenges, such as acquiring suitably prepared employees for a new factory.

Importantly, industry partners will also engage directly with students. Founding member Flex, for instance, hosted MIT researchers and students at the Flex Institute of Technology in Sorocaba, Brazil, developing new solutions for electronics manufacturing.

“History shows that you need to innovate in manufacturing alongside the innovation in products,” Hart comments. “At MIT, as more students take classes in manufacturing, they’ll think more about key manufacturing issues as they decide what research problems they want to solve, or what choices they make as they prototype their devices. The same is true for industry — companies that operate at the frontier of manufacturing, whether through internal capabilities or their supply chains, are positioned to be on the frontier of product innovation and overall growth.”

“We’ll have an opportunity to bring manufacturing upstream to the early stage of research, designing new processes and new devices with scalability in mind,” he says.

Additionally, MIT expects to open new manufacturing-related labs and to further broaden cooperation with industry at existing shared facilities, such as MIT.nano. Hart says that facilities will also invite tighter collaborations with corporations — not just providing advanced equipment, but working jointly on, say, new technologies for weaving textiles, or speeding up battery manufacturing.

Homing in on the United States

INM is a global project that brings a particular focus on the United States, which remains the world’s second-largest manufacturing economy, but has suffered a significant decline in manufacturing employment and innovation.

One key to reversing this trend and reinvigorating the U.S. manufacturing base is advocacy for manufacturing’s critical role in society and the career opportunities it offers.

“No one really disputes the importance of manufacturing,” Hart says. “But we need to elevate interest in manufacturing as a rewarding career, from the production workers to manufacturing engineers and leaders, through advocacy, education programs, and buy-in from industry, government, and academia.”

MIT is in a unique position to convene industry, academic, and government stakeholders in manufacturing to work together on this vital issue, he points out.

Moreover, in times of radical and rapid changes in manufacturing, “we need to focus on deploying new technologies into factories and supply chains,” Hart says. “Technology is not all of the solution, but for the U.S. to expand our manufacturing base, we need to do it with technology as a key enabler, embracing companies of all sizes, including small and medium enterprises.”

“As AI becomes more capable, and automation becomes more flexible and more available, these are key building blocks upon which you can address manufacturing challenges,” he says. “AI and automation offer new accelerated ways to develop, deploy, and monitor production processes, which present a huge opportunity and, in some cases, a necessity.”

“While manufacturing is always a combination of old technology, new technology, established practice, and new ways of thinking, digital technology gives manufacturers an opportunity to leapfrog competitors,” Hart says. “That’s very, very powerful for the U.S. and any company, or country, that aims to create differentiated capabilities.”

Fortunately, in recent years, investors have increasingly bought into new manufacturing in the United States. “They see the opportunity to re-industrialize, to build the factories and production systems of the future,” Hart says.

“That said, building new manufacturing is capital-intensive, and takes time,” he adds. “So that’s another area where it’s important to convene stakeholders and to think about how startups and growth-stage companies build their capital portfolios, how large industry can support an ecosystem of small businesses and young companies, and how to develop talent to support those growing companies.”

All these concerns and opportunities in the manufacturing ecosystem play to MIT’s strengths. “MIT’s DNA of cross-disciplinary collaboration and working with industry can let us create a lot of impact,” Hart emphasizes. “We can understand the practical challenges. We can also explore breakthrough ideas in research and cultivate successful outcomes, all the way to new companies and partnerships. Sometimes those are seen as disparate approaches, but we like to bring them together.”

The art and science of being an MIT teaching assistant

MIT Latest News - Wed, 08/13/2025 - 3:00pm

“It’s probably the hardest thing I’ve ever done at MIT,” says Haley Nakamura, a second-year MEng student in the MIT Department of Electrical Engineering and Computer Science (EECS). She’s not reflecting on a class, final exam, or research paper. Nakamura is talking about the experience of being a teaching assistant (TA). “It’s really an art form, in that there is no formula for being a good teacher. It’s a skill, and something you have to continuously work at and adapt to different people.”

Nakamura, like approximately 16 percent of her EECS MEng peers, balances her own coursework with teaching responsibilities. The TA role is complex, nuanced, and at MIT, can involve much more planning and logistics than you might imagine. Nakamura works on a central computer science (CS) course, 6.3900 (Introduction to Machine Learning), which registers around 400-500 students per semester. For that enrollment, the course requires eight instructors at the lecturer/professor level; 15 TAs, between the undergraduate and graduate level; and about 50 lab assistants (LAs). Students are split across eight sections corresponding to each senior instructor, with a group of TAs and LAs for each section of 60-70 students.

To keep everyone moving forward at the same pace, coordination and organization are key. “A lot of the reason I got my initial TA-ship was because I was pretty organized,” Nakamura explains. “Everyone here at MIT can be so busy that it can be difficult to be on top of things, and students will be the first to point out logistical confusion and inconsistencies. If they’re worried about some quirk on the website, or wondering how their grades are being calculated, those things can prevent them from focusing on content.” 

Nakamura's organizational skills made her a good candidate to spot and deal with potential wrinkles before they derailed a course section. “When I joined the course, we wanted someone on the TA side to be more specifically responsible for underlying administrative tasks, so I became the first head TA for the course. Since then, we’ve built that role up more and more. There is now a head TA, a head undergraduate TA, and section leads working on internal documentation such as instructions for how to improve content and how to manage office hours.” The result of this administrative work is consistency across sections and semesters.

The other side of a TA-ship is, of course, teaching. “I was eager to engage with students in a meaningful way,” says Soroush Araei, a sixth-year graduate student who had already fulfilled the teaching requirement for his degree in electrical engineering, but who jumped at the chance to teach alongside his PhD advisor. “I enjoy teaching, and have always found that explaining concepts to others deepens my own understanding.” He was recently awarded the MIT School of Engineering’s 2025 Graduate Student Teaching and Mentoring Award, which honors “a graduate student in the School of Engineering who has demonstrated extraordinary teaching and mentoring as a teaching or research assistant.” Araei’s dedication comes at the price of sleep. “Juggling my own research with my TA duties was no small feat. I often found myself in the lab for long hours, helping students troubleshoot their circuits. While their design simulations looked perfect, the circuits they implemented on protoboards didn’t always perform as expected. I had to dive deep into the issues alongside the students, which often required considerable time and effort.”

The rewards for Araei’s work are often intrinsic. “Teaching has shown me that there are always deeper layers to understanding. There are concepts I thought I had mastered, but I realized gaps in my own knowledge when trying to explain them,” he says. Another challenge: the variety of background knowledge between students in a single class. “Some had never encountered transistors, while others had tape-out experience. Designing problem sets and selecting questions for office hours required careful planning to keep all students engaged.” For Araei, some of the best moments have come during office hours. “Witnessing the ‘aha’ moment on a student’s face when a complex concept finally clicked was incredibly rewarding.”

The pursuit of the “aha” moment is a common thread between TAs. “I still struggle with the feeling that you’re responsible for someone’s understanding in a given topic, and, if you’re not doing a good job, that could affect that person for the rest of their life,” says Nakamura. “But the flip side of that moment of confusion is when someone has the ‘aha!’ moment as you’re talking to them, when you’re able to explain something that wasn’t conveyed in the other materials. It was your help that broke through and gave understanding. And that reward really overruns the fear of causing confusion.”

Hope Dargan ’21, MEng ’23, a second-year PhD student in EECS, uses her role as a graduate instructor to try to reach students who may not fit into the stereotype of the scientist. She started her career at MIT planning to major in CS and become a software engineer, but a missionary trip to Sweden in 2016-17 (when refugees from the Syrian civil war were resettling in the region) sparked a broader interest in both the Middle East and in how groups of people contextualized their own narratives. When Dargan returned to MIT, she took on a history degree, writing her thesis on the experiences of queer Mormon women. Additionally, she taught for MEET (the Middle East Entrepreneurs of Tomorrow), an educational initiative for Israeli and Palestinian high school students. “I realized I loved teaching, and this experience set me on a trajectory to teaching as a career.” 

Dargan gained her teaching license as an undergrad through the MIT Scheller Teacher Education Program (STEP), then joined the MEng program, in which she designed an educational intervention for students who were struggling in class 6.101 (Fundamentals of Programming). The next step was a PhD. “Teaching is so context-dependent,” says Dargan, who was awarded the Goodwin Medal for her teaching efforts in 2023. “When I taught students for MEET, it was very different from when I was teaching eighth graders at Josiah Quincy Upper School for my teaching license, and very different now when I teach students in 6.101, versus when I teach the LGO [Leaders for Global Operations] students Python in the summers. Each student has their own unique perspective on what’s motivating them, how they learn, and what they connect to … So even if I’ve taught the material for five years (as I have for 6.101, because I was an LA, then a TA, and now an instructor), improving my teaching is always challenging. Getting better at adapting my teaching to the context of the students and their stories, which are ever-evolving, is always interesting.”

Although Dargan considers teaching one of her greatest passions, she is clear-eyed about the cost of the profession. “I think the things that we’re passionate about tell us a lot about ourselves, both our strengths and our weaknesses, and teaching has taught me a lot about my weaknesses,” she says. “Teaching is a tough career, because it tends to take people who care a lot and are perfectionists, and it can lead to a lot of burnout.”

Dargan's students have also expressed enthusiasm and gratitude for her work. “Hope is objectively the most helpful instructor I’ve ever had,” said one anonymous reviewer. Another wrote, “I never felt judged when I asked her questions, and she was great at guiding me through problems by asking motivating questions … I truly felt like she cared about me as a student and person.” Dargan herself is modest about her role, saying, “For me, the trade-off between teaching and research is that teaching has an immediate day-to-day impact, while research has this unknown potential for long-term impact.” 

With the responsibility to instruct an ever-growing percentage of the Institute’s students, the Department of Electrical Engineering and Computer Science relies heavily on dedicated and passionate students like Nakamura, Araei, and Dargan. As their caring and humane influence ripples outward through thousands of new electrical engineers and computer scientists, the day-to-day impact of their work is clear; but the long-term impact may be greater than any of them know.

🫥 Spotify Face Scans Are Just the Beginning | EFFector 37.10

EFF: Updates - Wed, 08/13/2025 - 2:43pm

Catching up on your backlog of digital rights news has never been easier! EFF has a one-stop-shop to keep you up to date on the latest in the fight against censorship and surveillance—our EFFector newsletter.

This time we're covering an act of government intimidation in Florida, where the state subpoenaed surveillance video from a venue that hosted an LGBTQ+ pride event; calling out data brokers in California for failing to respond to requests for personal data, even though responses are required by state law; and explaining why Canada's Bill C-2 would open the floodgates for U.S. surveillance.

Don't forget to also check out our audio companion to EFFector! We're interviewing staff about some of the important work that they're doing. This time, EFF Senior Speech and Privacy Activist Paige Collings covers the harms of age verification measures that are being passed across the globe. Listen now on YouTube or the Internet Archive.


Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

AI Applications in Cybersecurity

Schneier on Security - Wed, 08/13/2025 - 12:28pm

There is a really great series of online events highlighting cool uses of AI in cybersecurity, titled Prompt||GTFO. Videos from the first three events are online. And here’s where to register to attend, or participate, in the fourth.

Some really great stuff here.
