Feed aggregator

On Its 30th Birthday, Section 230 Remains The Lynchpin For Users’ Speech

EFF: Updates - Mon, 02/09/2026 - 1:53pm

For thirty years, internet users have benefited from a key federal law that allows everyone to express themselves, find community, organize politically, and participate in society. Section 230, which protects internet users’ speech by protecting the online intermediaries we rely on, is the legal support that sustains the internet as we know it.

Yet as Section 230 turns 30 this week, there are bipartisan proposals in Congress to either repeal or sunset the law. These proposals seize upon legitimate concerns with the harmful and anti-competitive practices of the largest tech companies, but then misdirect that anger toward Section 230.

But rolling back or eliminating Section 230 will not stop invasive corporate surveillance that harms all internet users. Killing Section 230 won’t end the dominance of the current handful of large tech companies—it would cement their monopoly power.

The current proposals also ignore a crucial question: what legal standard should replace Section 230? The bills provide no answer, refusing to grapple with the tradeoffs inherent in making online intermediaries liable for users’ speech.

This glaring omission shows what these proposals really are: grievances masquerading as legislation, not serious policy. That is especially true because the speech problems with alternatives to Section 230’s immunity are readily apparent, both in the U.S. and around the world. Experience shows that those alternative systems result in more censorship of internet users’ lawful speech.

Let’s be clear: EFF defends Section 230 because it is the best available system to protect users’ speech online. By immunizing intermediaries for their users’ speech, Section 230 benefits users. Services can distribute our speech without filters, pre-clearance, or the threat of dubious takedown requests. Section 230 also directly protects internet users when they distribute other people’s speech online, such as when they reshare another user’s post or host a comment section on their blog.

It was the danger of losing the internet as a forum for diverse political discourse and culture that led to the law in 1996. Congress created Section 230’s limited civil immunity  because it recognized that promoting more user speech outweighed potential harms. Congress decided that when harmful speech occurs, it’s the speaker that should be held responsible—not the service that hosts the speech. The law also protects social platforms when they remove posts that are obscene or violate the services’ own standards. And Section 230 has limits: it does not immunize services if they violate federal criminal laws.

Section 230 Alternatives Would Protect Less Speech

With so much debate around the downsides of Section 230, it’s worth considering: What are some of the alternatives to immunity, and how would they shape the internet?

The least protective legal regime for online speech would be strict liability. Here, intermediaries always would be liable for their users’ speech—regardless of whether they contributed to the harm, or even knew about the harmful speech. It would likely end the widespread availability and openness of social media and web hosting services we’re used to. Instead, services would not let users speak without vetting the content first, via upload filters or other means. Small intermediaries with niche communities may simply disappear under the weight of such heavy liability.

Another alternative: Imposing legal duties on intermediaries, such as requiring that they act “reasonably” to limit harmful user content. This would likely result in platforms monitoring users’ speech before distributing it, and being extremely cautious about what they allow users to say. That inevitably would lead to the removal of lawful speech—probably on a large scale. Intermediaries would not be willing to defend their users’ speech in court, even if it is entirely lawful. In a world where any service could be easily sued over user speech, only the biggest services would survive. They’re the ones with the legal and technical resources to weather the flood of lawsuits.

Another option is a notice-and-takedown regime, like the one that exists under the Digital Millennium Copyright Act. That, too, would result in takedowns of legitimate speech. And there’s no doubt such a system would be abused. EFF has documented how the DMCA leads to widespread removal of lawful speech (https://www.eff.org/takedowns) based on frivolous copyright infringement claims. Replacing Section 230 with a takedown system would invite similar behavior, and powerful figures and government officials would use it to silence their critics.

The closest alternative to Section 230’s immunity provides protections from liability until an impartial court has issued a full and final ruling that user-generated content is illegal, and ordered that it be removed. These systems ensure that intermediaries will not have to cave to frivolous claims. But they still leave open the potential for censorship because intermediaries are unlikely to fight every lawsuit that seeks to remove lawful speech. The cost of vindicating lawful speech in court may be too high for intermediaries to handle at scale.

By contrast, immunity takes the variable of whether an intermediary will stand up for their users’ speech out of the equation. That is why Section 230 maximizes the ability for users to speak online.

In some narrow situations, Section 230 may leave victims without a legal remedy. Proposals aimed at those gaps should be considered, though lawmakers should pay careful attention that in vindicating victims, they do not broadly censor users’ speech. But those legitimate concerns are not the criticisms that Congress is levying against Section 230.

EFF will continue to fight for Section 230, as it remains the best available system to protect everyone’s ability to speak online.

RIP Dave Farber, EFF Board Member and Friend

EFF: Updates - Mon, 02/09/2026 - 1:48pm

We are sad to report the passing of longtime EFF Board member Dave Farber. Dave was 91 and had lived in Tokyo since age 83, where he was the Distinguished Professor at Keio University and Co-Director of the Keio Cyber Civilization Research Center (CCRC). Known as the Grandfather of the Internet, Dave made countless contributions to the internet, both directly and through his support for generations of students.

Dave was the longest-serving EFF Board member, having joined in the early 1990s, before the creation of the World Wide Web or the widespread adoption of the internet.  Throughout the growth of the internet and the corresponding growth of EFF, Dave remained a consistent, thoughtful, and steady presence on our Board.  Dave always gave us credibility as well as ballast.  He seemed to know and be respected by everyone who had helped build the internet, having worked with or mentored too many of them to count.  He also had an encyclopedic knowledge of the internet's technical history. 

From the beginning, Dave saw both the promise and the danger to human rights that would come with the spread of the internet around the world. He committed to helping make sure that the rights and liberties of users and developers, especially the open source community, were protected. He never wavered in that commitment.  Ever the teacher, Dave was also a clear explainer of internet technologies and basically unflappable.  

Dave also managed the Interesting People email list, which provided news and connection for so many internet pioneers and served as a model for how people from disparate corners of the world could engage in a rolling conversation about all things digital. His role as Chief Technologist at the U.S. Federal Communications Commission from 2000 to 2001 gave him a strong perspective on the ways that government could help or hinder civil liberties in the digital world.

We will miss his calm, thoughtful voice, both inside EFF and out in the world. May his memory be a blessing.  

A quick stretch switches this polymer’s capacity to transport heat

MIT Latest News - Mon, 02/09/2026 - 1:00pm

Most materials have an inherent capacity to handle heat. Plastic, for instance, is typically a poor thermal conductor, whereas materials like marble move heat more efficiently. If you were to place one hand on a marble countertop and the other on a plastic cutting board, the marble would conduct more heat away from your hand, creating a colder sensation compared to the plastic.

Typically, a material’s thermal conductivity cannot be changed without re-manufacturing it. But MIT engineers have now found that a relatively common material can switch its thermal conductivity. Simply stretching the material quickly dials up its heat conductance, from a baseline similar to that of plastic to a higher capacity closer to that of marble. When the material springs back to its unstretched form, it returns to its plastic-like properties.

The thermally reversible material is an olefin block copolymer — a soft and flexible polymer that is used in a wide range of commercial products. The team found that when the material is quickly stretched, its ability to conduct heat more than doubles. This transition occurs within just 0.22 seconds, which is the fastest thermal switching that has been observed in any material.

This material could be used to engineer systems that adapt to changing temperatures in real time. For instance, switchable fibers could be woven into apparel that normally retains heat. When stretched, the fabric would instantly conduct heat away from a person’s body to cool them down. Similar fibers could be built into laptops and infrastructure to keep devices and buildings from overheating. The researchers are working on further optimizing the polymer and on engineering new materials with similar properties.

“We need cheap and abundant materials that can quickly adapt to environmental temperature changes,” says Svetlana Boriskina, principal research scientist in MIT’s Department of Mechanical Engineering. “Now that we’ve seen this thermal switching, this changes the direction where we can look for and build new adaptive materials.”

Boriskina and her colleagues have published their results in a study appearing today in the journal Advanced Materials. The study’s co-authors include Duo Xu, Buxuan Li, You Lyu, and Vivian Santamaria-Garcia of MIT, and Yuan Zhu of Southern University of Science and Technology in Shenzhen, China.

Elastic chains

The key to the new phenomenon is that when the material is stretched, its microscopic structures align in ways that suddenly allow heat to travel through easily, increasing the material’s thermal conductivity. In its unstretched state, the same microstructures are tangled and bunched, effectively blocking heat’s path.

As it happens, Boriskina and her colleagues didn’t set out to find a heat-switching material. They were initially looking for more sustainable alternatives to spandex, which is a synthetic fabric made from petroleum-based plastics that is traditionally difficult to recycle. As a potential replacement, the team was investigating fibers made from a different polymer known as polyethylene.

“Once we started working with the material, we realized it had other properties that were more interesting than the fact that it was elastic,” Boriskina says. “What makes polyethylene unique is it has this backbone of carbon atoms arranged along a simple chain. And carbon is a very good conductor of heat.”

The microstructure of most polymer materials, including polyethylene, contains many carbon chains. However, these chains exist in a messy, spaghetti-like tangle known as an amorphous phase. Despite the fact that carbon is a good heat conductor, the disordered arrangement of chains typically impedes heat flow. Polyethylene and most other polymers, therefore, generally have low thermal conductivity.

In previous work, MIT Professor Gang Chen and his collaborators found ways to untangle the mess of carbon chains and push polyethylene to shift from a disordered amorphous state to a more aligned, crystalline phase. This transition effectively straightened the carbon chains, providing clear highways for heat to flow through and increasing the material’s thermal conductivity. In those experiments, however, the switch was permanent; once the material’s phase changed, it could not be reversed.

As Boriskina’s team explored polyethylene, they also considered other closely related materials, including olefin block copolymer (OBC). OBC is predominantly an amorphous material, made from highly tangled chains of carbon and hydrogen atoms. Scientists had therefore assumed that OBC would exhibit low thermal conductivity. If its conductance could be increased, it would likely be permanent, similar to polyethylene.

But when the team carried out experiments to test the elasticity of OBC, they found something quite different.

“As we stretched and released the material, we realized that its thermal conductivity was really high when it was stretched and lower when it was relaxed, over thousands of cycles,” says study co-author and MIT graduate student Duo Xu. “This switch was reversible, while the material stayed mostly amorphous. That was unexpected.”

A stretchy mess

The team then took a closer look at OBC, and how it might be changing as it was stretched. The researchers used a combination of X-ray and Raman spectroscopy to observe the material’s microscopic structure as they stretched and relaxed it repeatedly. They observed that, in its unstretched state, the material consists mainly of amorphous tangles of carbon chains, with just a few islands of ordered, crystalline domains scattered here and there. When stretched, the crystalline domains seemed to align and the amorphous tangles straightened out, similar to what Gang Chen observed in polyethylene.

However, rather than transitioning entirely into a crystalline phase, the straightened tangles stayed in their amorphous state. In this way, the team found that the tangles were able to switch back and forth, from straightened to bunched and back again, as the material was stretched and relaxed repeatedly.

“Our material is always in a mostly amorphous state; it never crystallizes under strain,” Xu notes. “So it leaves you this opportunity to go back and forth in thermal conductivity a thousand times. It’s very reversible.”

The team also found that this thermal switching happens extremely fast: The material’s thermal conductivity more than doubled within just 0.22 seconds of being stretched.

“The resulting difference in heat dissipation through this material is comparable to a tactile difference between touching a plastic cutting board versus a marble countertop,” Boriskina says.

She and her colleagues are now taking the results of their experiments and working them into models to see how they can tweak a material’s amorphous structure, to trigger an even bigger change when stretched.

“Our fibers can quickly react to dissipate heat, for electronics, fabrics, and building infrastructure,” Boriskina says. “If we could make further improvements to switch their thermal conductivity from that of plastic to something closer to that of diamond, it would have a huge industrial and societal impact.”

This research was supported, in part, by the U.S. Department of Energy, the Office of Naval Research Global via Tec de Monterrey, MIT Evergreen Graduate Innovation Fellowship, MathWorks MechE Graduate Fellowship, and the MIT-SUSTech Centers for Mechanical Engineering Research and Education, and carried out, in part, with the use of MIT.nano and ISN facilities.

Op-ed: Weakening Section 230 Would Chill Online Speech

EFF: Updates - Mon, 02/09/2026 - 11:19am

(This appeared as an op-ed published Friday, Feb. 6 in the Daily Journal, a California legal newspaper.)

Section 230, “the 26 words that created the internet,” was enacted 30 years ago this week. It was no rush-job—rather, it was the result of wise legislative deliberation and foresight, and it remains the best bulwark to protect free expression online.

The internet lets people everywhere connect, share ideas and advocate for change without needing immense resources or technical expertise. Our unprecedented ability to communicate online—on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive—is not an accident. In writing Section 230, Congress recognized that for free expression to thrive on the internet, it had to protect the services that power users’ speech. Section 230 does this by preventing most civil suits against online services that are based on what users say. The law also protects users who act like intermediaries when they, for example, forward an email, retweet another user or host a comment section on their blog.

The merits of immunity, both for internet users who rely on intermediaries (from ISPs to email providers to social media platforms) and for internet users who are themselves intermediaries, are readily apparent when compared with the alternatives.

One alternative would be to provide no protection at all for intermediaries, leaving them liable for anything and everything anyone says using their service. This legal risk would essentially require every intermediary to review and legally assess every word, sound or image before it’s published—an impossibility at scale, and a death knell for real-time user-generated content.

Another option: giving protection to intermediaries only if they exercise a specified duty of care, such as where an intermediary would be liable if they fail to act reasonably in publishing a user’s post. But negligence and other objective standards are almost always insufficient to protect freedom of expression because they introduce significant uncertainty into the process and create real chilling effects for intermediaries. That is, intermediaries will choose not to publish anything remotely provocative—even if it’s clearly protected speech—for fear of having to defend themselves in court, even if they are likely to ultimately prevail. Many Section 230 critics bemoan the fact that it prevented courts from developing a common law duty of care for online intermediaries. But the criticism rarely acknowledges the experience of common law courts around the world, few of which adopted an objective standard, and many of which adopted immunity or something very close to it.

Another alternative is a knowledge-based system in which an intermediary is liable only after being notified of the presence of harmful content and failing to remove it within a certain amount of time. This notice-and-takedown system invites tremendous abuse, as seen under the Digital Millennium Copyright Act’s approach: It’s too easy for someone to notify an intermediary that content is illegal or tortious simply to get something they dislike depublished. Rather than spending the time and money required to adequately review such claims, intermediaries would simply take the content down.

All these alternatives would lead to massive depublication in many, if not most, cases, not because the content deserves to be taken down, nor because the intermediaries want to do so, but because it’s not worth assessing the risk of liability or defending the user’s speech. No intermediary can be expected to champion someone else’s free speech at its own considerable expense.

Nor is the United States the only government to eschew “upload filtering,” the requirement that someone must review content before publication. European Union rules avoid this also, recognizing how costly and burdensome it is. Free societies recognize that this kind of pre-publication review will lead risk-averse platforms to nix anything that anyone anywhere could deem controversial, leading us to the most vanilla, anodyne internet imaginable.

The advent of artificial intelligence doesn’t change this. Perhaps there’s a tool that can detect a specific word or image, but no AI can make legal determinations or be prompted to identify all defamation or harassment. Human expression is simply too contextual for AI to vet; even if a mechanism could flag things for human review, the scale is so massive that such human review would still be overwhelmingly burdensome.

Congress’ purposeful choice of Section 230’s immunity is the best way to preserve the ability of millions of people in the U.S. to publish their thoughts, photos and jokes online, to blog and vlog, post, and send emails and messages. Each of those acts requires numerous layers of online services, all of which face potential liability without immunity.

This law isn’t a shield for “big tech.” Its ultimate beneficiaries are all of us who want to post things online without having to code the underlying services ourselves, and who want to read and watch the content that others create. If Congress eliminated Section 230 immunity, for example, we would be asking email providers and messaging platforms to read and legally assess everything a user writes before agreeing to send it.

For many critics of Section 230, the chilling effect is the point: They want a system that will discourage online services from publishing protected speech that some find undesirable. They want platforms to publish less than what they would otherwise choose to publish, even when that speech is protected and nonactionable.

When Section 230 was passed in 1996, about 40 million people used the internet worldwide; by 2025, estimates ranged from five billion to north of six billion. In 1996, there were fewer than 300,000 websites; by last year, estimates ranged up to 1.3 billion. There is no workforce and no technology that can police the enormity of everything that everyone says.

Internet intermediaries—whether social media platforms, email providers or users themselves—are protected by Section 230 so that speech can flourish online.

LLMs are Getting a Lot Better and Faster at Finding and Exploiting Zero-Days

Schneier on Security - Mon, 02/09/2026 - 7:04am

This is amazing:

Opus 4.6 is notably better at finding high-severity vulnerabilities than previous models and a sign of how quickly things are moving. Security teams have been automating vulnerability discovery for years, investing heavily in fuzzing infrastructure and custom harnesses to find bugs at scale. But what stood out in early testing is how quickly Opus 4.6 found vulnerabilities out of the box without task-specific tooling, custom scaffolding, or specialized prompting. Even more interesting is how it found them. Fuzzers work by throwing massive amounts of random inputs at code to see what breaks. Opus 4.6 reads and reasons about code the way a human researcher would—looking at past fixes to find similar bugs that weren’t addressed, spotting patterns that tend to cause problems, or understanding a piece of logic well enough to know exactly what input would break it. When we pointed Opus 4.6 at some of the most well-tested codebases (projects that have had fuzzers running against them for years, ...

Longtime Exxon lawyers retreat from oil company’s climate cases

ClimateWire News - Mon, 02/09/2026 - 6:54am
Attorneys from the law firm Paul, Weiss are no longer representing the oil company in at least four lawsuits that ask the fossil fuel industry to pay for climate impacts.

Climate science removed from judicial manual after GOP complaints

ClimateWire News - Mon, 02/09/2026 - 6:53am
Republican attorneys general argued the new chapter would put judges “firmly on one side” of climate lawsuits against the fossil fuel industry.

Oregon Democrats call for climate superfund

ClimateWire News - Mon, 02/09/2026 - 6:52am
Legislators say they need it to help pay for wildfires. Vermont and New York already have passed climate superfund laws.

Poll shows Democrats hold edge over Trump in energy cost battle

ClimateWire News - Mon, 02/09/2026 - 6:51am
Energy affordability is expected to play a role in the midterm elections this year.

Antarctica hit by first wildlife die-off due to avian flu

ClimateWire News - Mon, 02/09/2026 - 6:49am
A new study confirms the H5N1 virus was responsible for at least 46 skua deaths on the Antarctic peninsula in 2024.

Giant snails, tiny insects threaten the South’s rice, crawfish farms

ClimateWire News - Mon, 02/09/2026 - 6:48am
Much about these snails and insects is still a mystery, and researchers are trying to learn more about what’s fueling their spread.

More EV models offer deluxe backup power features for blackouts

ClimateWire News - Mon, 02/09/2026 - 6:48am
One in 5 electric vehicles purchased in the past quarter had so-called vehicle-to-home capabilities.

Shutdown of Kenya’s Koko biofuel firm wipes out clean cooking options

ClimateWire News - Mon, 02/09/2026 - 6:46am
For more than a decade, Koko Networks helped shift over 1.5 million Kenyan homes without access to public gas systems away from smoky charcoal stoves to bioethanol.

Big Japan emitters buy carbon credits ahead of compliance market

ClimateWire News - Mon, 02/09/2026 - 6:46am
Under proposed rules, polluters can use the voluntary credits to offset up to 10 percent of their emissions.

Study: Platforms that rank the latest LLMs can be unreliable

MIT Latest News - Mon, 02/09/2026 - 12:00am

A firm that wants to use a large language model (LLM) to summarize sales reports or triage customer inquiries can choose between hundreds of unique LLMs with dozens of model variations, each with slightly different performance.

To narrow down the choice, companies often rely on LLM ranking platforms, which gather user feedback on model interactions to rank the latest LLMs based on how they perform on certain tasks.

But MIT researchers found that a handful of user interactions can skew the results, leading someone to mistakenly believe one LLM is the ideal choice for a particular use case. Their study reveals that removing a tiny fraction of crowdsourced data can change which models are top-ranked.

They developed a fast method to test ranking platforms and determine whether they are susceptible to this problem. The evaluation technique identifies the individual votes most responsible for skewing the results so users can inspect these influential votes.

The researchers say this work underscores the need for more rigorous strategies to evaluate model rankings. While they didn’t focus on mitigation in this study, they provide suggestions that may improve the robustness of these platforms, such as gathering more detailed feedback to create the rankings.

The study also offers a word of warning to users who may rely on rankings when making decisions about LLMs that could have far-reaching and costly impacts on a business or organization.

“We were surprised that these ranking platforms were so sensitive to this problem. If it turns out the top-ranked LLM depends on only two or three pieces of user feedback out of tens of thousands, then one can’t assume the top-ranked LLM is going to be consistently outperforming all the other LLMs when it is deployed,” says Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS); a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society; an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author of this study.

She is joined on the paper by lead authors and EECS graduate students Jenny Huang and Yunyi Shen as well as Dennis Wei, a senior research scientist at IBM Research. The study will be presented at the International Conference on Learning Representations.

Dropping data

While there are many types of LLM ranking platforms, the most popular variations ask users to submit a query to two models and pick which LLM provides the better response.

The platforms aggregate the results of these matchups to produce rankings that show which LLM performed best on certain tasks, such as coding or visual understanding.

By choosing a top-performing LLM, a user likely expects that model’s top ranking to generalize, meaning it should outperform other models on their similar, but not identical, application with a set of new data.
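To make the aggregation concrete, here is a minimal sketch of how such pairwise votes could be turned into a leaderboard. It fits a Bradley-Terry-style strength score to matchup data; the model names and vote counts are invented, and real ranking platforms use their own, more elaborate pipelines, so treat this only as an illustration of the general idea.

    # Illustrative only: aggregate pairwise "which response was better?" votes
    # into a leaderboard with a simple Bradley-Terry fit (MM updates).
    # Model names and vote counts below are hypothetical.
    import numpy as np

    def bradley_terry(models, votes, iters=200):
        """votes: list of (winner, loser) pairs; returns {model: strength}."""
        idx = {m: i for i, m in enumerate(models)}
        wins = np.zeros((len(models), len(models)))
        for winner, loser in votes:
            wins[idx[winner], idx[loser]] += 1
        games = wins + wins.T              # total matchups between each pair
        p = np.ones(len(models))           # initial strengths
        for _ in range(iters):             # standard MM update for Bradley-Terry
            for i in range(len(models)):
                denom = sum(games[i, j] / (p[i] + p[j])
                            for j in range(len(models)) if j != i)
                if denom > 0:
                    p[i] = wins[i].sum() / denom
            p /= p.sum()                   # normalize for identifiability
        return dict(zip(models, p))

    models = ["model-a", "model-b", "model-c"]   # hypothetical LLMs
    votes = ([("model-a", "model-b")] * 60 + [("model-b", "model-a")] * 55 +
             [("model-a", "model-c")] * 40 + [("model-c", "model-b")] * 30)
    leaderboard = sorted(bradley_terry(models, votes).items(),
                         key=lambda kv: -kv[1])
    print(leaderboard)                           # highest strength = top-ranked

The study’s concern is that a leaderboard produced this way can hinge on a small handful of the individual (winner, loser) votes.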

The MIT researchers previously studied generalization in areas like statistics and economics. That work revealed certain cases where dropping a small percentage of data can change a model’s results, indicating that those studies’ conclusions might not hold beyond their narrow setting.

The researchers wanted to see if the same analysis could be applied to LLM ranking platforms.

“At the end of the day, a user wants to know whether they are choosing the best LLM. If only a few prompts are driving this ranking, that suggests the ranking might not be the end-all-be-all,” Broderick says.

But it would be impossible to test the data-dropping phenomenon manually. For instance, one ranking they evaluated had more than 57,000 votes. Testing a data drop of 0.1 percent means removing each possible subset of 57 votes out of the 57,000 (there are more than 10^194 such subsets) and then recalculating the ranking.
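For a sense of scale, that subset count is just a binomial coefficient, which can be checked numerically with a couple of lines:

    # Quick check of the combinatorics: the number of ways to drop 57 of
    # 57,000 votes is C(57000, 57), far too many subsets to enumerate.
    from math import lgamma, log

    def log10_choose(n, k):
        """Base-10 logarithm of the binomial coefficient C(n, k)."""
        return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(10)

    print(f"{log10_choose(57_000, 57):.1f}")  # ~194.5, i.e. more than 10^194 subsets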

Instead, the researchers developed an efficient approximation method, based on their prior work, and adapted it to fit LLM ranking systems.

“While we have theory to prove the approximation works under certain assumptions, the user doesn’t need to trust that. Our method tells the user the problematic data points at the end, so they can just drop those data points, re-run the analysis, and check to see if they get a change in the rankings,” she says.
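As a toy illustration of that final check (not the researchers’ approximation itself), the snippet below greedily removes the individual votes that most shrink the gap between the top two models, re-fits the ranking after each removal, and reports how many removals it takes to change the leader. It reuses the hypothetical bradley_terry(), models, and votes from the earlier sketch; the brute-force influence search here is exactly the expensive step that the paper’s fast approximation avoids at real-platform scale.

    # Toy sensitivity check: how many of the most influential votes must be
    # dropped before the top-ranked model changes? Assumes bradley_terry(),
    # models, and votes from the previous sketch are in scope.

    def top_two_gap(vote_list):
        """Return (strength gap between 1st and 2nd place, name of the leader)."""
        scores = bradley_terry(models, vote_list)
        ranked = sorted(scores, key=scores.get, reverse=True)
        return scores[ranked[0]] - scores[ranked[1]], ranked[0]

    def drops_needed_to_flip(vote_list, max_drops=3):
        _, original_leader = top_two_gap(vote_list)
        remaining = list(vote_list)
        for n_dropped in range(1, max_drops + 1):
            # Brute-force influence: remove whichever single vote narrows the
            # top-two gap the most, then re-rank.
            drop_i = min(range(len(remaining)),
                         key=lambda i: top_two_gap(remaining[:i] + remaining[i + 1:])[0])
            remaining.pop(drop_i)
            if top_two_gap(remaining)[1] != original_leader:
                return n_dropped          # this many removals flipped the leader
        return None                       # leader stable under max_drops removals

    print(drops_needed_to_flip(votes))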

Surprisingly sensitive

When the researchers applied their technique to popular ranking platforms, they were surprised to see how few data points they needed to drop to cause significant changes in the top LLMs. In one instance, removing just two votes out of more than 57,000, which is 0.0035 percent, changed which model is top-ranked.

A different ranking platform, which uses expert annotators and higher quality prompts, was more robust. Here, removing 83 out of 2,575 evaluations (about 3 percent) flipped the top models.

Their examination revealed that many influential votes may have been a result of user error. In some cases, it appeared there was a clear answer as to which LLM performed better, but the user chose the other model instead, Broderick says.

“We can never know what was in the user’s mind at that time, but maybe they mis-clicked or weren’t paying attention, or they honestly didn’t know which one was better. The big takeaway here is that you don’t want noise, user error, or some outlier determining which is the top-ranked LLM,” she adds.

The researchers suggest that gathering additional feedback from users, such as confidence levels in each vote, would provide richer information that could help mitigate this problem. Ranking platforms could also use human mediators to assess crowdsourced responses.

For the researchers’ part, they want to continue exploring generalization in other contexts while also developing better approximation methods that can capture more examples of non-robustness.

“Broderick and her students’ work shows how you can get valid estimates of the influence of specific data on downstream processes, despite the intractability of exhaustive calculations given the size of modern machine-learning models and datasets,” says Jessica Hullman, the Ginni Rometty Professor of Computer Science at Northwestern University, who was not involved with this work.  “The recent work provides a glimpse into the strong data dependencies in routinely applied — but also very fragile — methods for aggregating human preferences and using them to update a model. Seeing how few preferences could really change the behavior of a fine-tuned model could inspire more thoughtful methods for collecting these data.”

This research is funded, in part, by the Office of Naval Research, the MIT-IBM Watson AI Lab, the National Science Foundation, Amazon, and a CSAIL seed award.

How MIT’s 10th president shaped the Cold War

MIT Latest News - Mon, 02/09/2026 - 12:00am

Today, MIT plays a key role in maintaining U.S. competitiveness, technological leadership, and national defense — and much of the Institute’s work to support the nation’s standing in these areas can be traced back to 1953.

Two months after he took office that year, U.S. President Dwight Eisenhower received a startling report from the military: The USSR had successfully exploded a nuclear bomb nine months sooner than intelligence sources had predicted. The rising Communist power had also detonated a hydrogen bomb using development technology more sophisticated than that of the U.S. And lastly, there was evidence of a new Soviet bomber that rivaled the B-52 in size and range — and the aircraft was of an entirely original design from within the USSR. There was, the report concluded, a significant chance of a surprise nuclear attack on the United States.

Eisenhower’s understanding of national security was vast (he had led the Allies to victory in World War II and served as the first supreme commander of NATO), but the connections he’d made during his two-year stint as president of Columbia University would prove critical to navigating the emerging challenges of the Cold War. He sent his advisors in search of a plan for managing this threat, and he suggested they start with James Killian, then president of MIT.

Killian had an unlikely path to the presidency of MIT. “He was neither a scientist nor an engineer,” says David Mindell, the Dibner Professor of the History of Engineering and Manufacturing and a professor of aeronautics and astronautics at MIT. “But Killian turned out to be a truly gifted administrator.”

While he was serving as editor of MIT Technology Review (where he founded what became the MIT Press), Killian was tapped by then-president Karl Compton to join his staff. As the war effort ramped up on the MIT campus in the 1940s, Compton deputized Killian to lead the RadLab — a 4,000-person effort to develop and deploy the radar systems that proved decisive in the Allied victory.

Killian was named MIT’s 10th president in 1948. In 1951, he launched MIT Lincoln Laboratory, a federally funded research center where MIT and U.S. Air Force scientists and engineers collaborated on new air defense technologies to protect the nation against a nuclear attack.

Two years later, within weeks of Eisenhower’s 1953 request, Killian convened a group of leading scientists at MIT. The group proposed a three-part study: The U.S. needed to reassess its offensive capabilities, its continental defense, and its intelligence operations. Eisenhower agreed.

Killian mobilized 42 engineers and scientists from across the country into three panels matching the committee’s charge. Between September 1954 and February 1955, the panels held 307 meetings with every major defense and intelligence organization in the U.S. government. They had unrestricted access to every project, plan, and program involving national defense. The result, a 190-page report titled “Meeting the Threat of a Surprise Attack,” was delivered to Eisenhower’s desk on Feb. 14, 1955.

The Killian Report, as it came to be known, would go on to play a dramatic role in defining the frontiers of military technology, intelligence gathering, national security policy, and global affairs over the next several decades. Killian’s input would also have dramatic impacts on Eisenhower’s presidency and the relationship between the federal government and higher education.

Foreseeing an evolving competition

The Killian Report opens by anticipating four projected “periods” in the shifting balance of power between the U.S. and the Soviet Union.

In 1955, the U.S. had a decided offensive advantage over the USSR, but it was overly vulnerable to surprise attack. In 1956 and 1957, the U.S. would have an even larger offensive advantage and be only somewhat less vulnerable to surprise. By 1960, the U.S.’ offensive advantage would be narrower, but it would be in a better position to anticipate an attack. Within a decade, the report stated, the two nations would enter “Period IV” — during which “an attack by either side would result in mutual destruction … [a period] so fraught with danger to the U.S. that we should push all promising technological development so that we may stay in Periods II and III as long as possible.”

The report went on to make extensive, detailed recommendations — accelerated development of intercontinental ballistic missiles and high-energy aircraft fuels, expansion and increased ground security for “delivery system” facilities, increased cooperation with Canada and more studies about establishing monitoring stations on polar pack ice, and “studies directed toward better understanding of the radiological hazards that may result from the detonation of large numbers of nuclear weapons,” among others.

“Eisenhower really wanted to draw the perspectives of scientists and engineers into his decision-making,” says Mindell. “Generals and admirals tend to ask for more arms and more boots on the ground. The president didn’t want to be held captive by these views — and Killian’s report really delivered this for him.”

On the day it arrived, President Eisenhower circulated the Killian Report to the head of every department and agency in the federal government and asked them to comment on its recommendations. The Cold War arms race was on — and it would be between scientists and engineers in the United States and those in the Soviet Union.

An odd couple

The Killian Report made many recommendations based on “the correctness of the current national intelligence estimates” — even though “Eisenhower was frustrated with his whole intelligence apparatus,” says Will Hitchcock, the James Madison Professor of History at the University of Virginia and author of “The Age of Eisenhower.” “He felt it was still too much World War II ‘exploding-cigar’ stuff. There wasn’t enough work on advance warning, on seeing what’s over the hill. But that’s what Eisenhower really wanted to know.” The surprise attack on Pearl Harbor still lingered in the minds of many Americans, Hitchcock notes, and “that needed to be avoided.”

Killian needed an aggressive, innovative thinker to assess U.S. intelligence, so he turned to Edwin Land. The cofounder of Polaroid, Land was an astonishingly bold engineer and inventor. He also had military experience, having developed new ordnance targeting systems, aerial photography devices, and other photographic and visual surveillance technologies during World War II. Killian approached Land knowing their methods and work style were quite different. (When the offer to lead the intelligence panel was made, Land was in Hollywood advising filmmakers on the development of 3D movies; Land told Killian he had a personal rule that any committee he served on “must fit into a taxicab.”)

In fall 1954, Land and his five-person panel quickly confirmed Killian and Eisenhower’s suspicions: “We would go in and interview generals and admirals in charge of intelligence and come away worried,” Land reported to Killian later. “We were [young scientists] asking questions — and they couldn’t answer them.” Killian and Land realized this would set their report and its recommendations on a complicated path: While they needed to acknowledge and address the challenges of broadly upgrading intelligence activities, they also needed to make rapid progress on responding to the Soviet threat.

As work on the report progressed, Land and Killian held briefings with Eisenhower. They used these meetings to make two additional proposals — neither of which, President Eisenhower decided, would be spelled out in the final report for security reasons. The first was the development of missile-firing submarines, a long-term prospect that would take a decade to complete. (The technology developed for Polaris-class submarines, Mindell notes, transferred directly to the rockets that powered the Apollo program to the moon.)

The second proposal — to fast-track development of the U-2, a new high-altitude spy plane — could be accomplished within a year, Land told Eisenhower. The president agreed to both ideas, but he put a condition on the U-2 program. As Killian later wrote: “The president asked that it should be handled in an unconventional way so that it would not become entangled in the bureaucracy of the Defense Department or troubled by rivalries among the services.”

Powered by Land’s revolutionary imaging devices, the U-2 would become a critical tool in the U.S.’ ability to assess and understand the Soviet Union’s nuclear capacity. But the spy plane would also go on to have disastrous consequences for the peace process and for Eisenhower.

The aftermath(s)

The Killian Report has a very complex legacy, says Christopher Capozzola, the Elting Morison Professor of History. “There is a series of ironies about the whole undertaking,” he says. “For example, Eisenhower was trying to tamp down interservice rivalries by getting scientists to decide things. But within a couple of years those rivalries have all gotten worse.” Similarly, Capozzola notes, Eisenhower — who famously coined the phrase “military-industrial complex” and warned against it — amplified the militarization of scientific research “more than anyone else.”

Another especially painful irony emerged on May 1, 1960. Two weeks before a meeting between Eisenhower and Khrushchev in Paris to discuss how the U.S. and USSR could ease Cold War tensions and slow the arms race, a U-2 was shot down in Soviet airspace. After a public denial by the U.S. that the aircraft was being used for espionage, the Soviets produced the plane’s wreckage, cameras, and pilot — who admitted he was working for the CIA. The peace process, which had become the centerpiece of Eisenhower’s intended legacy, collapsed.

There were also some brighter outcomes of the Killian Report, Capozzola says. It marked a dramatic reset of the national government’s relationship with academic scientists and engineers — and with MIT specifically. “The report really greased the wheels between MIT scientists and Washington,” he notes. “Perhaps more than the report itself, the deep structures and relationships that Killian set up had implications for MIT and other research universities. They started to orient their missions toward the national interest,” he adds.

The report also cemented Eisenhower’s relationship with Killian. After the launch of Sputnik, which induced a broad public panic in the U.S. about Soviet scientific capabilities, the president called on Killian to guide the national response. Eisenhower later named Killian the first special assistant to the president for science and technology. In the years that followed, Killian would go on to help launch NASA, and MIT engineers would play a critical role in the Apollo mission that landed the first person on the moon. To this day, researchers at MIT and Lincoln Laboratory uphold this legacy of service, advancing knowledge in areas vital to national security, economic competitiveness, and quality of life for all Americans.

As Eisenhower’s special assistant, Killian met with him almost daily and became one of his most trusted advisors. “Killian could talk to the president, and Eisenhower really took his advice,” says Capozzola. “Not very many people can do that. The fact that Killian had that and used it was different.”

A key to their relationship, Capozzola notes, was Killian’s approach to his work. “He exemplified the notion that if you want to get something done, don’t take the credit. At no point did Killian think he was setting science policy. He was advising people on their best options, including decision-makers who would have to make very difficult decisions. That’s it.”

In 1977, after many tours of duty in Washington and his retirement from MIT, Killian summarized his experience working for Eisenhower in his memoir, “Sputnik, Scientists, and Eisenhower.” Killian said of his colleagues: “They were held together in close harmony not only by the challenge of the scientific and technical work they were asked to undertake but by their abiding sense of the opportunity they had to serve a president they admired and the country they loved. They entered the corridors of power in a moment of crisis and served there with a sense of privilege and of admiration for the integrity and high purpose of the White House.”

Mountains magnify mechanisms in climate change biology

Nature Climate Change - Mon, 02/09/2026 - 12:00am

Nature Climate Change, Published online: 09 February 2026; doi:10.1038/s41558-025-02549-x

Mountains, with their sharp climatic contrasts, are emblematic of climate-driven species movement and, ultimately, loss. Here, we argue that these same contrasts make mountains powerful natural laboratories for discovering the mechanisms that underlie biological change.

Preserving mountains

Nature Climate Change - Mon, 02/09/2026 - 12:00am

Nature Climate Change, Published online: 09 February 2026; doi:10.1038/s41558-026-02572-6

Disappearing glaciers and missing snow in mountain regions are some of the most immediate signs of global change today. In this issue, we focus on the broader changes in mountains and how they affect people living both within and far away from their peaks and valleys.

Melting glaciers as symbols of tourism paradoxes

Nature Climate Change - Mon, 02/09/2026 - 12:00am

Nature Climate Change, Published online: 09 February 2026; doi:10.1038/s41558-025-02544-2

Visitors are increasingly drawn to disappearing glacier landscapes for their beauty and scientific value. This Comment examines the paradoxes reshaping relationships among glaciers, people and communities, and highlights research needed to avoid maladaptation harming local communities.

Melting ice and transforming beliefs

Nature Climate Change - Mon, 02/09/2026 - 12:00am

Nature Climate Change, Published online: 09 February 2026; doi:10.1038/s41558-025-02551-3

Mountains and their ecosystems have been important to religious beliefs in many regions around the world. In this Viewpoint, researchers describe how climate change in mountain regions is interpreted by local communities and how they transform their spiritual practice in response to it.
