Feed aggregator

Banks face New Zealand anti-cartel probe over climate targets

ClimateWire News - Fri, 03/21/2025 - 6:32am
The Commerce Commission is examining the impact of climate targets on banks, including commitments imposed by the Net-Zero Banking Alliance.

After floods, Valencia seeks catharsis in traditional sculpture burning

ClimateWire News - Fri, 03/21/2025 - 6:31am
This year’s Las Fallas festival has taken on special meaning after over 220 people died in October’s flooding in Spain.

Device enables direct communication among multiple quantum processors

MIT Latest News - Fri, 03/21/2025 - 6:00am

Quantum computers have the potential to solve complex problems that would be impossible for the most powerful classical supercomputer to crack.

Just like a classical computer has separate, yet interconnected, components that must work together, such as a memory chip and a CPU on a motherboard, a quantum computer will need to communicate quantum information between multiple processors.

Current architectures used to interconnect superconducting quantum processors are “point-to-point” in connectivity, meaning they require a series of transfers between network nodes, with compounding error rates.

To help overcome these challenges, MIT researchers developed a new interconnect device that can support scalable, “all-to-all” communication, such that all superconducting quantum processors in a network can communicate directly with each other.

They created a network of two quantum processors and used their interconnect to send microwave photons back and forth on demand in a user-defined direction. Photons are particles of light that can carry quantum information.

The device includes a superconducting wire, or waveguide, that shuttles photons between processors and can be routed as far as needed. The researchers can couple any number of modules to it, efficiently transmitting information between a scalable network of processors.

They used this interconnect to demonstrate remote entanglement, a type of correlation between quantum processors that are not physically connected. Remote entanglement is a key step toward developing a powerful, distributed network of many quantum processors.

“In the future, a quantum computer will probably need both local and nonlocal interconnects. Local interconnects are natural in arrays of superconducting qubits. Ours allows for more nonlocal connections. We can send photons at different frequencies, times, and in two propagation directions, which gives our network more flexibility and throughput,” says Aziza Almanakly, an electrical engineering and computer science graduate student in the Engineering Quantum Systems group of the Research Laboratory of Electronics (RLE) and lead author of a paper on the interconnect.

Her co-authors include Beatriz Yankelevich, a graduate student in the EQuS Group; senior author William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science (EECS) and professor of Physics, director of the Center for Quantum Engineering, and associate director of RLE; and others at MIT and Lincoln Laboratory. The research appears today in Nature Physics.

A scalable architecture

The researchers previously developed a quantum computing module, which enabled them to send information-carrying microwave photons in either direction along a waveguide.

In the new work, they took that architecture a step further by connecting two modules to a waveguide in order to emit photons in a desired direction and then absorb them at the other end.

Each module is composed of four qubits, which serve as an interface between the waveguide carrying the photons and the larger quantum processors.

The qubits coupled to the waveguide emit and absorb photons, which are then transferred to nearby data qubits.

The researchers use a series of microwave pulses to add energy to a qubit, which then emits a photon. Carefully controlling the phase of those pulses enables a quantum interference effect that allows them to emit the photon in either direction along the waveguide. Reversing the pulses in time enables a qubit in another module any arbitrary distance away to absorb the photon.

“Pitching and catching photons enables us to create a ‘quantum interconnect’ between nonlocal quantum processors, and with quantum interconnects comes remote entanglement,” explains Oliver.

“Generating remote entanglement is a crucial step toward building a large-scale quantum processor from smaller-scale modules. Even after that photon is gone, we have a correlation between two distant, or ‘nonlocal,’ qubits. Remote entanglement allows us to take advantage of these correlations and perform parallel operations between two qubits, even though they are no longer connected and may be far apart,” Yankelevich explains.

However, transferring a photon between two modules is not enough to generate remote entanglement. The researchers need to prepare the qubits and the photon so the modules “share” the photon at the end of the protocol.

Generating entanglement

The team did this by halting the photon emission pulses halfway through their duration. In quantum mechanical terms, the photon is both retained and emitted. Classically, one can think that half-a-photon is retained and half is emitted.

Once the receiver module absorbs that “half-photon,” the two modules become entangled.
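
In simplified notation (an illustrative sketch, not the paper’s exact formalism), the protocol takes the emitter qubit A and the waveguide (wg) through states like these:

```latex
% Halting the emission pulse halfway leaves qubit A in an equal superposition of
% keeping the photon and sending it down the waveguide:
\[
|\psi\rangle \;\propto\; |e\rangle_A\,|0\rangle_{\mathrm{wg}} \;+\; |g\rangle_A\,|1\rangle_{\mathrm{wg}}
\]
% Once qubit B in the remote module absorbs the traveling component, the photon is
% gone, but the two modules share a maximally entangled (Bell) state:
\[
|\psi\rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(|e\rangle_A\,|g\rangle_B \;+\; |g\rangle_A\,|e\rangle_B\bigr)
\]
```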

But as the photon travels, joints, wire bonds, and connections in the waveguide distort the photon and limit the absorption efficiency of the receiving module.

To generate remote entanglement with high enough fidelity, or accuracy, the researchers needed to maximize how often the photon is absorbed at the other end.

“The challenge in this work was shaping the photon appropriately so we could maximize the absorption efficiency,” Almanakly says.

They used a reinforcement learning algorithm to “predistort” the photon. The algorithm optimized the protocol pulses in order to shape the photon for maximal absorption efficiency.
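
The paper’s reinforcement learning procedure is not spelled out here, but the general idea of iteratively predistorting a pulse and keeping changes that improve a measured absorption efficiency can be sketched as follows (all names and the stand-in “measurement” are hypothetical; a simple stochastic search stands in for the actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_absorption(envelope: np.ndarray) -> float:
    """Hypothetical stand-in for running the emit/absorb protocol with this pulse
    envelope and measuring the fraction of photons absorbed by the remote module.
    Here a smooth toy function peaks at a slightly 'distorted' target shape."""
    t = np.linspace(0.0, 1.0, envelope.size)
    target = np.exp(-((t - 0.45) / 0.18) ** 2)   # pretend the channel favors this shape
    target /= np.linalg.norm(target)
    e = envelope / (np.linalg.norm(envelope) + 1e-12)
    return float(np.clip(np.dot(e, target) ** 2, 0.0, 1.0))

# Start from an undistorted, symmetric emission envelope.
t = np.linspace(0.0, 1.0, 64)
envelope = np.exp(-((t - 0.5) / 0.2) ** 2)
best = measured_absorption(envelope)

# Repeatedly perturb the envelope and keep changes that raise the measured efficiency
# (a simple stochastic search standing in for the reinforcement learning loop).
for _ in range(2000):
    trial = envelope + 0.02 * rng.standard_normal(envelope.size)
    score = measured_absorption(trial)
    if score > best:
        envelope, best = trial, score

print(f"absorption efficiency after optimization: {best:.3f}")
```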

When they implemented this optimized absorption protocol, they were able to show photon absorption efficiency greater than 60 percent.

This absorption efficiency is high enough to prove that the resulting state at the end of the protocol is entangled, a major milestone in this demonstration.

“We can use this architecture to create a network with all-to-all connectivity. This means we can have multiple modules, all along the same bus, and we can create remote entanglement among any pair of our choosing,” Yankelevich says.

In the future, they could improve the absorption efficiency by optimizing the path over which the photons propagate, perhaps by integrating modules in 3D instead of having a superconducting wire connecting separate microwave packages. They could also make the protocol faster so there are fewer chances for errors to accumulate.

“In principle, our remote entanglement generation protocol can also be expanded to other kinds of quantum computers and bigger quantum internet systems,” Almanakly says.

This work was funded, in part, by the U.S. Army Research Office, the AWS Center for Quantum Computing, and the U.S. Air Force Office of Scientific Research. 

AI tool generates high-quality images faster than state-of-the-art approaches

MIT Latest News - Fri, 03/21/2025 - 12:00am

The ability to generate high-quality images quickly is crucial for producing realistic simulated environments that can be used to train self-driving cars to avoid unpredictable hazards, making them safer on real streets.

But the generative artificial intelligence techniques increasingly being used to produce such images have drawbacks. One popular type of model, called a diffusion model, can create stunningly realistic images but is too slow and computationally intensive for many applications. On the other hand, the autoregressive models that power LLMs like ChatGPT are much faster, but they produce poorer-quality images that are often riddled with errors.

Researchers from MIT and NVIDIA developed a new approach that brings together the best of both methods. Their hybrid image-generation tool uses an autoregressive model to quickly capture the big picture and then a small diffusion model to refine the details of the image.

Their tool, known as HART (short for hybrid autoregressive transformer), can generate images that match or exceed the quality of state-of-the-art diffusion models, while doing so about nine times faster.

The generation process consumes fewer computational resources than typical diffusion models, enabling HART to run locally on a commercial laptop or smartphone. A user only needs to enter one natural language prompt into the HART interface to generate an image.

HART could have a wide range of applications, such as helping researchers train robots to complete complex real-world tasks and aiding designers in producing striking scenes for video games.

“If you are painting a landscape, and you just paint the entire canvas once, it might not look very good. But if you paint the big picture and then refine the image with smaller brush strokes, your painting could look a lot better. That is the basic idea with HART,” says Haotian Tang SM ’22, PhD ’25, co-lead author of a new paper on HART.

He is joined by co-lead author Yecheng Wu, an undergraduate student at Tsinghua University; senior author Song Han, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of the MIT-IBM Watson AI Lab, and a distinguished scientist of NVIDIA; as well as others at MIT, Tsinghua University, and NVIDIA. The research will be presented at the International Conference on Learning Representations.

The best of both worlds

Popular diffusion models, such as Stable Diffusion and DALL-E, are known to produce highly detailed images. These models generate images through an iterative process where they predict some amount of random noise on each pixel, subtract the noise, then repeat the process of predicting and “de-noising” multiple times until they generate a new image that is completely free of noise.

Because the diffusion model de-noises all pixels in an image at each step, and there may be 30 or more steps, the process is slow and computationally expensive. But because the model has multiple chances to correct details it got wrong, the images are high-quality.
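
In rough pseudocode, diffusion sampling is a loop like the following (an illustrative sketch rather than any particular model’s code; `predict_noise` stands in for the trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(image: np.ndarray, step: int) -> np.ndarray:
    """Stand-in for the trained denoising network, which estimates the noise present
    in the image at this step. In a real model this is a large neural network."""
    return 0.1 * rng.standard_normal(image.shape)

num_steps = 30                                  # typical samplers run 30+ full-image passes
image = rng.standard_normal((256, 256, 3))      # start from pure noise

for step in reversed(range(num_steps)):
    noise_estimate = predict_noise(image, step) # every pixel is visited on every step...
    image = image - noise_estimate              # ...which is why sampling is slow
    # (real samplers also rescale and re-inject a controlled amount of noise here)
```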

Autoregressive models, commonly used for predicting text, can generate images by predicting patches of an image sequentially, a few pixels at a time. They can’t go back and correct their mistakes, but the sequential prediction process is much faster than diffusion.

These models use representations known as tokens to make predictions. An autoregressive model utilizes an autoencoder to compress raw image pixels into discrete tokens as well as reconstruct the image from predicted tokens. While this boosts the model’s speed, the information loss that occurs during compression causes errors when the model generates a new image.

With HART, the researchers developed a hybrid approach that uses an autoregressive model to predict compressed, discrete image tokens, then a small diffusion model to predict residual tokens. Residual tokens compensate for the model’s information loss by capturing details left out by discrete tokens.

“We can achieve a huge boost in terms of reconstruction quality. Our residual tokens learn high-frequency details, like edges of an object, or a person’s hair, eyes, or mouth. These are places where discrete tokens can make mistakes,” says Tang.

Because the diffusion model only predicts the remaining details after the autoregressive model has done its job, it can accomplish the task in eight steps, instead of the usual 30 or more a standard diffusion model requires to generate an entire image. This minimal overhead of the additional diffusion model allows HART to retain the speed advantage of the autoregressive model while significantly enhancing its ability to generate intricate image details.

“The diffusion model has an easier job to do, which leads to more efficiency,” he adds.
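
A minimal sketch of that division of labor, using hypothetical stand-in functions rather than the actual HART models:

```python
import numpy as np

def autoregressive_tokens(prompt: str) -> np.ndarray:
    """Stand-in for the autoregressive transformer: predicts a grid of discrete image
    tokens one patch at a time, quickly capturing the 'big picture'."""
    return np.zeros((32, 32), dtype=np.int64)

def decode_tokens(tokens: np.ndarray) -> np.ndarray:
    """Stand-in for the autoencoder decoder that maps discrete tokens back to pixels,
    with some information lost in the compression."""
    return np.zeros((256, 256, 3), dtype=np.float32)

def diffusion_residual(image: np.ndarray, steps: int = 8) -> np.ndarray:
    """Stand-in for the small diffusion model: predicts only the residual detail that
    the discrete tokens left out, so roughly eight steps suffice instead of 30+."""
    residual = np.zeros_like(image)
    for _ in range(steps):
        pass              # a real model would refine high-frequency detail each step
    return residual

def generate(prompt: str) -> np.ndarray:
    coarse = decode_tokens(autoregressive_tokens(prompt))   # fast global structure
    return coarse + diffusion_residual(coarse, steps=8)     # cheap detail refinement

image = generate("a photo of a red fox in the snow")
```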

Outperforming larger models

During the development of HART, the researchers encountered challenges in effectively integrating the diffusion model to enhance the autoregressive model. They found that incorporating the diffusion model in the early stages of the autoregressive process resulted in an accumulation of errors. Instead, their final design of applying the diffusion model to predict only residual tokens as the final step significantly improved generation quality.

Their method, which uses a combination of an autoregressive transformer model with 700 million parameters and a lightweight diffusion model with 37 million parameters, can generate images of the same quality as those created by a diffusion model with 2 billion parameters, but it does so about nine times faster. It uses about 31 percent less computation than state-of-the-art models.

Moreover, because HART uses an autoregressive model to do the bulk of the work — the same type of model that powers LLMs — it integrates more readily with the new class of unified vision-language generative models. In the future, one could interact with a unified vision-language generative model, perhaps by asking it to show the intermediate steps required to assemble a piece of furniture.

“LLMs are a good interface for all sorts of models, like multimodal models and models that can reason. This is a way to push the intelligence to a new frontier. An efficient image-generation model would unlock a lot of possibilities,” he says.

In the future, the researchers want to go down this path and build vision-language models on top of the HART architecture. Since HART is scalable and generalizable to multiple modalities, they also want to apply it for video generation and audio prediction tasks.

This research was funded, in part, by the MIT-IBM Watson AI Lab, the MIT and Amazon Science Hub, the MIT AI Hardware Program, and the U.S. National Science Foundation. The GPU infrastructure for training this model was donated by NVIDIA. 

Colonial legacies in tropical forestry hinder good management

Nature Climate Change - Fri, 03/21/2025 - 12:00am

Nature Climate Change, Published online: 21 March 2025; doi:10.1038/s41558-025-02288-z

Colonial legacies in tropical forestry hinder good management

Local fossil fuel ad ban as a catalyst for global change

Nature Climate Change - Fri, 03/21/2025 - 12:00am

Nature Climate Change, Published online: 21 March 2025; doi:10.1038/s41558-025-02267-4

The Hague in the Netherlands was the first city in the world to enact a law prohibiting advertisements for fossil fuel products and services. Although the ban is restricted to The Hague’s jurisdiction, the decision to implement the ban challenges norms and conventions that drive fossil-fuel consumption worldwide and sets an example for other governments to follow.

Improving future climate meetings

Nature Climate Change - Fri, 03/21/2025 - 12:00am

Nature Climate Change, Published online: 21 March 2025; doi:10.1038/s41558-025-02293-2

Improving future climate meetings

Glaciers give way to new coasts

Nature Climate Change - Fri, 03/21/2025 - 12:00am

Nature Climate Change, Published online: 21 March 2025; doi:10.1038/s41558-025-02275-4

Climate change is causing rapid shrinkage of high-latitude glaciers, fundamentally altering the nature of Arctic landscapes. Now, research quantifies the substantial, yet under-reported, development of new coastlines and islands that are revealed as marine-terminating glaciers fall back from the sea.

New coasts emerging from the retreat of Northern Hemisphere marine-terminating glaciers in the twenty-first century

Nature Climate Change - Fri, 03/21/2025 - 12:00am

Nature Climate Change, Published online: 21 March 2025; doi:10.1038/s41558-025-02282-5

As marine-terminating glaciers retreat, they reveal new coastlines in many regions. Here the authors use satellite data to quantify these changes for the Northern Hemisphere, finding that between 2000 and 2020, a total of 2,466 km of new coastline has been uncovered.

How Do You Solve a Problem Like Google Search? Courts Must Enable Competition While Protecting Privacy.

EFF: Updates - Thu, 03/20/2025 - 6:28pm

Can we get from a world where Google is synonymous with search to a world  where other search engines have a real chance to compete? The U.S. and state governments’ bipartisan antitrust suit, challenging the many ways that Google has maintained its search monopoly, offers an opportunity.

Antitrust enforcers have proposed a set of complementary remedies, from giving users a choice of search engine, to forcing Google to spin off Chrome and possibly Android into separate companies. Overall, this is the right approach. Google’s dominance in search is too entrenched to yield to a single fix. But there are real risks to users in the mix as well: Forced sharing of people’s sensitive search queries with competitors could seriously undermine user privacy, as could a breakup without adequate safeguards.

Let’s break it down.

The Antitrust Challenge to Google Search

The Google Search antitrust suit began in 2020 under the first Trump administration, brought by the Department of Justice and 11 states. (Another 38 states filed a companion suit.) The heart of the suit was Google’s agreements with mobile phone makers, browser makers, and wireless carriers, requiring that Google Search be the default search engine, in return for revenue share payments including up to $20 billion per year that Google paid to Apple. A separate case, filed in 2023, challenged Google’s dominance in online advertising. Following a bench trial in summer 2023, Judge Amit Mehta of the D.C. federal court found Google’s search placement agreements to be illegal under the Sherman Antitrust Act, because they foreclosed competition in the markets for “general search” and “general search text advertising.”

The antitrust enforcers proposed a set of remedies in fall 2024, and filed a revised version this month, signalling that the new administration remains committed to the case. A hearing on remedies is scheduled for April.

The Obvious Fix: Ban Search Engine Exclusivity and Other Anticompetitive Agreements

The first part of the government’s remedy proposal bans Google from making the kinds of agreements that led to this lawsuit: agreements to make Google the default search engine on a variety of platforms, agreements to pre-install Google Search products on a platform, and other agreements that would give platforms an incentive not to develop a general search engine of their own. This would mean the end of Google’s pay-for-placement agreements with Apple, Samsung, other hardware makers, and browser vendors like Mozilla.

In practice, a ban on search engine default agreements means presenting users with a screen that prompts them to choose a default search engine from among various competitors. Choice screens aren’t a perfect solution, because people tend to stick with what they know. Still, research shows that choice screens can have a positive impact on competition if they are implemented thoughtfully. The court, and the technical committee appointed to oversee Google’s compliance, should apply the lessons of this research.

It makes sense that the first step of a remedy for illegal conduct should be stopping that illegal conduct. But that’s not enough on its own. Many users choose Google Search, and will continue to choose it, because it works well enough and is familiar. Also, as the evidence in this case demonstrated, the walls that Google has built around its search monopoly have kept potential rivals from gaining enough scale to deliver the best results for uncommon search queries. So we’ll need more tools to fix the competition problem.

Safe Sharing: Syndication and Search Index

The enforcers’ proposal also includes some measures that are meant to enable competitors to overcome the scale advantages that Google illegally obtained. One is requiring Google to let competitors use “syndicated” Google search results for 10 years, with no conditions or use restrictions other than “that Google may take reasonable steps to protect its brand, its reputation, and security.” Google would also have to share the results of “synthetic queries”—search terms generated by competitors to test Google’s results—and the “ranking signals” that underlie those queries. Many search engines, including DuckDuckGo, use syndicated search results from Microsoft’s Bing, and a few, like Startpage, receive syndicated results from Google. But Google currently limits re-ranking and mixing of those results—techniques that could allow competitors to offer real alternatives. Syndication is a powerful mechanism for giving rivals the benefits of Google’s scale, and a chance to build comparable scale of their own.

Importantly, syndication doesn’t reveal Google users’ queries or other personal information, so it is a privacy-conscious tool.

Similarly, the proposal orders Google to make its index, the snapshot of the web that forms the basis for its search results, available to competitors. This too is reasonably privacy-conscious, because it presumably includes only data from web pages that were already visible to the public.

Scary Sharing: Users’ “Click and Query” Data

Another data-sharing proposal is more complicated from a privacy perspective: requiring Google to provide qualified competitors with “user-side data,” including users’ search queries and data sets used to train Google's ranking algorithms. Those queries and data sets can include intensely personal details, including medical issues, political opinions and activities, and personal conflicts. Google is supposed to apply “security and privacy safeguards,” but it's not clear how this will be accomplished. An order that requires Google to share even part of this data with competitors raises the risk of data breaches, improper law enforcement access, commercial data mining and aggregation, and other serious privacy harms.

Some in the search industry, including privacy-conscious companies like DuckDuckGo, argue that filtering this “click and query” data to remove personally identifying information can adequately protect users’ privacy while still helping Google’s competitors generate more useful search results. For example, Google could share only queries that were used by some number of unique users. This is the approach Google already takes to sharing user data under the European Union’s Digital Markets Act, though Google sets a high threshold that eliminates about 97% of the data. Other possible rules include excluding strings of digits that could be Social Security or other identification numbers, as well as other patterns of data that may reveal sensitive information.
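
For illustration only, a minimal sketch of such a threshold filter (hypothetical code, not Google’s or DuckDuckGo’s actual mechanism): a query is shareable only if it was issued by at least k distinct users and contains no long digit strings.

```python
import re
from collections import defaultdict

K_UNIQUE_USERS = 30                         # hypothetical threshold
ID_NUMBER = re.compile(r"\b\d{6,}\b")       # crude filter for SSN-like digit strings

def shareable_queries(query_log):
    """query_log: iterable of (user_id, query) pairs.
    Returns queries issued by at least K_UNIQUE_USERS distinct users and
    containing no long digit strings."""
    users_per_query = defaultdict(set)
    for user_id, query in query_log:
        users_per_query[query.strip().lower()].add(user_id)
    return sorted(
        q for q, users in users_per_query.items()
        if len(users) >= K_UNIQUE_USERS and not ID_NUMBER.search(q)
    )

# Example: only widely repeated, non-identifying queries survive the filter.
log = [("u1", "weather boston"), ("u2", "weather boston"), ("u3", "ssn 123456789")]
print(shareable_queries(log))   # [] with K_UNIQUE_USERS = 30; lower K to experiment
```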

But click and query data sharing still sets up a direct conflict between competition and privacy. Google, naturally, wants to share as little data as possible, while competitors will want more. It’s not clear to us that there’s an optimal point that both protects users’ privacy well and also meaningfully promotes competition. More research might reveal a better answer, but until then, this is a dangerous path, where pursuing the benefits of competition for users might become a race to the bottom for users’ privacy.

The Sledgehammer: Splitting off Chrome and Maybe Android

The most dramatic part of the enforcers’ proposal calls for an order to split off the Chrome browser as a separate company, and potentially also the Android operating system. This could be a powerful way to open up search competition. An independent Chrome and Android could provide many opportunities for users to choose alternative search engines, and potentially to integrate with AI-based information location tools and other new search competitors. A breakup would complement the ban on agreements for search engine exclusivity by applying the same ban to Chrome and Android as to iOS and other platforms.

The complication here is that a newly independent Chrome or Android might have an incentive to exploit users’ privacy in other ways. Given a period of exclusivity in which Google could not offer a competing browser or mobile operating system, Chrome and Android could adopt a business model of monetizing users’ personal data to an even greater extent than Google. To prevent this, a divestiture (breakup) order would also have to include privacy safeguards, to keep the millions of Chrome and Android users from facing an even worse privacy landscape than they do now.

The DOJ and states are pursuing a strong, comprehensive remedy for Google’s monopoly abuses in search, and we hope they will see that effort through to a remedies hearing and the inevitable appeals. We’re also happy to see that the antitrust enforcers are seeking to preserve users’ privacy. To achieve that goal, and keep internet users’ consumer welfare squarely in sight, they should proceed with caution on any user data sharing, and on breakups.

SeaPerch: A robot with a mission

MIT Latest News - Thu, 03/20/2025 - 3:40pm

The SeaPerch underwater robot is a popular educational tool for students in grades 5 to 12.  Building and piloting SeaPerch, a remotely operated vehicle (ROV), involves a variety of hand fabrication processes, electronics techniques, and STEM concepts. Through the SeaPerch program, educators and students explore structures, electronics, and underwater dynamics.  

“SeaPerch has had a tremendous impact on the fields of ocean science and engineering,” says Andrew Bennett ’85, PhD ’97, MIT SeaGrant education administrator and senior lecturer in the Department of Mechanical Engineering (MechE).

The original SeaPerch project was launched by MIT Sea Grant in 2003. In the decades that followed, it quickly spread across the country and overseas, creating a vibrant community of builders. Now under the leadership of RoboNation, SeaPerch continues to thrive with competitions around the world. These competitions introduce challenging real-world problems to foster creative solutions. Some recent topics have included deep sea mining and collecting data on hydrothermal vents.

SeaPerch II, which has been in development at MIT Sea Grant since 2021, builds on the original program by adding robotics and elements of marine and climate science. It remains a “do-it-yourself” maker project with objectives that are achievable by middle and high school students. Bennett says he hopes SeaPerch II will enable an even greater impact by providing an approachable path to learning more about sensors, robotics, climate science, and more.

“What I think is most valuable about it is that it uses hardware store components that need to be cut, waterproofed, connected, soldered, or somehow processed before becoming part of the robot or controller,” says Diane Brancazio ME ’90, K-12 maker team leader for the MIT Edgerton Center, who co-leads the MIT SeaPerch initiative with Bennett. “[It’s] kind of like making a cake from scratch, instead of from a mix — you see what goes into the final product and how it all comes together.”

SeaPerch II is a family of modules that allow students and educators to create educational adventures tailored to their particular wants or requirements. Offerings include a pressure and temperature sensing module that can be used on its own; an autonomy module that the students can use to construct a closed-loop automatic depth control system for their SeaPerch; and a lesson module for soft robotic “fingers” that can be configured into grippers, distance sensors, and bump sensors.
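
For a sense of what the autonomy module’s closed-loop depth control involves, here is a minimal illustrative sketch (not the actual SeaPerch II lesson code; the sensor and motor here are simulated stand-ins):

```python
import time

# Hypothetical hardware stubs: in a real build these would read the pressure
# sensor and drive the vertical motor through the microcontroller.
_current_depth_m = 0.0

def read_depth_m() -> float:
    return _current_depth_m

def set_vertical_thrust(power: float) -> None:
    global _current_depth_m
    _current_depth_m += 0.05 * power            # toy physics for the simulation

TARGET_DEPTH_M = 1.0
KP = 0.8    # proportional gain: thrust scales with how far we are from the target

for _ in range(100):                            # the autonomy module loops like this
    error = TARGET_DEPTH_M - read_depth_m()     # positive means the robot is too shallow
    thrust = max(-1.0, min(1.0, KP * error))    # clamp to what the motor can do
    set_vertical_thrust(thrust)
    time.sleep(0.01)

print(f"settled near {read_depth_m():.2f} m")
```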

The basic SeaPerch is a PVC pipe structure with three motors and a tether to a switch box. Through the building process, students learn about buoyancy, structural design, hand fabrication, and electric circuits. SeaPerch II leverages technologies that are more advanced, less expensive, and more accessible than they were when SeaPerch was first conceived. Bennett says SeaPerch II is meant to extend the original SeaPerch program without invalidating any of the existing system.

Teagan Sullivan, a third-year student in mechanical engineering, first became involved with the project in January 2023 through an Undergraduate Research Opportunities Program project with MIT Sea Grant. Initially, she continued development of the soft robotics portion of the project, before switching to a more general focus where she worked on frame design for SeaPerch II, making sure components could fit and that stability could be maintained. Later she helped run outreach programs, taking feedback from the students she worked with to help modify designs and make them “more robust and kid-friendly.”

“I have been able to see the impact of SeaPerch II on a small scale by working directly with students,” Sullivan says. “I have seen how it encourages creativity, and how it has taught kids that collaboration is the best road to success. SeaPerch II teaches the basics of electronics, coding, and manufacturing, but its best strength is the ability to challenge the way people think and encourage critical thinking.”

The team’s vision is to create opportunities for young people to engage in authentic science investigations and engineering challenges, developing a passion for engineering, science, and the aquatic environment. MIT Sea Grant is continuing to develop new SeaPerch II modules, including incorporating land-water communication, salinity and dissolved oxygen sensors, and fluorometers.

Sullivan says she hopes the program will reach more students and inspire them to take an interest in engineering while teaching the skills they need to be the next generation of problem-solvers. Brancazio says she hopes this project inspires and prepares young people to work on climate change issues.

“Robots are supposed to help people do things they couldn’t otherwise do,” Brancazio says. “SeaPerch is a robot with a mission.”

Professor Emeritus Lee Grodzins, pioneer in nuclear physics, dies at 98

MIT Latest News - Thu, 03/20/2025 - 3:00pm

Nuclear physicist and MIT Professor Emeritus Lee Grodzins died on March 6 at his home in the Maplewood Senior Living Community at Weston, Massachusetts. He was 98.   

Grodzins was a pioneer in nuclear physics research. He was perhaps best known for the highly influential experiment determining the helicity of the neutrino, which led to a key understanding of what's known as the weak interaction. He was also the founder of Niton Corp. and the nonprofit Cornerstones of Science, and was a co-founder of the Union of Concerned Scientists.

He retired in 1999 after serving as an MIT physics faculty member for 40 years. As a member of the Laboratory for Nuclear Science (LNS), he initiated the relativistic heavy-ion physics program. He published over 170 scientific papers and held 64 U.S. patents.

“Lee was a very good experimental physicist, especially with his hands making gadgets,” says Heavy Ion Group and Francis L. Friedman Professor Emeritus Wit Busza PhD ’64. “His enthusiasm for physics spilled into his enthusiasm for how physics was taught in our department.”

Industrious son of immigrants

Grodzins was born July 10, 1926, in Lowell, Massachusetts, the middle child of Eastern European Jewish immigrants David and Taube Grodzins. He grew up in Manchester, New Hampshire. His two sisters were Ethel Grodzins Romm, journalist, author, and businesswoman who later ran his company, Niton Corp.; and Anne Lipow, who became a librarian and library science expert.

His father, who ran a gas station and a used-tire business, died when Lee was 15. To help support his family, Lee sold newspapers, a business he grew into the second-largest newspaper distributor in Manchester.

At 17, Grodzins attended the University of New Hampshire, graduating in less than three years with a degree in mechanical engineering.  However, he decided to be a physicist after disagreeing with a textbook that used the word “never.”

“I was pretty good in math and was undecided about my future,” Grodzins said in a 1958 New York Daily News article. “It wasn’t until my senior year that I unexpectedly realized I wanted to be a physicist. I was reading a physics text one day when suddenly this sentence hit me: ‘We will never be able to see the atom.’ I said to myself that that was as stupid a statement as I’d ever read. What did he mean ‘never!’ I got so annoyed that I started devouring other writers to see what they had to say and all at once I found myself in the midst of modern physics.”

He wrote his senior thesis on “Atomic Theory.”

After graduating in 1946, he approached potential employers by saying, “I have a degree in mechanical engineering, but I don’t want to be one. I’d like to be a physicist, and I’ll take anything in that line at whatever you will pay me.”

He accepted an offer from General Electric’s Research Laboratory in Schenectady, New York, where he worked in fundamental nuclear research building cosmic ray detectors, while also pursuing his master’s degree at Union College. “I had a ball,” he recalled. “I stayed in the lab 12 hours a day. They had to kick me out at night.”

Brookhaven

After earning his PhD from Purdue University in 1954, he spent a year as a lecturer there, before becoming a researcher at Brookhaven National Laboratory (BNL) with Maurice Goldhaber’s nuclear physics group, probing the properties of the nuclei of atoms.

In 1957, he, with Goldhaber and Andy Sunyar, used a simple table-top experiment to measure the helicity of the neutrino. Helicity characterizes the alignment of a particle’s intrinsic spin vector with that particle’s direction of motion. 
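
In standard notation, helicity is the normalized projection of the spin vector onto the momentum, so that a fully aligned spin gives +1 and an anti-aligned spin gives -1:

```latex
\[
h \;=\; \frac{\vec{S}\cdot\vec{p}}{\,|\vec{S}|\,|\vec{p}|\,}
\]
% The 1957 measurement found the neutrino's spin anti-aligned with its direction
% of motion (h = -1): the neutrino is "left-handed."
```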

The research provided further evidence that the principle of conservation of parity, which had been accepted for 30 years as a basic law of nature until it was disproven the year before (work that led to the 1957 Nobel Prize in Physics), does not apply to the behavior of some subatomic particles.

The experiment took about 10 days to complete, followed by a month of checks and rechecks. They submitted a letter on “Helicity of Neutrinos” to Physical Review on Dec. 11, 1957, and a week later, Goldhaber told a Stanford University audience that the neutrino is left-handed, meaning that the weak interaction was probably one force. This work proved crucial to our understanding of the weak interaction, the force that governs nuclear beta decay.

“It was a real upheaval in our understanding of physics,” says Grodzins’ longtime colleague Stephen Steadman. The breakthrough was commemorated in 2008, with a conference at BNL on “Neutrino Helicity at 50.” 

Steadman also recalls Grodzins’ story about one night at Brookhaven, where he was working on an experiment that involved a radioactive source inside a chamber. Lee noticed that a vacuum pump wasn’t working, so he tinkered with it a while before heading home. Later that night, he gets a call from the lab. “They said, ‘Don't go anywhere!’” recalls Steadman. It turns out the radiation source in the lab had exploded, and the pump filled the lab with radiation. “They were actually able to trace his radioactive footprints from the lab to his home,” says Steadman. “He kind of shrugged it off.”

The MIT years       

Grodzins joined the faculty of MIT in 1959, where he taught physics for four decades. He inherited Robley Evans’ Radiation Laboratory, which used radioactive sources to study properties of nuclei, and led the Relativistic Heavy Ion Group, which was affiliated with the LNS.

In 1972, he launched a program at BNL using the then-new Tandem Van de Graaff accelerator to study interactions of heavy ions with nuclei. “As the BNL tandem was getting commissioned, we started a program, together with Doug Cline at the University of Rochester, to investigate Coulomb-nuclear interference,” says Steadman, a senior research scientist at LNS. “The experimental results were decisive but somewhat controversial at the time. We clearly detected the interference effect.” The experimental work was published in Physical Review Letters.

Grodzins’ team looked for super-heavy elements using the Lawrence Berkeley National Laboratory Super-Hilac, investigated heavy-ion fission and other heavy-ion reactions, and explored heavy-ion transfer reactions. The latter research showed with precise detail the underlying statistical behavior of the transfer of nucleons between the heavy-ion projectile and target, using a theoretical statistical model of Surprisal Analysis developed by Rafi Levine and his graduate student. Recalls Steadman, “These results were both outstanding in their precision and initially controversial in interpretation.”

In 1985, he carried out the first computer axial tomographic experiment using synchrotron radiation, and in 1987, his group was involved in the first run of Experiment 802, a collaborative experiment with about 50 scientists from around the world that studied relativistic heavy ion collisions at Brookhaven. The MIT responsibility was to build the drift chambers and design the bending magnet for the experiment.

“He made significant contributions to the initial design and construction phases, where his broad expertise and knowledge of small area companies with unique capabilities was invaluable,” says George Stephans, physics senior lecturer and senior research scientist at MIT.

Professor emeritus of physics Rainer Weiss ’55, PhD ’62 recalls working on a Mossbauer experiment to establish if photons changed frequency as they traveled through bright regions. “It was an idea held by some to explain the ‘apparent’ red shift with distance in our universe,” says Weiss. “We became great friends in the process, and of course, amateur cosmologists.”

“Lee was great for developing good ideas,” Steadman says. “He would get started on one idea, but then get distracted with another great idea. So, it was essential that the team would carry these experiments to their conclusion: they would get the papers published.”

MIT mentor

Before retiring in 1999, Lee supervised 21 doctoral dissertations and was an early proponent of women graduate students in physics. He also oversaw the undergraduate thesis of Sidney Altman, who decades later won the Nobel Prize in Chemistry. For many years, he helped teach the Junior Lab required of all undergraduate physics majors. He got his favorite student evaluation, however, for a different course, billed as offering a “superficial overview” of nuclear physics. The comment read: “This physics course was not superficial enough for me.”

“He really liked to work with students,” says Steadman. “They could always go into his office anytime. He was a very supportive mentor.”

“He was a wonderful mentor, avuncular and supportive of all of us,” agrees Karl van Bibber ’72, PhD ’76, now at the University of California at Berkeley. He recalls handing his first paper to Grodzins for comments. “I was sitting at my desk expecting a pat on the head. Quite to the contrary, he scowled, threw the manuscript on my desk and scolded, ‘Don't even pick up a pencil again until you've read a Hemingway novel!’ … The next version of the paper had an average sentence length of about six words; we submitted it, and it was immediately accepted by Physical Review Letters.”

Van Bibber has since taught the “Grodzins Method” in his graduate seminars on professional orientation for scientists and engineers, including passing around a few anthologies of Hemingway short stories. “I gave a copy of one of the dog-eared anthologies to Lee at his 90th birthday lecture, which elicited tears of laughter.”

Early in George Stephans’ MIT career as a research scientist, he worked with Grodzins’ newly formed Relativistic Heavy Ion Group. “Despite his wide range of interests, he paid close attention to what was going on and was always very supportive of us, especially the students. He was a very encouraging and helpful mentor to me, as well as being always pleasant and engaging to work with. He actively pushed to get me promoted to principal research scientist relatively early, in recognition of my contributions.”

“He always seemed to know a lot about everything, but never acted condescending,” says Stephans. “He seemed happiest when he was deeply engaged digging into the nitty-gritty details of whatever unique and unusual work one of these companies was doing for us.”

Al Lazzarini ’74, PhD ’78 recalls Grodzins’ investigations using proton-induced X-ray emission (PIXE) as a sensitive tool to measure trace elemental amounts. “Lee was a superb physicist,” says Lazzarini. “He gave an enthralling seminar on an investigation he had carried out on a lock of Napoleon’s hair, looking for evidence of arsenic poisoning.”

Robert Ledoux ’78, PhD ’81, a former professor of physics at MIT who is now a program director at the U.S. Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E), worked with Grodzins as both a student and colleague. “He was a ‘nuclear physicist’s physicist’ — a superb experimentalist who truly loved building and performing experiments in many areas of nuclear physics. His passion for discovery was matched only by his generosity in sharing knowledge.”

The research funding crisis starting in 1969 led Grodzins to become concerned that his graduate students would not find careers in the field. He helped form the Economic Concerns Committee of the American Physical Society, for which he produced a major report on the “Manpower Crisis in Physics” (1971), presenting his results before the American Association for the Advancement of Science and at the Karlsruhe National Lab in Germany.

Grodzins played a significant role in bringing the first Chinese graduate students to MIT in the 1970s and 1980s.

One of the students he welcomed was Huan Huang PhD ’90. “I am forever grateful to him for changing my trajectory,” says Huang, now at the University of California at Los Angeles. “His unwavering support and ‘go do it’ attitude inspired us to explore physics at the beginning of a new research field of high energy heavy ion collisions in the 1980s. I have been trying to be a ‘nice professor’ like Lee all my academic career.”

Even after he left MIT, Grodzins remained available for his former students. “Many tell me how much my lifestyle has influenced them, which is gratifying,” Grodzins said. “They’ve been a central part of my life. My biography would be grossly incomplete without them.”

Niton Corp. and post-MIT work

Grodzins liked what he called “tabletop experiments,” like the one used in his 1957 neutrino experiment, which involved a few people building a device that could fit on a tabletop. “He didn’t enjoy working in large collaborations, which nuclear physics embraced,” says Steadman. “I think that’s why he ultimately left MIT.”

In the 1980s, he launched what amounted to a new career in detection technology. In 1987, after developing a scanning proton-induced X-ray microspectrometer for measuring elemental concentrations in air, he founded the Niton Corp., which developed, manufactured, and marketed test kits and instruments for measuring radon gas in buildings, detecting lead-based paint, and other nondestructive testing applications. (“Niton” is an obsolete term for radon.)

“At the time, there was a big scare about radon in New England, and he thought he could develop a radon detector that was inexpensive and easy to use,” says Steadman. “His radon detector became a big business.”

He later developed devices to detect explosives, drugs, and other contraband in luggage and cargo containers. Handheld devices used X-ray fluorescence to determine the composition of metal alloys and to detect other materials. The handheld XL Spectrum Analyzer could detect buried and surface lead on painted surfaces, to protect children living in older homes. Three Niton X-ray fluorescence analyzers earned R&D 100 awards.

“Lee was very technically gifted,” says Steadman.

In 1999, Grodzins retired from MIT and devoted his energies to industry, including directing the R&D group at Niton.

His sister Ethel Grodzins Romm was the president and CEO of Niton, followed by his son Hal. Many of Niton’s employees were MIT graduates. In 2005, he and his family sold Niton to Thermo Fisher Scientific, where Lee remained as a principal scientist until 2010.

In the 1990s, he was vice president of American Science and Engineering, and between the ages of 70 and 90, he was awarded three patents a year. 

“Curiosity and creativity don’t stop after a certain age,” Grodzins said to UNH Today. “You decide you know certain things, and you don’t want to change that thinking. But thinking outside the box really means thinking outside your box.”

“I miss his enthusiasm,” says Steadman. “I saw him about a couple of years ago and he was still on the move, always ready to launch a new effort, and he was always trying to pull you into those efforts.”

A better world

In the 1950s, Grodzins and other Brookhaven scientists joined the American delegation at the Second United Nations International Conference on the Peaceful Uses of Atomic Energy in Geneva.

Early on, he joined several Manhattan Project alums at MIT in their concern about the consequences of nuclear bombs. In Vietnam-era 1969, Grodzins co-founded the Union of Concerned Scientists, which calls for scientific research to be directed away from military technologies and toward solving pressing environmental and social problems. He served as its chair in 1970 and 1972. He also chaired committees for the American Physical Society and the National Research Council.

As vice president for advanced products at American Science and Engineering, which made homeland security equipment, he became a consultant on airport security, especially following the 9/11 attacks. As an expert witness, he testified at the celebrated trial to determine whether Pan Am was negligent for the bombing of Flight 103 over Lockerbie, Scotland, and he took part in a weapons inspection trip on the Black Sea. He also was frequently called as an expert witness on patent cases.

In 1999, Grodzins founded the nonprofit Cornerstones of Science, a public library initiative to improve public engagement with science. Based originally at the Curtis Memorial Library in Brunswick, Maine, Cornerstones now partners with libraries in Maine, Arizona, Texas, Massachusetts, North Carolina, and California. Among its initiatives is a program that has helped supply telescopes to libraries and astronomy clubs around the country.

“He had a strong sense of wanting to do good for mankind,” says Steadman.

Awards

Grodzins authored more than 170 technical papers and held more than 60 U.S. patents. His numerous accolades included being named a Guggenheim Fellow in 1964 and 1971, and a senior von Humboldt fellow in 1980. He was a fellow of the American Physical Society and the American Academy of Arts and Sciences, and received an honorary doctor of science degree from Purdue University in 1998.

In 2021, the Denver X-Ray Conference gave Grodzins the Birks Award in X-ray Fluorescence Spectrometry, for having introduced “a handheld XRF unit which expanded analysis to in-field applications such as environmental studies, archeological exploration, mining, and more.”

Personal life

One evening in 1955, shortly after starting his work at Brookhaven, Grodzins decided to take a walk and explore the BNL campus. He found just one building that had lights on and was open, so he went in. Inside, a group was rehearsing a play. He was immediately smitten with one of the actors, Lulu Anderson, a young biologist. “I joined the acting company, and a year-and-a-half later, Lulu and I were married,” Grodzins had recalled. They were happily married for 62 years, until Lulu’s death in 2019.

They raised two sons, Dean, now of Cambridge, Massachusetts, and Hal Grodzins, who lives in Maitland, Florida. Lee and Lulu owned a succession of beloved huskies, most of them named after physicists.

After living in Arlington, Massachusetts, the Grodzins family moved to Lexington, Massachusetts, in 1972 and bought a second home a few years later in Brunswick, Maine. Starting around 1990, Lee and Lulu spent every weekend, year-round, in Brunswick. In both places, they were avid supporters of their local libraries, museums, theaters, symphonies, botanical gardens, public radio, and TV stations.

Grodzins took his family along to conferences, fellowships, and other invitations. They all lived in Denmark for two sabbaticals, in 1964-65 and 1971-72, while Lee worked at the Niels Bohr Institute. They also traveled together to China for a month in 1975, and for two months in 1980. As part of the latter trip, they were among the first American visitors to Tibet since the 1940s. Lee and Lulu also traveled the world, from Antarctica to the Galapagos Islands to Greece.

His homes had basement workshops well-stocked with tools. His sons enjoyed a playroom he built for them in their Arlington home. He also once constructed his own high-fidelity record player, patched his old Volvo with fiberglass, changed his own oil, and put on the winter tires and chains himself. He was an early adopter of the home computer.

“His work in science and technology was part of a general love of gadgets and of fixing and making things,” his son, Dean, wrote in a Facebook post.

Lee is survived by Dean, his wife, Nora Nykiel Grodzins, and their daughter, Lily; and by Hal and his wife Cathy Salmons. 

A remembrance and celebration for Lee Grodzins is planned for this summer. Donations in his name may be made to Cornerstones of Science.

State AGs Must Act: EFF Expands Call to Investigate Crisis Pregnancy Centers

EFF: Updates - Thu, 03/20/2025 - 12:01pm

Back in January, EFF called on attorneys general in Florida, Texas, Arkansas, and Missouri to investigate potential privacy violations and hold accountable crisis pregnancy centers (CPCs) that engage in deceptive practices. Since then, some of these centers have begun to change their websites, quietly removing misleading language and privacy claims; the Hawaii legislature is considering a bill calling on the attorney general to investigate CPCs in the state, and legislators in Georgia have introduced a slate of bills to tackle deceptive CPC practices.

But there is much more to do. Today, we’re expanding our call to attorneys general in Tennessee, Oklahoma, Nebraska, and North Carolina, urging them to investigate the centers in their states.

Many CPCs have been operating under a veil of misleading promises for years—suggesting that clients’ personal health data is protected under HIPAA, even though numerous reports suggest otherwise: that privacy policies are not followed consistently and that clients’ personal data may be shared across networks without appropriate consent. For example, in a case in Louisiana, we saw firsthand how a CPC inadvertently exposed personal data from multiple clients in a software training video. This kind of error not only violates individuals’ privacy but could also lead to emotional and psychological harm for people who trusted these centers with their sensitive information.

In our letters to Attorneys General Hilgers, Jackson, Drummond, and Skrmetti, we list multiple examples of CPCs in each of these states that claim to comply with HIPAA. Those include:

  • Gateway Women’s Care in North Carolina claims that “we hold your right to confidentiality with the utmost care and respect and comply with HIPAA privacy standards, which protect your personal and health information” in a blog post titled “Is My Visit Confidential?” Gateway Women’s Care received $56,514 in government grants in 2023. 
  • Assure Women’s Center in Nebraska stresses that it is “HIPAA compliant!” in a blog post that expressly urges people to visit them “before your doctor.”

As we’ve noted before, there are far too few protections for user privacy, including medical privacy, and individuals have little control over how their personal data is collected, stored, and used. Until Congress passes a comprehensive privacy law that includes a private right of action, state attorneys general must take proactive steps to protect their constituents from unfair or deceptive privacy practices.

It’s time for state and federal leaders to reassess how public funds are allocated to these centers. Our elected officials are responsible for ensuring that personal information, especially our sensitive medical data, is protected. After all, no one should have to choose between their healthcare and their privacy.

Critical GitHub Attack

Schneier on Security - Thu, 03/20/2025 - 11:14am

This is serious:

A sophisticated cascading supply chain attack has compromised multiple GitHub Actions, exposing critical CI/CD secrets across tens of thousands of repositories. The attack, which originally targeted the widely used “tj-actions/changed-files” utility, is now believed to have originated from an earlier breach of the “reviewdog/action-setup@v1” GitHub Action, according to a report.

[…]

CISA confirmed the vulnerability has been patched in version 46.0.1.

Given that the utility is used by more than 23,000 GitHub repositories, the scale of potential impact has raised significant alarm throughout the developer community...
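
One commonly recommended mitigation for this class of attack is to pin third-party Actions to a full commit SHA rather than a mutable tag such as “v1.” As a rough sketch (illustrative only, not an official tool), a repository can be checked for unpinned references like this:

```python
import re
from pathlib import Path

# A "uses:" reference pinned to a 40-character commit SHA is immutable; tag or branch
# references (e.g. @v1, @main) can be repointed by an attacker who compromises the action.
USES_LINE = re.compile(r"^\s*-?\s*uses:\s*([^\s#]+)")
PINNED_SHA = re.compile(r"@[0-9a-f]{40}$")

def unpinned_actions(repo_root: str = "."):
    findings = []
    for workflow in Path(repo_root).glob(".github/workflows/*.y*ml"):
        for lineno, line in enumerate(workflow.read_text().splitlines(), start=1):
            match = USES_LINE.match(line)
            if not match:
                continue
            ref = match.group(1)
            if ref.startswith("./"):            # local actions aren't fetched remotely
                continue
            if not PINNED_SHA.search(ref):
                findings.append((str(workflow), lineno, ref))
    return findings

for path, lineno, ref in unpinned_actions():
    print(f"{path}:{lineno}: not pinned to a commit SHA -> {ref}")
```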

Trump backs away from his threat to abolish FEMA

ClimateWire News - Thu, 03/20/2025 - 6:51am
A new executive order to create a national resilience plan signals that the president wants to overhaul disaster response while maintaining a federal role.

Democratic AGs sue EPA for climate grant cancellations

ClimateWire News - Thu, 03/20/2025 - 6:50am
Four state attorneys general say the agency is violating separation of powers and causing “irreparable reputational harm to the green banks.”

EPA watchdog launches audit of $7B solar program

ClimateWire News - Thu, 03/20/2025 - 6:50am
The scrutiny comes as Solar for All and other Greenhouse Gas Reduction Fund programs face attacks from EPA Administrator Lee Zeldin.

How the Greenpeace defamation verdict could stifle public protest

ClimateWire News - Thu, 03/20/2025 - 6:49am
A jury in North Dakota ordered Greenpeace to pay more than $660 million in damages to Dakota Access pipeline developer Energy Transfer.

Is the 1.5-degree limit toast? Climate experts search for universal metric.

ClimateWire News - Thu, 03/20/2025 - 6:48am
As the world gets closer to the Paris agreement threshold, the World Meteorological Organization races to establish a single way to monitor current warming.

DOT takes aim at transit systems in NYC, DC and California

ClimateWire News - Thu, 03/20/2025 - 6:48am
The Transportation secretary is threatening to cut off federal funds for New York City's transit agency unless it turns over information on crime and budget.
