Feed aggregator
Why animals are a critical part of forest carbon absorption
A lot of attention has been paid to how climate change can drive biodiversity loss. Now, MIT researchers have shown the reverse is also true: Reductions in biodiversity can jeopardize one of Earth’s most powerful levers for mitigating climate change.
In a paper published in PNAS, the researchers showed that, following deforestation, naturally regrowing tropical forests with healthy populations of seed-dispersing animals can absorb up to four times more carbon than similar forests with fewer seed-dispersing animals.
Because tropical forests are currently Earth’s largest land-based carbon sink, the findings improve our understanding of a potent tool to fight climate change.
“The results underscore the importance of animals in maintaining healthy, carbon-rich tropical forests,” says Evan Fricke, a research scientist in the MIT Department of Civil and Environmental Engineering and the lead author of the new study. “When seed-dispersing animals decline, we risk weakening the climate-mitigating power of tropical forests.”
Fricke’s co-authors on the paper include César Terrer, the Tianfu Career Development Associate Professor at MIT; Charles Harvey, an MIT professor of civil and environmental engineering; and Susan Cook-Patton of The Nature Conservancy.
The study combines a wide array of data on animal biodiversity, movement, and seed dispersal across thousands of animal species, along with carbon accumulation data from thousands of tropical forest sites.
The researchers say the results are the clearest evidence yet that seed-dispersing animals play an important role in forests’ ability to absorb carbon, and that the findings underscore the need to address biodiversity loss and climate change as connected parts of a delicate ecosystem rather than as separate problems in isolation.
“It’s been clear that climate change threatens biodiversity, and now this study shows how biodiversity losses can exacerbate climate change,” Fricke says. “Understanding that two-way street helps us understand the connections between these challenges, and how we can address them. These are challenges we need to tackle in tandem, and the contribution of animals to tropical forest carbon shows that there are win-wins possible when supporting biodiversity and fighting climate change at the same time.”
Putting the pieces together
The next time you see a video of a monkey or bird enjoying a piece of fruit, consider that the animals are actually playing an important role in their ecosystems. Research has shown that by eating fruit and passing the seeds through their digestive tracts to be deposited elsewhere, animals can help with the germination, growth, and long-term survival of plants.
Fricke has been studying animals that disperse seeds for nearly 15 years. His previous research has shown that without animal seed dispersal, trees have lower survival rates and a harder time keeping up with environmental changes.
“We’re now thinking more about the roles that animals might play in affecting the climate through seed dispersal,” Fricke says. “We know that in tropical forests, where more than three-quarters of trees rely on animals for seed dispersal, the decline of seed dispersal could affect not just the biodiversity of forests, but how they bounce back from deforestation. We also know that all around the world, animal populations are declining.”
Regrowing forests is an often-cited way to mitigate the effects of climate change, but the influence of biodiversity on forests’ ability to absorb carbon has not been fully quantified, especially at larger scales.
For their study, the researchers combined data from thousands of separate studies and used new tools for quantifying disparate but interconnected ecological processes. After analyzing data from more than 17,000 vegetation plots, the researchers decided to focus on tropical regions, looking at data on where seed-dispersing animals live, how many seeds each animal disperses, and how they affect germination.
The researchers then incorporated data showing how human activity impacts different seed-dispersing animals’ presence and movement. They found, for example, that animals move less when they consume seeds in areas with a bigger human footprint.
Combining all that data, the researchers created an index of seed-dispersal disruption that revealed a link between human activities and declines in animal seed dispersal. They then analyzed the relationship between that index and records of carbon accumulation in naturally regrowing tropical forests over time, controlling for factors like drought conditions, the prevalence of fires, and the presence of grazing livestock.
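As a rough sketch of that last analytical step (illustrative only: the data below are synthetic and every column name is hypothetical, not taken from the study), regressing carbon accumulation on a disruption index while controlling for those covariates might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500  # stand-in for the regrowth sites; the study used far more plots

# Synthetic site-level data; in the study these would come from field records.
df = pd.DataFrame({
    "dispersal_disruption": rng.random(n),   # index: 0 = intact, 1 = fully disrupted
    "drought": rng.random(n),                # control covariates
    "fire_frequency": rng.poisson(1, n),
    "grazing": rng.integers(0, 2, n),
})
# Fabricated outcome so the script runs end to end (tons C per hectare per year).
df["carbon_accumulation"] = 3.2 - 1.8 * df["dispersal_disruption"] + rng.normal(0, 0.3, n)

# Carbon accumulation regressed on the disruption index, holding covariates fixed.
model = smf.ols(
    "carbon_accumulation ~ dispersal_disruption + drought + fire_frequency + grazing",
    data=df,
).fit()
print(model.params["dispersal_disruption"])  # estimated effect of disruption
```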
“It was a big task to bring data from thousands of field studies together into a map of the disruption of seed dispersal,” Fricke says. “But it lets us go beyond just asking what animals are there to actually quantifying the ecological roles those animals are playing and understanding how human pressures affect them.”
The researchers acknowledge that gaps in the quality of animal biodiversity data introduce uncertainty into their findings. They also note that other processes, such as pollination, seed predation, and competition, interact with seed dispersal and can constrain forest regrowth. Still, the findings were in line with recent estimates.
“What’s particularly new about this study is we’re actually getting the numbers around these effects,” Fricke says. “Finding that seed dispersal disruption explains a fourfold difference in carbon absorption across the thousands of tropical regrowth sites included in the study points to seed dispersers as a major lever on tropical forest carbon.”
Quantifying lost carbon
In forests identified as potential regrowth sites, the researchers found that declines in seed dispersal were linked to average annual reductions in carbon absorption of 1.8 metric tons per hectare, equal to a 57 percent reduction in regrowth.
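For scale (simple arithmetic from those two figures, not a number reported separately): a 1.8-metric-ton annual loss amounting to 57 percent implies an undisrupted regrowth rate of roughly 1.8 / 0.57 ≈ 3.2 metric tons per hectare per year.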
The researchers say the results show natural regrowth projects will be more impactful in landscapes where seed-dispersing animals have been less disrupted, including areas that were recently deforested, are near high-integrity forests, or have higher tree cover.
“In the discussion around planting trees versus allowing trees to regrow naturally, regrowth is basically free, whereas planting trees costs money, and it also leads to less diverse forests,” Terrer says. “With these results, now we can understand where natural regrowth can happen effectively because there are animals planting the seeds for free, and we also can identify areas where, because animals are affected, natural regrowth is not going to happen, and therefore planting trees actively is necessary.”
To support seed-dispersing animals, the researchers encourage interventions that protect or improve their habitats and that reduce pressures on species, ranging from wildlife corridors to restrictions on wildlife trade. Restoring the ecological roles of seed dispersers is also possible by reintroducing seed-dispersing species where they’ve been lost or planting certain trees that attract those animals.
The findings could also make modeling the climate impact of naturally regrowing forests more accurate.
“Overlooking the impact of seed-dispersal disruption may overestimate natural regrowth potential in many areas and underestimate it in others,” the authors write.
The researchers believe the findings open up new avenues of inquiry for the field.
“Forests provide a huge climate subsidy by sequestering about a third of all human carbon emissions,” Terrer says. “Tropical forests are by far the most important carbon sink globally, but in the last few decades, their ability to sequester carbon has been declining. We will next explore how much of that decline is due to an increase in extreme droughts or fires versus declines in animal seed dispersal.”
Overall, the researchers hope the study helps improve our understanding of the planet’s complex ecological processes.
“When we lose our animals, we’re losing the ecological infrastructure that keeps our tropical forests healthy and resilient,” Fricke says.
The research was supported by the MIT Climate and Sustainability Consortium, the Government of Portugal, and the Bezos Earth Fund.
Staff members honored with 2025 Excellence Awards, Collier Medal, and Staff Award for Distinction in Service
On Thursday, June 5, 11 individuals and four teams were awarded MIT Excellence Awards — the highest awards for staff at the Institute. Cheers from colleagues holding brightly colored signs and pompoms rang out in Kresge Auditorium in celebration of the honorees. In addition to the Excellence Awards, staff members received the Collier Medal, the Staff Award for Distinction in Service, and the Gordon Y. Billard Award.
The Collier Medal honors the memory of Officer Sean Collier, who gave his life protecting and serving MIT. The medal recognizes an individual or group whose actions demonstrate the importance of community, and whose contributions exceed the boundaries of their profession. The Staff Award for Distinction in Service is presented to an individual whose service results in a positive, lasting impact on the MIT community. The Gordon Y. Billard Award is given to staff or faculty members, or MIT-affiliated individuals, who provide "special service of outstanding merit performed for the Institute."
The 2025 MIT Excellence Award recipients and their award categories are:
Bringing Out the Best
- Timothy Collard
- Whitney Cornforth
- Roger Khazan
Embracing Inclusion
- Denise Phillips
Innovative Solutions
- Ari Jacobovits
- Stephanie Tran
- MIT Health Rebranding Team, Office of the Executive Vice President and Treasurer: Ann Adelsberger, Amy Ciarametaro, Kimberly Schive, Emily Wade
Outstanding Contributor
- Sharon Clarke
- Charles "Chip" Coldwell
- Jeremy Mineweaser
- Christopher "Petey" Peterson
- MIT Health Accreditation Team, Office of the Executive Vice President and Treasurer: Christianne Garcia, David Podradchik, Janis Puibello, Kristen Raymond
- MIT Museum Visitor Experience Supervisor Team, Associate Provost for the Arts: Mariah Crowley, Brianna Vega
Serving Our Community
- Nada Miqdadi El-Alami
- MIT International Scholars Office, Office of the Vice President for Research: Portia Brummitt-Vachon, Amanda Doran, Brianna L. Drakos, Fumiko Futai, Bay Heidrich, Benjamin Hull, Penny Rosser, Henry Rotchford, Patricia Toledo, Makiko Wada
- Building 68 Kitchen Staff, Department of Biology, School of Science: Brikti Abera, AnnMarie Budhai, Nicholas Budhai, Daniel Honiker, Janet Katin, Umme Khan, Shuming Lin, Kelly McKinnon, Karen O'Leary
The 2025 Collier Medal recipient was Kathleen Monagle, associate dean and director of disability and access services, student support, and wellbeing in the Division of Student Life. Monagle oversees a team that supports almost 600 undergraduate, graduate, and MITx students with more than 4,000 accommodations. She works with faculty to ensure those students have the best possible learning experience — both in MIT’s classrooms and online.
This year’s recipient of the 2025 Staff Award for Distinction in Service was Stu Schmill, dean of admissions and student financial services in the Office of the Vice Chancellor. Schmill graduated from MIT in 1986 and has since served the Institute in a variety of roles. His colleagues admire his passion for sharing knowledge; his insight and integrity; and his deep love for MIT’s culture, values, and people.
Three community members were honored with a 2025 Gordon Y. Billard Award.
William "Bill" Cormier, project technician, Department of Mechanical Engineering, School of Engineering
John E. Fernández, professor, Department of Architecture, School of Architecture and Planning; and director of MIT Environmental Solutions Initiative, Office of the Vice President for Research
Tony Lee, coach, MIT Women's Volleyball Club, Student Organizations, Leadership, and Engagement, Division of Student Life
Presenters included President Sally Kornbluth; MIT Chief of Police John DiFava and Deputy Chief Steven DeMarco; Dean of the School of Science Nergis Mavalvala; Vice President for Human Resources Ramona Allen; Executive Vice President and Treasurer Glen Shor; Lincoln Laboratory Assistant Director Justin Brooke; Chancellor Melissa Nobles; and Provost Anantha Chandrakasan.
Visit the MIT Human Resources website for more information about the award recipients and categories, and to view photos and video of the event.
New system dramatically speeds the search for polymer materials
Scientists often seek new materials derived from polymers. Rather than starting a polymer search from scratch, they save time and money by blending existing polymers to achieve desired properties.
But identifying the best blend is a thorny problem. Not only is there a practically limitless number of potential combinations, but polymers interact in complex ways, so the properties of a new blend are challenging to predict.
To accelerate the discovery of new materials, MIT researchers developed a fully autonomous experimental platform that can efficiently identify optimal polymer blends.
The closed-loop workflow uses a powerful algorithm to explore a wide range of potential polymer blends, feeding a selection of combinations to a robotic system that mixes chemicals and tests each blend.
Based on the results, the algorithm decides which experiments to conduct next, continuing the process until the new polymer meets the user’s goals.
During experiments, the system autonomously identified hundreds of blends that outperformed their constituent polymers. Interestingly, the researchers found that the best-performing blends did not necessarily use the best individual components.
“I found that to be good confirmation of the value of using an optimization algorithm that considers the full design space at the same time,” says Connor Coley, the Class of 1957 Career Development Assistant Professor in the MIT departments of Chemical Engineering and Electrical Engineering and Computer Science, and senior author of a paper on this new approach. “If you consider the full formulation space, you can potentially find new or better properties. Using a different approach, you could easily overlook the underperforming components that happen to be the important parts of the best blend.”
This workflow could someday facilitate the discovery of polymer blend materials that lead to advancements like improved battery electrolytes, more cost-effective solar panels, or tailored nanoparticles for safer drug delivery.
Coley is joined on the paper by lead author Guangqi Wu, a former MIT postdoc who is now a Marie Skłodowska-Curie Postdoctoral Fellow at Oxford University; Tianyi Jin, an MIT graduate student; and Alfredo Alexander-Katz, the Michael and Sonja Koerner Professor in the MIT Department of Materials Science and Engineering. The work appears today in Matter.
Building better blends
When scientists design new polymer blends, they are faced with a nearly endless number of possible polymers to start with. Once they select a few to mix, they still must choose the composition of each polymer and the concentration of polymers in the blend.
“Having that large of a design space necessitates algorithmic solutions and higher-throughput workflows because you simply couldn’t test all the combinations using brute force,” Coley adds.
While researchers have studied autonomous workflows for single polymers, less work has focused on polymer blends because of the dramatically larger design space.
In this study, the MIT researchers sought new random heteropolymer blends, made by mixing two or more polymers with different structural features. These versatile polymers have shown particular promise for high-temperature enzymatic catalysis, a process that increases the rate of chemical reactions.
Their closed-loop workflow begins with an algorithm that, based on the user’s desired properties, autonomously identifies a handful of promising polymer blends.
The researchers originally tried a machine-learning model to predict the performance of new blends, but it was difficult to make accurate predictions across the astronomically large space of possibilities. Instead, they utilized a genetic algorithm, which uses biologically inspired operations like selection and mutation to find an optimal solution.
Their system encodes the composition of a polymer blend into what is effectively a digital chromosome, which the genetic algorithm iteratively improves to identify the most promising combinations.
“This algorithm is not new, but we had to modify the algorithm to fit into our system. For instance, we had to limit the number of polymers that could be in one material to make discovery more efficient,” Wu adds.
In addition, because the search space is so large, they tuned the algorithm to balance its choice of exploration (searching for random polymers) versus exploitation (optimizing the best polymers from the last experiment).
The algorithm sends 96 polymer blends at a time to the autonomous robotic platform, which mixes the chemicals and measures the properties of each.
The experiments were focused on improving the thermal stability of enzymes by optimizing the retained enzymatic activity (REA), a measure of how stable an enzyme is after mixing with the polymer blends and being exposed to high temperatures.
These results are sent back to the algorithm, which uses them to generate a new set of polymers until the system finds the optimal blend.
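A minimal sketch of such a closed loop appears below. It is illustrative only: the library size, population sizes, mutation scheme, and especially the toy `measure_rea` function (standing in for the robot's actual REA measurement) are our assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_POLYMERS = 50      # size of a hypothetical polymer library
MAX_COMPONENTS = 4   # cap on polymers per blend (mirrors the paper's efficiency constraint)
BATCH = 96           # blends sent to the robotic platform per round

def random_blend():
    """A 'chromosome': volume fractions over the library, with few nonzero entries."""
    blend = np.zeros(N_POLYMERS)
    idx = rng.choice(N_POLYMERS, size=rng.integers(2, MAX_COMPONENTS + 1), replace=False)
    blend[idx] = rng.random(len(idx))
    return blend / blend.sum()

def measure_rea(blend):
    """Toy stand-in for the robot's retained-enzymatic-activity measurement."""
    hidden = np.sin(np.arange(N_POLYMERS))   # fabricated structure-property map
    synergy = (blend > 0).sum() * 0.01       # fabricated blend-interaction bonus
    return float(blend @ hidden + synergy + rng.normal(0, 0.01))

def mutate(blend):
    """Perturb one component, then re-enforce the component cap and normalization."""
    child = blend.copy()
    i = rng.integers(N_POLYMERS)
    child[i] = abs(child[i] + rng.normal(0, 0.2))
    nonzero = np.flatnonzero(child)
    if len(nonzero) > MAX_COMPONENTS:
        child[rng.choice(nonzero)] = 0.0
    s = child.sum()
    return child / s if s > 0 else random_blend()

population = [random_blend() for _ in range(BATCH)]
for generation in range(10):
    scores = [measure_rea(b) for b in population]            # "robot" evaluates the batch
    elite = [population[i] for i in np.argsort(scores)[-16:]]  # exploitation: keep the best
    children = [mutate(elite[rng.integers(len(elite))]) for _ in range(BATCH - 16 - 8)]
    explorers = [random_blend() for _ in range(8)]           # exploration: fresh random blends
    population = elite + children + explorers
    print(f"gen {generation}: best REA proxy = {max(scores):.3f}")
```

The 16/72/8 split of elites, mutants, and random newcomers is one arbitrary way to balance exploitation against exploration; the actual tuning in the study could differ.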
Accelerating discovery
Building the robotic system involved numerous challenges, such as developing a technique to evenly heat polymers and optimizing the speed at which the pipette tip moves up and down.
“In autonomous discovery platforms, we emphasize algorithmic innovations, but there are many detailed and subtle aspects of the procedure you have to validate before you can trust the information coming out of it,” Coley says.
When tested, the optimal blends their system identified often outperformed the polymers that formed them. The best overall blend performed 18 percent better than any of its individual components, achieving an REA of 73 percent.
“This indicates that, instead of developing new polymers, we could sometimes blend existing polymers to design new materials that perform even better than individual polymers do,” Wu says.
Moreover, their autonomous platform can generate and test 700 new polymer blends per day and only requires human intervention for refilling and replacing chemicals.
While this research focused on polymers for protein stabilization, their platform could be modified for other uses, like the development of new plastics or battery electrolytes.
In addition to exploring additional polymer properties, the researchers want to use experimental data to improve the efficiency of their algorithm and develop new algorithms to streamline the operations of the autonomous liquid handler.
“Technologically, there are urgent needs to enhance thermal stability of proteins and enzymes. The results demonstrated here are quite impressive. Being a platform technology and given the rapid advancement in machine learning and AI for material science, one can envision the possibility for this team to further enhance random heteropolymer performances or to optimize design based on end needs and usages,” says Ting Xu, an associate professor at the University of California at Berkeley, who was not involved with this work.
This work is funded, in part, by the U.S. Department of Energy, the National Science Foundation, and the Class of 1947 Career Development Chair.
Microsoft SharePoint Zero-Day
Chinese hackers are exploiting a high-severity vulnerability in Microsoft SharePoint to steal data worldwide:
The vulnerability, tracked as CVE-2025-53770, carries a severity rating of 9.8 out of a possible 10. It gives unauthenticated remote access to SharePoint Servers exposed to the Internet. Starting Friday, researchers began warning of active exploitation of the vulnerability, which affects SharePoint Servers that infrastructure customers run in-house. Microsoft’s cloud-hosted SharePoint Online and Microsoft 365 are not affected.
Just Banning Minors From Social Media Is Not Protecting Them
By publishing its guidelines under Article 28 of the Digital Services Act, the European Commission has taken a major step towards social media bans that will undermine privacy, expression, and participation rights for young people that are already enshrined in international human rights law.
EFF recently submitted feedback to the Commission’s consultation on the guidelines, emphasizing a critical point: Online safety for young people must include privacy and security for them and must not come at the expense of freedom of expression and equitable access to digital spaces.
Article 28 requires online platforms to take appropriate and proportionate measures to ensure a high level of safety, privacy and security of minors on their services. But the article also prohibits targeting minors with personalized ads, a measure that would seem to require that platforms know that a user is a minor. The DSA acknowledges that there is an inherent tension between ensuring a minor’s privacy and requiring platforms to know the age of every user. The DSA does not resolve this tension. Rather, it states that service providers should not be incentivized to collect the age of their users, and Article 28(3) makes a point of not requiring service providers to collect and process additional data to assess whether a user is underage.
Thus, the question of age checks is key to understanding the obligations of online platforms to safeguard minors online. Our submission explained the serious risks that age checks pose to the rights and security of minors. All methods for conducting age checks come with serious drawbacks. Approaches to verifying a user’s age generally involve some form of government-issued ID document, which millions of people in Europe (including migrants, members of marginalized groups, unhoused people, exchange students, refugees, and tourists) may not have access to.
Other age assurance methods, such as biometric age estimation or estimation based on email addresses or user activity, involve processing vast amounts of sensitive personal data, usually in the hands of third parties. Beyond being potentially exposed to discrimination and erroneous estimates, users are asked to trust platforms’ opaque supply chains and hope for the best. Age assurance methods always impact the rights of children and teenagers: their rights to privacy and data protection, free expression, information, and participation.
The Commission's guidelines contain a wealth of measures elucidating the Commission's understanding of "age appropriate design" of online services. We have argued that some of them, including default settings that protect users’ privacy, effective content moderation, and ensuring that recommender systems don’t rely on the collection of behavioral data, are practices that would benefit all users.
But while the Commission's initial draft considered age checks merely a tool for determining users’ ages so that their online experiences could be tailored accordingly, the final guidelines go far beyond that. Crucially, the European Commission now seems to consider “measures restricting access based on age to be an effective means to ensure a high level of privacy, safety and security for minors on online platforms” (page 14).
This is a surprising turn, as many in Brussels have considered social media bans like the one Australia passed (and still doesn’t know how to implement) disproportionate. Responding to mounting pressure from Member States like France, Denmark, and Greece to ban young people under a certain age from social media platforms, the guidelines contain an opening clause for national rules on age limits for certain services. According to the guidelines, the Commission considers such access restrictions appropriate and proportionate where “union or national law, (...) prescribes a minimum age to access certain products or services (...), including specifically defined categories of online social media services”. This opens the door for different national laws introducing different age limits for services like social media platforms.
It’s concerning that the Commission generally considers the use of age verification proportionate in any situation where a provider of an online platform identifies risks to minors’ privacy, safety, or security and those risks “cannot be mitigated by other less intrusive measures as effectively as by access restrictions supported by age verification” (page 17). This view risks establishing a broad legal mandate for age verification measures.
It is clear that such bans will do little to make the internet a safer space for young people. By banning a particularly vulnerable group of users from accessing platforms, the providers themselves are let off the hook: if it is enough for platforms like Instagram and TikTok to implement (comparatively cheap) age restriction tools, they no longer have any incentive to actually make their products and features safer for young people. Banning a certain user group changes nothing about problematic privacy practices, insufficient content moderation, or business models based on the exploitation of people’s attention and data. And since teenagers will always find ways to circumvent age restrictions, the ones who do will be left without any protections or age-appropriate experiences.
Researchers quietly planned a major test to dim sunlight, records show
EPA to suspend methane limits without public input
Economists, physicians and legal scholars back kids climate lawsuit
Green groups sue California over air pollution from a climate law
Panel sets markup on disaster, good-government bills
Fight over carbon storage in Texas spills into public hearing
Climate was a safe space for the EU and China. Not anymore.
MEP in charge of EU’s 2040 climate target moves to kill it
‘Unprecedented’ ocean heat waves suggest climate tipping point
London’s financial district workers face dangerously hot commute
Wind droughts threaten energy reliability
Nature Climate Change, Published online: 28 July 2025; doi:10.1038/s41558-025-02383-1
Wind energy is helping to mitigate climate change. But now a study shows that climate change may make wind power less reliable.
Reduction of methane emissions through improved landfill management
Nature Climate Change, Published online: 28 July 2025; doi:10.1038/s41558-025-02391-1
Solid waste disposal is a major source of anthropogenic methane, yet estimating these emissions is difficult. Here the authors use satellite data to assess emissions from high-emitting landfills and find that transforming open sites to sanitary landfills could offer a large mitigation potential.
Prolonged wind droughts in a warming climate threaten global wind power security
Nature Climate Change, Published online: 28 July 2025; doi:10.1038/s41558-025-02387-x
Prolonged low wind speeds can lead to a strong reduction in wind power generation. Here, the authors show that such wind drought events become more frequent and extended under global warming, threatening energy security in some regions.
Famous double-slit experiment holds up when stripped to its quantum essentials
MIT physicists have performed an idealized version of one of the most famous experiments in quantum physics. Their findings demonstrate, with atomic-level precision, the dual yet evasive nature of light. They also happen to confirm that Albert Einstein was wrong about this particular quantum scenario.
The experiment in question is the double-slit experiment, first performed in 1801 by the British scholar Thomas Young to show how light behaves as a wave. Today, with the formulation of quantum mechanics, the double-slit experiment is known for its surprisingly simple demonstration of a head-scratching reality: that light exists as both a particle and a wave. Stranger still, this duality cannot be simultaneously observed. Seeing light in the form of particles instantly obscures its wave-like nature, and vice versa.
The original experiment involved shining a beam of light through two parallel slits in a screen and observing the pattern that formed on a second, faraway screen. One might expect to see two overlapping spots of light, which would imply that light exists as particles, a.k.a. photons, like paintballs that follow a direct path. But instead, the light produces alternating bright and dark stripes on the screen, in an interference pattern similar to what happens when two ripples in a pond meet. This suggests light behaves as a wave. Even weirder, when one tries to measure which slit the light is traveling through, the light suddenly behaves as particles and the interference pattern disappears.
The double-slit experiment is taught today in most high school physics classes as a simple way to illustrate the fundamental principle of quantum mechanics: that all physical objects, including light, are simultaneously particles and waves.
Nearly a century ago, the experiment was at the center of a friendly debate between physicists Albert Einstein and Niels Bohr. In 1927, Einstein argued that a photon particle should pass through just one of the two slits and in the process generate a slight force on that slit, like a bird rustling a leaf as it flies by. He proposed that one could detect such a force while also observing an interference pattern, thereby catching light’s particle and wave nature at the same time. In response, Bohr applied the quantum mechanical uncertainty principle and showed that the detection of the photon’s path would wash out the interference pattern.
Scientists have since carried out multiple versions of the double-slit experiment, and they have all, to various degrees, confirmed the validity of the quantum theory formulated by Bohr. Now, MIT physicists have performed the most “idealized” version of the double-slit experiment to date. Their version strips down the experiment to its quantum essentials. They used individual atoms as slits, and used weak beams of light so that each atom scattered at most one photon. By preparing the atoms in different quantum states, they were able to modify what information the atoms obtained about the path of the photons. The researchers thus confirmed the predictions of quantum theory: The more information was obtained about the path (i.e. the particle nature) of light, the lower the visibility of the interference pattern was.
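This trade-off has a standard quantitative form, the wave-particle duality relation associated with Englert and with Greenberger and Yasin (our gloss, not an equation quoted from the new paper):

$$\mathcal{D}^2 + \mathcal{V}^2 \le 1,$$

where $\mathcal{D}$ is the which-path distinguishability and $\mathcal{V}$ is the visibility of the interference fringes. Complete path information ($\mathcal{D} = 1$) forces the fringe visibility to zero, while perfect fringes ($\mathcal{V} = 1$) require that no path information exists.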
They also demonstrated what Einstein got wrong: whenever an atom is “rustled” by a passing photon, the wave interference is diminished.
“Einstein and Bohr would have never thought that this is possible, to perform such an experiment with single atoms and single photons,” says Wolfgang Ketterle, the John D. MacArthur Professor of Physics and leader of the MIT team. “What we have done is an idealized Gedanken experiment.”
Their results appear in the journal Physical Review Letters. Ketterle’s MIT co-authors include first author Vitaly Fedoseev, Hanzhen Lin, Yu-Kun Lu, Yoo Kyung Lee, and Jiahao Lyu, who all are affiliated with MIT’s Department of Physics, the Research Laboratory of Electronics, and the MIT-Harvard Center for Ultracold Atoms.
Cold confinement
Ketterle’s group at MIT experiments with atoms and molecules that they super-cool to temperatures just above absolute zero and arrange in configurations that they confine with laser light. Within these ultracold, carefully tuned clouds, exotic phenomena that only occur at the quantum, single-atom scale can emerge.
In a recent experiment, the team was investigating a seemingly unrelated question, studying how light scattering can reveal the properties of materials built from ultracold atoms.
“We realized we can quantify the degree to which this scattering process is like a particle or a wave, and we quickly realized we can apply this new method to realize this famous experiment in a very idealized way,” Fedoseev says.
In their new study, the team worked with more than 10,000 atoms, which they cooled to microkelvin temperatures. They used an array of laser beams to arrange the frozen atoms into an evenly spaced, crystal-like lattice configuration. In this arrangement, each atom is far enough away from any other atom that each can effectively be considered a single, isolated and identical atom. And 10,000 such atoms can produce a signal that is more easily detected, compared to a single atom or two.
The group reasoned that with this arrangement, they might shine a weak beam of light through the atoms and observe how a single photon scatters off two adjacent atoms, as a wave or a particle. This would be similar to how, in the original double-slit experiment, light passes through two slits.
“What we have done can be regarded as a new variant to the double-slit experiment,” Ketterle says. “These single atoms are like the smallest slits you could possibly build.”
Tuning fuzz
Working at the level of single photons required repeating the experiment many times and using an ultrasensitive detector to record the pattern of light scattered off the atoms. From the intensity of the detected light, the researchers could directly infer whether the light behaved as a particle or a wave.
They were particularly interested in the situation where half the photons they sent in behaved as waves, and half behaved as particles. They achieved this by using a method to tune the probability that a photon will appear as a wave versus a particle, by adjusting an atom’s “fuzziness,” or the certainty of its location. In their experiment, each of the 10,000 atoms is held in place by laser light that can be adjusted to tighten or loosen the light’s hold. The more loosely an atom is held, the fuzzier, or more “spatially extensive,” it appears. The fuzzier atom rustles more easily and records the path of the photon. Therefore, in tuning up an atom’s fuzziness, researchers can increase the probability that a photon will exhibit particle-like behavior. Their observations were in full agreement with the theoretical description.
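One back-of-envelope way to see why fuzziness sets the visibility (our illustration, not the paper’s formalism): if an atom’s position $x$ has a Gaussian spread $\sigma$, the phase factor it imprints on scattered light averages to

$$\langle e^{iqx} \rangle = e^{-q^2 \sigma^2 / 2},$$

where $q$ is the momentum the photon transfers. As $\sigma$ grows (a fuzzier, more easily rustled atom), the coherent, wave-like contribution is exponentially suppressed and particle-like scattering takes over.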
Springs away
In their experiment, the group tested Einstein’s idea about how to detect the path of the photon. Conceptually, if each slit were cut into an extremely thin sheet of paper suspended in the air by a spring, a photon passing through one slit should shake the corresponding spring by a certain degree, signaling the photon’s particle nature. In previous realizations of the double-slit experiment, physicists have incorporated such a spring-like ingredient, and the spring played a major role in describing the photon’s dual nature.
But Ketterle and his colleagues were able to perform the experiment without the proverbial springs. The team’s cloud of atoms is initially held in place by laser light, similar to Einstein’s conception of a slit suspended by a spring. The researchers reasoned that if they were to do away with their “spring,” and observe exactly the same phenomenon, then it would show that the spring has no effect on a photon’s wave/particle duality.
This, too, was what they found. Over multiple runs, they turned off the spring-like laser holding the atoms in place, then quickly took a measurement within a millionth of a second, before the atoms became fuzzier and eventually fell due to gravity. In this tiny window, the atoms were effectively floating in free space. In this spring-free scenario, the team observed the same phenomenon: a photon’s wave and particle nature could not be observed simultaneously.
“In many descriptions, the springs play a major role. But we show, no, the springs do not matter here; what matters is only the fuzziness of the atoms,” Fedoseev says. “Therefore, one has to use a more profound description, which uses quantum correlations between photons and atoms.”
The researchers note that the year 2025 has been declared by the United Nations as the International Year of Quantum Science and Technology, celebrating the formulation of quantum mechanics 100 years ago. The discussion between Bohr and Einstein about the double-slit experiment took place only two years later.
“It’s a wonderful coincidence that we could help clarify this historic controversy in the same year we celebrate quantum physics,” says co-author Lee.
This work was supported, in part, by the National Science Foundation, the U.S. Department of Defense, and the Gordon and Betty Moore Foundation.
Zero Knowledge Proofs Alone Are Not a Digital ID Solution to Protecting User Privacy
In the past few years, governments across the world have rolled out digital identification options, and there are now efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This post is the first in a short series explaining digital ID and the pending use case of age verification. The following posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.
Age verification measures are having a moment, with policymakers in the U.S. and around the world passing legislation mandating that online services and companies introduce technologies requiring people to verify their identities to access content deemed appropriate for their age. But for most people, physical government documentation like a driver's license, passport, or other ID is not a simple binary of having it or not. Physical ID systems involve hundreds of factors that affect their accuracy and validity, and everyday situations arise in which identification attributes change or an ID becomes invalid, inaccurate, or in need of reissue: addresses change, driver’s licenses expire or have suspensions lifted, and temporary IDs are issued in lieu of permanent identification.
The digital ID systems currently being introduced may solve some problems, like identity fraud, for business and government services, but they leave the holder of the digital ID vulnerable to the demands of the companies collecting such information. State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims for public assistance and other services as to facilitate them. That’s why legal protections are as important as the digital IDs themselves. To add to this, in places that lack comprehensive data privacy legislation, verifiers face few restrictions on what they can and can’t ask the holder. In response, some privacy mechanisms have been suggested, but few have been made mandatory; one example is the promise that a feature called Zero Knowledge Proofs (ZKPs) will easily solve the privacy aspects of sharing ID attributes.
Zero Knowledge Proofs: The Good News

The biggest selling point of modern digital ID offerings, especially to those seeking to solve mass age verification, is the ability to share something called a Zero Knowledge Proof (ZKP), letting a website or mobile application verify ID information without receiving the ID itself or the information explicitly on it. ZKPs provide a cryptographic way to avoid giving something away, like the exact date of birth and age on your ID, instead offering a “yes-or-no” claim (like above or below 18) to a verifier enforcing a legal age threshold. More specifically, two properties of ZKPs are “soundness” and “zero knowledge.” Soundness appeals to verifiers and governments because it makes it hard for an ID holder to present forged information (the holder won’t know the “secret”). Zero knowledge benefits the holder, who doesn’t have to share explicit information like a birth date, just cryptographic proof that the information exists and is valid. There have been recent announcements from major tech companies like Google, which plans to integrate ZKPs for age verification and “where appropriate in other Google products”.
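As a rough illustration of that interface (not real zero-knowledge cryptography: the proof is replaced here by a simple issuer endorsement via HMAC, and every name and key below is hypothetical), note that only the yes-or-no claim, never the birth date, crosses the wire to the verifier:

```python
import hashlib
import hmac
from datetime import date

# Toy model of the data flow only. A real system replaces the HMAC with actual
# zero-knowledge cryptography (e.g., a range proof), so the verifier needs no
# secret from the issuer. Here the shared key is a deliberate simplification.
ISSUER_KEY = b"demo-issuer-secret"  # hypothetical issuer key

def issue_claim(birth_year: int, threshold: int) -> tuple[bool, bytes]:
    """Issuer side: derive the yes/no claim and endorse it; birth_year never leaves."""
    claim = (date.today().year - birth_year) >= threshold  # year-only age: toy precision
    tag = hmac.new(ISSUER_KEY, f"over_{threshold}:{claim}".encode(), hashlib.sha256).digest()
    return claim, tag

def verify_claim(claim: bool, tag: bytes, threshold: int) -> bool:
    """Verifier side: check the endorsement; learns only the boolean, not the birth date."""
    expected = hmac.new(ISSUER_KEY, f"over_{threshold}:{claim}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

claim, tag = issue_claim(birth_year=2004, threshold=18)
print("over 18:", claim, "| endorsement valid:", verify_claim(claim, tag, 18))
```

The point of the sketch is what the verifier sees: a boolean and evidence of issuer backing, nothing more. Everything the next section criticizes (over-asking, tracking, IP collection) happens outside this exchange.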
Zero Knowledge Proofs: The Bad News

What ZKPs don’t do is mitigate verifier abuse: they neither prevent verifiers from over-asking for information they don’t need nor limit how many times verifiers request your age over time. They also don’t prevent websites or applications from collecting other kinds of observable personally identifiable information, like your IP address or other device information, while you interact with them.
ZKPs are a great tool for sharing less data about ourselves, whether over time or in a one-time transaction. But they do little about the data broker industry, which already holds massive profiles of data on people. We understand that this is not what ZKPs for age verification were presented to solve. But it is still imperative to point out that using this technology to share even more about ourselves online through mandatory age verification widens the scope of sharing in an already saturated ecosystem of easily linked personal information. Going from presenting your physical ID maybe two or three times a week to proving your age to multiple websites and apps every day would make going online a burden at minimum, and an outright barrier for those who can’t obtain an ID.
Protecting The Way Forward

Mandatory age verification takes the potential privacy benefits of mobile ID and proposed ZKP solutions, then warps them into speech-chilling mechanisms.
Until the hard questions of power imbalances with potentially abusive verifiers and of preventing phoning home to ID issuers are addressed, these systems should not be pushed forward without proper protections in place. A more private, holder-centric ID requires more than ZKPs as a catch-all for privacy concerns. Safety online is not solved through technology alone; it involves multiple, ongoing conversations. Yes, that sounds harder than imposing age checks online for everyone. Maybe that’s why age checks are so tempting to implement. But we encourage policymakers and lawmakers to pursue what is best, not what is easy.