Feed aggregator
China’s $6.8T green finance boom seen advancing further
Climate change worsened rains in southern Africa, study shows
Taking the heat out of industrial chemical separations
The modern world runs on chemicals and fuels that require a huge amount of energy to produce: Industrial chemical separation accounts for 10 to 15 percent of the world’s total energy consumption. That’s because most separations today rely on heat to boil off unwanted materials and isolate compounds.
The MIT spinout Osmoses is making industrial chemical separations more efficient by reducing the need for all that heat. The company, founded by former MIT postdoc Francesco Maria Benedetti; Katherine Mizrahi Rodriguez ’17, PhD ’22; Professor Zachary Smith; and Holden Lai, has developed a polymer technology capable of filtering gases with unprecedented selectivity.
Gases — consisting of some of the smallest molecules in the world — have historically been the hardest to separate. Osmoses says its membranes enable industrial customers to increase production, use less energy, and operate in a smaller footprint than is possible using conventional heat-based separation processes.
Osmoses has already begun working with partners to demonstrate its technology’s performance, including its ability to upgrade biogas, which involves separating CO2 and methane. The company also has projects in the works to recover hydrogen from large chemical facilities and, in a partnership with the U.S. Department of Energy, to pull helium from underground hydrogen wells.
“Chemical separations really matter, and they are a bottleneck to innovation and progress in an industry where innovation is challenging, yet an existential need,” Benedetti says. “We want to make it easier for our customers to reach their revenue targets, their decarbonization goals, and expand their markets to move the industry forward.”
Better separations
Benedetti joined Smith’s lab in MIT’s Department of Chemical Engineering in 2017. He was joined by Mizrahi Rodriguez the following year, and the pair spent the next few years conducting fundamental research into membrane materials for gas separations, collaborating with chemists at MIT and beyond, including Lai as he conducted his PhD at Stanford University with Professor Yan Xia.
“I was fascinated by the projects [Smith] was thinking about,” Benedetti says. “It was high-risk, high-reward, and that’s something I love. I had the opportunity to work with talented chemists, and they were synthesizing amazing polymers. The idea was for us chemical engineers at MIT to study those polymers, support chemists in taking next steps, and find an application in the separations world.”
The researchers slowly iterated on the membranes, gradually achieving better performance until, in 2020, a group including Lai, Benedetti, Xia, and Smith broke records for gas separation selectivity with a class of three-dimensional polymers whose structural backbone could be tuned to optimize performance. They filed patents with Stanford and MIT over the next two years, publishing their results in the journal Science in 2022.
“We were facing a decision of what to do with this incredible innovation,” Benedetti recalls. “By then, we’d published a lot of papers where, as the introduction, we described the huge energy footprint of thermal gas separations and the potential of membranes to solve that. We thought rather than wait for somebody to pick up the paper and do something with it, we wanted to lead the effort to commercialize the technology.”
Benedetti joined forces with Mizrahi Rodriguez, Lai, and industrial advisor Xinjin Zhao PhD ’92 to go through the National Science Foundation’s I-Corps Program, which challenges researchers to speak to potential customers in industry. The researchers interviewed more than 100 people, which confirmed for them the huge impact their technology could have.
Benedetti received grants from the MIT Deshpande Center for Technological Innovation and MIT Sandbox, and was a fellow with the MIT Energy Initiative. Osmoses also won the MIT $100K Entrepreneurship Competition in 2021, the same year the company was founded.
“I spent a lot of time talking to stakeholders of companies, and it was a window into the challenges the industry is facing,” Benedetti says. “It helped me determine this was a problem they were facing, and showed me the problem was massive. We realized if we could solve the problem, we could change the world.”
Today, Benedetti says more than 90 percent of energy in the chemicals industry is used to thermally separate gases. One study in Nature found that replacing thermal distillation could reduce annual U.S. energy costs by $4 billion and save 100 million tons of carbon dioxide emissions.
Made up of a class of molecules with tunable structures called hydrocarbon ladder polymers, Osmoses’ membranes are capable of filtering gas molecules with high levels of selectivity, at scale. The technology reduces the size of separation systems, making it easier to add to existing spaces and lowering upfront costs for customers.
“This technology is a paradigm shift with respect to how most separations are happening in industry today,” Benedetti says. “It doesn’t require any thermal processes, which is the reason why the chemical and petrochemical industries have such high energy consumption. There are huge inefficiencies in how separations are done today because of the traditional systems used.”
From the lab to the world
In the lab, the founders were making single grams of their membrane polymers for experiments. Since then, they’ve scaled up production dramatically, reducing the cost of the material with an eye toward producing potentially hundreds of kilograms in the future.
The company is currently working toward its first pilot project upgrading biogas at a landfill operated by a large utility in North America. It is also planning a pilot at a dairy farm in North America. Mizrahi Rodriguez says waste gas from landfills and agricultural operations makes up over 80 percent of the biogas upgrading market overall and represents a promising alternative source of renewable methane for customers.
“In the near term, our goal is to validate this technology at scale,” Benedetti says, noting Osmoses aims to scale up its pilot projects. “It has been a big accomplishment to secure funded pilots in all of the verticals that will serve as a springboard for our next commercial phase.”
Osmoses’ other two pilot projects focus on recovering valuable gas, including helium with the Department of Energy.
“Helium is a scarce resource that we need for a variety of applications, like MRIs, and our membranes’ high performance can be used to extract small amounts of it from underground wells,” Mizrahi Rodriguez explains. “Helium is very important in the semiconductor industry to build chips and graphical processing units that are powering the AI revolution. It’s a strategic resource that the U.S. has a growing interest to produce domestically.”
Benedetti says further down the line, Osmoses’ technology could be used in carbon capture, gas “sweetening” to remove acid gases from natural gas, to separate oxygen and nitrogen, to reuse refrigerants, and more.
“There will be a progressive expansion of our capabilities and markets to deliver on our mission of redefining the backbone of the chemical, petrochemical, and energy industries,” Benedetti says. “Separations should not be a bottleneck to innovation and progress anymore.”
Biodiversity implications of land-intensive carbon dioxide removal
Nature Climate Change, Published online: 30 January 2026; doi:10.1038/s41558-026-02557-5
Carbon dioxide removal (CDR) plays an important role in decarbonization pathways to meet climate goals, but some methods are land-intensive. Multimodel analysis reveals conflicts between biodiversity and CDR that are distributed unevenly, and shows that synergies are crucial to meet climate and conservation goals.
Q&A: A simpler way to understand syntax
For decades, MIT Professor Ted Gibson has taught the meaning of language to first-year graduate students in the Department of Brain and Cognitive Sciences (BCS). A new book, Gibson’s first, brings together his years of teaching and research to detail the rules of how words combine.
“Syntax: A Cognitive Approach,” released by MIT Press on Dec. 16, lays out the grammar of a language from the perspective of a cognitive scientist, outlining the components of language structure and the model of syntax that Gibson advocates: dependency grammar.
It was his research collaborator and wife, associate professor of BCS and McGovern Institute for Brain Research investigator Ev Fedorenko, who encouraged him to put pen to paper. Here, Gibson takes some time to discuss the book.
Q: Where did the process for “Syntax” begin?
A: I think it started with my teaching. Course 9.012 (Cognitive Science), which I teach with Josh Tenenbaum and Pawan Sinha, divides language into three components: sound, structure, and meaning. I work on the structure and meaning parts of language: words and how they get put together. That’s called syntax.
I’ve spent a lot of time over the last 30 years trying to understand the compositional rules of syntax, and even though there are many grammar rules in any language, I actually don’t think the form for grammar rules is that complicated. I’ve taught it in a very simple way for many years, but I’ve never written it all down in one place. My wife, Ev, is a longtime collaborator, and she suggested I write a paper. It turned into a book.
Q: How do you like to explain syntax?
A: For any sentence, for any utterance in any human language, there’s always going to be a word that serves as the head of that sentence, and every other word will somehow depend on that headword, maybe as an immediate dependent, or further away, through some other dependent words. This is called dependency grammar; it means there’s a root word in each sentence, and dependents of that root, on down, for all the words in the sentence, form a simple tree structure. I have cognitive reasons to suggest that this model is correct, but it isn’t my model; it was first proposed in the 1950s. I adopted it because it aligns with human cognitive phenomena.
That very simple framework gives you the following observation: that longer-distance connections between words are harder to produce and understand than shorter-distance ones. This is because of limitations in human memory. The closer the words are together, the easier it is for me to produce them in a sentence, and the easier it is for you to understand them. If they’re far apart, then it’s a complicated memory problem to produce and understand them.
This gives rise to a cool observation: Languages optimize their rules in order to keep the words close together. We can have very different orders of the same elements across languages, such as the difference in word orders for English versus Japanese, where the order of the words in the English sentence “Mary eats an apple” is “Mary apple eats” in Japanese. But then the ordering rules in English and Japanese are aligned within themselves in order to minimize dependency lengths on average for the language.
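Gibson’s dependency-length idea can be made concrete with a toy calculation. The following Python sketch measures the total head-to-dependent distance for one sentence under two word orders; the specific head-dependent links and orders below are illustrative assumptions, not examples taken from the book:

```python
# Toy dependency-length calculation for "Mary eats an apple" under two word
# orders. Head -> dependent links, assumed for illustration:
#   eats -> Mary (subject), eats -> apple (object), apple -> an (determiner)

def total_dependency_length(order, deps):
    """Sum of linear distances between each head and its dependent."""
    pos = {word: i for i, word in enumerate(order)}
    return sum(abs(pos[head] - pos[dep]) for head, dep in deps)

deps = [("eats", "Mary"), ("eats", "apple"), ("apple", "an")]

english = ["Mary", "eats", "an", "apple"]     # SVO order
verb_final = ["Mary", "an", "apple", "eats"]  # SOV, Japanese-like order

print(total_dependency_length(english, deps))     # 4
print(total_dependency_length(verb_final, deps))  # 5
```

Averaged over many sentences, comparisons like this are what “minimizing dependency lengths on average for the language” refers to: each language’s ordering rules tend to keep heads and their dependents close together.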
Q: How does the book challenge some longstanding ideas in the field of linguistics?
A: In 1957, a book called “Syntactic Structures” by Noam Chomsky was published. It is a wonderful book that provides mathematical approaches to describe what human language is. It is very influential in the field of linguistics, and for good reason.
One of the key components of the theory that Chomsky proposed was the “transformation,” such that words and phrases can move from a deep structure to the structure that we produce. He thought it was self-evident from examples in English that transformations must be part of a human language. But then this concept of transformations eventually led him to conclude that grammar is unlearnable, that it has to be built into the human mind.
In my view of grammar, there are no transformations. Instead, there are just two different versions of some words, or they can be underspecified for their grammar usage. The different usages may be related in meaning, and they can point to a similar meaning, but they have different dependency structures.
I think the advent of large language models suggests that language is learnable and that syntax isn’t as complicated as we used to think it was, because LLMs are successful at producing language. A large language model is almost the same as an adult speaker of a language in what it can produce. There are subtle ways in which they differ, but on the surface, they look the same in many ways, which suggests that these models do very well with learning language, even with human-like quantities of data.
I get pushback from some people who say, well, researchers can still use transformations to account for some phenomena. My reaction is: Unless you can show me that transformations are necessary, then I don’t think we need them.
Q: This book is open access. Why did you decide to publish it that way?
A: I am all for free knowledge for everyone. I am one of the editors of “Open Mind,” a journal established several years ago that is completely free and open access. I felt my book should be the same way, and MIT Press is a fantastic university press that is nonprofit and supportive of open-access publishing. It means I make less money, but it also means it can reach more people. For me, it is really about trying to get the information out there. I want more people to read it, to learn things. I think that’s how science is supposed to be.
Rhea Vedro brings community wishes to life in Boston sculpture
Boston recently got its own good luck charm, “Amulet,” a 19-foot-tall tangle of organic spires installed in City Hall Plaza and embedded with the wishes, hopes, and prayers of residents from across the city.
The public artwork, by artist Rhea Vedro — also a lecturer and metals artist-in-residence in MIT’s Department of Materials Science and Engineering (DMSE) — was installed on the north side of City Hall, in a newly renovated stretch of the plaza along Congress Street, in October and dedicated with a ribbon cutting on Dec. 19.
“I’m really interested in this idea of protective objects worn on the skin by humans across cultures, across time,” said Vedro at the event in the Civic Pavilion, across the plaza from the sculpture. “And then, how do you take those ideas off the body and turn them into a blown-up version — a stand-in for the body?”
Vedro started exploring that question in 2021, when she was awarded a Boston Triennial Public Art Accelerator fellowship and later commissioned by the city to create the piece — the first artwork installed in the refurbished section of the plaza. She invited people to workshops and community centers to create hundreds of “wishmarks” — steel panels with hammered indentations and words, each representing a personal wish or reflection.
The plates were later used to form the metal skin of the sculpture — three bird-like forms designed to be, in Vedro’s words, a “protective amulet for the landscape.”
“I didn’t ask anyone to share what their actual wishes were, but I met people going into surgery, people who were homeless and looking for housing, people who had just lost a loved one, people dealing with immigration issues,” Vedro said. She asked participants to meditate on the idea of a journey and safe passage. “That could be a literal journey with ideas around immigration and migration,” she said, “or it could be your own internal journey.”
Large-scale art, fine-scale detail
Vedro, who has several public artworks to her name, said in a video about making “Amulet” that the project was “the biggest thing I’ve ever done.” While artworks of this scale are often handed off to fabrication teams, she handled the construction herself, starting on her driveway until zoning rules forced her to move to her father-in-law’s warehouse. Sections were also welded at Artisans Asylum, a community workshop in Boston, where she was an artist in residence, and then moved to a large industrial studio in Rhode Island.
At the ribbon-cutting event, Vedro thanked friends, family members, and city officials who helped bring the project to life. The celebration ended with a concert by musician Veronica Robles and her mariachi band. Robles runs the Veronica Robles Cultural Center in East Boston, which served as the main site for wishmark workshops. The sculpture is expected to remain in City Hall Plaza for up to five years.
Vedro’s background is in fine arts metalsmithing, a discipline that involves shaping and manipulating metals like silver, gold, and copper through forging, casting, and soldering. She began working at a very different scale, making jewelry, and then later moved primarily to welded steel sculpture — both techniques she now teaches at MIT. When working with steel, Vedro applies the same sensitivity a jeweler brings to small objects, paying close attention to small undulations and surface texture.
She loves working with steel, Vedro says — “shaping and forming and texturing and fighting with it” — because it allows her to engage physically with the material, with her hands involved in every millimeter.
The sculpture’s fluid design began with loose, free-form bird drawings on a cement floor and rubber panels with soapstone, oil pastels, and paint sticks. Vedro then built the forms in metal, welding three-dimensional armatures from round steel bars. The organic shapes and flourishes emerged through a responsive, intuitive process.
“I’m someone who works in real-time, changing my mind and responding to the material,” Vedro says. She likens her process to making a patchwork quilt of steel pieces: forming patterns in a shapeable material like tar paper, transferring them to steel sheets, cutting and shaping and texturing the pieces, and welding them together. “So I can get lots of curvatures that way that are not at all modular.”
From steel plates to soaring form
The sculpture’s outer skin is made from thin, 20-gauge mild steel — a low-carbon steel that’s relatively soft and easy to work with — used for the wishmarks. Those plates were fitted over an internal armature constructed from heavier structural steel.
Because there were more wishmark panels than surface area, Vedro slipped some of them into the hollow space inside the sculpture before welding the piece closed. She compares them to treasures in a locket, “loose, rattling around, which freaked out the team when they were installing.” Any written text on the panels was burned off when the pieces were welded together.
“I believe the stuff’s all alchemized up into smoke, which to me is wonderful because it traverses realms just like a bird,” she says.
The surface of the sculpture is coated with a sealant — necessary because the outer skin material is prone to rust — along with spray paints, patinas, and accents including gold leaf. Its appearance will change over time, something Vedro embraces.
“The idea of transformation is actually integral to my work,” she says.
Standing outside the warmth of the Civic Pavilion on a windy, rainy day, artist Matt Bajor described the sculpture as “gorgeous,” attributing its impact in part to Vedro’s fluency in working across vastly different scales.
“The attention to detail — paying attention to the smaller things so that as it comes together as a whole, you have that fineness throughout the whole sculpture,” he said. “To do that at such a large scale is just crazy. It takes a lot of skill, a lot of effort, and a lot of time.”
Suveena Sreenilayam, a DMSE graduate student who has worked closely with Vedro, said her understanding of the relationship between art and craft brings a unique dimension to her work.
“Metal is hard to work with — and to build that on such small and large scales indicates real versatility,” Sreenilayam said. “To make something so artistic at this scale reflects her physical talent, and also her eye for detail and expression.”
Bajor said “Amulet” is a striking addition to the plaza, where the clean lines of City Hall’s Brutalist architecture contrast with the sculpture’s sinuous curves — and to Boston itself.
“I’m looking forward to seeing it in different conditions — in snow and bright sun — as the metal changes over time and as the patina develops,” he said. “It’s just a really great addition to the city.”
EFF to Close Friday in Solidarity with National Shutdown
The Electronic Frontier Foundation stands with the people of Minneapolis and with all of the communities impacted by the ongoing campaign of ICE and CBP violence. EFF will be closed Friday, Jan. 30 as part of the national shutdown in opposition to ICE and CBP and the brutality and terror they and other federal agencies continue to inflict on immigrant communities and any who stand with them.
We do not make this decision lightly, but we will not remain silent.
- See our statement on ICE/CBP violence: https://www.eff.org/deeplinks/2026/01/eff-statement-lawless-actions-ice-and-cbp
- See our Surveillance Self-Defense tips for protestors: https://ssd.eff.org/module/attending-protest
- See our explanation of the right to record police activity: https://www.eff.org/deeplinks/2025/02/yes-you-have-right-film-ice
“MIT Open Learning has opened doors I never imagined possible”
Through the MITx MicroMasters Program in Data, Economics, and Design of Policy, Munip Utama strengthened the skills he was already applying in his work with Baitul Enza, a nonprofit helping students in need via policy-shaping research and hands-on assistance.
Utama’s commitment to advancing education for underprivileged students stems from his own background. His father is an elementary school teacher in a remote area and his mother has passed away. While financial hardship has always been a defining challenge, he says it has also been the driving force behind his pursuit of education. With the assistance of special programs for high-achieving students, Utama attended top schools and completed his bachelor’s degree in economics at UIN Jakarta — becoming the second person in his family to earn a university degree.
Utama joined Baitul Enza two months before graduation, through a faculty-led research project, and later became its manager, leading its programs and future development. In this interview, he describes how his experiences with the MicroMasters Program in Data, Economics, and Design of Policy (DEDP), offered by the Abdul Latif Jameel Poverty Action Lab (J-PAL) and MIT Open Learning, are shaping his education, career, and personal mission.
Q: What motivated you to pursue the MITx MicroMasters Program in Data, Economics, and Design of Policy?
A: I was seeking high-quality, evidence-based courses in economics and development. I needed rigorous training in data analysis, economic reasoning, and policy design to strengthen our interventions at Baitul Enza. The MITx MicroMasters Program in Data, Economics, and Design of Policy offered exactly that: a curriculum grounded in real-world problem-solving, aligned with the challenges I face in Indonesia.
I deeply admire MIT’s commitment to transforming teaching and learning not only through innovation, but also through empathy. The DEDP program exemplifies this mission: It connects theory with practice, allowing learners like me to apply analytical tools directly to real development challenges. This approach has inspired me to adopt the same philosophy in my own teaching and mentoring, encouraging students to use data and critical thinking to solve problems in their communities.
Q: What have you gained from the MITx DEDP program?
A: The DEDP courses have provided me with rigorous analytical and quantitative training in data analysis, economics, and policy design. They have strengthened both my research and mentorship abilities by teaching me to approach poverty and inequality through evidence-based frameworks. My experience conducting independent and collaborative research projects has informed how I mentor students, guiding them to carry out their own evidence-based research projects. I continue to seek further academic dialogue to broaden my understanding and prepare for future graduate studies.
Another key component has been the program’s financial assistance offers. Even with DEDP’s personalized income-based course pricing, financial constraints remain a significant challenge for me, and Baitul Enza operates entirely on donations and volunteer support. The scholarships administered by DEDP have been crucial in enabling me to continue my studies. This support has allowed me to focus on learning without the constant burden of financial insecurity, while staying committed to my mission of breaking cycles of poverty through education.
Q: How are you applying what you’ve learned from MIT Open Learning’s MITx programs, and how will you use what you’ve learned going forward?
A: The DEDP program has transformed how I lead Baitul Enza. I now apply data-driven and evidence-based approaches to program design, monitoring, and evaluation — enhancing cost-effectiveness and long-term impact. The program has enabled me to design case-based learning modules for students, where they analyze real-world data on poverty and education; mentor youth researchers to conduct small-scale projects using evidence-based methods; and improve program cost-effectiveness and outcome measurement to attract collaborators and government support.
Coming from a lower-middle-class family with limited access to education, MIT Open Learning has opened doors I never imagined possible. It has reaffirmed my belief that education, grounded in data and empathy, can break the cycle of poverty. The DEDP program continues to inspire me to mentor young researchers, empower disadvantaged students, and build a community rooted in evidence-based decision-making.
With the foundation built by MITx, I aim to produce policy-relevant research and scale up Baitul Enza’s impact. My long-term vision is to generate experimental evidence in Indonesia on scalable education interventions, inform national policy, and empower marginalized youth to thrive. MITx has not only prepared me academically, but has also strengthened my resolve to lead with clarity, design with evidence, and act with purpose. Beyond my own growth, MITx has multiplied its impact by empowering the next generation of students to use data and evidence in solving local development challenges.
MIT engineers design structures that compute with heat
MIT researchers have designed silicon structures that can perform calculations in an electronic device using excess heat instead of electricity. These tiny structures could someday enable more energy-efficient computation.
In this computing method, input data are encoded as a set of temperatures using the waste heat already present in a device. The flow and distribution of heat through a specially designed material form the basis of the calculation. Then the output is represented by the power collected at the other end, which is held at a fixed temperature.
The researchers used these structures to perform matrix-vector multiplication with more than 99 percent accuracy. Matrix multiplication is the fundamental mathematical operation that machine-learning models like LLMs use to process information and make predictions.
While the researchers still have to overcome many challenges to scale up this computing method for modern deep-learning models, the technique could be applied to detect heat sources and measure temperature changes in electronics without consuming extra energy. This would also eliminate the need for multiple temperature sensors that take up space on a chip.
“Most of the time, when you are performing computations in an electronic device, heat is the waste product. You often want to get rid of as much heat as you can. But here, we’ve taken the opposite approach by using heat as a form of information itself and showing that computing with heat is possible,” says Caio Silva, an undergraduate student in the Department of Physics and lead author of a paper on the new computing paradigm.
Silva is joined on the paper by senior author Giuseppe Romano, a research scientist at MIT’s Institute for Soldier Nanotechnologies and a member of the MIT-IBM Watson AI Lab. The research appears today in Physical Review Applied.
Turning up the heat
This work was enabled by a software system the researchers previously developed that allows them to automatically design a material that can conduct heat in a specific manner.
Using a technique called inverse design, this system flips the traditional engineering approach on its head. The researchers define the functionality they want first, then the system uses powerful algorithms to iteratively design the best geometry for the task.
They used this system to design complex silicon structures, each roughly the same size as a dust particle, that can perform computations using heat conduction. This is a form of analog computing, in which data are encoded and signals are processed using continuous values, rather than digital bits that are either 0s or 1s.
The researchers feed their software system the specifications of a matrix of numbers that represents a particular calculation. Using a grid, the system designs a set of rectangular silicon structures filled with tiny pores. The system continually adjusts each pixel in the grid until it arrives at the desired mathematical function.
Heat diffuses through the silicon in a way that performs the matrix multiplication, with the geometry of the structure encoding the coefficients.
“These structures are far too complicated for us to come up with just through our own intuition. We need to teach a computer to design them for us. That is what makes inverse design a very powerful technique,” Romano says.
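Under strong simplifying assumptions, the iterate-until-matched loop of inverse design can be sketched in a few lines. The toy below swaps the real heat-conduction physics for a trivial stand-in response (normalized row sums of a binary “pore” grid) and uses greedy single-pixel flipping rather than the more sophisticated algorithms the researchers’ system relies on; the grid size and target vector are invented for illustration:

```python
import random

random.seed(0)
ROWS, COLS = 4, 8
target = [0.25, 0.5, 0.75, 1.0]  # desired per-row response (assumed)

def response(grid):
    # Stand-in for solving the physics: the fraction of filled pixels per row.
    return [sum(row) / COLS for row in grid]

def loss(grid):
    # Squared error between the current behavior and the desired behavior.
    return sum((r - t) ** 2 for r, t in zip(response(grid), target))

# Start from a random geometry, then repeatedly flip whichever single pixel
# most reduces the loss, stopping when no flip improves the match.
grid = [[random.randint(0, 1) for _ in range(COLS)] for _ in range(ROWS)]

for _ in range(200):
    current = loss(grid)
    best = None
    for i in range(ROWS):
        for j in range(COLS):
            grid[i][j] ^= 1            # try flipping this pixel
            candidate = loss(grid)
            grid[i][j] ^= 1            # undo the trial flip
            if best is None or candidate < best[0]:
                best = (candidate, i, j)
    if best[0] >= current:
        break
    grid[best[1]][best[2]] ^= 1        # commit the best flip
```

Because the targets here are exact multiples of 1/8, this toy converges to a grid whose behavior matches the target exactly; the real system iterates the same way, but each candidate geometry is evaluated by simulating heat conduction.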
But the researchers ran into a problem. Because the laws of heat conduction dictate that heat flows from hot to cold regions, these structures can only encode positive coefficients.
They overcame this problem by splitting the target matrix into its positive and negative components and representing them with separately optimized silicon structures that encode positive entries. Subtracting the outputs at a later stage allows them to compute negative matrix values.
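The positive/negative split can be verified numerically. This minimal Python sketch uses a hypothetical 2x2 matrix (the actual coefficients and structures are not described in the article) to show that two nonnegative matrices, each realizable as its own silicon structure, recover the full product once their outputs are subtracted:

```python
def split_nonneg(M):
    """Split M into nonnegative parts so that M = M_pos - M_neg."""
    M_pos = [[max(v, 0.0) for v in row] for row in M]
    M_neg = [[max(-v, 0.0) for v in row] for row in M]
    return M_pos, M_neg

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

# Hypothetical target matrix with a negative entry, and a sample input vector.
A = [[1.0, -2.0],
     [3.0,  0.5]]
x = [4.0, 1.0]

A_pos, A_neg = split_nonneg(A)
# Each nonnegative matrix maps to its own optimized structure; subtracting the
# two outputs recovers the full matrix-vector product, negatives included.
y = [p - n for p, n in zip(matvec(A_pos, x), matvec(A_neg, x))]
assert y == matvec(A, x)  # [2.0, 12.5]
```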
They can also tune the thickness of the structures, which allows them to realize a greater variety of matrices. Thicker structures have greater heat conduction.
“Finding the right topology for a given matrix is challenging. We beat this problem by developing an optimization algorithm that ensures the topology being developed is as close as possible to the desired matrix without having any weird parts,” Silva explains.
Microelectronic applications
The researchers used simulations to test the structures on simple matrices with two or three columns. While simple, these small matrices are relevant for important applications, such as fusion sensing and diagnostics in microelectronics.
The structures performed computations with more than 99 percent accuracy in many cases.
However, there is still a long way to go before this technique could be used for large-scale applications such as deep learning, since millions of structures would need to be tiled together. As the matrices become more complicated, the structures become less accurate, especially when there is a large distance between the input and output terminals. In addition, the devices have limited bandwidth, which would need to be greatly expanded if they were to be used for deep learning.
But because the structures rely on excess heat, they could be directly applied for tasks like thermal management, as well as heat source or temperature gradient detection in microelectronics.
“This information is critical. Temperature gradients can cause thermal expansion and damage a circuit or even cause an entire device to fail. If we have a localized heat source where we don’t want a heat source, it means we have a problem. We could directly detect such heat sources with these structures, and we can just plug them in without needing any digital components,” Romano says.
Building on this proof-of-concept, the researchers want to design structures that can perform sequential operations, where the output of one structure becomes an input for the next. This is how machine-learning models perform computations. They also plan to develop programmable structures, enabling them to encode different matrices without starting from scratch with a new structure each time.
Introducing Encrypt It Already
Today, we’re launching Encrypt It Already, our push to get companies to offer stronger privacy protections for our data and communications by implementing end-to-end encryption. If that name sounds familiar, it’s because this is a spiritual successor to Fix It Already, our 2019 campaign that pushed companies to fix longstanding issues.
End-to-end encryption is the best way we have to protect our conversations and data. It ensures that the company providing a service cannot access the data or messages you store on it. For secure chat apps like WhatsApp and Signal, that means the company that makes those apps cannot see the contents of your messages, which are accessible only on your devices and your recipients’ devices. For stored data, like what’s protected by Apple’s Advanced Data Protection, it means you control the encryption keys and the service provider cannot access the data.
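To see why the provider is locked out, consider a toy illustration in Python. It uses a one-time-pad XOR cipher purely as a stand-in for real cryptography (actual E2EE apps use vetted protocols like the Signal protocol; never use this for real security). The point is that the relay server only ever handles ciphertext:

```python
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy one-time pad: XOR each byte with a random key of equal length.
    # Stand-in only; real E2EE uses vetted protocols, not this.
    return bytes(k ^ p for k, p in zip(key, plaintext))

decrypt = encrypt  # XOR is its own inverse

message = b"meet at noon"
key = secrets.token_bytes(len(message))  # held only by the endpoints

ciphertext = encrypt(key, message)

# The service provider relays (and may store) only the ciphertext;
# without the key, it cannot recover the message.
server_sees = ciphertext

# The recipient, who holds the key, can decrypt.
assert decrypt(key, server_sees) == message
```

The essential design choice is where the key lives: with end-to-end encryption it exists only on the endpoints, so compelling or compromising the server yields nothing readable.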
We’ve divided this up into three categories, each with three different demands:
- Keep Your Promises: Features the company has publicly stated it is working on, but which haven’t launched yet.
- Facebook should use end-to-end encryption for group messages
- Apple and Google should deliver on their promise of interoperable end-to-end encryption of RCS
- Bluesky should launch its promised end-to-end encryption for DMs
- Defaults Matter: Features that are already available in a service or app, but aren’t enabled by default.
- Telegram should default to end-to-end encryption for DMs
- WhatsApp should use end-to-end encryption for backups by default
- Ring should enable end-to-end encryption for its cameras by default
- Protect Our Data: New features that companies should launch, often because their competition is doing it already.
- Google should launch end-to-end encryption for Google Authenticator backups
- Google should offer end-to-end encryption for Android backup data
- Apple and Google should offer a per-app AI permission option to block AI access to secure chat apps
What companies do is only half the problem; how they do it is just as important.
What Companies Should Do When They Launch End-to-End Encryption Features
There’s no one-size-fits-all way to implement end-to-end encryption in products and services, but best practices can pair the security of the platform with the transparency that makes it possible for users to trust that it protects their data the way the company claims. When these encryption features launch, companies should consider shipping them with:
- A blog post written for a general audience that summarizes the technical details of the implementation, and when it makes sense, a technical white paper that goes into further detail for the technical crowd.
- Clear user-facing documentation around what data is and isn’t end-to-end encrypted, and robust and clear user controls when it makes sense to have them.
- Data minimization principles whenever feasible, storing as little metadata as possible.
Technical documentation is important for end-to-end encryption features, but so is clear documentation that makes it easy for users to understand what is and isn’t protected, which features may change, and what steps they need to take to set things up so they’re comfortable with how their data is protected.
What You Can Do
When it’s an option, enable any end-to-end encryption features you can, like on Telegram, WhatsApp, and Ring.
For everything else, let companies know that these are features you want! You can find messages to share on social media on the Encrypt It Already website, and take the time to customize those however you’d like.
In some cases, you can also reach out to a company directly with feature requests, which all of the above companies except Google and WhatsApp accept in some form. We recommend filing these through any of the services you use, for whichever of the above features you’d like to see.
As for Ring and Telegram, we’ve already made the asks and just need your help to boost them. Head over to Telegram’s bugs and suggestions board to upvote our post, and to Ring’s feature request board to boost ours.
End-to-end encryption protects what we say and what we store in a way that gives users—not companies or governments—control over data. These sorts of privacy-protective features should be the status quo across a range of products, from fitness wearables to notes apps, but instead it’s a rare feature limited to a small set of services, like messaging and (occasionally) file storage. These demands are just the start. We deserve this sort of protection for a far wider array of products and services. It’s time to encrypt it already!
Google Settlement May Bring New Privacy Controls for Real-Time Bidding
EFF has long warned about the dangers of the “real-time bidding” (RTB) system powering nearly every ad you see online. A proposed class-action settlement with Google over their RTB system is a step in the right direction towards giving people more control over their data. Truly curbing the harms of RTB, however, will require stronger legislative protections.
What Is Real-Time Bidding?
RTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your personal information to thousands of companies a day. At a high level, here’s how RTB works:
- The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you. This involves sending information about you and the content you’re viewing to the ad tech company.
- This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers.
- The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people.
- Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space.
- The highest bidder gets to display an ad for you, but advertisers (and the adtech companies they use to buy ads) can collect your bidstream data regardless of whether or not they bid on the ad space.
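The steps above can be sketched with a simplified bid request, loosely modeled on the OpenRTB format. The field names and values below are illustrative only, not a real OpenRTB payload:

```python
# A simplified bid request, loosely modeled on OpenRTB (illustrative only).
bid_request = {
    "id": "auction-8fa3",                       # one auction, milliseconds long
    "site": {"page": "https://example.com/article"},
    "device": {
        "ifa": "a1b2c3d4-e5f6-0000-0000-000000000000",  # advertising ID
        "ip": "203.0.113.7",
        "ua": "ExampleBrowser/1.0",
        "geo": {"lat": 40.7128, "lon": -74.0060},
    },
    "user": {"interests": ["fitness", "travel"], "yob": 1990},
}

def broadcast(request, bidders):
    """Every listed bidder receives the full request, win or lose."""
    return {name: request for name in bidders}

received = broadcast(bid_request, ["dsp_a", "dsp_b", "data_broker_c"])

# Even a participant that never bids (e.g. "data_broker_c") now holds
# the identifiers and can link them across auctions over time.
assert received["data_broker_c"]["device"]["ip"] == "203.0.113.7"
```

This is the structural vulnerability in miniature: the broadcast step hands identical bidstream data to every participant, so "losing" the auction still means winning the data.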
A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. Since bid requests contain individual identifiers, they can be tied together to create detailed profiles of people’s behavior over time.
Data brokers have sold bidstream data for a range of invasive purposes, including tracking union organizers and political protesters, outing gay priests, and conducting warrantless government surveillance. Several federal agencies, including ICE, CBP and the FBI, have purchased location data from a data broker whose sources likely include RTB. ICE recently requested information on “Ad Tech” tools it could use in investigations, further demonstrating RTB’s potential to facilitate surveillance. RTB also poses national security risks, as researchers have warned that it could allow foreign states to obtain compromising personal data about American defense personnel and political leaders.
The privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast torrents of personal data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately used. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used.
Proposed Settlement with Google Is a Step in the Right Direction
As the dominant player in the online advertising industry, Google facilitates the majority of RTB auctions. Google has faced several class-action lawsuits for sharing users’ personal information with thousands of advertisers through RTB auctions without proper notice and consent. A recently proposed settlement to these lawsuits aims to give people more knowledge and control over how their information is shared in RTB auctions.
Under the proposed settlement, Google must create a new privacy setting (the “RTB Control”) that allows people to limit the data shared about them in RTB auctions. When the RTB Control is enabled, bid requests will not include identifying information like pseudonymous IDs (including mobile advertising IDs), IP addresses, and user agent details. The RTB Control should also prevent cookie matching, a method companies use to link their data profiles about a person to a corresponding bid request. Removing identifying information from bid requests makes it harder for data brokers and advertisers to create consumer profiles based on bidstream data. If the proposed settlement is approved, Google will have to inform all users about the new RTB Control via email.
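The effect of such a setting can be sketched as a filter over the bid request. The field names below are hypothetical, chosen to mirror the kinds of identifiers the settlement describes (pseudonymous IDs, IP addresses, user-agent details):

```python
# Hypothetical identifier fields a control like this would strip.
IDENTIFYING_FIELDS = {"advertising_id", "ip", "user_agent", "cookie_match_id"}

def apply_rtb_control(bid_request: dict) -> dict:
    """Return a copy of the bid request with identifying fields removed."""
    return {k: v for k, v in bid_request.items() if k not in IDENTIFYING_FIELDS}

request = {
    "advertising_id": "a1b2c3d4-0000-0000-0000-000000000000",
    "ip": "203.0.113.7",
    "user_agent": "ExampleBrowser/1.0",
    "page": "https://example.com/article",
    "interests": ["fitness", "travel"],
}

stripped = apply_rtb_control(request)

# Contextual data survives; linkable identifiers do not.
assert "ip" not in stripped and "advertising_id" not in stripped
assert stripped["page"] == "https://example.com/article"
```

Without the identifiers, individual auctions can no longer be stitched together into a longitudinal profile of one person, which is exactly what bidstream data brokers rely on.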
While this settlement would be a step in the right direction, it would still require users to actively opt out of their identifying information being shared through RTB. Those who do not change their default settings—research shows this is most people—will remain vulnerable to RTB’s massive daily data breach. Google broadcasting your personal data to thousands of companies each time you see an ad is an unacceptable and dangerous default.
The impact of RTB Control is further limited by technical constraints on who can enable it. RTB Control will only work for devices and browsers where Google can verify users are signed in to their Google account, or for signed-out users on browsers that allow third-party cookies. People who don't sign in to a Google account or don't enable privacy-invasive third-party cookies cannot benefit from this protection. These limitations could easily be avoided by making RTB Control the default for everyone. If the settlement is approved, regulators and lawmakers should push Google to enable RTB Control by default.
The Real Solution: Ban Online Behavioral Advertising
Limiting the data exposed through RTB is important, but we also need legislative change to protect people from the online surveillance enabled and incentivized by targeted advertising. The lack of strong, comprehensive privacy law in the U.S. makes it difficult for individuals to know and control how companies use their personal information. Strong privacy legislation can make privacy the default, not something that individuals must fight for through hidden settings or additional privacy tools. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, as it creates a financial incentive for companies to track our every move. Until then, you can limit the harms of RTB by using EFF’s Privacy Badger to block ads that track you, disabling your mobile advertising ID (see instructions for iPhone/Android), and keeping an eye out for Google’s RTB Control.
