Feed aggregator
Stand Together to Protect Democracy
What a year it’s been. We’ve seen technology unfortunately misused to supercharge the threats facing democracy: dystopian surveillance, attacks on encryption, and government censorship. These aren’t abstract dangers. They’re happening now, to real people, in real time.
EFF’s lawyers, technologists, and activists are pushing back. But we need you in this fight.
MAKE A YEAR END DONATION—HELP EFF UNLOCK CHALLENGE GRANTS!
If you donate to EFF before the end of 2025, you’ll help fuel the legal battles that defend encryption, the tools that protect privacy, and the advocacy that stops dangerous laws—and you’ll help unlock up to $26,200 in challenge grants.
📣 Stand Together: That's How We Win 📣
The past year confirmed how urgently we need technologies that protect us, not surveil us. EFF has been in the fight every step of the way, thanks to support from people like you.
Get free gear when you join EFF!
This year alone EFF:
- Launched a resource hub to help users understand and fight back against age verification laws.
- Challenged San Jose's unconstitutional license plate reader database in court.
- Sued demanding answers when ICE spotting apps were mysteriously taken offline.
- Launched Rayhunter to detect cell site simulators.
- Pushed back hard against the EU's Chat Proposal that would break encryption for millions.
After 35 years of defending digital freedoms, we know what's at stake: we must protect your ability to speak freely, organize safely, and use technology without surveillance.
We have opportunities to win these fights, and you make each victory possible. Donate to EFF by December 31 and help us unlock additional grants this year!
Already an EFF Member? Help Us Spread the Word!
EFF Members have carried the movement for privacy and free expression for decades. You can help move the mission even further! Here’s some sample language that you can share with your networks:
We need to stand together and ensure technology works for us, not against us. Donate any amount to EFF by Dec 31, and you'll help unlock challenge grants! https://eff.org/yec
Bluesky | Facebook | LinkedIn | Mastodon
(more at eff.org/social)
_________________
EFF is a member-supported U.S. 501(c)(3) organization. We’re celebrating TWELVE YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.
Wisconsin senators probe state AG for hiring outside climate lawyers
Appeals court throws green banks a lifeline
Scientists decry White House plan to break up Colorado climate center
Maryland legislators override vetoes on energy, climate bills
Fact-checking Trump’s energy claims
Data centers have a political problem — and Big Tech wants to fix it
Trump admin squeezes Colorado River states on water use
Coal demand rises in Asia despite booming renewables
Passenger jets are Japan’s newest tool to track climate change
In Senegal, climate change adds to farmer-herder tensions
Automakers, climate groups unite to criticize EU’s EV plan
A new way to increase the capabilities of large language models
In most languages, word order and sentence structure carry meaning. For example, “The cat sat on the box” is not the same as “The box was on the cat.” Over a long text, like a financial document or a novel, these structural relationships among words keep evolving.
Similarly, a person might be tracking variables in a piece of code or following instructions that have conditional actions. These are examples of state changes and sequential reasoning that we expect state-of-the-art artificial intelligence systems to excel at. However, the existing, cutting-edge attention mechanism within transformers, the primary architecture used in large language models (LLMs) for determining the importance of words, has theoretical and empirical limitations when it comes to such capabilities.
An attention mechanism allows an LLM to look back at earlier parts of a query or document and, based on its training, determine which details and words matter most; however, this mechanism alone does not understand word order. It “sees” all of the input words, a.k.a. tokens, at the same time and handles them in the order that they’re presented, so researchers have developed techniques to encode position information. This is key for domains that are highly structured, like language. But the predominant position-encoding method, called rotary position encoding (RoPE), only takes into account the relative distance between tokens in a sequence and is independent of the input data. This means that, for example, words that are four positions apart, like “cat” and “box” in the example above, will all receive the same fixed mathematical rotation specific to that relative distance.
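To make "independent of the input data" concrete, below is a minimal numpy sketch of a RoPE-style rotation (the half-split variant used in several open models); treat it as an illustration rather than any particular model's implementation. The attention score between a rotated query and key depends only on how far apart the two positions are, never on the words themselves.

```python
import numpy as np

def rope_rotate(x, position, base=10000.0):
    """Rotate a feature vector by angles determined solely by its position.

    A RoPE-style encoding: the angle for each feature pair is fixed by the
    token's position index and a frequency schedule, never by the token's
    content -- the property PaTH Attention replaces with data-dependent
    transformations.
    """
    half = x.shape[-1] // 2
    freqs = base ** (-np.arange(half) / half)   # one frequency per feature pair
    angles = position * freqs
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:half], x[half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])

# Query/key pairs at the same relative distance (here, 4 positions apart)
# produce identical attention scores, no matter where they sit or what they say.
rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)
score_near = rope_rotate(q, 2) @ rope_rotate(k, 6)
score_far = rope_rotate(q, 10) @ rope_rotate(k, 14)
print(np.isclose(score_near, score_far))  # True: only relative distance matters
```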
Now research led by MIT and the MIT-IBM Watson AI Lab has produced an encoding technique known as “PaTH Attention” that makes positional information adaptive and context-aware rather than static, as with RoPE.
“Transformers enable accurate and scalable modeling of many domains, but they have these limitations vis-a-vis state tracking, a class of phenomena that is thought to underlie important capabilities that we want in our AI systems. So, the important question is: How can we maintain the scalability and efficiency of transformers, while enabling state tracking?” says the paper’s senior author Yoon Kim, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and a researcher with the MIT-IBM Watson AI Lab.
A new paper on this work was presented earlier this month at the Conference on Neural Information Processing Systems (NeurIPS). Kim’s co-authors include lead author Songlin Yang, an EECS graduate student and former MIT-IBM Watson AI Lab Summer Program intern; Kaiyue Wen of Stanford University; Liliang Ren of Microsoft; and Yikang Shen, Shawn Tan, Mayank Mishra, and Rameswar Panda of IBM Research and the MIT-IBM Watson AI Lab.
Path to understanding
Instead of assigning every word a fixed rotation based on the relative distance between tokens, as RoPE does, PaTH Attention is flexible, treating the in-between words as a path made up of small, data-dependent transformations. Each transformation, based on a mathematical operation called a Householder reflection, acts like a tiny mirror that adjusts depending on the content of each token it passes. Each step in a sequence can influence how the model interprets information later on. The cumulative effect lets the system model how meaning changes along the path between words, not just how far apart they are. This approach allows transformers to keep track of how entities and relationships change over time, giving them a sense of “positional memory.” Think of it as walking a path while experiencing your environment and how it affects you. The team also developed a hardware-efficient algorithm that compresses the cumulative transformation and breaks it into smaller computations, so that attention scores between every pair of tokens can be computed quickly on GPUs.
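The sketch below is a deliberately naive numpy illustration of that idea, not the paper's algorithm or parameterization: each token contributes a content-dependent Householder "mirror," and a key is carried through the mirrors along the path to the query before the two are compared. The direction vectors, the product order, and the reflection form are assumptions made for illustration; the real method restructures this math to run efficiently on GPUs.

```python
import numpy as np

def householder(v):
    """Householder reflection I - 2 v v^T for a unit vector v."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

def path_style_scores(Q, K, V_dirs):
    """Naive reference for a PaTH-style, data-dependent position encoding.

    Each token t contributes a tiny mirror built from its own content
    (here, a direction vector V_dirs[t]).  The key at position j is carried
    through the mirrors along the path j+1 ... i before being compared with
    query i.  This runs in O(n^2 d^2) time and is purely illustrative.
    Future positions are masked out (causal attention).
    """
    n, d = Q.shape
    scores = np.full((n, n), -np.inf)
    for i in range(n):
        transport = np.eye(d)
        scores[i, i] = Q[i] @ K[i]
        for j in range(i - 1, -1, -1):          # walk the path backwards from i
            transport = transport @ householder(V_dirs[j + 1])
            scores[i, j] = Q[i] @ transport @ K[j]
    return scores

rng = np.random.default_rng(0)
n, d = 5, 8
Q, K = rng.normal(size=(n, d)), rng.normal(size=(n, d))
V_dirs = rng.normal(size=(n, d))                # stand-ins for learned, per-token directions
print(path_style_scores(Q, K, V_dirs).round(2))
```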
The MIT-IBM researchers then explored PaTH Attention’s performance on synthetic and real-world tasks, including reasoning, long-context benchmarks, and full LLM training, to see whether it improved a model’s ability to track information over time. The team tested its ability to follow the most recent “write” command despite many distracting steps, along with multi-step recall tests, tasks that are difficult for standard positional encoding methods like RoPE. The researchers also trained mid-size LLMs and compared them against other methods. PaTH Attention improved perplexity and outcompeted other methods on reasoning benchmarks it wasn’t trained on. They also evaluated retrieval, reasoning, and stability with inputs of tens of thousands of tokens, where PaTH Attention consistently demonstrated content-aware behavior.
“We found that both on diagnostic tasks that are designed to test the limitations of transformers and on real-world language modeling tasks, our new approach was able to outperform existing attention mechanisms, while maintaining their efficiency,” says Kim. Further, “I’d be excited to see whether these types of data-dependent position encodings, like PATH, improve the performance of transformers on structured domains like biology, in [analyzing] proteins or DNA.”
Thinking bigger and more efficiently
The researchers then investigated how the PaTH Attention mechanism would perform if it more similarly mimicked human cognition, where we ignore old or less-relevant information when making decisions. To do this, they combined PaTH Attention with another position encoding scheme known as the Forgetting Transformer (FoX), which allows models to selectively “forget.” The resulting PaTH-FoX system adds a way to down-weight information in a data-dependent way, achieving strong results across reasoning, long-context understanding, and language modeling benchmarks. In this way, PaTH Attention extends the expressive power of transformer architectures.
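One simple way to picture data-dependent down-weighting, sketched below under assumptions rather than taken directly from the FoX formulation, is to give every token a content-derived forget gate between 0 and 1 and penalize each attention logit by the accumulated log-gates between key and query, so information the model chooses to "forget" fades with distance.

```python
import numpy as np

def forgetful_attention_logits(logits, forget_gates):
    """Down-weight older tokens in a data-dependent way (illustrative sketch).

    forget_gates[t] in (0, 1) is assumed to be produced from token t's
    content.  The logit for attending from position i back to position j is
    penalized by the sum of log-gates between them; a gate near 1 keeps
    information, a smaller gate makes it fade faster.  Rough sketch in the
    spirit of a forgetting mechanism; the exact FoX formulation may differ.
    """
    n = len(forget_gates)
    cum = np.cumsum(np.log(forget_gates))        # cum[t] = sum of log-gates up to t
    bias = cum[:, None] - cum[None, :]           # bias[i, j] = sum_{t=j+1..i} log f_t
    causal = np.tril(np.ones((n, n), dtype=bool))
    return np.where(causal, logits + bias, -np.inf)

rng = np.random.default_rng(0)
n = 6
logits = rng.normal(size=(n, n))
gates = rng.uniform(0.7, 1.0, size=n)            # hypothetical data-dependent gates
print(forgetful_attention_logits(logits, gates).round(2))
```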
Kim says research like this is part of a broader effort to develop the “next big thing” in AI. He explains that a major driver of both the deep learning and generative AI revolutions has been the creation of “general-purpose building blocks that can be applied to wide domains,” such as “convolution layers, RNN [recurrent neural network] layers,” and, most recently, transformers. Looking ahead, Kim notes that considerations like accuracy, expressivity, flexibility, and hardware scalability have been and will be essential. As he puts it, “the core enterprise of modern architecture research is trying to come up with these new primitives that maintain or improve the expressivity, while also being scalable.”
This work was supported, in part, by the MIT-IBM Watson AI Lab and the AI2050 program at Schmidt Sciences.
Digital innovations and cultural heritage in rural towns
Population decline often goes hand-in-hand with economic stagnation in rural areas — and the two reinforce each other in a cycle. Can digital technologies advance equitable innovation and, at the same time, preserve cultural heritage in shrinking regions?
A new open-access book, edited by MIT Vice Provost and Department of Urban Studies and Planning (DUSP) Professor Brent D. Ryan PhD ’02, Carmelo Ignaccolo PhD ’24 of Rutgers University, and Giovanna Fossa of the Politecnico di Milano, explores the transformative power of community-centered technologies in the rural areas of Italy.
“Small Town Renaissance: Bridging Technology, Heritage and Planning in Shrinking Italy” (Springer Nature, 2025) investigates the future of small towns through empirical analyses of cellphone data, bold urban design visions, collaborative digital platforms for small businesses, and territorial strategies for remote work. The work examines how technology may open up these regions to new economic opportunities. The book shares data-driven scholarly work on shrinking towns, economic development, and digital innovation from multiple planning scholars and practitioners, several of whom traveled to Italy in fall 2022 as part of a DUSP practicum taught by Ryan and Ignaccolo, and sponsored by MISTI Italy and Fondazione Rocca, in collaboration with Liminal.
“What began as a hands-on MIT practicum grew into a transatlantic book collaboration uniting scholars in design, planning, heritage, law, and telecommunications to explore how technology can sustain local economies and culture,” says Ignaccolo.
Now an assistant professor of city planning at Rutgers University’s E.J. Bloustein School of Planning and Public Policy, Ignaccolo says the book provides concrete and actionable strategies to support shrinking regions in leveraging cultural heritage and smart technologies to strengthen opportunities and local economies.
“Depopulation linked to demographic change is reshaping communities worldwide,” says Ryan. “Italy is among the hardest hit, and the United States is heading in the same direction. This project offered students a chance to harness technology and innovation to imagine bold responses to this growing challenge.”
The researchers note that similar struggles also exist in rural communities across Germany, Spain, Japan, and Korea. The book provides policymakers, urban planners, designers, tech innovators, and heritage advocates with fresh insights and actionable strategies to shape the future of rural development in the digital age. The book and chapters can be downloaded for free through most university libraries via open access.
Post-COP30, more aggressive policies needed to cap global warming at 1.5 C
The latest United Nations Climate Change Conference (COP30) concluded in November without a roadmap to phase out fossil fuels and without significant progress in strengthening national pledges to reduce climate-altering greenhouse gas emissions. In aggregate, today’s climate policies remain far too unambitious to meet the Paris Agreement’s goal of capping global warming at 1.5 degrees Celsius, setting the world on course to experience more frequent and intense storms, flooding, droughts, wildfires, and other climate impacts. A global policy regime aligned with the 1.5 C target would almost certainly reduce the severity of those impacts.
In the “2025 Global Change Outlook,” researchers at the MIT Center for Sustainability Science and Strategy (CS3) compare the consequences of these two approaches to climate policy through modeled projections of critical natural and societal systems under two scenarios. The Current Trends scenario represents the researchers’ assessment of current measures for reducing greenhouse gas (GHG) emissions; the Accelerated Actions scenario is a credible pathway to stabilizing the climate at a global mean surface temperature of 1.5 C above preindustrial levels, in which countries impose more aggressive GHG emissions-reduction targets.
By quantifying the risks posed by today’s climate policies — and the extent to which accelerated climate action aligned with the 1.5 C goal could reduce them — the “Global Change Outlook” aims to clarify what’s at stake for environments and economies around the world. Here, we summarize the report’s key findings at the global level; regional details can also be accessed in several sections and through MIT CS3’s interactive global visualization tool.
Emerging headwinds for global climate action
Projections under Current Trends show higher GHG emissions than in our previous 2023 outlook, indicating reduced action on GHG emissions mitigation in the upcoming decade. The difference, roughly equivalent to the annual emissions from Brazil or Japan, is driven by current geopolitical events.
Additional analysis in this report indicates that global GHG emissions in 2050 could be 10 percent higher than they would be under Current Trends if regional rivalries triggered by U.S. tariff policy prompt other regions to weaken their climate regulations. In that case, the world would see virtually no emissions reduction in the next 25 years.
Energy and electricity projections
Between 2025 and 2050, global energy consumption rises by 17 percent under Current Trends, with a nearly nine-fold increase in wind and solar. Under Accelerated Actions, global energy consumption declines by 16 percent, with a nearly 13-fold increase in wind and solar, driven by improvements in energy efficiency, wider use of electricity, and demand response. In both Current Trends and Accelerated Actions, global electricity consumption increases substantially (by 90 percent and 100 percent, respectively), with generation from low-carbon sources becoming a dominant source of power, though Accelerated Actions has a much larger share of renewables.
“Achieving long-term climate stabilization goals will require more ambitious policy measures that reduce fossil-fuel dependence and accelerate the energy transition toward low-carbon sources in all regions of the world. Our Accelerated Actions scenario provides a pathway for scaling up global climate ambition,” says MIT CS3 Deputy Director Sergey Paltsev, co-lead author of the report.
Greenhouse gas emissions and climate projections
Under Current Trends, global anthropogenic (human-caused) GHG emissions decline by 10 percent between 2025 and 2050, but start to rise again later in the century; under Accelerated Actions, however, they fall by 60 percent by 2050. Of the two scenarios, only the latter could put the world on track to achieve long-term climate stabilization.
Median projections for global warming by 2050, 2100, and 2150 reach 1.79, 2.74, and 3.72 degrees C above the 1850-1900 average global mean surface temperature (GMST) under Current Trends, and 1.62, 1.56, and 1.50 C under Accelerated Actions. Median projections for global precipitation show increases from 2025 levels of 0.04, 0.11, and 0.18 millimeters per day in 2050, 2100, and 2150 under Current Trends, and 0.03, 0.04, and 0.03 mm/day for those years under Accelerated Actions.
“Our projections demonstrate that aggressive cuts in GHG emissions can lead to substantial reductions in the upward trends of GMST, as well as global precipitation,” says CS3 deputy director C. Adam Schlosser, co-lead author of the outlook. “These reductions to both climate warming and acceleration of the global hydrologic cycle lower the risks of damaging impacts, particularly toward the latter half of this century.”
Implications for sustainability
The report’s modeled projections imply significantly different risk levels under the two scenarios for water availability, biodiversity, air quality, human health, economic well-being, and other sustainability indicators.
Among the key findings: Policies that align with Accelerated Actions could yield substantial co-benefits for water availability, biodiversity, air quality, and health. For example, combining Accelerated Actions-aligned climate policies with biodiversity targets, or with air-quality targets, could achieve biodiversity and air quality/health goals more efficiently and cost-effectively than a more siloed approach. The outlook’s analysis of the global economy under Current Trends suggests that decision-makers need to account for climate impacts outside their home region and the resilience of global supply chains.
Finally, CS3’s new data-visualization platform provides efficient, screening-level mapping of current and future climate, socioeconomic, and demographic-related conditions and changes — including global mapping for many of the model outputs featured in this report.
“Our comparison of outcomes under Current Trends and Accelerated Actions scenarios highlights the risks of remaining on the world’s current emissions trajectory and the benefits of pursuing a much more aggressive strategy,” says CS3 Director Noelle Selin, a co-author of the report and a professor in the Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences at MIT. “We hope that our risk-benefit analysis will help inform decision-makers in government, industry, academia, and civil society as they confront sustainability-relevant challenges.”
Student Spotlight: Diego Temkin
This interview is part of a series of short interviews from the Department of Electrical Engineering and Computer Science (EECS). Each spotlight features a student answering their choice of questions about themselves and life at MIT. Today’s interviewee, senior Diego Temkin, is double majoring in courses 6-3 (Computer Science and Engineering) and 11 (Urban Planning). The McAllen, Texas, native is involved with MIT’s Dormitory Council (DormCon), helps to maintain Hydrant (formerly Firehose)/CourseRoad, and is both a member of the Student Information Processing Board (MIT’s oldest computing club) and an Advanced Undergraduate Research Opportunities Program (SuperUROP) scholar.
Q: What’s your favorite key on a standard computer keyboard, and why?
A: The “1” key! During Covid, I ended up starting a typewriter collection and trying to fix them up, and I always thought it was interesting how they didn’t have a 1 key. People were just expected to use the lowercase “l,” which presumably makes anyone who cares about ASCII very upset.
Q: Tell us about a teacher from your past who had an influence on the person you’ve become.
A: Back in middle school, everyone had to take a technology class that taught things like typing skills, Microsoft Word and Excel, and some other things. I was a bit of a nerd and didn’t have too many friends interested in the sort of things I was, but the teacher of that technology class, Mrs. Camarena, would let me stay for a bit after school and encouraged me to explore more of my interests. She helped me become more confident in wanting to go into computer science, and now here I am.
Q: What’s your favorite trivia factoid?
A: Every floor in Building 13 is painted as a different MBTA line. I don’t know why and can’t really find anything about it online, but once you notice it you can’t unsee it!
Q: Do you have any pets?
A: I do! His name is Skateboard, and he is the most quintessentially orange cat. I got him off reuse@mit.edu during my first year here at MIT (shout out to Patty K), and he’s been with me ever since. He’s currently five years old, and he’s a big fan of goldfish and stepping on my face at 7 a.m. Best decision I’ve ever made.
Q: Are you a re-reader or a re-watcher? If so, what are your comfort books, shows, or movies?
A: Definitely a re-watcher, and definitely “Doctor Who.” I’ve watched far too much of that show and there are episodes I can recite from memory (looking at you, “The Eleventh Hour”). Anyone I know will tell you that I can go on about that show for hours, and before anyone asks, my favorite doctor is Matt Smith (sorry to the David Tennant fans; I like him too, though!)
Q: Do you have a bucket list? If so, share one or two of the items on it.
A: I’ve been wanting to take a cross-country Amtrak trip for a while … I think I might try going to the West Coast and some national parks during IAP [Independent Activities Period], if I have the time. Now that it’s on here, I definitely have to do it!
Local Communities Are Winning Against ALPR Surveillance—Here’s How: 2025 in Review
Across ideologically diverse communities, 2025 campaigns against automated license plate reader (ALPR) surveillance kept winning. From Austin, Texas to Cambridge, Massachusetts to Eugene, Oregon, successful campaigns combined three practical elements: a motivated political champion on city council, organized grassroots pressure from affected communities, and technical assistance at critical decision moments.
The 2025 Formula for Refusal
- Institutional Authority: Council members leveraging "procurement power"—local democracy's most underutilized tool—to say no.
- Community Mobilization: A base that refuses to debate "better policy" and demands "no cameras."
- Shared Intelligence: Local coalitions utilizing shared research on contract timelines and vendor breaches.
In 2025, organizers embraced the "ugly" win: prioritizing immediate contract cancellations over the "political purity" of perfect privacy laws. Procurement fights are often messy, bureaucratic battles rather than high-minded legislative debates, but they stop surveillance where it starts—at the checkbook. In Austin, more than 30 community groups built a coalition that forced a contract cancellation, achieving via purchasing power what policy reform often delays.
In Hays County, Texas, the victory wasn't about a new law, but a contract termination. Commissioner Michelle Cohen grounded her vote in vendor accountability, explaining: "It's more about the company's practices versus the technology." These victories might lack the permanence of a statute, but every camera turned off built a culture of refusal that made the next rejection easier. This was the organizing principle: take the practical win and build on it.
Start with the Harm
Winning campaigns didn't debate technical specifications or abstract privacy principles. They started with documented harms that surveillance enabled. EFF's research showing police used Flock's network to track Romani people with discriminatory search terms, surveil women seeking abortion care, and monitor protesters exercising First Amendment rights became the evidence organizers used to build power.
In Olympia, Washington, nearly 200 community members attended a counter-information rally outside city hall on Dec. 2. The DeFlock Olympia movement countered police department claims point-by-point with detailed citations about data breaches and discriminatory policing. By Dec. 3, cameras had been covered pending removal.
In Cambridge, the city council voted unanimously in October to pause Flock cameras after residents, the ACLU of Massachusetts, and Digital Fourth raised concerns. When Flock later installed two cameras "without the city's awareness," a city spokesperson called it a "material breach of our trust" and terminated the contract entirely. The unexpected camera installation itself became an organizing moment.
The Inside-Outside Game
The winning formula worked because it aligned different actors around refusing vehicular mass surveillance systems without requiring everyone to become experts. Community members organized neighbors and testified at hearings, creating political conditions where elected officials could refuse surveillance and survive politically. Council champions used their institutional authority to exercise "procurement power": the ability to categorically refuse surveillance technology.
To fuel these fights, organizers leveraged technical assets like investigation guides and contract timeline analysis. This technical capacity allowed community members to lead effectively without needing to become policy experts. In Eugene and Springfield, Oregon, Eyes Off Eugene organized sustained opposition over months while providing city council members political cover to refuse. "This is [a] very wonderful and exciting victory," organizer Kamryn Stringfield said. "This only happened due to the organized campaign led by Eyes Off Eugene and other local groups."
Refusal Crosses Political Divides
A common misconception collapsed in 2025: that surveillance technology can only be resisted in progressive jurisdictions. San Marcos, Texas let its contract lapse after a 3-3 deadlock, with Council Member Amanda Rodriguez questioning whether the system showed "return on investment." Hays County commissioners in Texas voted to terminate. Small towns like Gig Harbor, Washington rejected proposals before deployment.
As community partners like the Rural Privacy Coalition emphasize, "privacy is a rural value." These victories came from communities with different political cultures but shared recognition that mass surveillance systems weren't worth the cost or risk regardless of zip code.
Communities Learning From Each Other
In 2025, communities no longer needed to build expertise from scratch—they could access shared investigation guides, learn from victories in neighboring jurisdictions, and connect with organizers who had won similar fights. When Austin canceled its contract, it inspired organizing across Texas. When the Illinois Secretary of State's audit revealed illegal data sharing with federal immigration enforcement, Evanston used those findings to terminate 19 cameras.
The combination of different forms of power—institutional authority, community mobilization, and shared intelligence—was a defining feature of this year's most effective campaigns. By bringing these elements together, community coalitions have secured cancellations or rejections in nearly two dozen jurisdictions since February, building the infrastructure to make the next refusal easier and the movement unstoppable.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
Fighting to Keep Bad Patents in Check: 2025 in Review
A functioning patent system depends on one basic principle: bad patents must be challengeable. In 2025, that principle was repeatedly tested—by Congress, by the U.S. Patent and Trademark Office (USPTO), and by a small number of large patent owners determined to weaken public challenges.
Two damaging bills, PERA and PREVAIL, were reintroduced in Congress. At the same time, USPTO attempted a sweeping rollback of inter partes review (IPR), one of the most important mechanisms for challenging wrongly granted patents.
EFF pushed back—on Capitol Hill, inside the Patent Office, and alongside thousands of supporters who made their voices impossible to ignore.
Congress Weighed Bills That Would Undo Core Safeguards
The Patent Eligibility Restoration Act, or PERA, would overturn the Supreme Court’s Alice and Myriad decisions—reviving patents on abstract software ideas, and even allowing patents on isolated human genes. PREVAIL, introduced by the same main sponsors in Congress, would seriously weaken the IPR process by raising the burden of proof, limiting who can file challenges, forcing petitioners to surrender court defenses, and giving patent owners new ways to rewrite their claims mid-review.
Together, these bills would have dismantled much of the progress made over the last decade.
We reminded Congress that abstract software patents—like those we’ve seen on online photo contests, upselling prompts, matchmaking, and scavenger hunts—are exactly the kind of junk claims patent trolls use to threaten creators and small developers. We also pointed out that if PREVAIL had been law in 2013, EFF could not have brought the IPR that crushed the so-called “podcasting patent.”
EFF’s supporters amplified our message, sending thousands of messages to Congress urging lawmakers to reject these bills. The result: neither bill advanced to the full committee. The effort to rewrite patent law behind closed doors stalled out once public debate caught up with it.
Patent Office Shifts To An “Era of No”
Congress’ push from the outside was stymied, at least for now. Unfortunately, what may prove far more effective is the push from within by new USPTO leadership, which is working to dismantle systems and safeguards that protect the public from the worst patents.
Early in the year, the Patent Office signaled it would once again lean more heavily on procedural denials, reviving an approach that allowed patent challenges to be thrown out basically whenever there was an ongoing court case involving the same patent. But the most consequential move came later: a sweeping proposal unveiled in October that would make IPR nearly unusable for those who need it most.
2025 also marked a sharp practical shift inside the agency. Newly appointed USPTO Director John Squires took personal control of IPR institution decisions, and rejected all 34 of the first IPR petitions that came across his desk. As one leading patent blog put it, an “era of no” has been ushered in at the Patent Office.
The October Rulemaking: Making Bad Patents Untouchable
The USPTO’s proposed rule changes would:
- Force defendants to surrender their court defenses if they use IPR—an intense burden for anyone actually facing a lawsuit.
- Make patents effectively unchallengeable after a single prior dispute, even if that challenge was limited, incomplete, or years out of date.
- Block IPR entirely if a district court case is projected to move faster than the Patent Trial and Appeal Board (PTAB).
These changes wouldn’t “balance” the system as USPTO claims—they would make bad patents effectively untouchable. Patent trolls and aggressive licensors would be insulated, while the public would face higher costs and fewer options to fight back.
We sounded the alarm on these proposed rules and asked supporters to register their opposition. More than 4,000 of you did—thank you! Overall, more than 11,000 comments were submitted. An analysis of the comments shows that stakeholders and the public overwhelmingly oppose the proposal, with 97% of comments weighing in against it.
In those comments, small business owners described being hit with vague patents they could never afford to fight in court. Developers and open-source contributors explained that IPR is often the only realistic check on bad software patents. Leading academics, patient-advocacy groups, and major tech-community institutions echoed the same point: you cannot issue hundreds of thousands of patents a year and then block one of the only mechanisms that corrects the mistakes.
- The Linux Foundation warned that the rules “would effectively remove IPRs as a viable mechanism” for developers.
- GitHub emphasized the increased risk and litigation cost for open-source communities.
- Twenty-two patent law professors called the proposal unlawful and harmful to innovation.
- Patients for Affordable Drugs detailed the real-world impact of striking invalid pharmaceutical patents, showing that drug prices can plummet once junk patents are removed.
Heading Into 2026
The USPTO now faces thousands of substantive comments. Whether the agency backs off or tries to push ahead, EFF will stay engaged. Congress may also revisit PERA, PREVAIL, or similar proposals next year. Some patent owners will continue to push for rules that shield low-quality patents from any meaningful review.
But 2025 proved something important: When people understand how patent abuse affects developers, small businesses, patients, and creators, they show up—and when they do, their actions can shape what happens next.
This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.
A “scientific sandbox” lets researchers explore the evolution of vision systems
Why did humans evolve the eyes we have today?
While scientists can’t go back in time to study the environmental pressures that shaped the evolution of the diverse vision systems that exist in nature, a new computational framework developed by MIT researchers allows them to explore this evolution in artificial intelligence agents.
The framework they developed, in which embodied AI agents evolve eyes and learn to see over many generations, is like a “scientific sandbox” that allows researchers to recreate different evolutionary trees. The user does this by changing the structure of the world and the tasks AI agents complete, such as finding food or telling objects apart.
This allows them to study why one animal may have evolved simple, light-sensitive patches as eyes, while another has complex, camera-type eyes.
The researchers’ experiments with this framework showcase how tasks drove eye evolution in the agents. For instance, they found that navigation tasks often led to the evolution of compound eyes with many individual units, like the eyes of insects and crustaceans.
On the other hand, if agents focused on object discrimination, they were more likely to evolve camera-type eyes with irises and retinas.
This framework could enable scientists to probe “what-if” questions about vision systems that are difficult to study experimentally. It could also guide the design of novel sensors and cameras for robots, drones, and wearable devices that balance performance with real-world constraints like energy efficiency and manufacturability.
“While we can never go back and figure out every detail of how evolution took place, in this work we’ve created an environment where we can, in a sense, recreate evolution and probe the environment in all these different ways. This method of doing science opens the door to a lot of possibilities,” says Kushagra Tiwary, a graduate student at the MIT Media Lab and co-lead author of a paper on this research.
He is joined on the paper by co-lead author and fellow graduate student Aaron Young; graduate student Tzofi Klinghoffer; former postdoc Akshat Dave, who is now an assistant professor at Stony Brook University; Tomaso Poggio, the Eugene McDermott Professor in the Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute, and co-director of the Center for Brains, Minds, and Machines; co-senior authors Brian Cheung, a postdoc in the Center for Brains, Minds, and Machines and an incoming assistant professor at the University of California San Francisco; and Ramesh Raskar, associate professor of media arts and sciences and leader of the Camera Culture Group at MIT; as well as others at Rice University and Lund University. The research appears today in Science Advances.
Building a scientific sandbox
The paper began as a conversation among the researchers about discovering new vision systems that could be useful in different fields, like robotics. To test their “what-if” questions, the researchers decided to use AI to explore the many evolutionary possibilities.
“What-if questions inspired me when I was growing up to study science. With AI, we have a unique opportunity to create these embodied agents that allow us to ask the kinds of questions that would usually be impossible to answer,” Tiwary says.
To build this evolutionary sandbox, the researchers took all the elements of a camera, like the sensors, lenses, apertures, and processors, and converted them into parameters that an embodied AI agent could learn.
They used those building blocks as the starting point for an algorithmic learning mechanism an agent would use as it evolved eyes over time.
“We couldn’t simulate the entire universe atom-by-atom. It was challenging to determine which ingredients we needed, which ingredients we didn’t need, and how to allocate resources over those different elements,” Cheung says.
In their framework, this evolutionary algorithm can choose which elements to evolve based on the constraints of the environment and the task of the agent.
Each environment has a single task, such as navigation, food identification, or prey tracking, designed to mimic real visual tasks animals must overcome to survive. The agents start with a single photoreceptor that looks out at the world and an associated neural network model that processes visual information.
Then, over each agent’s lifetime, it is trained using reinforcement learning, a trial-and-error technique where the agent is rewarded for accomplishing the goal of its task. The environment also incorporates constraints, like a certain number of pixels for an agent’s visual sensors.
“These constraints drive the design process, the same way we have physical constraints in our world, like the physics of light, that have driven the design of our own eyes,” Tiwary says.
Over many generations, agents evolve different elements of vision systems that maximize rewards.
Their framework uses a genetic encoding mechanism to computationally mimic evolution, where individual genes mutate to control an agent’s development.
For instance, morphological genes capture how the agent views the environment and control eye placement; optical genes determine how the eye interacts with light and dictate the number of photoreceptors; and neural genes control the learning capacity of the agents.
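To make the setup concrete, here is a toy Python sketch of what such a genetic encoding and evolution loop could look like. The gene fields, ranges, and selection scheme are hypothetical illustrations, not the paper's actual encoding, and the fitness function is a placeholder for the reinforcement-learning training the real framework performs within each agent's lifetime.

```python
import random
from dataclasses import dataclass, replace

# Hypothetical genome mirroring the three gene groups described above;
# the field names and ranges are illustrative, not taken from the paper.
@dataclass
class EyeGenome:
    eye_placement_deg: float   # morphological: where the eye sits on the agent
    num_photoreceptors: int    # optical: how much light information comes in
    hidden_units: int          # neural: learning capacity of the agent's network

def mutate(g: EyeGenome, rate: float = 0.3) -> EyeGenome:
    """Randomly perturb individual genes, mimicking mutation."""
    child = replace(g)
    if random.random() < rate:
        child.eye_placement_deg += random.gauss(0, 10)
    if random.random() < rate:
        child.num_photoreceptors = max(1, child.num_photoreceptors + random.choice([-1, 1]))
    if random.random() < rate:
        child.hidden_units = max(4, child.hidden_units + random.choice([-8, 8]))
    return child

def lifetime_fitness(g: EyeGenome) -> float:
    """Placeholder for training the agent with reinforcement learning on its
    task and returning the reward it earns under the physical constraints."""
    return random.random()  # stand-in; the real framework trains an agent here

def evolve(generations: int = 20, population: int = 16) -> EyeGenome:
    # Every agent starts from a single photoreceptor, as in the framework.
    pop = [EyeGenome(0.0, 1, 16) for _ in range(population)]
    for _ in range(generations):
        scored = sorted(pop, key=lifetime_fitness, reverse=True)
        parents = scored[: population // 4]        # keep the fittest quarter
        pop = [mutate(random.choice(parents)) for _ in range(population)]
    return max(pop, key=lifetime_fitness)

print(evolve())
```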
Testing hypotheses
When the researchers set up experiments in this framework, they found that tasks had a major influence on the vision systems the agents evolved.
For instance, agents that were focused on navigation tasks developed eyes designed to maximize spatial awareness through low-resolution sensing, while agents tasked with detecting objects developed eyes focused more on frontal acuity, rather than peripheral vision.
Another experiment indicated that a bigger brain isn’t always better when it comes to processing visual information. Only so much visual information can go into the system at a time, based on physical constraints like the number of photoreceptors in the eyes.
“At some point a bigger brain doesn’t help the agents at all, and in nature that would be a waste of resources,” Cheung says.
In the future, the researchers want to use this simulator to explore the best vision systems for specific applications, which could help scientists develop task-specific sensors and cameras. They also want to integrate LLMs into their framework to make it easier for users to ask “what-if” questions and study additional possibilities.
“There’s a real benefit that comes from asking questions in a more imaginative way. I hope this inspires others to create larger frameworks, where instead of focusing on narrow questions that cover a specific area, they are looking to answer questions with a much wider scope,” Cheung says.
This work was supported, in part, by the Center for Brains, Minds, and Machines and the Defense Advanced Research Projects Agency (DARPA) Mathematics for the Discovery of Algorithms and Architectures (DIAL) program.
Teen builds an award-winning virtual reality prototype thanks to free MIT courses
When Freesia Gaul discovered MIT Open Learning’s OpenCourseWare at just 14 years old, it opened up a world of learning far beyond what her classrooms could offer. Her parents had started a skiing company, and the seasonal work meant that Gaul had to change schools every six months. Growing up in small towns in Australia and Canada, she relied on the internet to fuel her curiosity.
“I went to 13 different schools, which was hard because you're in a different educational system every single time,” says Gaul. “That’s one of the reasons I gravitated toward online learning and teaching myself. Knowledge is something that exists beyond a curriculum.”
The small towns she lived in often didn’t have a lot of resources, she says, so a computer served as a main tool for learning. She enjoyed engaging with Wikipedia, ultimately researching topics and writing and editing content for pages. In 2018, she discovered MIT OpenCourseWare, part of MIT Open Learning, and took her first course. OpenCourseWare offers free, online, open educational resources from more than 2,500 MIT undergraduate and graduate courses.
“I really got started with the OpenCourseWare introductory electrical engineering classes, because I couldn’t find anything else quite like it online,” says Gaul, who was initially drawn to courses on circuits and electronics, such as 6.002 (Circuits and Electronics) and 6.01SC (Introduction to Electrical Engineering and Computer Science). “It really helped me in terms of understanding how electrical engineering worked in a practical sense, and I just started modding things.”
In true MIT “mens et manus” (“mind and hand”) fashion, Gaul spent much of her childhood building and inventing, especially when she was able to access a 3D printer. She says that a highlight was when she built a life-sized, working version of a Mario Kart, constructed out of materials she had printed.
Gaul calls herself a “serial learner,” and has taken many OpenCourseWare courses. In addition to classes on circuits and electronics, she also took courses in linear algebra, calculus, and quantum physics — in which she took a particular interest.
When she was 15, she participated in Qubit by Qubit. Hosted by The Coding School, in collaboration with universities (including MIT) and tech companies, this two-semester course introduces high schoolers to quantum computing and quantum physics.
During that time she started a blog called On Zero, representing the “zero state” of a qubit. “The ‘zero state’ in a quantum computer is the representation of creativity from nothing, infinite possibilities,” says Gaul. For the blog, she would pick a topic or question, such as “What is color?” and research it in depth. What she learned eventually led her to start asking questions such as “What is a Hamiltonian?” and teaching quantum physics alongside PhDs.
Building on these interests, Gaul chose to study quantum engineering at the University of New South Wales. She notes that on her first day of university, she participated in iQuHack, the MIT Quantum Hackathon. Her team worked to find a new way to approximate the value of a hyperbolic function using quantum logic, and received an honorable mention for “exceptional creativity.”
Gaul’s passion for making things continued during her college days, especially in terms of innovating to solve a problem. When she found herself on a train, wanting to code a personal website on a computer with a dying battery, she wondered if there might be a way to make a glove that can act as a type of Bluetooth keyboard — essentially creating a way to type in the air. In her spare time, she started working on such a device, ultimately finding a less expensive way to build a lightweight, haptic, gesture-tracking glove with applications for virtual reality (VR) and robotics.
Gaul says she has always had an interest in VR, using it to create her own worlds, reconstruct an old childhood house, and play Dungeons and Dragons with friends. She discovered a way to embed small linear resonant actuators, the kind found in smartphones and gaming controllers, into a glove and map them to any object in VR so that the user can feel it.
An early prototype that Gaul put together in her dorm room received a lot of attention on YouTube. She went on to win the People’s Choice award for it at the SxSW Sydney 2025 Tech and Innovation Festival. This design also sparked her co-founding of the tech startup On Zero, named after her childhood blog dedicated to the love of creation from nothing.
Gaul sees the device, in general, as a way of “paying it forward,” making improved human-computer interaction available to many — from young students to professional technologists. She hopes to enable creative freedom in as many people as she can. “The mind is just such a fun thing. I want to empower others to have the freedom to follow their curiosity, even if it's pointless on paper.
“I’ve benefited from people going far beyond what they needed to do to help me,” says Gaul. “I see OpenCourseWare as a part of that. The free courses gave me a solid foundation of knowledge and problem-solving abilities. Without these, it wouldn’t be possible to do what I’m doing now.”
