Feed aggregator
Tackling the housing shortage with robotic microfactories
A national housing shortage is straining finances and communities across the United States. In Massachusetts, at least 222,000 homes will have to be built in the next 10 years to meet the population's needs. At the same time, there are numerous challenges in traditional construction. There's a shortage of skilled construction workers. Most projects involve multiple contractors and subcontractors, adding complexity and lag time. And the construction process, as well as the buildings themselves, can be a major source of emissions that contribute to climate change.
Reframe Systems, co-founded by Vikas Enti SM '20, uses robotics, software, and high-performance materials to address these problems. Founded in 2022, the company deploys microfactories that bring housing fabrication and production closer to the regions where the homes are needed. The first homes designed and manufactured in Reframe's first microfactory have been fully built in Arlington and Somerville, Massachusetts.
Enti's experiences in MIT System Design and Management (SDM) shaped the company from its start. "Learning how to navigate the system and finding the optimal value for each stakeholder has been a key part of the business strategy," he says, "and that's rooted in what I learned at SDM."
Better tools for system-level problems
Enti applied to SDM's master of science in engineering and management while he was working at Kiva Systems, overseeing its acquisition by Amazon and transformation into Amazon Robotics. He found that the SDM program's fundamentals of systems engineering, system architecture, and project management provided him with the tools he needed to address system-level problems in his work.
While he was at MIT, Enti also served as an associate director for the MIT $100K Entrepreneurship Competition, which offers students and researchers mentorship, feedback, and potential funding for their startup ideas. He realized that "there isn't a single formula for how businesses start, or how long it takes to get them started," he says, which helped shape his plans to start his own business.
Enti took a leave of absence from MIT to oversee the expansion of Amazon Robotics in Europe. He returned and completed his degree in 2020, writing his thesis on developing technology that could mitigate falls for elderly people. This instinct to use his education for a good cause resurfaced when his daughters were born. He wanted his future business to address a real-world problem and have a social impact, while also reducing carbon emissions.
Growing housing, shrinking emissions
Enti concluded that housing, with immediate real-world impact and a significant share of global carbon emissions, was the right problem to work on. He reached out to his colleagues Aaron Small and Felipe Polido from Amazon Robotics to share his idea for advanced, low-cost factories that could be deployed quickly and close to where they were needed. The two joined him as co-founders.
Currently, the microfactory in Andover, Massachusetts, produces structural panels, with robotics completing wall and ceiling framing and people completing the rest of the work, including wiring and plumbing. Eventually, Reframe hopes to automate more of the building process through further use of robotics. The modular construction process allows for reduced waste and disruption on the eventual home site. And the finished homes are designed to be energy-efficient and ready for solar panel installation. The company is set to start work soon on a group of homes in Devens, Massachusetts.
In addition to the Andover location, Reframe is setting up in southern California to help rebuild homes that were destroyed in the area's January 2025 wildfires. The company's software-assisted design process and the adjustability of the microfactories allow it to meet local zoning and building codes and align with the local architectural aesthetic. This means that in Somerville, Reframe's completed buildings look like modernized versions of the neighboring three-story buildings, known locally as "triple-deckers." On the other side of the country, Reframe's design offerings include Spanish-style and craftsman homes.
"Housing is a complex systems problem," Enti says, explaining the impact SDM has had on his work at Reframe. The methods and tools taught in the integrated core class EM.412 (Foundations of System Design and Management) help him tackle systems-level problems and take the needs of multiple stakeholders into account. The Reframe team used technology roadmapping as they devised their overall business plan, inspired by the work of Olivier de Weck, associate head of the MIT Department of Aeronautics and Astronautics. And lectures on project management from Bryan Moser, SDM's academic director, remain relevant.
"Embracing the fact that this is a systems problem, and learning how to navigate the system and the stakeholders to make sure we're finding the optimal value, has been a key part of the business strategy," Enti says.
Reframe Systems is set to continue learning through iteration as it plans to expand its network of microfactories. The company remains committed to the core vision of sustainably meeting the country's need for more housing. "I'm grateful we get to do this," Enti says. "Once you strip away all the robotics, the advanced algorithms, and the factories, these are high-quality, healthy homes that families get to live in and grow."
Copyright and DMCA Best Practices for Fediverse Operators
People building the future of the social web — interoperable and decentralized — need to protect themselves against copyright liability. Like anyone who creates and operates platforms for user-uploaded content, the hosts of the decentralized social web can take preventive measures to reduce their legal exposure when a user posts material that violates someone’s copyright.
This post gives an overview of the steps to take. It’s meant for operators of Mastodon and other ActivityPub servers, Bluesky hosts, RSS mirrors, and other decentralized social media protocols, and developers of apps for those protocols — but it will apply to other hosts as well. This isn’t legal advice, and can’t substitute for a consultation with a lawyer about your specific circumstances. It focuses on U.S. law — the law may impose different requirements elsewhere. Still, we hope it helps you get started with confidence.
Why should I care? Copyright’s Sword of Damocles
In some circumstances, the operator of a platform that handles user content can be legally responsible for content that infringes copyright. That can happen when the platform operator is directly involved in copying or distributing the copyrighted material, when they promote or knowingly assist the infringement, or when they benefit financially from infringement while being in a position to supervise it. But these judge-made rules are often difficult and uncertain to apply in practice — and the penalties for being found on the wrong side of the law can be severe. Copyright’s “statutory damages” regime allows for massive, unpredictable financial liability. That’s why it’s important to limit your risk.
For Server Operators: Limiting Risk with the DMCA Safe Harbors
If you run a social network server, the safe harbor provisions of the Digital Millennium Copyright Act (DMCA) are an important way to limit your liability risk. The DMCA shields server operators from nearly all forms of copyright liability that can result from “storage at the direction of a user” — in other words, hosting user-uploaded content. But to qualify for this protection, there are steps a server operator has to take.
1. Designate A Contact To Receive Copyright Infringement Notices
First, you’ll need to provide contact information for someone who can receive infringement notices (a “designated agent”). That information needs to be posted in at least two places: on your server in a place visible to users (such as a “DMCA” page or post, or as part of your Terms of Service), and in the U.S. Copyright Office’s “Designated Agent Directory.” To post that information to the directory, you have to create an account at https://www.copyright.gov/dmca-directory/ and pay a small fee. The directory listings expire after three years, and once expired, your safe harbor protection goes away, so it’s important to keep that listing current.
2. Respond Promptly to Notices and Counter-notices
When you receive infringement notices, it’s important to respond to them promptly. Notices are supposed to identify the copyright holder, the copyrighted work they claim was infringed, and the post they claim is infringing. By deleting or disabling access to the posted material, you protect yourself from liability with respect to that material.
The theory behind Section 512 is that hosts don’t have to be in a position of deciding whether a post infringes someone’s copyright — it’s up to the poster, the rights holder, and potentially a court to decide that. A host who takes down posts whenever they receive an infringement notice is well-protected. But it’s equally important to recognize that hosts aren’t required to take down content in response to every notice. Infringement notices are frequently wrong, misguided, abusive, or simply incomplete. Hosts who want to stand up for their users’ speech can choose to disregard infringement notices that seem suspect. While this risks losing the automatic protection of the safe harbor in each instance, it can still be done safely with careful preparation, ideally using a plan crafted with help from a lawyer. Bear in mind that people sending false notices, including by failing to consider whether a post is a fair use before asking a host to take it down, can be liable for damages under the DMCA.
The DMCA also allows the person who posted the material to send a “counter-notification” asserting that they really did have the right to post and that there’s no copyright infringement. Responding to counter-notifications is a good way for a host to demonstrate that they look out for their users. When a host receives a counter-notification, they should forward it on to the person who sent the original takedown notice and let them know that the post will be restored in 10 business days. Then, after that waiting period has elapsed, the host can restore the posted material. Just like with infringement notices, a host isn’t required to honor a counter-notification that appears to be fraudulent, but there’s no penalty for honoring it anyway.
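The 10-business-day waiting period described above is easy to miscount if you think in calendar days. As a rough illustration only — `restore_date` is a hypothetical helper, not part of any DMCA tooling, and this simplified sketch skips weekends but not U.S. federal holidays — a host could compute the earliest restoration date like this:

```python
from datetime import date, timedelta

def restore_date(forwarded: date, business_days: int = 10) -> date:
    """Earliest date to restore material after forwarding a
    counter-notification: `business_days` weekdays after forwarding.
    Simplified: skips weekends only, not U.S. federal holidays."""
    d = forwarded
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 ... Friday=4
            remaining -= 1
    return d

# Forwarding on Monday 2025-06-02: ten business days later is
# Monday 2025-06-16, two calendar weeks on.
print(restore_date(date(2025, 6, 2)))
```

Note that because weekends don’t count, a notice forwarded on a Friday gains three calendar days before the first business day even elapses.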
3. Have A Repeat Infringer Policy
The next requirement is to have a policy of terminating the accounts of “subscribers and account holders” who are “repeat infringers” in “appropriate circumstances,” and to carry out that policy. Yes, that’s a vague requirement. It doesn’t require a “three strikes” policy or any other sports analogy. It just needs to be reasonable. Be sure your policy is spelled out in your website terms or “DMCA” page.
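To make that vagueness concrete: one minimal way to “carry out” such a policy is to track honored takedowns per account and flag when a chosen threshold is met. The sketch below is purely illustrative — the three-strike threshold and the `RepeatInfringerPolicy` class are hypothetical design choices, not anything the statute requires:

```python
from collections import defaultdict

class RepeatInfringerPolicy:
    """Minimal sketch of a repeat-infringer tracker. The DMCA does not
    mandate a specific threshold; three strikes here is an illustrative
    policy choice, not a legal requirement."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.strikes = defaultdict(int)

    def record_takedown(self, account: str) -> bool:
        """Log an honored takedown against an account; return True if
        the account now meets the termination threshold."""
        self.strikes[account] += 1
        return self.strikes[account] >= self.threshold

    def clear_strike(self, account: str) -> None:
        """Remove a strike, e.g. after a successful counter-notification."""
        if self.strikes[account] > 0:
            self.strikes[account] -= 1

policy = RepeatInfringerPolicy()
policy.record_takedown("@user")         # first strike: below threshold
policy.record_takedown("@user")         # second strike: below threshold
print(policy.record_takedown("@user"))  # third strike: prints True
```

Whatever mechanism you use, the key is that it exists, is written down in your terms, and is actually applied — “reasonable” is the standard, not any particular number.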
4. Don’t Ignore Known Infringement
Hosts need to take down user posts whenever the host actually knows that the post is infringing. In other words, a host isn’t protected if they ignore takedown notices based on technicalities in the notices, or if they learn about the infringement some other way. But hosts don’t need to actively look for infringement on their servers — only to act when someone notifies them.
5. Don’t Encourage Infringement
Finally, make sure that nothing you post or advertise actively encourages copyright infringement. For example, don’t post examples of users uploading copyrighted music or video without permission, or insinuate that your server is a good place for infringing content.
There are some other technicalities in the DMCA that can affect the safe harbor, which is why it’s always a good idea to consult with a lawyer. But following these steps will help protect you when you run a social media server — or any other kind of user-uploaded content platform.
How to expand the US economy
It’s an essential insight about our world: Innovation drives economic growth. For the U.S. to thrive, it must keep innovating. But how, and in what areas?
A new book co-authored by MIT faculty members focuses on six key areas where technology advances can drive the economy and support national security.
Those sectors — semiconductors, biotechnology, critical minerals, drones, quantum computing, and advanced manufacturing — are all built on U.S. know-how but are also areas where the country has either ceded its lead in production or innovation, or could yet fall behind.
As the book explains, a roadmap for U.S. prosperity and security involves sustaining notable areas of innovation and the national research ecosystem behind them, while rebuilding domestic manufacturing.
“In each of these areas, there are breakthroughs to be had, where the U.S. can leapfrog competitors and gain an advantage,” says Elisabeth Reynolds, an MIT expert on industrial innovation and editor of the new volume. “That’s a very exciting part of this.” She adds: “These areas are front and center for U.S. national economic and security policy.”
The book, “Priority Technologies: Ensuring U.S. Security and Shared Prosperity,” is published this week by the MIT Press. It features chapters by MIT faculty with expertise on the industrial sectors in question. Reynolds, a professor of the practice in MIT’s Department of Urban Studies and Planning, is a leading expert on industrial innovation and has long advocated for innovation-based growth that helps the U.S. workforce.
“All of this can be good for everyone,” says MIT economist Simon Johnson, who wrote the foreword to the book. “Out of that flow of innovations and ideas, we can create more good jobs for all Americans. Pushing the technological frontier and turning that into jobs is definitely going to help.”
Making more chips
“Priority Technologies” grew out of an ongoing MIT seminar by the same name, which Reynolds and Johnson began holding in 2023, often with appearances by other MIT faculty.
Both Reynolds and Johnson bring vast experience to the subject of innovation and production. Among other things, Reynolds headed MIT’s Industrial Performance Center for over a decade and was executive director of the MIT Task Force on the Work of the Future. She served in the White House National Economic Council as special assistant to the president for manufacturing and development.
Johnson, the Ronald A. Kurtz (1954) Professor of Entrepreneurship at the MIT Sloan School of Management, shared the 2024 Nobel Prize in economics, with MIT’s Daron Acemoglu and the University of Chicago’s James Robinson, for work about the historical relationship between institutions and economic growth. He has co-authored numerous books, including, with Acemoglu, the 2023 book “Power and Progress,” about the trajectory and implications of artificial intelligence.
As it happens, “Priority Technologies” does not focus on AI, instead opting to examine other vital, and often related, areas of innovation.
“We do not think this is the entire list of priority technologies,” Johnson says. “This is a partial list, and there are lots of other ideas.”
In the chapter on semiconductors, Jesús A. del Alamo, the Donner Professor of Science in MIT’s Department of Electrical Engineering and Computer Science, calls them “the oxygen of modern society.” This U.S.-born industry has seen a large manufacturing shift away from the country, however, leaving it vulnerable in terms of security and the economy; about one-third of inflation experienced in 2021 stemmed from a chip shortage. As he notes, the U.S. is now in the process of rebuilding its capacity to make leading-edge logic chips, for one thing.
“With semiconductors, people thought the U.S. could lose the manufacturing, stay on top of the innovation and design side, and would be fine,” Reynolds says. “But it’s turned out to make the country quite vulnerable. So we’ve had a massive shift to rebuild semiconductor manufacturing capabilities here in the U.S., and I would argue that’s been a successful strategy in recent years.”
Bringing biotech back home
In biotechnology, relocating manufacturing in the U.S. is also key, using new technologies in the process. As J. Christopher Love, the Laurent Professor of Chemical Engineering, puts it in his chapter, while the U.S. is the leader in biotech research, it “lacks the manufacturing infrastructure and expertise necessary to bring these ideas to the market at the same pace as it generates innovative new products.” Among other remedies, he suggests that smaller, more flexible production facilities can help the U.S. “leapfrog” other countries on the manufacturing side. Love is also co-director of MIT’s Initiative for New Manufacturing, which aims to drive advances in U.S. production across industries.
“We have tremendous biotech innovation, we’re the leaders, but we have a bottleneck when it comes to manufacturing,” Reynolds observes. “If we can break through that with new technologies, new production processes, we’re in a position to make us less vulnerable, from a supply chain point of view, and capture more of what is going to be a $4 trillion market over the next 15 years.”
A similar story holds in other areas. Many drone innovations were developed in the U.S., while much manufacturing has shifted to China. Fiona Murray, the William Porter (1967) Professor of Entrepreneurship, writes that the U.S. has an “opportunity to rebuild its production at scale,” although that will also require significant strengthening of its supply chains.
Elsa Olivetti, the Jerry McAfee (1940) Professor of Engineering and a professor of materials science and engineering, recommends a multifaceted approach to help the U.S. regain traction in the production of critical minerals, including better forms of extraction, manufacturing, and recycling, to reduce potential scarcities.
And in the quantum computing chapter, two MIT co-authors — William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and a professor of physics; and Jonathan Ruane, a senior lecturer at MIT Sloan — note that the sector could help accelerate drug discovery, materials science, and energy applications. Noting that the U.S. still leads in private-sector investment in the field but trails China in public-sector investment, they urge more research support and stronger supply chains for quantum computing components, among other recommendations.
“The country that achieves quantum leadership will gain decisive advantages in these strategically important industries,” they write.
The university engine
From industry to industry, the book makes clear that certain key issues are broadly important to U.S. competitiveness and growth. The partnership between the federal government and the world-leading research capacities of U.S. universities, for one thing, has given the country an initial lead in many economic sectors and promises to continue driving innovation.
At the same time, the U.S. would benefit from expanding and strengthening its domestic supply chains as it builds up more domestic manufacturing, and it needs capital investment to support hardware-intensive, physically substantial industrial growth.
“These common themes include supply chain resilience and manufacturing capability,” Reynolds says. “Can we help drive the country’s innovation ecosystem through expansion of our industrial system and manufacturing? That’s a big question.”
On the research front, she reflects, over the years, “It’s been amazing how much MIT-led research has aligned with national priorities — or maybe that’s not so surprising.”
The partnership between the U.S. federal government and universities as research engines was formalized in the 1940s, thanks in part to then-MIT president Vannevar Bush. According to some estimates, government investment in non-defense research and development alone has accounted for up to 25 percent of U.S. economic growth since World War II.
“Vannevar Bush realized it wasn’t about a stock of technology, it was about a flow of innovation,” Johnson says. “And that brilliant insight is still relevant today. I think that is the insight of the last century. And that’s what we’re trying to capture and reiterate and repeat.”
“This is not even the future. This is current.”
Scholars and industry leaders have praised “Priority Technologies.” Erica Fuchs, a professor of engineering and public policy at Carnegie Mellon University, has stated that when it comes to “ensuring American national security, economic competitiveness, and societal well-being,” the book underscores “the positive role technology can play in those outcomes.” Hemant Taneja, CEO of the venture capital firm General Catalyst, calls the volume “required reading for anyone interested in building the abundant, resilient future America deserves.”
For their part, Reynolds and Johnson hope the book will draw many kinds of readers interested in the economy, innovation, prosperity, and national security.
“We tried to make the volume accessible,” Reynolds says, noting that the book directly lays out “challenges for the country, and what we see as recommendations for next steps in how we position the country to succeed, and lead globally. Each of these chapters has something important to say.”
Johnson also notes the MIT scholars participating in the project want to enhance the ongoing policy conversation, in Washington and across the country, about supporting innovation and using it to drive U.S. economic and technological leadership.
“One reason to write a book is, you can’t pound the table with a podcast,” quips Johnson, who co-hosts a podcast, “Power and Consequences,” on major policy issues. In conversations with political leaders and their staffs, he adds, there is a core message to be transmitted about America and technology-driven growth: We have the knowledge and resources, but need to focus on supporting innovation while trying to increase domestic production.
“Here are the technologies we currently need,” Johnson says. “This is not imagination, this is not fanciful, this is not science fiction. This is not even the future. This is current. These are the technologies needed to defend the country and its interests. And we need to invest in these, and in everything we need to drive them forward.”
From EPA to DOJ? Zeldin’s words may haunt him.
Trump invokes war powers to juice fossil energy, grid
California’s largest power user strains to meet 2035 climate goal
Major insurer won’t disclose some emissions
Trump’s $1B deal sinking offshore wind draws legal scrutiny
Gas project review downplays climate after endangerment repeal
Wildfire survivors could face another blow from taxes on settlement payouts
Mexican Surveillance Company
Grupo Seguritech is a Mexican surveillance company that is expanding into the US.
Germany wants to put industry at core of EU carbon market reform
Solar rises to 10% of Japan power generation in 2025, Ember says
BHP begins review to rank unprofitable Australian coal mines
Fewer candidates for UN secretary-general audition this year than in 2016
Palantir Has a Human Rights Policy. Its ICE Work Tells a Different Story
For years, EFF has pushed technology companies to make real human rights commitments—and to live up to them. In response to growing evidence that Palantir’s tools help power abusive immigration enforcement by ICE, we sent the company a detailed letter asking how the promises in its own human rights framework extend to that work.
This post explains what we asked, how Palantir responded, and why we believe those responses fall short. EFF is not alone in raising alarms about Palantir; immigrants' rights groups, human rights organizations, journalists, and former employees have raised similar concerns based on reports of the company's role in abusive immigration enforcement. We focus here on Palantir’s own human rights promises.
At the outset, we appreciate that Palantir was willing to engage respectfully, and we recognize that confidentiality and security obligations can limit what it can say. Nonetheless, measured against Palantir's own human rights commitments, its decision to keep powering ICE with tools used in dragnet raids and discriminatory detentions is indefensible. A good-faith application of those commitments should lead Palantir to end its contract with ICE, and refuse new, or end current, contracts with any other agency whose work predictably violates those commitments.
Palantir’s Public Promises
Palantir has long said it performs comprehensive human rights analysis on its work. It has also worked with ICE for years, apparently in a more limited capacity than today. It has publicly embraced the UN Guiding Principles on Business and Human Rights, the Universal Declaration of Human Rights, and the OECD Guidelines for Multinational Enterprises. Additionally, in its response to EFF, Palantir says its legal responsibilities are only “the floor” for broader risk assessments.
That was the point of our letter. We asked what human rights due diligence Palantir conducted when it first contracted with ICE and DHS; whether it performed the “proactive risk scoping” it advertises; how it reviews work over time; what it has done in response to reports of misuse; and whether it has used “every means at [its] disposal”—including contract provisions, third‑party oversight, and termination—to prevent or mitigate harms.
For the most part, Palantir did not answer our accountability questions. It did correct one point: Palantir says it does not currently work with CBP, and available evidence supports that, though it also made clear it could work with CBP in the future.
Palantir also raised a red herring it often deploys in response to criticism. It denied building a 'mega' or 'master' database for ICE and denied creating a database of protesters, which some ICE agents have claimed exists. We call it a red herring because those denials sidestep the central issue: what capabilities Palantir's tools actually provide to ICE.
To be clear, EFF has never claimed that Palantir is building a single centralized database. Our concern is grounded in how Palantir’s tools allow ICE to query and analyze data from multiple databases through a unified interface—which from an agent’s perspective can be a distinction without a difference.
In the sections that follow, we compare Palantir’s account of its work for ICE with evidence about how its tools seem to be used, and explain why legality, internal process, and sustained “engagement with the institutions whose vital tasks exist in tension with certain human rights” are no substitute for real human rights due diligence—because respect for human rights must be measured by outcomes, not just process.
Palantir’s ICE Work Undermines Its Own Standards
Palantir says ICE uses its ELITE tool for “prioritized enforcement”: to surface likely addresses of specific people, such as individuals with final orders of removal or high‑severity criminal charges. But according to sworn testimony in Oregon, ICE agents use ELITE to determine where to conduct deportation sweeps, and the system “pulled from all kinds of sources” to identify locations for raids aimed at mass detentions, including information from the Department of Health and Human Services such as Medicaid data. A leaked ELITE user guide for 'Special Operations' also instructs operators to disable filters to "display all targets within a Special Operations dataset." Those details directly conflict with Palantir’s narrow description of ELITE’s role.
Additionally, Palantir's response leans on legal authority and the Privacy Act. But it does not identify any specific lawful basis for using Medicaid data in this way or explain how its software enables that access. Even if a legal theory exists, turning sensitive medical information into fuel for dragnet sweeps is hard to reconcile with its commitments to privacy, equity, and the rights of impacted communities. Its own human rights framework requires grappling with foreseeable harms its products may enable, not just invoking possible legal authorization.
Reporting shows that many people detained by ICE had no criminal record, much less a serious one, and in many cases no final order of removal. An overwhelming percentage of those detained were, or appeared to be, from Central and South America, and nearly one in five ICE arrests were street arrests of a Latine person with neither a criminal history nor a removal order.
These facts raise obvious questions about discriminatory impact, racial profiling, and whether Palantir's tools are facilitating detention practices far broader than the company claims. Palantir's response does not meaningfully engage those questions, despite the company's commitments to non-discrimination and due process.
EFF’s letter asked Palantir to explain how it is honoring its commitments to civil liberties in light of reports linking Palantir-owned systems to facial recognition and other tools used to identify and target people engaged in observing and recording law enforcement, including in connection with the deaths of Renée Good and Alex Pretti. The letter also cites an incident in which an officer scanned protesters’ and observers’ faces and threatened to add their biometrics to a “nice little database.” Palantir’s response denies involvement in any such database.
A narrow denial about a single database does not answer the broader question: if ICE, its customer, claims it has this capability, what has Palantir done to ensure its tools are not used to chill protected speech, retaliate against observers, or facilitate targeting of people engaged in First Amendment‑protected activity? For a company that claims to value democracy and civil liberties, this is not a marginal issue; it goes to the heart of its human rights commitments.
Legality, Process, and Engagement with ICE Are Not Human Rights Standards
As mentioned above, Palantir leans heavily on legal compliance. It says government data sharing is “subject to, and governed by, data sharing agreements and government oversight” and that any sharing it facilitates is done according to “legal and technical requirements, including those of the Privacy Act of 1974.” It describes its role in ELITE as “data integration,” enabling ICE “to incorporate data sources to which it has access,” including data shared under inter‑agency agreements.
EFF is very familiar with the Privacy Act—we are suing the Office of Personnel Management over it currently. But Palantir’s response does not clarify how ICE legally has access to this information, how Palantir ensures that it follows those legal processes, or how Palantir’s software may have enabled access in the first place. More critically, that is still a legal answer to a human rights question, and legal compliance alone is insufficient as a human rights standard.
Human rights due diligence requires assessing foreseeable harms, responding to credible evidence of abuse, and changing course when the facts demand it—something Palantir, on paper, recognizes. That’s why it stresses that its legal responsibilities are only “the floor for [its] broader risk assessments,” pointing to the way it built toward GDPR‑style data protection principles and incorporated international humanitarian law principles before those requirements were formalized. If those commitments mean anything, Palantir has to explain how specific practices—like enabling ICE to use Medicaid data in dragnet raids—square with that broader standard.
Palantir also leans heavily on process. It points to a “layered approach” to risk, frameworks that purportedly examine multiple dimensions of privacy and equity, and “indelible” audit logs that track how its tools are used. Audit logs are not sufficient for protecting human rights. There is a long history of authoritarian regimes keeping extensive logs of their human rights abuses. Those structures can be useful for protecting human rights, but only if they are used to detect harm, trigger reassessment, and lead to changes in design, access, support, or contract enforcement when credible reports of abuse emerge.
That is why we pressed Palantir to spell out clearly what reports of misuse it has received, what changes it made, and on what timeline. Again, instead of offering specific examples, Palantir points back to its internal framework and its willingness to “move towards the hardest problems” as evidence of effective efforts. But human rights are an outcome, not just a process.
Human rights due diligence is not a one-time approval at contract signing; under the UN Guiding Principles, it is supposed to be continuous, with new facts triggering reassessment. Complaints, media reports, leaks, litigation, and sworn testimony are exactly the kinds of events that should prompt review. If Palantir has an account of that work—how often it reviews ICE contracts, who conducts the reviews, what triggers them, and how findings reach the Board—it had every opportunity to describe it. Instead, it offered a generic assurance that it remains committed to human rights without engaging with the specifics. Confidentiality may sometimes limit disclosure, but it is no substitute for accountability.
What Needs to Happen Next

Palantir wants credit for “mov[ing] towards the hardest problems” and engaging with institutions whose missions it says are “in tension with certain human rights” while having a human rights framework. But when the record includes violent raids, dragnet detentions, use of sensitive medical data, discriminatory targeting, retaliation against observers, and deaths tied to immigration enforcement operations, pointing to a values page is not enough; it has to reckon with the results.
Voluntary corporate human rights policies often function as weak accountability mechanisms: companies can tout principles, publish policies, and answer criticism with polished statements while changing very little on the ground. Palantir’s response fits that pattern all too well. EFF will continue to challenge its role in abusive immigration enforcement and to demand more accountability from technology vendors whose tools enable human rights violations. We are also happy to continue a dialogue with Palantir to that end. For now, this much is clear: Palantir needs to reconsider its contracts with ICE and with all agencies whose work predictably violates human rights.
The Internet Still Works: Reddit Empowers Community Moderation
Section 230 helps make it possible for online communities to host user speech: from restaurant reviews, to fan fiction, to collaborative encyclopedias. But recent debates about the law often overlook how it works in practice. To mark its 30th anniversary, EFF is interviewing leaders of online platforms about how they handle complaints, moderate content, and protect their users’ ability to speak and share information.
Reddit is one of the largest user-generated content platforms on the internet, built around thousands of independent communities known as subreddits. Some subreddits cover everyday interests, while others host discussions about specialized or controversial topics. These communities are created and moderated by volunteers, and the site’s decentralized model means that Reddit hosts a vast range of user speech without relying on centralized editorial control.
Ben Lee is Chief Legal Officer at Reddit, where he oversees the company’s legal strategy and policy work on issues including content moderation and intermediary liability. Before joining Reddit, Lee held senior legal roles at other tech companies including Plaid, Twitter, and Google. At Reddit, he has been closely involved in litigation and policy debates surrounding Section 230, including cases addressing the legal risks faced by platforms and their users and moderators. He was interviewed by Joe Mullin, a policy analyst on EFF's Activism Team.
Joe Mullin: When we talk about user rights and Section 230, what rights are most at stake on a platform like Reddit?
Ben Lee: Reddit, we often say, is the most human place on the internet. What’s often missing from the debate is that Section 230 protects people—not platforms.
It protects millions of everyday humans and volunteer moderators who participate in online communities. Without it, people could face lawsuits for voting down a post, enforcing community rules, or moderating a discussion. These are foundational activities on Reddit, and frankly, the whole internet.
If you had to describe Section 230 to a regular Reddit user without naming the law, what would you say it does for them?
Section 230 protects your ability to participate in community moderation.
Even if all you are doing is up-voting or down-voting content, that’s participation. On Reddit, everyone is a content moderator, through voting. Up-voting determines the visibility of content.
We believe, strongly, this is one of the only models to allow Reddit to scale. You make the community part of the moderation process. They’re invested in the community, making it better.
How would user speech be affected if Section 230 were eliminated or weakened?
We would undermine community self-governance—the notion that humans can do content moderation, and take that responsibility for themselves. Whether you’re a small blog or a big forum. I like to think of Reddit as composed of this federation of communities that range from the tiny to the humongous. That’s what the internet is!
The legal risk would discourage people from moderating, or even speaking at all. The kind of speech we’re trying to protect is often critical of powerful people or entities. If a moderation decision leads to litigation from those powerful entities, that’s an expensive proposition to fight.
Reddit relies on user-run communities and volunteer moderators. Can you walk me through how content moderation and legal complaints actually work in practice, and where Section 230 comes into that?
We have a tiered structure, like our federal system. Each community is like a state: it has its own rules, and enforces them. The vast majority of content moderation decisions are made by the communities, not by Reddit itself.
Reddit is built on self-governing communities that are moderated by volunteers, supported by automated tools. Section 230 gives Reddit the freedom to experiment, and lets users shape healthy, interest-based spaces.
Section 230 is fundamental to protecting the moderators from a frivolous lawsuit. A screenwriting community might want to protect their community from scammy competitions—and then they get sued by that competition.
Or a community wants to keep their conversation civil. And, for example, may not allow Star Trek characters to be called “soy boys,” and they enforce that. Then a person sues.
I wish these were hypotheticals. But they were actual lawsuits. And we have them, routinely.
What are policymakers missing about Section 230?
The [moderation] decisions being criticized in court are decisions to try to make the internet safer. In none of the cases that I mentioned is there a moderator saying, “I want to increase harmful content!” These are good-faith decisions about what makes the internet better.
Section 230 is, at its core, protecting the ability for people to make those choices for their own communities.
There's a price to be paid for not having a Section 230. And it will be paid by internet users—not the biggest platforms.
Some see 230 as a way to punish Big Tech. But removing it doesn't punish Big Tech—it makes them more powerful. It's startups, community-driven platforms, and individual moderators who rely on Section 230 to compete and innovate. Weakening Section 230 will harm the open internet, and reduce the choice, diversity, and resilience of the internet.
The big guys, they have armies of lawyers. They have the budget to withstand a flood of lawsuits. Weakening Section 230 just entrenches them.
In Reddit’s amicus brief in the Gonzalez v. Google Supreme Court case, you point out that without Section 230, many moderation decisions wouldn’t be protected. The brief states: “A plaintiff might claim emotional distress from a truthful but hurtful post that gained prominence when a moderator highlighted it as a trending topic. Or, a plaintiff might claim interference with economic relations arising from an honest but very critical two-star restaurant review.”
When you have situations where moderators get threats or litigation, what can you do?
We have had cases where our own moderators got sued, along with us. In the “soy boy” case, we worked to help find pro bono counsel for the moderators.
Someone posted “Wesley Crusher is a soy boy,” and it got removed. I'm enough of a Star Trek fan that I understand both the reference, and why the moderator decided—“hey, it's gone. I don't want this here.”
This would not violate our Reddit rules. But the community took it down under its own rules about being civil. It was just not a kind-hearted action, and the community had a right to decide.
But the moderator got sued. We got sued, actually, because the poster disagreed with that moderation choice. Section 230 is what allowed us to win that case.
These are just average people, implicated only because they moderated their own community. They are trying to do the right thing by their community.
In cases where litigation happens, when does Section 230 come into play?
Section 230 is usually one of the first things that's talked about in the case. It’s usually the most effective way of saying: if you believe someone has defamed you—please go to the person who has defamed you. If you’re looking to the moderator, or to Reddit itself, this is not a great way of getting the justice that you seek.
Is there a different workflow internationally?
There’s a very different workflow. We had a prominent case in France where a company was trying to sue moderators, and of course, we didn't have Section 230 to protect them. So we had to do all sorts of other things to protect them. It got much more complicated.
The breadth of content that's considered illegal in certain jurisdictions can be somewhat breathtaking.
Our goal is always to preserve as much freedom of expression as possible for our community. In the U.S., we look at it through the lens of the First Amendment, and other aspects. Outside the U.S., we rely more on the lens of international human rights.
How would you characterize legal demands around user content, the ones you see most often?
They tend to be: somebody said something mean about me—take this down. Or someone says: you didn’t allow me to say something mean about someone or some entity. It completely runs the spectrum.
One law that has already passed that weakens Section 230 is SESTA/FOSTA. From Reddit’s perspective, what changed after that?
There are some communities we had to shut down—in particular, support communities. There was a cost. Every time Section 230 is narrowed, there’s a cost—some types of speech and communities have a harder time staying online.
The cost may not seem high to some people, because those communities are not for them. But if they visited them, they’d see that these are actual people, interacting in a positive way. If it wasn’t positive, we have rules for that—but that’s a different question.
Is “Satoshi Nakamoto” Really Adam Back?
The New York Times has a long article where the author lays out an impressive array of circumstantial evidence that the inventor of Bitcoin is the cypherpunk Adam Back.
I don’t know. The article is convincing, but it’s written to be convincing.
I can’t remember if I ever met Adam. I was a member of the Cypherpunks mailing list for a while, but I was never really an active participant. I spent more time on the Usenet newsgroup sci.crypt. I knew a bunch of the Cypherpunks, though, from various conferences around the world at the time. I really have no opinion about who Satoshi Nakamoto really is...
