Feed aggregator
DOE restores $1.44B loan for Montana clean energy project
FEMA fires senior official over payments for migrant lodging
SEC’s Uyeda weighs future of ‘deeply flawed’ climate rule
Push to cut renewable energy goals in Puerto Rico sparks outrage
Hawaii court rules against insurance companies in Maui fire case
China’s green bond debut is chance to exploit US retreat
Singapore says deeper emissions cuts will need new technology
MIT engineers develop a fully 3D-printed electrospray engine
An electrospray engine applies an electric field to a conductive liquid, generating a high-speed jet of tiny droplets that can propel a spacecraft. These miniature engines are ideal for small satellites called CubeSats that are often used in academic research.
Since electrospray engines use propellant more efficiently than the powerful chemical rockets used on the launchpad, they are better suited for precise, in-orbit maneuvers. The thrust generated by an electrospray emitter is tiny, so electrospray engines typically use an array of emitters that are uniformly operated in parallel.
However, these multiplexed electrospray thrusters are typically made via expensive and time-consuming semiconductor cleanroom fabrication, which limits who can manufacture them and how the devices can be applied.
To help break down barriers to space research, MIT engineers have demonstrated the first fully 3D-printed, droplet-emitting electrospray engine. Their device, which can be produced rapidly and for a fraction of the cost of traditional thrusters, uses commercially accessible 3D printing materials and techniques. The devices could even be fully made in orbit, as 3D printing is compatible with in-space manufacturing.
By developing a modular process that combines two 3D printing methods, the researchers overcame the challenges involved in fabricating a complex device composed of macroscale and microscale components that must work together seamlessly.
Their proof-of-concept thruster comprises 32 electrospray emitters that operate together, generating a stable and uniform flow of propellant. The 3D-printed device generated as much or more thrust than existing droplet-emitting electrospray engines. With this technology, astronauts might quickly print an engine for a satellite without needing to wait for one to be sent up from Earth.
“Using semiconductor manufacturing doesn’t match up with the idea of low-cost access to space. We want to democratize space hardware. In this work, we are proposing a way to make high-performance hardware with manufacturing techniques that are available to more players,” says Luis Fernando Velásquez-García, a principal research scientist in MIT’s Microsystems Technology Laboratories (MTL) and senior author of a paper describing the thrusters, which appears in Advanced Science.
He is joined on the paper by lead author Hyeonseok Kim, an MIT graduate student in mechanical engineering.
A modular approach
An electrospray engine has a reservoir of propellant that flows through microfluidic channels to a series of emitters. An electrostatic field is applied at the tip of each emitter, triggering an electrohydrodynamic effect that shapes the free surface of the liquid into a cone-shaped meniscus that ejects a stream of high-speed charged droplets from its apex, producing thrust.
The emitter tips need to be as sharp as possible to attain the electrohydrodynamic ejection of propellant at a low voltage. The device also requires a complex hydraulic system to store and regulate the flow of liquid, efficiently shuttling propellant through microfluidic channels.
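For a rough sense of the physics, the standard textbook relations for electrostatic acceleration (illustrative only, not figures from the MIT paper, and assuming an idealized monodisperse beam of droplets with charge-to-mass ratio q/m accelerated through a potential V_a) are:

```latex
% Illustrative textbook relations for electrostatic acceleration (not values from the paper).
\[
  v_{\mathrm{ex}} = \sqrt{2\,\frac{q}{m}\,V_a}, \qquad
  T = \dot{m}\,v_{\mathrm{ex}} = I\,\sqrt{\frac{2 V_a}{q/m}}, \qquad
  I_{\mathrm{sp}} = \frac{v_{\mathrm{ex}}}{g_0}
\]
% where I is the emitted beam current, \dot{m} = I/(q/m) is the propellant mass flow rate,
% and g_0 is standard gravity.
```

In general terms, these relations explain the trade-off the article describes: for a given beam current and voltage, heavier droplets (lower q/m) give more thrust but a lower exhaust velocity, while the overall thrust of a single emitter remains tiny, which is why emitters are multiplexed into arrays.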
The emitter array is composed of eight emitter modules. Each emitter module contains an array of four individual emitters that must work in unison, forming a larger system of interconnected modules.
“Using a one-size-fits-all fabrication approach doesn’t work because these subsystems are at different scales. Our key insight was to blend additive manufacturing methods to achieve the desired outcomes, then come up with a way to interface everything so the parts work together as efficiently as possible,” Velásquez-García says.
To accomplish this, the researchers utilized two different types of vat photopolymerization (VPP) printing. VPP involves shining light onto a photosensitive resin, which solidifies to form 3D structures with smooth, high-resolution features.
The researchers fabricated the emitter modules using a VPP method called two-photon printing. This technique utilizes a highly focused laser beam to solidify resin in a precisely defined area, building a 3D structure one tiny brick, or voxel, at a time. This level of detail enabled them to produce extremely sharp emitter tips and narrow, uniform capillaries to carry propellant.
The emitter modules are fitted into a rectangular casing called a manifold block, which holds each in place and supplies the emitters with propellant. The manifold block also integrates the emitter modules with the extractor electrode that triggers propellant ejection from the emitter tips when a suitable voltage is applied. Fabricating the larger manifold block using two-photon printing would be infeasible because of the method’s low throughput and limited printing volume.
Instead, the researchers used a technique called digital light processing, which utilizes a chip-sized projector to shine light into the resin, solidifying one layer of the 3D structure at a time.
“Each technology works very well at a certain scale. Combining them, so they work together to produce one device, lets us take the best of each method,” Velásquez-García says.
Propelling performance
But 3D printing the electrospray engine components is only half the battle. The researchers also conducted chemical experiments to ensure the printing materials were compatible with the conductive liquid propellant. If not, the propellant might corrode the engine or cause it to crack, which is undesirable for hardware meant for long-term operation with little to no maintenance.
They also developed a method to clamp the separate parts together in a way that avoids misalignments that could hamper performance and keeps the device watertight.
In the end, their 3D-printed prototype was able to generate thrust more efficiently than larger, more expensive chemical rockets and outperformed existing droplet electrospray engines.
The researchers also investigated how adjusting the pressure of propellant and modulating the voltage applied to the engine affected the flow of droplets. Surprisingly, they achieved a wider range of thrust by modulating the voltage. This could eliminate the need for a complex network of pipes, valves, or pressure signals to regulate the flow of liquid, leading to a lighter, cheaper electrospray thruster that is also more efficient.
“We were able to show that a simpler thruster can achieve better results,” Velásquez-García says.
The researchers want to continue exploring the benefits of voltage modulation in future work. They also want to fabricate denser and larger arrays of emitter modules. In addition, they may explore the use of multiple electrodes to decouple the triggering of electrohydrodynamic propellant ejection from the setting of the emitted jet's shape and speed. In the long run, they also hope to demonstrate a CubeSat that utilizes a fully 3D-printed electrospray engine during its operation and deorbiting.
This research is funded, in part, by a MathWorks fellowship and the NewSat Project, and was carried out, in part, using MIT.nano facilities.
EFF Sues OPM, DOGE and Musk for Endangering the Privacy of Millions
NEW YORK—EFF and a coalition of privacy defenders led by Lex Lumina filed a lawsuit today asking a federal court to stop the U.S. Office of Personnel Management (OPM) from disclosing millions of Americans’ private, sensitive information to Elon Musk and his “Department of Government Efficiency” (DOGE).
The complaint on behalf of two labor unions and individual current and former government workers across the country, filed in the U.S. District Court for the Southern District of New York, also asks that any data disclosed by OPM to DOGE so far be deleted.
The complaint by EFF, Lex Lumina LLP, State Democracy Defenders Fund, and The Chandra Law Firm argues that OPM and OPM Acting Director Charles Ezell illegally disclosed personnel records to Musk’s DOGE in violation of the federal Privacy Act of 1974. Last week, a federal judge temporarily blocked DOGE from accessing a critical Treasury payment system under a similar lawsuit.
This lawsuit’s plaintiffs are the American Federation of Government Employees AFL-CIO; the Association of Administrative Law Judges, International Federation of Professional and Technical Engineers Judicial Council 1 AFL-CIO; Vanessa Barrow, an employee of the Brooklyn Veterans Affairs Medical Center; George Jones, President of AFGE Local 2094 and a former employee of VA New York Harbor Healthcare; Deborah Toussant, a former federal employee; and Does 1-100, representing additional current or former federal workers or contractors.
As the federal government is the nation’s largest employer, the records held by OPM represent one of the largest collections of sensitive personal data in the country. In addition to personally identifiable information such as names, social security numbers, and demographic data, these records include work information like salaries and union activities; personal health records and information regarding life insurance and health benefits; financial information like death benefit designations and savings programs; nondisclosure agreements; and information concerning family members and other third parties referenced in background checks and health records. OPM holds these records for tens of millions of Americans, including current and former federal workers and those who have applied for federal jobs. OPM has a history of privacy violations—an OPM breach in 2015 exposed the personal information of 22.1 million people—and its recent actions make its systems less secure.
With few exceptions, the Privacy Act limits the disclosure of federally maintained sensitive records on individuals without the consent of the individuals whose data is being shared. It protects all Americans from harms caused by government stockpiling of our personal data. This law was enacted in 1974, the last time Congress acted to limit the data collection and surveillance powers of an out-of-control President.
“The Privacy Act makes it unlawful for OPM Defendants to hand over access to OPM’s millions of personnel records to DOGE Defendants, who lack a lawful and legitimate need for such access,” the complaint says. “No exception to the Privacy Act covers DOGE Defendants’ access to records held by OPM. OPM Defendants’ action granting DOGE Defendants full, continuing, and ongoing access to OPM’s systems and files for an unspecified period means that tens of millions of federal-government employees, retirees, contractors, job applicants, and impacted family members and other third parties have no assurance that their information will receive the protection that federal law affords.”
For more than 30 years, EFF has been a fierce advocate for digital privacy rights. In that time, EFF has been at the forefront of exposing government surveillance and invasions of privacy—such as forcing the release of hundreds of pages of documents about domestic surveillance under the Patriot Act—and enforcing existing privacy laws to protect ordinary Americans—such as in its ongoing lawsuit against Sacramento's public utility company for sharing customer data with police.
For the complaint: https://www.eff.org/document/afge-v-opm-complaint
For more about the litigation: https://www.eff.org/deeplinks/2025/02/eff-sues-doge-and-office-personnel-management-halt-ransacking-federal-data
Contacts:
Electronic Frontier Foundation: press@eff.org
Lex Lumina LLP: Managing Partner Rhett Millsaps, rhett@lex-lumina.com
The TAKE IT DOWN Act: A Flawed Attempt to Protect Victims That Will Lead to Censorship
Congress has begun debating the TAKE IT DOWN Act (S. 146), a bill that seeks to speed up the removal of a troubling type of online content: non-consensual intimate imagery, or NCII. In recent years, concerns have also grown about the use of digital tools to alter or create such images, sometimes called deepfakes.
While protecting victims of these heinous privacy invasions is a legitimate goal, good intentions alone are not enough to make good policy. As currently drafted, the TAKE IT DOWN Act mandates a notice-and-takedown system that threatens free expression, user privacy, and due process, without addressing the problem it claims to solve.
TAKE IT DOWN mandates that websites and other online services remove flagged content within 48 hours and requires “reasonable efforts” to identify and remove known copies. Although this provision is designed to allow NCII victims to remove this harmful content, its broad definitions and lack of safeguards will likely lead to people misusing the notice-and-takedown system to remove lawful speech.
The takedown provision applies to a much broader category of content—potentially any images involving intimate or sexual content—than the narrower NCII definitions found elsewhere in the bill. The takedown provision also lacks critical safeguards against frivolous or bad-faith takedown requests. Lawful content—including satire, journalism, and political speech—could be wrongly censored. The legislation’s tight time frame requires that apps and websites remove content within 48 hours, meaning that online service providers, particularly smaller ones, will have to comply so quickly to avoid legal risk that they won’t be able to verify claims. Instead, automated filters will be used to catch duplicates, but these systems are infamous for flagging legal content, from fair-use commentary to news reporting.
TAKE IT DOWN creates a far broader internet censorship regime than the Digital Millennium Copyright Act (DMCA), which has been widely abused to censor legitimate speech. But at least the DMCA has an anti-abuse provision and protects services from copyright claims should they comply. TAKE IT DOWN contains none of those minimal speech protections and essentially greenlights misuse of its takedown regime.
TAKE IT DOWN Threatens Encrypted Services
The online services that do the best job of protecting user privacy could also be under threat from TAKE IT DOWN. While the bill exempts email services, it does not provide clear exemptions for private messaging apps, cloud storage, and other end-to-end encrypted (E2EE) services. Services that use end-to-end encryption, by design, are not able to access or view unencrypted user content.
How could such services comply with the takedown requests mandated in this bill? Platforms may respond by abandoning encryption entirely in order to be able to monitor content—turning private conversations into surveilled spaces.
In fact, victims of NCII often rely on encryption for safety—to communicate with advocates they trust, store evidence, or escape abusive situations. The bill’s failure to protect encrypted communications could harm the very people it claims to help.
Victims Of NCII Have Legal Options Under Existing Law
An array of criminal and civil laws already exists to address NCII. In addition to the 48 states that have specific laws criminalizing the distribution of non-consensual pornography, there are defamation, harassment, and extortion statutes that can all be wielded against people abusing NCII. Since 2022, NCII victims have also been able to bring federal civil lawsuits against those who spread this harmful content.
If a deepfake is used for criminal purposes, then criminal laws will apply. If a deepfake is used to pressure someone to pay money to have it suppressed or destroyed, extortion laws would apply. For any situations in which deepfakes were used to harass, harassment laws apply. There is no need to make new, specific laws about deepfakes in any of these situations.
In many cases, civil claims could also be brought against those distributing the images under causes of action like False Light invasion of privacy. False light claims commonly address photo manipulation, embellishment, and distortion, as well as deceptive uses of non-manipulated photos for illustrative purposes.
A false light plaintiff (such as a person harmed by NCII) must prove that a defendant (such as a person who uploaded NCII) published something that gives a false or misleading impression of the plaintiff in such a way as to damage the plaintiff’s reputation or cause them great offense.
Congress should focus on enforcing and improving these existing protections, rather than opting for a broad takedown regime that is bound to be abused. Private platforms can play a part as well, improving reporting and evidence collection systems.
Gift from Sebastian Man ’79, SM ’80 supports MIT Stephen A. Schwarzman College of Computing building
The MIT Stephen A. Schwarzman College of Computing has received substantial support for its striking new headquarters on Vassar Street in Cambridge, Massachusetts. A major gift from Sebastian Man ’79, SM ’80 will be recognized with the naming of a key space in the building, enriching the academic and research activities of the MIT Schwarzman College of Computing and MIT.
Man, the first major donor to support the building since Stephen A. Schwarzman’s foundational gift established the Schwarzman College of Computing, is the chair and CEO of Chung Mei International Holdings Ltd., a manufacturer of domestic kitchen electrics and air treatment products for major international brands. Particularly supportive of education, he is a council member of the Hong Kong University of Science and Technology, serves on the Board of the Morningside College of the Chinese University of Hong Kong, and was a member of the court of the University of Hong Kong and the chair of the Harvard Business School Association of Hong Kong. His community activities include serving as a council member of The Better Hong Kong Foundation and executive committee member of the International Chamber of Commerce Hong Kong China Business Council, as well as of many other civic and business organizations. Man is also part of the MIT parent community, as his son, Brandon Man, is a graduate student in the Department of Mechanical Engineering.
Man’s gift to the college was recognized at a ceremony and luncheon in Hong Kong, where he resides, on Jan. 10. MIT Chancellor for Academic Advancement W. Eric L. Grimson PhD ’80, who hosted the event, noted that in addition to his financial generosity to the Institute, Man has played many important volunteer roles at MIT. “His service includes advancing MIT near and far as a member of the Corporation Development Committee, sharing his expertise through his recent selection as a new member of the Mechanical Engineering Visiting Committee, and, most recently, his acceptance of an invitation to join the Schwarzman College of Computing Dean’s Advisory Council,” he said.
“This new building is a home for the MIT community and a home for the people who are helping shape the future of computing and AI,” said MIT Schwarzman College of Computing Dean Daniel Huttenlocher SM ’84, PhD ’88 in a video greeting to Man and his family. “Thanks to your gift, the college is better positioned to achieve its mission of creating a positive impact on society, and for that we are deeply grateful.”
The state-of-the-art MIT Schwarzman College of Computing headquarters was designed to reflect the mission of meeting rapidly changing needs in computing through new approaches to research, education, and real-world engagement. The space provides MIT’s campus with a home base for computing research groups, new classrooms, and convening and event spaces.
Those at the Hong Kong event also enjoyed a video message from Stephen A. Schwarzman, chair, CEO, and co-founder of Blackstone and the college’s founding donor. “When we first announced the new college at MIT,” he said, “MIT said it was reshaping itself for the future. That future has come even faster than we all thought. Today, AI is part of the daily vernacular, and MIT’s ability to impact its development with your support is more tangible than ever.”
Sebastian Man spoke fondly of his years at the Institute. “The place really opened my eyes … and sharpened my intellect. It offered me a whole brave new world. Everything was interesting and everything was exciting!
“I come from a family where my father taught us that one should always be grateful to those people and places that have helped you to become who you are today,” Man continued. “MIT instilled in me unending intellectual curiosity and the love for the unknown, and I am honored and privileged to be associated with the MIT Schwarzman College of Computing.”
Bridging philosophy and AI to explore computing ethics
During a meeting of class 6.C40/24.C40 (Ethics of Computing), Professor Armando Solar-Lezama poses the same impossible question to his students that he often asks himself in the research he leads with the Computer Assisted Programming Group at MIT:
"How do we make sure that a machine does what we want, and only what we want?"
At this moment, what some consider the golden age of generative AI, this may seem like an urgent new question. But Solar-Lezama, the Distinguished Professor of Computing at MIT, is quick to point out that this struggle is as old as humankind itself.
He begins to retell the Greek myth of King Midas, the monarch who was granted the godlike power to transform anything he touched into solid gold. Predictably, the wish backfired when Midas accidentally turned everyone he loved into gilded stone.
"Be careful what you ask for because it might be granted in ways you don't expect," he says, cautioning his students, many of them aspiring mathematicians and programmers.
Digging into MIT archives to share slides of grainy black-and-white photographs, he narrates the history of programming. We hear about everything from the 1970s Pygmalion machine, which required incredibly detailed cues, to late-'90s computer software that took teams of engineers years and an 800-page document to program.
While remarkable in their time, these processes took too long to reach users. They left no room for spontaneous discovery, play, and innovation.
Solar-Lezama talks about the risks of building modern machines that don't always respect a programmer's cues or red lines, and that are equally capable of exacting harm as saving lives.
Titus Roesler, a senior majoring in electrical engineering, nods knowingly. Roesler is writing his final paper on the ethics of autonomous vehicles and weighing who is morally responsible when one hypothetically hits and kills a pedestrian. His argument questions the assumptions underlying technical advances, considers multiple valid viewpoints, and leans on the philosophical theory of utilitarianism. Roesler explains, "Roughly, according to utilitarianism, the moral thing to do brings about the most good for the greatest number of people."
MIT philosopher Brad Skow, with whom Solar-Lezama developed and is team-teaching the course, leans forward and takes notes.
A class that demands technical and philosophical expertise
Ethics of Computing, offered for the first time in Fall 2024, was created through the Common Ground for Computing Education, an initiative of the MIT Schwarzman College of Computing that brings multiple departments together to develop and teach new courses and launch new programs that blend computing with other disciplines.
The instructors alternate lecture days. Skow, the Laurance S. Rockefeller Professor of Philosophy, brings his discipline's lens for examining the broader implications of today's ethical issues, while Solar-Lezama, who is also the associate director and chief operating officer of MIT's Computer Science and Artificial Intelligence Laboratory, offers perspective through his.
Skow and Solar-Lezama attend one another's lectures and adjust their follow-up class sessions in response. Introducing the element of learning from one another in real time has made for more dynamic and responsive class conversations. A weekly recitation, in which graduate students from philosophy or computer science lead a lively discussion, breaks down the week's topic and rounds out the course content.
"An outsider might think that this is going to be a class that will make sure that these new computer programmers being sent into the world by MIT always do the right thing," Skow says. However, the class is intentionally designed to teach students a different skill set.
Determined to create an impactful semester-long course that did more than lecture students about right or wrong, philosophy professor Caspar Hare conceived the idea for Ethics of Computing in his role as an associate dean of the Social and Ethical Responsibilities of Computing. Hare recruited Skow and Solar-Lezama as the lead instructors, as he knew they could do something more profound than that.
"Thinking deeply about the questions that come up in this class requires both technical and philosophical expertise. There aren't other classes at MIT that place both side-by-side,” Skow says.
That's exactly what drew senior Alek Westover to enroll. The math and computer science double major explains, "A lot of people are talking about how the trajectory of AI will look in five years. I thought it was important to take a class that will help me think more about that."
Westover says he's drawn to philosophy because of an interest in ethics and a desire to distinguish right from wrong. In math classes, he's learned to write down a problem statement and receive instant clarity on whether he's successfully solved it or not. However, in Ethics of Computing, he has learned how to make written arguments for "tricky philosophical questions" that may not have a single correct answer.
For example, "One problem we could be concerned about is, what happens if we build powerful AI agents that can do any job a human can do?" Westover asks. "If we are interacting with these AIs to that degree, should we be paying them a salary? How much should we care about what they want?"
There's no easy answer, and Westover assumes he'll encounter many other dilemmas in the workplace in the future.
“So, is the internet destroying the world?”
The semester began with a deep dive into AI risk, or the notion of "whether AI poses an existential risk to humanity," unpacking free will, the science of how our brains make decisions under uncertainty, and debates about the long-term liabilities and regulation of AI. A second, longer unit zeroed in on "the internet, the World Wide Web, and the social impact of technical decisions." The end of the term looks at privacy, bias, and free speech.
One class topic was devoted to provocatively asking: "So, is the internet destroying the world?"
Senior Caitlin Ogoe is majoring in Course 6-9 (Computation and Cognition). Being in an environment where she can examine these types of issues is precisely why the self-described "technology skeptic" enrolled in the course.
Growing up with a mom who is hearing impaired and a little sister with a developmental disability, Ogoe became the default family member whose role it was to call providers for tech support or program iPhones. She leveraged her skills into a part-time job fixing cell phones, which paved the way for her to develop a deep interest in computation, and a path to MIT. However, a prestigious summer fellowship in her first year made her question the ethics behind how consumers were impacted by the technology she was helping to program.
"Everything I've done with technology is from the perspective of people, education, and personal connection," Ogoe says. "This is a niche that I love. Taking humanities classes around public policy, technology, and culture is one of my big passions, but this is the first course I've taken that also involves a philosophy professor."
The following week, Skow lectures on the role of bias in AI. Ogoe, who is entering the workforce next year but plans to eventually attend law school to focus on regulating related issues, raises her hand four times to ask questions or share counterpoints.
Skow digs into examining COMPAS, a controversial AI software that uses an algorithm to predict the likelihood that people accused of crimes would go on to re-offend. According to a 2018 ProPublica article, COMPAS was more likely to flag Black defendants as future criminals, giving false positives at twice the rate it did for white defendants.
The class session is dedicated to determining whether the article warrants the conclusion that the COMPAS system is biased and should be discontinued. To do so, Skow introduces two different theories on fairness:
"Substantive fairness is the idea that a particular outcome might be fair or unfair," he explains. "Procedural fairness is about whether the procedure by which an outcome is produced is fair." A variety of conflicting criteria of fairness are then introduced, and the class discusses which were plausible, and what conclusions they warranted about the COMPAS system.
Later on, the two professors go upstairs to Solar-Lezama's office to debrief on how the exercise had gone that day.
"Who knows?" says Solar-Lezama. "Maybe five years from now, everybody will laugh at how people were worried about the existential risk of AI. But one of the themes I see running through this class is learning to approach these debates beyond media discourse and getting to the bottom of thinking rigorously about these issues."
To keep hardware safe, cut out the code’s clues
Imagine you’re a chef with a highly sought-after recipe. You write your top-secret instructions in a journal to ensure you remember them, but its location within the book is evident from the folds and tears on the edges of that often-referenced page.
Much like recipes in a cookbook, the instructions to execute programs are stored in specific locations within a computer’s physical memory. The standard security method — referred to as “address space layout randomization” (ASLR) — scatters this precious code to different places, but hackers can now find their new locations. Instead of hacking the software directly, they use approaches called microarchitectural side attacks that exploit hardware, identifying which memory areas are most frequently used. From there, they can reuse snippets of that code to reveal passwords and make critical administrative changes in the system (an approach known as code-reuse attacks).
To enhance ASLR’s effectiveness, researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have found a way to make these footprints vanish. Their “Oreo” method mitigates hardware attacks by removing randomized bits of addresses that lead to a program’s instructions before they’re translated to a physical location. It scrubs away traces of where code gadgets (or short sequences of instructions for specific tasks) are located before hackers can find them, efficiently enhancing security for operating systems like Linux.
Oreo has three layers, much like its tasty namesake. Between the virtual address space (which is used to reference program instructions) and the physical address space (where the code is located), Oreo adds a new “masked address space.” This re-maps code from randomized virtual addresses to fixed locations before it is executed within the hardware, making it difficult for hackers to trace the program’s original locations in the virtual address space through hardware attacks.
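As a purely conceptual illustration of that re-mapping idea (not the authors' actual hardware design; the names, field widths, and addresses below are invented), the sketch shows why any state keyed on masked addresses carries no information about the secret ASLR offset:

```python
# Conceptual sketch only: models the idea of a "masked address space" that strips
# ASLR's secret random offset from code addresses before anything observable by
# microarchitectural side channels depends on them. All names and widths are invented.

PAGE_BITS = 12                         # assume 4 KiB pages
PAGE_MASK = (1 << PAGE_BITS) - 1

def virtual_to_masked(vaddr: int, aslr_base: int) -> int:
    """Remove the randomized, page-aligned base chosen by ASLR, so the same
    instruction lands at the same masked address on every run."""
    return vaddr - aslr_base

def masked_to_physical(maddr: int, page_table: dict) -> int:
    """An ordinary page-table lookup, now keyed on the masked address."""
    frame = page_table[maddr >> PAGE_BITS]
    return (frame << PAGE_BITS) | (maddr & PAGE_MASK)

# Two runs of the same program pick different secret ASLR bases...
gadget_offset = 0x1234                 # position of a code gadget within its segment
run_a_base = 0x7f3a_0000_0000
run_b_base = 0x55d1_0000_0000

masked_a = virtual_to_masked(run_a_base + gadget_offset, run_a_base)
masked_b = virtual_to_masked(run_b_base + gadget_offset, run_b_base)

# ...yet the masked addresses coincide, so caches, TLBs, and other structures
# indexed after this point reveal nothing about where the gadget really lives.
assert masked_a == masked_b == gadget_offset
```

In the real system this translation happens in hardware alongside the existing address-translation machinery, which is why the evaluation described below focuses on whether the extra layer slows programs down.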
“We got the idea to structure it in three layers from Oreo cookies,” says Shixin Song, an MIT PhD student in electrical engineering and computer science (EECS) and CSAIL affiliate who is the lead author of a paper about the work. “Think of the white filling in the middle of that treat — our version of that is a layer that essentially whites out traces of gadget locations before they end up in the wrong hands.”
Senior author Mengjia Yan, an MIT associate professor of EECS and CSAIL principal investigator, believes Oreo’s masking abilities could make address space layout randomization more secure and reliable.
“ASLR was deployed in operating systems like Windows and Linux, but within the last decade, its security flaws have rendered it almost broken,” says Yan. “Our goal is to revive this mechanism in modern systems to defend against microarchitecture attacks, so we’ve developed a software-hardware co-design mechanism that prevents leaking secret offsets that tell hackers where the gadgets are.”
The CSAIL researchers will present their findings about Oreo at the Network and Distributed System Security Symposium later this month.
Song and her coauthors evaluated how well Oreo could protect Linux by simulating hardware attacks in gem5, a platform commonly used to study computer architecture. The team found that it could prevent microarchitectural side attacks without hampering the software it protects.
Song observes that these experiments demonstrate how Oreo is a lightweight security upgrade for operating systems. “Our method introduces marginal hardware changes by only requiring a few extra storage units to store some metadata,” she says. “Luckily, it also has a minimal impact on software performance.”
While Oreo adds an extra step to program execution by scrubbing away revealing bits of data, it doesn’t slow down applications. This efficiency makes it a worthwhile security boost to ASLR for page-table-based virtual memory systems beyond Linux, including those found on major platforms like Intel, AMD, and Arm.
In the future, the team will look to address speculative execution attacks — where hackers fool computers into predicting their next tasks, then steal the hidden data the computer leaves behind. Case in point: the infamous Meltdown/Spectre attacks in 2018.
To defend against speculative execution attacks, the team emphasizes that Oreo needs to be coupled with other security mechanisms (such as Spectre mitigations). This potential limitation extends to applying Oreo to larger systems.
“We think Oreo could be a useful software-hardware co-design platform for a broader type of applications,” says Yan. “In addition to targeting ASLR, we’re working on new methods that can help safeguard the critical crypto libraries widely used to safeguard information across people's network communication and cloud storage.”
Song and Yan wrote the paper with MIT EECS undergraduate researcher Joseph Zhang. The team’s work was supported, in part, by Amazon, the U.S. Air Force Office of Scientific Research, and ACE, a center within the Semiconductor Research Corporation sponsored by the U.S. Defense Advanced Research Projects Agency (DARPA).
EFF Sues DOGE and the Office of Personnel Management to Halt Ransacking of Federal Data
EFF and a coalition of privacy defenders have filed a lawsuit today asking a federal court to block Elon Musk’s Department of Government Efficiency (DOGE) from accessing the private information of millions of Americans that is stored by the Office of Personnel Management (OPM), and to delete any data that has been collected or removed from databases thus far. The lawsuit also names OPM, and asks the court to block OPM from sharing further data with DOGE.
The Plaintiffs who have stepped forward to bring this lawsuit include individual federal employees as well as multiple employee unions, including the American Federation of Government Employees and the Association of Administrative Law Judges.
This brazen ransacking of Americans’ sensitive data is unheard of in scale. With our co-counsel Lex Lumina, State Democracy Defenders Fund, and the Chandra Law Firm, we represent current and former federal employees whose privacy has been violated. We are asking the court for a temporary restraining order to immediately cease this dangerous and illegal intrusion. This massive trove of information includes private demographic data and work histories of essentially all current and former federal employees and contractors as well as federal job applicants. Access is restricted by the federal Privacy Act of 1974. Last week, a federal judge temporarily blocked DOGE from accessing a critical Treasury payment system under a similar lawsuit.
What’s in OPM’s Databases?
The data housed by OPM is extraordinarily sensitive for several reasons. The federal government is the nation’s largest employer, and OPM’s records are one of the largest, if not the largest, collection of employee data in the country. In addition to personally identifiable information such as names, social security numbers, and demographics, it includes work experience, union activities, salaries, performance, and demotions; health information like life insurance and health benefits; financial information like death benefit designations and savings programs; and classified information nondisclosure agreements. It holds records for millions of federal workers and millions more Americans who have applied for federal jobs.
The mishandling of this information could lead to such significant and varied abuses that they are impossible to detail. On its own, DOGE’s unchecked access puts the safety of all federal employees at risk of everything from privacy violations to political pressure to blackmail to targeted attacks. Last year, Elon Musk publicly disclosed the names of specific government employees whose jobs he claimed he would cut before he had access to the system. He has also targeted at least one former employee of Twitter. With unrestricted access to OPM data, and with his ownership of the social media platform X, federal employees are at serious risk.
And that’s just the danger from disclosure of the data on individuals. OPM’s records could give an overview of various functions of entire government agencies and branches. Regardless of intention, the law makes it clear that this data is carefully protected and cannot be shared indiscriminately.
In late January, OPM reportedly sent about two million federal employees its "Fork in the Road" form email introducing a “deferred resignation” program. This is one visible way in which the data could be used: OPM’s databases contain the email addresses of every federal employee.
How the Privacy Act Protects Americans’ Data
Under the Privacy Act of 1974, disclosure of government records about individuals generally requires the written consent of the individual whose data is being shared, with few exceptions.
Congress passed the Privacy Act in response to a crisis of confidence in the government as a result of scandals including Watergate and the FBI’s Counter Intelligence Program (COINTELPRO). The Privacy Act, like the Foreign Intelligence Surveillance Act of 1978, was created at a time when the government was compiling massive databases of records on ordinary citizens and had minimal restrictions on sharing them, often with erroneous information and in some cases for retaliatory purposes.
Congress was also concerned with the potential for abuse presented by the increasing use of electronic records and the use of identifiers such as social security numbers, both of which made it easier to combine individual records housed by various agencies and to share that information. In addition to protecting our private data from disclosure to others, the Privacy Act, along with the Freedom of Information Act, also allows us to find out what information is stored about us by the government. The Privacy Act includes a private right of action, giving ordinary people the right to decide for themselves whether to bring a lawsuit to enforce their statutory privacy rights, rather than relying on government agencies or officials.
It is no coincidence that these protections were created the last time Congress rose to the occasion of limiting the surveillance powers of an out-of-control President. That was fifty years ago; the potential impact of leaking this government information, representing the private lives of millions, is now even more serious. DOGE and OPM are violating Americans’ most fundamental privacy rights at an almost unheard-of scale.
OPM’s Data Has Been Under Assault Before
Ten years ago, OPM announced that it had been the target of two data breaches. Over twenty million security clearance records—information on anyone who had undergone a federal employment background check, including their relatives and references—were reportedly stolen by state-sponsored attackers working for the Chinese government. At the time, it was considered one of the most potentially damaging breaches in government history.
DOGE employees likely have access to significantly more data than this. Just as an example, the OPM databases also include personal information for anyone who applied to a federal job through USAJobs.gov—24.5 million people last year. Make no mistake: this is, in many ways, a worse breach than what occurred in 2014. DOGE has access to ten more years of data; it likely includes what was breached before, as well as significantly more sensitive data. (This is not to mention that while DOGE has access to these databases, its staff reportedly have the ability not only to export records, but also to add, modify, or delete them.) Every day that DOGE maintains its current level of access, more risks mount.
EFF Fights for Privacy
EFF has fought to protect privacy for nearly thirty-five years at the local, state, and federal level, as well as around the world.
We have been at the forefront of exposing government surveillance and invasions of privacy: In 2006, we sued AT&T on behalf of its customers for violating privacy law by collaborating with the NSA in the massive, illegal program to wiretap and data-mine Americans’ communications. We also filed suit against the NSA in 2008; both cases arose from surveillance that the U.S. government initiated in the aftermath of 9/11. In addition to leading or serving as co-counsel in lawsuits, such as in our ongoing case against Sacramento's public utility company for sharing customer data with police, EFF has filed amicus briefs in hundreds of cases to protect privacy, free speech, and creativity.
EFF’s fight for privacy spans advocacy and technology, as well: Our free browser extension, Privacy Badger, protects millions of individuals from invasive spying by third-party advertisers. Another browser extension, HTTPS Everywhere, alongside Certbot, a tool that makes it easy to install free HTTPS certificates for websites, helped secure the web, which has now largely switched from non-secure HTTP to the more secure HTTPS protocol.
EFF also fights to improve privacy protections by advancing strong laws, such as the California Electronic Communications Privacy Act (CalECPA) in 2015, which requires state law enforcement to get a warrant before they can access electronic information about who we are, where we go, who we know, and what we do. We also have a long, successful history of pushing companies, as well, to protect user privacy, from Apple to Amazon.
What’s Next
The question is not “what happens if this data falls into the wrong hands.” The data has already fallen into the wrong hands, according to the law, and it must be safeguarded immediately. Violations of Americans’ privacy have played out across multiple agencies, without oversight or safeguards, and EFF is glad to join the brigade of lawsuits to protect this critical information. Our case is fairly simple: OPM’s data is extraordinarily sensitive, OPM gave it to DOGE, and this violates the Privacy Act. We are asking the court to block any further data sharing and to demand that DOGE immediately destroy any and all copies of downloaded material.
You can view the press release for this case here.
Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management
Building a Community Privacy Plan
Digital security training can feel overwhelming, and not everyone will have access to new apps, new devices, and new tools. There also isn't one single system of digital security training, and we can't know the security plans of everyone we communicate with—some people might have concerns about payment processors preventing them from obtaining fees for their online work, whilst others might be concerned about doxxing or safely communicating sensitive medical information.
This is why good privacy decisions begin with proper knowledge about your situation and a community-oriented approach. To start, explore the following questions together with your friends and family, organizing groups, and others:
- What do we want to protect? This might include sensitive messages, intimate images, or information about where protests are organized.
- Who do we want to protect it from? For example, law enforcement or stalkers.
- How much trouble are we willing to go through to try to prevent potential consequences? After all, convincing everyone to pivot to a different app when they like their current service might be tricky!
- Who are our allies? Besides those who are collaborating with you throughout this process, it’s a good idea to identify others who are on your side. Because they’re likely to share the same threats you do, they can be a part of your protection plans.
This might seem like a big task, so here are a few essentials:
Use Secure Messaging Services for Every Communication
Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption, ensuring that only the sender and recipient of any communication have access to the content. But this protection does not reach its full potential without others joining you in communicating on these platforms.
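For the technically curious, the core idea can be shown in a few lines using the PyNaCl library. This is only a toy sketch of public-key encryption between two devices, not the far more elaborate protocol that messengers like Signal actually run:

```python
# Toy illustration of the end-to-end idea with PyNaCl (libsodium bindings).
# Real messengers layer much more on top (e.g., the Double Ratchet for forward
# secrecy); the point here is only that private keys never leave each device.
from nacl.public import PrivateKey, Box

alice_key = PrivateKey.generate()      # stays on Alice's device
bob_key = PrivateKey.generate()        # stays on Bob's device

# Alice encrypts to Bob using her private key and Bob's public key.
# Any server relaying these bytes sees only ciphertext.
sealed = Box(alice_key, bob_key.public_key).encrypt(b"meet at the courthouse at 6")

# Only Bob, holding his private key (paired with Alice's public key), can decrypt.
assert Box(bob_key, alice_key.public_key).decrypt(sealed) == b"meet at the courthouse at 6"
```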
Of the most common messaging apps, Signal provides the most extensive privacy protections through its use of end-to-end encryption, and is available for download across the globe. But we know it might not always be possible to encourage everyone in your network to transition away from their current services. There are alternatives, though. WhatsApp, one of the most popular communication platforms in the world, uses end-to-end encryption, but collects more metadata than Signal. Facebook Messenger now also provides end-to-end encryption by default in one-on-one direct messages.
Specific privacy concerns remain with group chats. Facebook Messenger has not enabled end-to-end encryption for chats that include more than two people, and popular platforms like Slack and Discord similarly do not provide these protections. These services may appear more user-friendly in accommodating large numbers, but in the absence of real privacy protections, make sure you consider what is being communicated on these sites and use alternative messaging services when talking about sensitive topics.
As a service's user base gets larger and more diverse, it's less likely that simply downloading and using it will indicate anything about a particular user's activities. For example, the more people use Signal, the less those seeking reproductive health care or coordinating a protest would stand out by downloading it. So beyond protecting just your communications, you’re building up a user base that can protect others who use encrypted, secure services and give them the shield of a crowd.
It also protects your messages from being made available to law enforcement should they request them from the platforms you use. In choosing a platform that protects our privacy, we create a space of safety and authenticity away from government and corporate surveillance.
For example, prosecutors in Nebraska used messages sent via Facebook Messenger (prior to the platform enabling end-to-end encryption by default) as evidence to charge a mother with three felonies and two misdemeanors for assisting her daughter with an abortion. Given that someone known to the family reported the incident to law enforcement, it’s unlikely using an end-to-end encrypted service would have prevented the arrest entirely, but it would have prevented the contents of personal messages turned over by Meta from being used as evidence in the case.
Beyond this, it's important to know the privacy limitations of the platforms you communicate on. For example, while a secure messaging app might prevent government and corporate eavesdroppers from snooping on conversations, that doesn't stop someone you're communicating with from taking screenshots, or the government from attempting to compel you (or your contact) to turn over your messages yourselves. Secure messaging apps also don't protect when someone gets physical access to an unlocked phone with all those messages on it, which is why you may want to consider enabling disappearing message features for certain conversations.
Consider The Content You Post On Social Media
We’re all interconnected in this digital age. Even without everyone having access to their own personal device or the internet, it is pretty difficult to completely opt out of the online world. One person’s decision to upload a picture to a social media platform may affect another person without that person even knowing it, such as by revealing an association with a movement or topic that they don’t want to be public knowledge.
Talk with your friends about the potentially sensitive data you reveal about each other online. Even if you don’t have a social media account, or if you untag yourself from posts, friends can still unintentionally identify you, report your location, and make their connections to you public. This works in the offline world too, such as sharing precautions with organizers and fellow protesters when going to a demonstration, and discussing ahead of time how you can safely document and post the event online without exposing those in attendance to harm.
It’s important to carefully consider the tradeoffs between publicity and privacy when it comes to social media. If you’re promoting something important that needs greater reach, it may be more worth posting to the more popular platforms that undermine user privacy. To do so, it’s vital that you compartmentalize your personal information (registration credentials, post attribution, friends list, etc) away from these accounts.
If you are organising online or conversing on potentially sensitive issues, choose platforms that limit the amount of information collected and tracking undertaken. We know this is not always possible—perhaps people cannot access different applications, or might not have interest in downloading or using a different service. In this scenario, think about how you can protect your community on the platform you currently engage on. For example, if you currently use Facebook for organizing, work with others to keep your Facebook groups as private and secure as Facebook allows.
Think About Cloud Servers as Other People’s Computers
For our online world to function, corporations use online servers (often referred to as the cloud) to store the mass amounts of data collected from our devices. When we back up our content to these cloud services, corporations may run automated tools to check the content being stored, including scanning all our messages, pictures, and videos. The best-case scenario in the event of a false flag is that your account is temporarily blocked; the worst case could see your entire account deleted and/or legal action initiated over content perceived as illegal.
For example, in 2021 a father took pictures of his son’s groin area and sent these to a health care provider’s messaging service. Days later, his Google account was disabled because the photos constituted “a severe violation of Google’s policies and might be illegal,” with an attached link flagging “child sexual abuse and exploitation” as one of the possible reasons. Despite the photos being taken for medical purposes, Google refused to reinstate the account, meaning that the father lost access to years of emails, pictures, account login details, and more. In a similar case, a father in Houston took photos of his child’s infected intimate parts to send to his wife via Google’s chat feature. Google refused to reinstate this account, too.
The adage goes, “there are no clouds, just other peoples’ computers.” It’s true! As countless discoveries over the years have revealed, the information you share on Slack at work is on Slack's computers and made accessible to your employer. So why not take extra care to choose whose computers you’re trusting with sensitive information?
If it makes sense to back up your data onto encrypted thumb drives or limited cloud services that provide options for end-to-end encryption, then so be it. What’s most important is that you follow through with backing it up. And regularly!
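If you do want to encrypt files yourself before handing them to a cloud service or copying them to a thumb drive, a minimal sketch with Python's widely used cryptography library looks like this (the file names are placeholders; keep the key somewhere safe and separate from the backup):

```python
# Minimal sketch: encrypt a file locally before backing it up, using the
# "cryptography" library's Fernet authenticated encryption.
# File paths are placeholders; losing the key means losing the backup.
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # store this somewhere safe, NOT next to the backup
with open("backup.key", "wb") as f:
    f.write(key)

fernet = Fernet(key)
with open("journal.txt", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

# Upload or copy only the encrypted blob; the storage provider can't read it.
with open("journal.txt.enc", "wb") as f:
    f.write(ciphertext)

# Later, restore with the same key:
# plaintext = Fernet(open("backup.key", "rb").read()).decrypt(open("journal.txt.enc", "rb").read())
```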
Adopting all of these best practices can be daunting; we get it. Every community is made up of people with different strengths, so with some consideration you can make smart decisions about who does what for the collective privacy and security. Once these tasks are broken down into smaller, more easily done pieces, it’s easier for a group to accomplish them together. As familiarity with these tasks grows, you’ll realize you’re developing a team of experts, and after some time, you can teach each other.
Create Incident Response Plans
Developing a plan for if or when something bad happens is a good practice for anyone, but especially for a community of people who face increased risk. Since many threats are social in nature, such as doxxing or networked harassment, it’s important to strategize with your allies around what to do in the event of such things happening. Doing so before an incident occurs is much easier than when you’re already facing a crisis.
Only you and your allies can decide what belongs on such a plan, but some strategies might be:
- Isolating the impacted areas, such as shutting down social media accounts and turning off affected devices
- Notifying others who may be affected
- Switching communications to a predetermined more secure alternative
- Noting behaviors of suspected threats and documenting these
- Outsourcing tasks to someone further from the affected circle who is already aware of this potential responsibility.
Everyone's security plans and situations will always be different, which is why we often say that security and privacy are a state of mind, not a purchase. But the first step is always taking a look at your community and figuring out what's needed and how to get everyone else on board.
Privacy Loves Company
Most of the internet’s blessings—the opportunities for communities to connect despite physical borders and oppressive controls, the avenues to hold the powerful accountable without immediate censorship, the sharing of our hopes and frustrations with loved ones and strangers alike—tend to come at a price. Governments, corporations, and bad actors too often use our content for surveillance, exploitation, discrimination, and harm.
It’s easy to dismiss these issues because you don’t think they concern you. It might also feel like the whole system is too pervasive to actively opt-out of. But we can take small steps to better protect our own privacy, as well as to build an online space that feels as free and safe as speaking with those closest to us in the offline world.
This is why a community-oriented approach helps. In speaking with your friends and family, organizing groups, and others to discuss your specific needs and interests, you can build out digital security practices that work for you. This makes it more likely that your privacy practices will become second nature to you and your contacts.
Good privacy decisions begin with proper knowledge about your situation—and we’ve got you covered. To learn more about building a community privacy plan, read our ‘how to’ guide here, where we talk you through the topics below in more detail:
Using Secure Messaging Services For Every Communication
At some point, we all need to send a message that’s safe from prying eyes, so the chances of these apps becoming the default for sensitive communications are much higher if we use these platforms for all communications. On an even simpler level, it also means that messages and images sent to family and friends in group chats will be safe from being viewed by automated and human scans on services like Telegram and Facebook Messenger.
Consider The Content You Post On Social Media
Our decision to send messages, take pictures, and interact with online content has a real offline impact, and whilst we cannot control for every circumstance, we can think about how our social media behaviour impacts those closest to us, as well as those in our proximity.
Think About Cloud Servers as Other People’s Computers
When we back up our content to online cloud services, corporations may run automated tools to check the content being stored, including scanning all our messages, pictures, and videos. Whilst we might think we don't have anything to hide, these tools scan without context, and what might be an innocent picture to you may be flagged as harmful or illegal by a corporation's service. So why not take extra care to choose whose computers you’re entrusting with sensitive information?
Assign Team Roles
Once these privacy tasks are broken down into smaller, more easily done projects, it’s much easier for a group to accomplish together.
Create Incident Response Plans
Since many threats are social in nature, such as doxxing or networked harassment, it’s important to strategize with your allies what to do in such circumstances. Doing so before an incident occurs is much easier than on the fly when you’re already facing a crisis.
To dig in deeper, continue reading in our blog post Building a Community Privacy Plan here.
Trusted Encryption Environments
Really good—and detailed—survey of Trusted Encryption Environments (TEEs).