Feed aggregator
A Baseless Copyright Claim Against a Web Host—and Why It Failed
Copyright law is supposed to encourage creativity. Too often, it’s used to extract payouts from others.
Higbee & Associates, a law firm known for sending copyright demand letters to website owners, targeted May First Movement Technology, accusing it of infringing a photograph owned by Agence France-Presse (AFP). The claim was baseless. May First didn’t post the photo. It didn’t even own the website where the photo appeared.
May First is a nonprofit membership organization that provides web hosting and technical infrastructure to social justice groups around the world. The allegedly infringing image was posted years ago by one of May First’s members, a human rights group based in Mexico. When May First learned about the copyright complaint, it ensured that the group removed the image.
That should have been the end of it. Instead, the firm demanded payment.
So EFF stepped in as May First’s counsel and explained why AFP and Higbee had no valid claim. After receiving our response, Higbee backed down.
This outcome is a reminder that targets of copyright demands often have strong defenses—especially when someone else posted the material.
Hosting Content Isn’t the Same as Publishing It
Copyright law treats those who create or control content differently from those who simply provide the tools or infrastructure for others to communicate.
In this case, May First provided hosting services but didn’t post the photo. Courts have long recognized that service providers aren’t direct infringers when they merely store material at the direction of users. In those cases, service providers lack “volitional conduct”—the intentional act of copying or distributing the work.
Copyright law also recognizes that intermediaries can’t realistically police everything users upload. That’s why legal protections like the Digital Millennium Copyright Act safe harbors exist. Even outside those safe harbors, courts still shield service providers from liability when they promptly respond to notices.
May First did exactly what the law expects: it notified its member, and the image came down.
A Claim That Should Have Been Withdrawn Much Sooner
The troubling part of this story isn’t just that a demand was sent. It’s that Higbee and AFP continued to demand money and threaten litigation after May First explained that it was merely a hosting provider and had the image removed.
In other words, the claim was built on shaky legal ground from the start. Once May First explained its role, Higbee should have withdrawn its demand. Individuals and small nonprofits shouldn’t need lawyers just to stop aggressive copyright shakedowns.
Statutory Damages Fuel Copyright Abuse
This isn’t an isolated case—it’s a predictable result of copyright law’s statutory damages regime.
Statutory damages can reach $150,000 per work, regardless of actual harm. That enormous leverage incentivizes firms like Higbee to send mass demand letters seeking quick settlements. Even meritless claims can generate revenue when recipients are too afraid, confused, or resource-constrained to fight back.
This hits community organizations, independent publishers, and small service providers that don’t have in-house legal teams especially hard. Faced with the threat of ruinous statutory damages, many just pay what is demanded.
That’s not how copyright law should work.
Know Your Rights
If you receive a copyright demand based on material someone else posted, don’t assume you’re liable.
You may have defenses based on:
- Your role as a hosting or service provider
- Lack of volitional conduct
- Prompt removal of the material after notice
- The statute of limitations
- The copyright owner’s failure to timely register the work
- The absence of actual damages
Every situation is different, but the key point is this: a demand letter is not the same as a valid legal claim.
Standing Up to Copyright Trolls
May First stood its ground, and Higbee abandoned its demand after we explained the law.
But the bigger problem remains. Copyright’s statutory damages framework enables aggressive enforcement tactics that target the wrong parties and chill lawful online activity.
Until lawmakers fix these structural incentives, organizations and individuals will keep facing pressure to pay up—even when they’ve done nothing wrong.
If you get one of these demand letters, remember: you may have more rights than it suggests.
- EFF Letter to Higbee and Associates, March 4, 2026
Print Blocking Won't Work - Permission to Print Part 2
This is the second post in a series on 3D print blocking. For the first entry, check out Print Blocking is Anti-Consumer - Permission to Print Part 1
Legislators across the U.S. are proposing laws to force “blueprint blockers” on 3D printers sold in their states. This mandated censorware is doomed to fail for its intended purpose, but will still manage to hurt the professional and hobbyist communities relying on these tools.
3D printers are commonly used to repair belongings, decorate homes, print figurines, and so much more. It’s not just hobbyists; 3D printers are also used professionally for parts prototyping and fixturing, small-batch manufacturing, and workspace organization. In rare cases, they’ve also been used to print parts needed for firearm assembly.
Many states have already banned the unlicensed manufacture of firearms using computer-controlled machine tools, known as Computer Numerical Control (CNC) machines, or 3D printers. Recently proposed laws seek to impose technical limitations on 3D printers (and in some cases, CNC machines) in the hope of enforcing this prohibition.
This is a terrible idea; these mandates will be onerous to implement and will lock printer users into vendor software, impose one-time and ongoing costs on both printer vendors and users, and lay the foundation for a 3D-print censorship platform to be used in other jurisdictions. We dive more into these issues in the first part of this series.
On a pragmatic level, however, these state mandates are just wishful thinking. Below, we dive into how 3D printing works, why these laws won’t deter the printing of firearms, and how regular lawful use will be caught in the proposed dragnet.
How 3D Printers Work
To understand the impact of this proposed legislation, we need to know a bit about how 3D printers work. The most common printers work similarly to a computer-controlled hot glue gun on a motion platform; they follow basic commands to maintain temperature, extrude (push) plastic through a nozzle, and move a platform. These motions together build up layers to make a final “print.” Modern 3D printers often offer more features like Wi-Fi connectivity or camera monitoring, but fundamentally they are very simple machines.
The basic instructions used by most 3D printers are called Geometric Code, or G-Code, and specify very basic motions such as “move from position A to position B while extruding plastic.” The list of commands that will eventually produce a part is transferred to the printer as a text file thousands to millions of lines long. The printer dutifully follows these instructions with no overall idea of what it is printing.
While it is possible to write G-Code by hand for either a CNC machine or a 3D printer, the vast majority is generated by computer aided manufacturing (CAM) software, often called a “slicer” in 3D printing since it divides a 3D model into many 2D slices then generates motion instructions.
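To make the slicing step concrete, here is a minimal Python sketch of the kind of output a slicer produces: it emits G-Code tracing one square perimeter of a single layer. The coordinates, feed rate, and extrusion values are invented for illustration, not taken from any real slicer.

```python
# Minimal sketch: emit G-Code tracing one square perimeter of a single layer.
# Coordinates, feed rate, and extrusion amounts are illustrative values only.

def square_layer_gcode(x0, y0, size, z, extrude_per_mm=0.05, feed=1200):
    corners = [(x0, y0), (x0 + size, y0), (x0 + size, y0 + size),
               (x0, y0 + size), (x0, y0)]
    lines = [f"G1 Z{z:.2f} F{feed}"]  # move the nozzle to this layer's height
    lines.append(f"G0 X{corners[0][0]:.2f} Y{corners[0][1]:.2f}")  # travel move, no plastic
    e = 0.0  # cumulative filament pushed through the nozzle, in mm
    for (x1, y1), (x2, y2) in zip(corners, corners[1:]):
        dist = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        e += dist * extrude_per_mm
        # G1: move from A to B while extruding plastic -- the basic command above
        lines.append(f"G1 X{x2:.2f} Y{y2:.2f} E{e:.4f} F{feed}")
    return lines

for line in square_layer_gcode(10, 10, 20, z=0.2):
    print(line)
```

A real slicer repeats loops like this for every wall, infill path, and layer, which is how the files balloon to hundreds of thousands of lines; the printer executes each local move with no notion of the overall shape they add up to.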
This same general process applies to CNC machines which use G-Code instructions to guide a metal removal tool. CNC machines have been included in previous prohibitions on firearm manufacturing and file distribution and are also targeted in some of these bills.
There are other types of 3D printers, such as those that print concrete, resin, metal, chocolate, and other materials using slightly different methods. All of these would be subject to the proposed requirements, however unlikely it is that anyone could do harm with a gun made of chocolate.
Simple rectangular 3D model for a test fit.
Part of a 173,490-line G-Code file produced by a slicer for the simple rectangular model.
How is Firearm Detection Supposed to Work?
Under these proposed laws, manufacturers of consumer 3D printers must ensure their printers only work with their software, and must implement firearm detection algorithms either on the printer itself or in slicer software. These algorithms must detect firearm files using a maintained database of existing models. Vendors of printers must then verify that printers are on the allow-list maintained by the state before they can offer them for sale.
Owners of printers will be guilty of a crime if they circumvent these intrusive scanning procedures or load alternative software, which they might do because their printer manufacturer ends support. Owners of existing noncompliant 3D printers in regulated states will be unable to resell their printers on the secondary market legally.
What Will Actually Happen?
While the proposed laws allow for scanning to happen on either the printer itself or in the slicer software, the reality is more complicated.
The computers inside many 3D printers have very limited computational and storage ability; it will be impossible for the printer’s computer to render the G-Code into a 3D model to compare with the database of prohibited files. Thus the only way to achieve this through the machine would be to upload all printer files to a cloud comparison tool, creating new delays, errors, and unacceptable invasions of privacy.
Many vendors will instead choose to permanently link their printers to a specific slicer that implements firearm detection. This requires cryptographic signing of G-Code to ensure only authorized prints are completed, and will lock 3D printer owners into the slicer chosen by their printer vendor.
Regardless of the specifics of their implementation, these algorithms will interfere with 3D printers' ability to print other parts without actually stopping manufacture of guns. It takes very little skill for a user to make slight design tweaks to either a model or G-Code to evade detection. One can also design incomplete or heavily adorned models which can be made functional with some post-print alterations. While this would be pioneered by skilled users—like the ones who designed today’s 3D printed guns—once the design and instructions are out there anyone able to print a gun today will be able to follow suit.
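As a toy illustration of why database matching is brittle, consider a detector that fingerprints model files by hashing them (one plausible implementation, not one specified in the bills). The Python sketch below shows that moving a single vertex by a hundredth of a millimeter produces a completely different fingerprint.

```python
# Toy illustration: a detector that fingerprints model files by hashing them
# is defeated by an imperceptible geometry tweak. The geometry is invented.
import hashlib

def fingerprint(vertices):
    """Hash a list of (x, y, z) vertices, standing in for hashing a model file."""
    blob = ";".join(f"{x:.6f},{y:.6f},{z:.6f}" for x, y, z in vertices)
    return hashlib.sha256(blob.encode()).hexdigest()

banned  = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 5.00, 0.0)]  # "banned" shape
tweaked = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 5.01, 0.0)]  # one vertex moved 0.01 mm

print(fingerprint(banned)[:16])                     # one digest...
print(fingerprint(tweaked)[:16])                    # ...a completely different digest
print(fingerprint(banned) == fingerprint(tweaked))  # False: exact matching fails
```

Fuzzier geometric matching could absorb small perturbations, but only by widening the net, which multiplies the false positives on props and toys described below.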
Firearm part identification features also impose costs on 3D printer manufacturers, and hence on their end consumers. 3D printer manufacturers must develop or license these costly algorithms and continuously maintain and update both the algorithm and the database of firearm models. Older printers that cannot comply will not be able to be resold in states where they are banned, creating additional e-waste.
While those wishing to create guns will still be able to do so, people printing other functional parts will likely be caught up in these algorithms, particularly for things like film props, kids’ toys, or decorative models, which often closely resemble real firearms or firearm components.
What Are The Impacts of These Changes?
Technological restrictions on manufacturing tools’ abilities are harmful for many reasons. EFF is particularly concerned with this regulation locking a 3D printer to proprietary vendor software. Vendors will be able to use this mandate to support only in-house materials, locking users into future purchases. Vendor slicer software is often based on out-of-date open source software, and forcing users to rely on it deprives them of new features, or even of the use of their printer altogether if the vendor goes out of business. At worst, some of these bills will make it a misdemeanor to fix those problems and gain full control of your printer.
File-scanning frameworks required by this regulation will lay the foundation for future privacy and freedom intrusions. This requirement could be co-opted to scan prints for copyright violations and be abused in ways similar to DMCA takedowns, or to suppress models considered obscene under a patchwork of definitions. What if you were unable to print a repair part because the vendor asserted the model was in violation of their trademark? What if your print was considered obscene?
Regardless of your position on current prohibitions on firearms, we should all fight back against this effort to force technological restrictions on 3D printers, and legislators must similarly abandon the idea. These laws impose real costs and potential harms on lawful users, lay the groundwork for future censorship, and simply won’t deter firearm printing.
Print Blocking is Anti-Consumer - Permission to Print Part 1
This is the first post in a series on 3D print blocking. For the next entry, check out Print Blocking Won't Work - Permission to Print Part 2
When legislators give companies an excuse to write untouchable code, it’s a disaster for everyone. This time, 3D printers are in the crosshairs across a growing number of states. Even if you’ve never used one, you’ve benefited from the open commons these devices have created—which is now under threat.
This isn’t the first time we’ve gone to bat for 3D printing. These devices come in many forms and can construct nearly any shape with a variety of materials. This has made them absolutely crucial for anything from life-saving medical equipment, to little Iron Man helmets for cats, to everyday repairs. For decades these devices have been a proven engine for innovation, while democratizing a sliver of manufacturing for hobbyists, artists, and researchers around the world.
For us all to continue benefiting from this grassroots creativity, we need to guard against the type of corporate centralization that has undermined so much of the promise of the digital era. Unfortunately some state legislators are looking to repeat old mistakes by demanding printer vendors install an enshittification switch.
In the U.S., three states have recently proposed that commercial 3D-printer manufacturers must ensure their printers only work with their software, and are responsible for checking each print for forbidden shapes—for now, any shape vendors consider too gun-like. The 2D equivalent of these “print-blocking” algorithms would be demanding HP prevent you from printing any harmful messages or recipes. Worse still, some bills would introduce criminal penalties for anyone who bypasses this censorware, or for anyone simply reselling their old printer without these restrictions.
If this sounds like Digital Rights Management (DRM) to you, you’ve been paying attention. This is exactly the sort of regulation that creates a headache and privacy risk for law-abiding users, is a gift for would-be monopolists, and can be totally bypassed by the lawbreakers actually being targeted by the proposals.
Ghosting Innovation
“Print blocking” is currently coming for an unpopular target: ghost guns. These are privately made firearms (PMFs) that are typically harder to trace and can bypass other gun regulations. Contrary to what the proposed regulations suggest, these guns are often not printed at home, but purchased online as mass-produced build-it-yourself kits and accessories.
Scaling production with consumer 3D printers is expensive, error-prone, and relatively slow. Successfully making a working firearm with just a printer still requires some technical know-how, even as 3D printers improve beyond some of these limitations. That said, many have concerns about unlicensed firearm production and sales. Which is exactly why these practices are already illegal in many states, including all of the states proposing print blocking.
Mandating algorithmic print-blocking software on 3D printers and CNC machines is just wishful thinking. People illegally printing ghost guns and accessories today will have no qualms with undetectably breaking another law to bypass censoring algorithms. That’s if they even need to—the cat and mouse game of detecting gun-like prints might be doomed from the start, as we dive into in this companion post.
Meanwhile, the overwhelming majority of 3D-printer users do not print guns. Punishing innovators, researchers, and hobbyists because of a handful of outlaws is bad enough, but this proposal does it by also subjecting everyone to the anticompetitive and anticonsumer whims of device manufacturers.
Can’t make the DRM thing work
We’ve been railing against Digital Rights Management (DRM) since the DMCA made it a federal crime to bypass code restricting your use of copyrighted content. That legal shield has since been weaponized by manufacturers to gain greater leverage over their customers and enforce anti-competitive practices.
The same enshittification playbook applies to algorithmic print blockers.
Restricting devices to manufacturer-provided software is an old tactic from the DRM playbook, and is one that puts you in a precarious spot where you need to bend to the whims of the manufacturer. Only Windows 11 supported? You need a new PC. Tools are cloud-based? You need a solid connection. The company shutters? You now own an expensive paperweight—which used to make paperweights.
It also means useful open source alternatives which fit your needs better than the main vendor’s tools are off the table. The 3D-printer community got a taste of this recently, as manufacturer Bambu Labs pushed out restrictive firmware updates complicating the use of open source software like OrcaSlicer. The community blowback forced some accommodations for these alternatives to remain viable. Under the worst of these laws, such accommodations, and other workarounds, would be outlawed with criminal penalties.
People are right to be worried about vendor lock-in, beyond needing the right tool for the job. Making you reliant on their service allows companies to gradually sour the deal. Sometimes this happens visibly, with rising subscription fees, new paywalls, or planned obsolescence. It can also be more covert, like collecting and selling more of your data, or cutting costs by neglecting security and bug fixes.
With expensive hardware on the line, they can get away with anything that won’t make you pay through the nose to switch brands.
Indirectly, this sort of print-blocking mandate is a gift to incumbent businesses making these printers. It raises the upfront and ongoing costs associated with smaller companies selling a 3D printer, including those producing new or specialized machines. The result is fewer and more generic options from a shrinking number of major incumbents for any customer not interested in building their own 3D printer.
Reaching the Melting Point
It’s already clear these bills will be bad for anyone who currently uses a 3D printer, and having alternative software criminalized is particularly devastating for open source contributors. These impacts on manufacturers and consumers culminate in a major blow to the entire ecosystem of innovation we have benefited from for decades.
But this is just the beginning.
Once the infrastructure for print blocking is in place, it can be broadened. This isn’t a block of a very specific and static design, like how some copiers block reproductions of currency. Banning a category of design based on its function is a moving target, requiring a constantly expanding blacklist. Nothing in this legislation restricts those updates to firearm-related designs. Rather, if we let proposals like this pass, we open the door to the database of forbidden shapes for other powerful interests.
Intellectual property is a clear expansion risk. This could look like Nintendo blocking a Pikachu toy, John Deere blocking a replacement part, or even patent trolls forcing the hand of hardware companies. Repressive regimes, here or abroad, could likewise block the printing of "extreme" and “obscene” symbols, or tools of resistance like popular anti-ICE community whistles.
Finally, even the most sympathetic targets of algorithmic censorship will result in false positives—blocking 3D-printer users’ lawful expression. This is something proven again and again in online moderation. Whether by mistake or by design, a platform that has you locked in has little incentive to offer remedies to this censorship. And these new incentives for companies to surveil each print can also impose a substantial chilling effect on what the user chooses to create.
While 3D printers aren’t in most households, this form of regulation would set a dangerous precedent. Government-mandated on-device censors maintained by corporate algorithms are bad policy. They won’t work. They consolidate corporate power. They criminalize and block the grassroots innovation and empowerment that has defined the 3D-printer community. We need to roundly reject these onerous restraints on creation.
US Bans All Foreign-Made Consumer Routers
This is for new routers; you don’t have to throw away your existing ones:
The Executive Branch determination noted that foreign-produced routers (1) introduce “a supply chain vulnerability that could disrupt the U.S. economy, critical infrastructure, and national defense” and (2) pose “a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure and directly harm U.S. persons.”
More information:
Any new router made outside the US will now need to be approved by the FCC before it can be imported, marketed, or sold in the country...
Google and Amazon: Acknowledged Risks, And Ignored Responsibilities
In late 2024, we urged Google and Amazon to honor their human rights commitments, to be more transparent with the public, and to take meaningful action to address the risks posed by Project Nimbus, their cloud computing contract that includes Israel’s Ministry of Defense and the Israeli Security Agency. Since then, a stream of additional reporting has reinforced that our concerns were well-founded. Yet despite mounting evidence of serious risk, both companies have refused to take action.
Amazon has completely ignored our original and follow-up letters. Google, meanwhile, has repeatedly promised to respond to our questions. Yet more than a year and a half later, we have seen no meaningful action by either company. Neither approach is acceptable given the human rights commitments these companies have made.
Additionally, it took a public leak before Microsoft felt compelled to investigate and found that its client, the Israeli government, was indeed misusing its services in ways that violated Microsoft’s public commitments to human rights. This should have given both Google and Amazon an additional reason to take a close look and let the public know what they find, but nothing of the sort materialized.
Google: Known Risks, No Meaningful Action
Google’s own internal assessments warned of the risks associated with Project Nimbus even before the contract was signed. Major news outlets have reported that Google provides the Israeli government with advanced cloud and AI services under Project Nimbus, including large-scale data storage, image and video analysis, and AI model development tools. These capabilities are exceptionally powerful, highly adaptable, and well suited for surveillance and military applications.
Despite those warnings, and the multiple reports since then about human rights abuses by the very portions of the Israeli government that use Google’s and Amazon’s services, the companies continue to operate business as usual. It seems that they have taken the position that they do not need to change course or even publicly explain themselves unless the media or other external organizations present definitive proof that their tools have been used in specific violations of international human rights or humanitarian law. While that conclusive public evidence has not yet emerged for all the companies, the risks are obvious, and they are aware of them. Instead of conducting robust, transparent human rights due diligence, Amazon and Google are continually choosing to look the other way.
Google’s own internal assessments undermine its public posture. According to reporting, Google’s lawyers and policy staff warned that Google Cloud services could be linked to the facilitation of human rights abuses. In the same report, Google employees also raised concerns that the company’s cloud and AI tools could be used for surveillance or other militarized purposes, which seems very likely given the Israeli government’s long-standing reliance on advanced data-driven systems to control and monitor Palestinians.
Google has publicly claimed that Project Nimbus is “not directed at highly sensitive, classified, or military workloads” and is governed by its standard Acceptable Use Policies. Yet reporting has revealed conflicting representations about the contract’s terms, including indications that the Israeli government may be permitted to use any services offered in Google’s cloud catalog for any purpose. Google has declined to publicly resolve these contradictions, and its lack of transparency is problematic. The gap between what Google says publicly and what it knows internally should alarm anyone who hopes to take the company’s human rights commitments seriously.
Google’s and Amazon’s AI Principles Require Proactive Action
Even after being revised last year, Google’s AI Principles continue to commit the company to responsible development and deployment of its technologies, including implementing appropriate human oversight, due diligence, and safeguards to mitigate harmful outcomes and align with widely accepted principles of international law and human rights. While the updated principles no longer explicitly commit Google to avoiding entire categories of harmful use, they still require the company to assess foreseeable risks, employ rigorous monitoring and mitigation measures, and act responsibly throughout the full lifecycle of AI development and deployment.
Amazon has similarly committed to responsible AI practices through its Responsible AI framework for AWS services. The company states that it aims to integrate responsible AI considerations across the full lifecycle of AI design, development, and operation, emphasizing safeguards such as fairness, explainability, privacy and security, safety, transparency, and governance. Amazon also says its AI services are designed with mechanisms for monitoring and risk mitigation to help prevent harmful outputs or misuse and to enable responsible deployment across a range of use cases.
Here, the risks are neither speculative nor remote. They are foreseeable, well-documented, and exacerbated by the context in which Project Nimbus operates: an ongoing military campaign marked by widespread civilian harm and credible allegations of grave human rights violations, including genocide. In such circumstances, waiting for definitive proof is not responsible risk management; it is willful blindness.
Modern cloud and AI systems are designed to be flexible, customizable, and deployable at scale, often beyond the vendor’s direct visibility. That reality is precisely why human rights due diligence must be proactive. Waiting for a leaked document or whistleblower account demonstrating direct misuse, as occurred in Microsoft’s case, means waiting until harm has already been done.
Microsoft’s Experience Should Have Been Warning Enough
As noted above, the recent revelations that Microsoft’s technologies were misused by the Israeli military, in violation of Microsoft’s commitments, illustrate the dangers of this wait-and-see approach. Google and Amazon should not need a similar incident to recognize what is at stake. The demonstrated misuse of comparable technologies, combined with Google’s and Amazon’s own knowledge of the risks associated with Project Nimbus, should already be sufficient to trigger action.
The appropriate response is to act responsibly and proactively.
Google and Amazon should immediately:
- Conduct and publish an independent human rights impact assessment of Project Nimbus.
- Disclose how they evaluate, monitor, and enforce compliance with their AI Principles in high-risk government contracts, including and especially in Project Nimbus.
- Commit to suspending or restricting services where there is a credible risk of serious human rights harm, even if definitive proof of misuse has not yet emerged.
Google and Amazon publicly emphasize their commitment to responsible AI and respect for human rights. Those commitments are meaningless if they apply only once harm is undeniable and irreversible. In conflict settings, especially where secrecy and information asymmetry are the norm, companies must act on credible risk, not perfect evidence.
Google and Amazon have the knowledge, the leverage, and the responsibility to act now. Choosing not to is still a choice, and one that carries real consequences for people whose lives are already at risk.
Lincoln Laboratory laser communications terminal launches on historic Artemis II moon mission
In 1969, Apollo 11 astronaut Neil Armstrong stepped onto the moon’s surface — a momentous engineering and science feat marked by his iconic words: "That’s one small step for man, one giant leap for mankind." Now, NASA is making history again.
With the successful launch of NASA’s Artemis II mission yesterday, four astronauts are set to become the first humans to travel to the moon in more than 50 years. In 2022, the uncrewed Artemis I mission demonstrated that NASA’s new Orion spacecraft could travel farther into space than ever before and return safely to Earth. Building on that success, the 10-day Artemis II mission will pave the way for future Artemis missions, which aim to land astronauts on the moon to prepare for a lasting lunar presence, and eventually human missions to Mars.
As it orbits the moon, the Orion spacecraft will carry an optical (laser) communications system developed at MIT Lincoln Laboratory in collaboration with NASA Goddard Space Flight Center. Called the Orion Artemis II Optical Communications System (O2O), the system is capable of higher-bandwidth data transmissions from space compared to traditional radio-frequency (RF) systems. During the Artemis II mission, O2O will use laser beams to send high-resolution video and images of the lunar surface down to Earth.
"Space-based communications has always been a big challenge," says lead systems engineer Farzana Khatri, a senior staff member in the laboratory’s Optical and Quantum Communications Group. "RF communications have served their purpose well. However, the RF spectrum is highly congested now, and RF does not scale well to longer distances across space. Laser communication [lasercom] is a solution that could solve this problem, and the laboratory is an expert in the field, which was really pioneered here."
Artemis II is historic not only for renewing human exploration beyond Earth, but also for being the first crewed lunar flight to demonstrate lasercom technologies, which are poised to revolutionize how spacecraft communicate. Lincoln Laboratory has been developing such technologies for more than two decades, and NASA has been infusing them into its missions to meet the growing demands of long-distance and data-intensive space exploration.
"The Orion spacecraft collects a huge amount of data during the first day of a mission, and typically these data sit on the spacecraft until it splashes down and can take months to be offloaded," Khatri says. "With an optical link running at the highest rate, we should be able to get all the data down to Earth within a few hours for immediate analysis. Furthermore, astronauts will be able to communicate in real-time over the optical link to stay in touch with Earth during their journey, inspiring the public and the next generation of deep-space explorers, much like the Apollo 11 astronauts who first landed on the moon 57 years ago."
At the heart of O2O is the laboratory-developed Modular, Agile, Scalable Optical Terminal (MAScOT). About the size of a house cat, MAScOT features a 4-inch telescope mounted on a two-axis pivoted support (gimbal) with fixed backend optics. The gimbal precisely points the telescope and tracks the laser beam through which communications signals are emitted and received in the direction of the desired data recipient or sender. Underneath the gimbal, in a separate assembly, are the backend optics, which contain light-focusing lenses, tracking sensors, fast-steering mirrors, and other components to finely point the laser beam.
MAScOT made its debut in space as part of the laboratory’s Integrated Laser Communications Relay Demonstration (LCRD) LEO User Modem and Amplifier Terminal (ILLUMA-T), which launched to the International Space Station in November 2023. Over the following six months, the laboratory team performed experiments to test and characterize the system's basic functionality, performance, and utility for human crews and user applications. Initially, the team checked whether the ILLUMA-T-to-LCRD optical link was operating at the intended data rates in both directions: 622 Mbps down and 51 Mbps up. In fact, even higher data rates were achieved: 1.2 Gbps down and 155 Mbps up. MAScOT’s lasercom terminal architecture, which was recognized with a 2025 R&D 100 Award, is now being used for Artemis II and will support future space missions.
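For a rough sense of what those rates mean in practice, here is a back-of-the-envelope calculation; the one-terabyte data volume is an assumed example figure, not a published mission specification.

```python
# Back-of-the-envelope downlink times. The 1 TB volume is an assumed example,
# not a published Artemis II figure; the link rates are from the ILLUMA-T tests.
def transfer_hours(data_bytes, rate_bps):
    return data_bytes * 8 / rate_bps / 3600  # bytes -> bits, seconds -> hours

data = 1e12  # assume 1 TB of accumulated mission data
for name, rate in [("design downlink, 622 Mbps", 622e6),
                   ("achieved downlink, 1.2 Gbps", 1.2e9)]:
    print(f"{name}: {transfer_hours(data, rate):.1f} hours")
# -> about 3.6 hours at 622 Mbps, 1.9 hours at 1.2 Gbps
```

Numbers in that range are consistent with Khatri’s estimate that a full mission dataset could reach Earth “within a few hours” over the optical link.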
"Our success with ILLUMA-T laid the foundation for streaming HD [high-definition] video to and from the moon," says co-principal investigator Jade Wang, an assistant leader of the Optical and Quantum Communications Group. "You can imagine the Artemis astronauts using videoconferencing to connect with physicians, coordinate mission activities, and livestream their lunar trips."
A dedicated operations team from Lincoln Laboratory is following the 10-day Artemis II mission from ground stations in Houston, Texas, and White Sands, New Mexico, and even as far as an experimental ground station in Australia, which allows for a better view of the spacecraft from the Southern Hemisphere. Leading up to the launch, the operations team had been making monthly trips to the Houston and White Sands ground stations to perform maintenance and simulations of various stages of the Artemis mission — from prelaunch to launch to the journey to the moon and back to the splashdown at the end of the mission.
"Doing these monthly simulations is important so we all stay fresh and engaged, especially when there is a launch delay," says Khatri, who adds that team members have had the opportunity to meet and speak with the four astronauts several times during these trips.
Lessons learned throughout the Artemis II mission will pave the way for humans to return to the lunar surface and beyond, eventually to Mars. Through the Artemis program, NASA will travel farther into space and explore more of the moon while creating an enduring presence in deep space and a legacy for future generations.
O2O is funded by the Space Communication and Navigation (SCaN) program at NASA Headquarters in Washington. O2O was developed by a team of engineers from NASA’s Goddard Space Flight Center and Lincoln Laboratory. This partnership has led to multiple lasercom missions, such as the 2013 Lunar Laser Communication Demonstration (LLCD), the 2021 LCRD, the 2022 TeraByte Infrared Delivery (TBIRD), and the 2023 ILLUMA-T.
EFF’s Submission to the UN OHCHR on Protection of Human Rights Defenders in the Digital Age
Governments around the world are adopting new laws and policies aimed at addressing online harms, including laws intended to curb cybercrime and disinformation, and ostensibly protect user safety. While these efforts are often framed as necessary responses to legitimate concerns, they are increasingly being used in ways that restrict fundamental rights.
In a recent submission to the United Nations Office of the High Commissioner for Human Rights, we highlighted how these evolving regulatory approaches are affecting human rights defenders (HRDs) and the broader digital environment in which they operate.
Threats to Human Rights Defenders
Across multiple regions, cybercrime and national security laws are being applied to prosecute lawful expression, restrict access to information, and expand state surveillance. In some cases, these measures are implemented without adequate judicial oversight or clear safeguards, raising concerns about their compatibility with international human rights standards.
Regulatory developments in one jurisdiction are also influencing approaches elsewhere. The UK’s Online Safety Act, for example, has contributed to the global diffusion of “duty of care” frameworks. In other contexts, similar models have been adopted with fewer protections, including provisions that criminalize broadly defined categories of speech or require user identification, increasing risks for those engaged in the defense of human rights.
At the same time, disruptions to internet access—including shutdowns, throttling, and geo-blocking—continue to affect the ability of HRDs to communicate, document abuses, and access support networks. These measures can have significant implications not only for freedom of expression, but also for personal safety, particularly in situations of conflict or political unrest.
The expanded use of digital surveillance technologies further compounds these risks. Spyware and biometric monitoring systems have been deployed against activists and journalists, in some cases across national borders. These practices result in intimidation, detention, and other forms of retaliation.
The practices of social media platforms can also put human rights defenders—and their speech—at risk. Content moderation systems that rely on broadly defined policies, automated enforcement, and limited transparency can result in the removal or suppression of speech, including documentation of human rights violations. Inconsistent enforcement across languages and regions, as well as insufficient avenues for redress, disproportionately affects HRDs and marginalized communities.
Putting Human Rights First
These trends underscore the importance of ensuring that regulatory and corporate responses to online harms are grounded in human rights principles. This includes adopting clear and narrowly tailored legal frameworks, ensuring independent oversight, and providing effective safeguards for privacy, expression, and association.
It also requires meaningful engagement with civil society. Human rights defenders bring essential expertise on the local and contextual impacts of digital policies, and their participation is critical to developing effective and rights-respecting approaches.
As digital technologies continue to shape civic space, protecting the individuals and communities who rely on them to advance human rights remains an urgent priority.
You can read our full submission here.
War turned Pakistan into a solar power. Will other Asian nations follow?
EPA approves ocean carbon removal test, without mentioning climate
PacifiCorp pares back renewable plans after tax credit repeal
Insurers warn about climate lawsuits against fossil fuel industry
Hochul mulls deferring New York climate ambitions to 2040
California drought, wildfire risks grow as snow falls short
Warming winters lead to more nitrate pollution in drinking water near farms
Brussels unveils change to EU carbon market to fight rising prices
Tesla’s sluggish quarter to reset the new normal for EV sales
Possible US Government iPhone Hacking Tool Leaked
Wired writes (alternate source):
Security researchers at Google on Tuesday released a report describing what they’re calling “Coruna,” a highly sophisticated iPhone hacking toolkit that includes five complete hacking techniques capable of bypassing all the defenses of an iPhone to silently install malware on a device when it visits a website containing the exploitation code. In total, Coruna takes advantage of 23 distinct vulnerabilities in iOS, a rare collection of hacking components that suggests it was created by a well-resourced, likely state-sponsored group of hackers...
MIT researchers measure traffic emissions, to the block, in real-time
In a study focused on New York City, MIT researchers have shown that existing sensors and mobile data can be used to generate a near real-time, high-resolution picture of auto emissions, which could be used to develop local transportation and decarbonization policies.
The new method produces much more detailed data than some other common approaches, which use intermittent samples of vehicle emissions. The researchers say it is also more practical and scales up better than some studies that have aimed for very granular emissions data from a small number of automobiles at once. The work helps bridge the gap between less-detailed citywide emissions inventories and highly detailed analyses based on individual vehicles.
“Our model, by combining real-time traffic cameras with multiple data sources, allows extrapolating very detailed emission maps, down to a single road and hour of the day,” says Paolo Santi, a principal research scientist in the MIT Senseable City Lab and co-author of a new paper detailing the project’s results. “Such detailed information can prove very helpful to support decision-making and understand effects of traffic and mobility interventions.”
Carlo Ratti, director of the MIT Senseable City Lab, notes that the research “is part of our lab’s ongoing quest into hyperlocal measurements of air quality and other environmental factors. By integrating multiple streams of data, we can reach a level of precision that was unthinkable just a few years ago — giving policymakers powerful new tools to understand and protect human health.”
The new method also protects privacy, since it uses computer vision techniques to recognize types of vehicles, but without compiling license plate numbers. The study leverages technologies, including those already installed at intersections, to yield richer data about vehicle movement and pollution.
“The very basic idea is just to estimate traffic emissions using existing data sources in a cost-effective way,” says Songhua Hu, a former postdoc in the Senseable City Lab, and now an assistant professor at City University of Hong Kong.
The paper, “Ubiquitous Data-driven Framework for Traffic Emission Estimation and Policy Evaluation,” is published in Nature Sustainability.
The authors are Hu; Santi; Tom Benson, a researcher in the Senseable City Lab; Xuesong Zhou, a professor of transportation engineering at Arizona State University; An Wang, an assistant professor at Hong Kong Polytechnic University; Ashutosh Kumar, a visiting doctoral student at the Senseable City Lab; and Ratti. The MIT Senseable City Lab is part of MIT’s Department of Urban Studies and Planning.
Manhattan measurements
To conduct the study, the researchers used images from 331 cameras already in use in Manhattan intersections, along with anonymized location records from over 1.75 million mobile phones. Applying vehicle-recognition programs and defining 12 broad categories of automobiles, the scholars found they could correctly place 93 percent of vehicles in the right category. The imaging also yielded important information about the specific ways traffic signals affect traffic flow. That matters because traffic signals are a major reason for stop-and-go driving patterns, which strongly affect urban emissions but are often omitted in conventional inventories.
The mobile phone data then provided rich information about the overall patterns of traffic and movement of individual vehicles throughout the city. The scholars combined the camera and phone data with known information about emissions rates to arrive at their own emissions estimates for New York City.
“We just need to input all emission-related information based on existing urban data sources, and we can estimate the traffic emissions,” Hu says.
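A highly simplified sketch of that combination step is below; the vehicle counts, segment length, and per-kilometer emission factors are all invented for illustration, and the paper’s actual model is far richer, folding in signal timing and stop-and-go dynamics.

```python
# Toy sketch of segment-level emission estimation: camera-derived counts per
# vehicle category x segment length x per-km emission factor. All numbers
# here are invented for illustration, not taken from the study.

EMISSION_G_PER_KM = {"car": 192.0, "suv": 250.0, "bus": 1300.0, "truck": 900.0}
STOP_PENALTY = {0: 1.0, 1: 1.15, 2: 1.3}  # crude multiplier for stop-and-go driving

def segment_co2_grams(counts, length_km, stops=0):
    """CO2 for one road segment over one hour, given per-category vehicle counts."""
    base = sum(n * EMISSION_G_PER_KM[cat] * length_km for cat, n in counts.items())
    return base * STOP_PENALTY.get(stops, 1.3)

# One block-hour: counts from a camera, a 0.1 km segment, one signalized stop
counts = {"car": 420, "suv": 180, "bus": 12, "truck": 25}
print(f"{segment_co2_grams(counts, 0.1, stops=1) / 1000:.1f} kg CO2")  # ~18.8 kg
```

The real framework replaces each invented constant with measured inputs: camera-derived vehicle categories and signal behavior, phone-derived traffic flows, and published emission rates.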
Moreover, the researchers evaluated the changes in emissions that might occur in different scenarios when traffic patterns, or vehicle types, also change.
For one, they modeled what would happen to emissions if a certain percentage of travel demand shifted from private vehicles to buses. In another scenario, they looked at what would happen if morning and evening rush hour times were spread out a bit longer, leaving fewer vehicles on the road at once. They also modeled the effects of replacing fine-grained emissions inputs with citywide averages — finding that the rougher emissions estimates could vary widely, from −49 percent to 25 percent of the more fine-tuned results. That underscores how seemingly small simplifications can introduce large errors into emission estimates.
Major emissions drop
On one level, this work involved altering inputs into the model and seeing what emerged. But one scenario the researchers studied is based on a real-world change: In January 2025, New York City implemented congestion pricing south of 60th Street in Manhattan.
To study that, the researchers looked at what happened to vehicle traffic at intervals of two, four, six, and eight weeks after the program began. Overall, congestion pricing lowered traffic volume by about 10 percent, but emissions fell even more sharply, by 16 to 22 percent.
This finding aligns with a previous study by researchers at Cornell University, which reported a 22 percent reduction in particulate matter (PM2.5) levels within the pricing zone. The MIT team also found that these reductions were not evenly distributed across the network, with larger declines on some major streets and more mixed effects outside the pricing zone.
“We see these kinds of huge changes after the congestion pricing began,” Hu says. “I think that’s a demonstration that our model can be very helpful if a government really wants to know if a new policy converts into real-world impact.”
There are additional forms of data that could be fed into the researchers’ new method. For instance, in related work in Amsterdam, the team leveraged dashboard cams from vehicles to yield rich information about vehicle movement.
“With our model we can make any camera used in cities, from the hundreds of traffic cameras to the thousands of dash cams, a powerful device to estimate traffic emissions in real-time,” says Fábio Duarte, the associate director of research and design at the MIT Senseable City Lab, who has worked on multiple related studies.
The research was supported by the city of Amsterdam, the AMS Institute, and Abu Dhabi’s Department of Municipalities and Transport.
It was also supported by the MIT Senseable City Consortium, which consists of Atlas University, the city of Laval, the city of Rio de Janeiro, Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, the Dubai Future Foundation, FAE Technology, KAIST Center for Advanced Urban Systems, Sondotecnica, Toyota, and Volkswagen Group America.
Evaluating the ethics of autonomous systems
Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.
But while these AI-driven outputs may be technically optimal, are they fair? What if a low-cost power distribution strategy leaves disadvantaged neighborhoods more vulnerable to outages than higher-income areas?
To help stakeholders quickly pinpoint potential ethical dilemmas before deployment, MIT researchers developed an automated evaluation method that balances the interplay between measurable outcomes, like cost or reliability, and qualitative or subjective values, such as fairness.
The system separates objective evaluations from user-defined human values, using a large language model (LLM) as a proxy for humans to capture and incorporate stakeholder preferences.
The adaptive framework selects the best scenarios for further evaluation, streamlining a process that typically requires costly and time-consuming manual effort. These test cases can show situations where autonomous systems align well with human values, as well as scenarios that unexpectedly fall short of ethical criteria.
“We can insert a lot of rules and guardrails into AI systems, but those safeguards can only prevent the things we can imagine happening. It is not enough to say, ‘Let’s just use AI because it has been trained on this information.’ We wanted to develop a more systematic way to discover the unknown unknowns and have a way to predict them before anything bad happens,” says senior author Chuchu Fan, an associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).
Fan is joined on the paper by lead author Anjali Parashar, a mechanical engineering graduate student; Yingke Li, an AeroAstro postdoc; and others at MIT and Saab. The research will be presented at the International Conference on Learning Representations.
Evaluating ethics
In a large system like a power grid, evaluating the ethical alignment of an AI model’s recommendations in a way that considers all objectives is especially difficult.
Most testing frameworks rely on pre-collected data, but labeled data on subjective ethical criteria are often hard to come by. In addition, because ethical values and AI systems are both constantly evolving, static evaluation methods based on written codes or regulatory documents require frequent updates.
Fan and her team approached this problem from a different perspective. Drawing on their prior work evaluating robotic systems, they developed an experimental design framework to identify the most informative scenarios, which human stakeholders would then evaluate more closely.
Their two-part system, called Scalable Experimental Design for System-level Ethical Testing (SEED-SET), incorporates quantitative metrics and ethical criteria. It can identify scenarios that effectively meet measurable requirements and align well with human values, and vice versa.
“We don’t want to spend all our resources on random evaluations. So, it is very important to guide the framework toward the test cases we care the most about,” Li says.
Importantly, SEED-SET does not need pre-existing evaluation data, and it adapts to multiple objectives.
For instance, a power grid may have several user groups, including a large rural community and a data center. While both groups may want low-cost and reliable power, each group’s priority from an ethical perspective may vary widely.
These ethical criteria may not be well-specified, so they can’t be measured analytically.
The power grid operator wants to find the most cost-effective strategy that best meets the subjective ethical preferences of all stakeholders.
SEED-SET tackles this challenge by splitting the problem into two, following a hierarchical structure. An objective model considers how the system performs on tangible metrics like cost. Then a subjective model that considers stakeholder judgments, like perceived fairness, builds on the objective evaluation.
“The objective part of our approach is tied to the AI system, while the subjective part is tied to the users who are evaluating it. By decomposing the preferences in a hierarchical fashion, we can generate the desired scenarios with fewer evaluations,” Parashar says.
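A skeletal sketch of that two-stage structure is below. Everything in it, from the scenario fields to the scoring weights and the stand-in for the LLM, is invented for illustration; it shows the division of labor, not the actual SEED-SET implementation.

```python
# Skeletal sketch of the hierarchical split: an objective model scores scenarios
# on measurable metrics, then a subjective model (an LLM standing in for human
# stakeholders) compares the survivors pairwise. All details are illustrative.
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    cost: float                # measurable: dollars
    reliability: float         # measurable: fraction of demand served
    rural_outage_hours: float  # feeds the subjective fairness judgment

def objective_score(s: Scenario) -> float:
    return s.reliability - 0.001 * s.cost  # invented weighting of tangible metrics

def llm_prefers(a: Scenario, b: Scenario) -> Scenario:
    # Stand-in for an LLM prompted with stakeholder values; here the encoded
    # "ethical preference" is simply fewer outage hours for the rural community.
    return a if a.rural_outage_hours <= b.rural_outage_hours else b

candidates = [Scenario(cost=random.uniform(1_000, 5_000),
                       reliability=random.uniform(0.90, 1.0),
                       rural_outage_hours=random.uniform(0, 40))
              for _ in range(100)]
# Stage 1 (objective): shortlist the scenarios that score best on tangible metrics.
shortlist = sorted(candidates, key=objective_score, reverse=True)[:8]
# Stage 2 (subjective): pairwise comparisons pick the ethically preferred scenario.
best = shortlist[0]
for challenger in shortlist[1:]:
    best = llm_prefers(best, challenger)
print(best)
```

In the actual framework, the next batch of candidates is chosen adaptively from simulation results rather than sampled at random, but the decomposition into an objective stage with a subjective stage on top of it is the same.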
Encoding subjectivity
To perform the subjective assessment, the system uses an LLM as a proxy for human evaluators. The researchers encode the preferences of each user group into a natural language prompt for the model.
The LLM uses these instructions to compare two scenarios, selecting the preferred design based on the ethical criteria.
“After seeing hundreds or thousands of scenarios, a human evaluator can suffer from fatigue and become inconsistent in their evaluations, so we use an LLM-based strategy instead,” Parashar explains.
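A sketch of what encoding a group’s preferences into a prompt might look like follows; the phrasing and the query_llm() helper are hypothetical, not taken from the paper.

```python
# Hypothetical pairwise-comparison prompt; the wording and the query_llm()
# helper are illustrative inventions, not the paper's actual prompt.
def build_comparison_prompt(group_values: str, scenario_a: str, scenario_b: str) -> str:
    return (
        f"You represent stakeholders who value: {group_values}\n"
        f"Scenario A: {scenario_a}\n"
        f"Scenario B: {scenario_b}\n"
        "Which scenario better reflects these values? Answer 'A' or 'B'."
    )

prompt = build_comparison_prompt(
    "equitable outage risk between rural communities and commercial users",
    "Rural feeder sheds load for 3 hours at peak demand; the data center is unaffected.",
    "Load shedding is rotated: rural and commercial users each see 1.5 hours of downtime.",
)
# choice = query_llm(prompt)  # hypothetical LLM call returning 'A' or 'B'
print(prompt)
```

Because the same prompt is applied consistently across every comparison, the LLM proxy avoids the fatigue-driven inconsistency that human evaluators develop over hundreds of scenarios.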
SEED-SET uses the selected scenario to simulate the overall system (in this case, a power distribution strategy). These simulation results guide its search for the next best candidate scenario to test.
In the end, SEED-SET intelligently selects the most representative scenarios that either meet or are not aligned with objective metrics and ethical criteria. In this way, users can analyze the performance of the AI system and adjust its strategy.
For instance, SEED-SET can pinpoint cases of power distribution that prioritize higher-income areas during periods of peak demand, leaving underprivileged neighborhoods more prone to outages.
To test SEED-SET, the researchers evaluated realistic autonomous systems, like an AI-driven power grid and an urban traffic routing system. They measured how well the generated scenarios aligned with ethical criteria.
The system generated more than twice as many optimal test cases as the baseline strategies in the same amount of time, while uncovering many scenarios other approaches overlooked.
“As we shifted the user preferences, the set of scenarios SEED-SET generated changed drastically. This tells us the evaluation strategy responds well to the preferences of the user,” Parashar says.
To measure how useful SEED-SET would be in practice, the researchers will need to conduct a user study to see if the scenarios it generates help with real decision-making.
In addition to running such a study, the researchers plan to explore the use of more efficient models that can scale up to larger problems with more criteria, such as evaluating LLM decision-making.
This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency.
Is “Hackback” Official US Cybersecurity Strategy?
The 2026 US “Cyber Strategy for America” document is mostly the same thing we’ve seen out of the White House for over a decade, but with a more aggressive tone.
But one sentence stood out: “We will unleash the private sector by creating incentives to identify and disrupt adversary networks and scale our national capabilities.” This sounds like a call for hackback: giving private companies permission to conduct offensive cyber operations.
The Economist noticed (alternate link) this, too.
I think this is an incredibly dumb idea:
In warfare, the notion of counterattack is extremely powerful. Going after the enemy—its positions, its supply lines, its factories, its infrastructure—is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to an attorney, and to be presumed innocent until proven guilty...
