Feed aggregator
The Anthropic-DOD Conflict: Privacy Protections Shouldn’t Depend On the Decisions of a Few Powerful People
The U.S. military has officially ended its $200 million contract with AI company Anthropic and has ordered all other military contractors to cease use of its products. Why? Because of a dispute over what the government could and could not use Anthropic’s technology to do. Anthropic had made clear since it first signed the contract with the Pentagon in 2025 that it did not want its technology used for mass surveillance of people in the United States or for fully autonomous weapons systems. Starting in January, that became a problem for the Department of Defense, which ordered Anthropic to give it unrestricted use of the technology. Anthropic refused, and the DoD retaliated.
There is a lot we could learn from this conflict, but the biggest takeaway is this: the state of your privacy is being decided by contract negotiations between giant tech companies and the U.S. government—two entities with spotty track records for caring about your civil liberties. It’s good when CEOs step up and do the right thing—but that’s not a sustainable or reliable foundation for our rights. Given the government’s loose interpretations of the law, its ability to find loopholes to surveil you, and its willingness to do illegal spying, we need serious and proactive legal restrictions to prevent it from gobbling up all the personal data it can acquire and using even routine bureaucratic data for punitive ends.
Imposing and enforcing such restrictions is properly a role for Congress and the courts, not the private sector.
The companies know this. When speaking about the specific risk that AI poses to privacy, the CEO of Anthropic, Dario Amodei, said in an interview, “I actually do believe it is Congress’s job. If, for example, there are possibilities with domestic mass surveillance—the government buying of bulk data that has been produced on Americans (locations, personal information, political affiliations) to build profiles, and it’s now possible to analyze all of that with AI—the fact that that is legal—that seems like the judicial interpretation of the Fourth Amendment has not caught up, or the laws passed by Congress have not caught up.”
The example he cites here is a scarily realistic one—because it’s already happening. Customs and Border Protection has tapped into the online advertising world to buy data on Americans for surveillance purposes. Immigration and Customs Enforcement has been using a tool that maps millions of people’s devices based on purchased cell phone data. The Office of the Director of National Intelligence has proposed a centralized data broker marketplace to make it easier for intelligence agencies to buy commercially available data. Considering the government’s massive contracts with companies that could do this analysis—including Palantir, which performs AI-enabled analysis of huge amounts of data—the concerns are incredibly well founded.
But Congress is sadly neglecting its duties. For example, a bill that would close the loophole of the government buying personal information passed the House of Representatives in 2024, but the Senate stopped it. And because Congress did not act, Americans must rely on a tech company CEO to try to protect our privacy—or at least to refuse to help the government violate it.
Privacy in the digital age should be an easy bipartisan issue. Given that it’s wildly popular (71% of American adults are concerned about the government’s use of their data, and, among adults who have heard of AI, 70% have little to no trust in how companies use those products), you would think politicians would be leaping over each other to create the best legislation and companies would be promising us the most high-end privacy-protecting features. Instead, for the time being, we are largely left adrift in a sea of constant surveillance, having to paddle our own life rafts.
EFF has fought, and always will fight, for real and sustainable protections for our civil liberties, including a world where our privacy does not rest upon the whims of CEOs and backroom deals with the surveillance state.
Injectable “satellite livers” could offer an alternative to liver transplantation
More than 10,000 Americans who suffer from chronic liver disease are on a waitlist for a liver transplant, but there are not enough donated organs for all of those patients. Additionally, many people with liver failure aren’t eligible for a transplant if they are not healthy enough to tolerate the surgery.
To help those patients, MIT engineers have developed “mini livers” that could be injected into the body and take over the functions of the failing liver.
In a new study in mice, the researchers showed that these injected liver cells could remain viable in the body for at least two months, and they were able to generate many of the enzymes and other proteins that the liver produces.
“We think of these as satellite livers. If we could deliver these cells into the body, while leaving the sick organ in place, that would provide booster function,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science (IMES).
Bhatia is the senior author of the new study, which appears today in the journal Cell Biomaterials. MIT postdoc Vardhman Kumar is the paper’s lead author.
Restoring liver function
The human liver plays a role in about 500 essential functions, including regulation of blood clotting, removing bacteria from the bloodstream, and metabolizing drugs. Most of these functions are performed by cells called hepatocytes.
Over the past decade, Bhatia’s lab has been working on ways to restore hepatocyte function without a surgical liver transplant. One possible approach is to embed hepatocytes into a biomaterial such as a hydrogel, but these gels also have to be surgically implanted.
Another option is to inject hepatocytes into the body, which eliminates the need for surgery. In this study, Bhatia’s lab sought to improve on this strategy by providing an engineered niche that could enhance the cells’ survival and facilitate noninvasive monitoring of graft health.
To achieve that, the researchers came up with the idea of injecting cells along with hydrogel microspheres that would help them stay together and form connections with nearby blood vessels. These spheres have special properties that allow them to act like a liquid when they are closely packed together, so they can be injected through a syringe and then regain their solid structure once inside the body.
In recent years, researchers have explored using hydrogel microspheres to promote wound healing, as they help cells to migrate into the spaces between the spheres and build new tissue. In the new study, the MIT team adapted them to help hepatocytes form a stable tissue graft after injection.
“What we did is use this technology to create an engineered niche for cell transplantation,” Kumar says. “If the cells are injected in the absence of these spheres, they would not integrate efficiently with the host, but these microspheres provide the hepatocytes with a niche where they can stay localized and become connected to the host circulation much faster.”
The injected mixture also includes fibroblast cells — supportive cells that help the hepatocytes survive and promote the growth of blood vessels into the tissue.
Working with Nicole Henning, an ultrasound research specialist at the Koch Institute, the researchers developed a way to inject the cell mixture using a syringe guided by ultrasound. After injection, the researchers can also use ultrasound to monitor the long-term stability of the implant.
In this study, the mini livers were injected into the fat tissue in the belly. In the future, similar grafts could be delivered to other sites in the body, such as into the spleen or near the kidneys. As long as they have enough space and access to blood vessels, the injected hepatocytes can function similarly to hepatocytes in the liver.
“For a vast majority of liver disorders, the graft does not need to sit close to the liver,” Kumar says.
An alternative to transplantation
In tests in mice, the researchers injected the mixture of liver cells and microspheres into an area of fatty tissue known as the perigonadal adipose tissue. Once the cells are localized in the body, they form a stable, compact structure. Over time, blood vessels begin to grow into the graft area, helping the injected hepatocytes to stay healthy.
“The new blood vessels formed right next to the hepatocytes, which is why they were able to survive,” Kumar says. “They were able to get the nutrients delivered right to them, they were able to function the way they're supposed to, and they produced the proteins that we expect them to.”
After injection, the cells remained viable and able to secrete specialized proteins into the host circulation for eight weeks, the length of the study. That suggests that the therapy could potentially work as a long-term treatment for liver disease, the researchers say.
“The way we see this technology is it can provide an alternative to surgery, but it can also serve as a bridge to transplantation where these grafts can provide support until a donor organ becomes available,” Kumar says. “And if we think they might need another therapy or more grafts, the barriers to do that are much less with this injectable technology than undergoing another surgery.”
With the current version of this technology, patients would likely need to take immunosuppressive drugs, but the researchers are exploring the possibility of developing “stealthy” hepatocytes that could evade the immune system, or using the hydrogel microspheres to deliver immunosuppressants locally.
The research was funded by the Koch Institute Support (core) grant from the National Cancer Institute, the National Institutes of Health, the Wellcome Leap HOPE Program, a National Science Foundation Graduate Research Fellowship, and the Howard Hughes Medical Institute.
EFF to Supreme Court: Shut Down Unconstitutional Geofence Searches
WASHINGTON, D.C. – The Electronic Frontier Foundation (EFF), the American Civil Liberties Union (ACLU), the ACLU of Virginia, and the Center on Privacy & Technology at Georgetown Law filed a brief Monday urging the U.S. Supreme Court to rule that invasive geofence warrants are unconstitutional.
The brief argues that geofence warrants—which compel companies to provide information on every electronic device in a given area during a given time period—are the digital version of the exploratory rummaging that the drafters of the Fourth Amendment specifically intended to prevent.
Unlike typical warrants, geofence warrants do not name a suspect or even target a specific individual or device. Instead, police cast a digital dragnet, demanding location data on every device in a geographic area during a certain time period, regardless of whether the device owner has any connection to the crime under investigation. These searches simultaneously impact the privacy of millions and turn innocent bystanders into suspects, just for being in the wrong place at the wrong time.
The Supreme Court agreed earlier this year to hear Chatrie v. United States, in which a 2019 geofence warrant compelled Google to search the accounts of all its hundreds of millions of users to see if any one of them was within a radius police drew around a Northern Virginia crime scene. This area amounted to several football fields in size and encompassed numerous homes, businesses, and a church. The amicus brief, filed Monday, argues that allowing this sweeping power to go unchecked is inconsistent with the basic freedoms of a democratic society.
"This is not traditional police work, but rather the leveraging of new and powerful technology to claim a novel and formidable power over the people," the brief states. "By their very nature, geofence searches turn innocent bystanders into suspects and leverage even purportedly limited searches into larger dragnets, causing intrusions at a scale far beyond those held unconstitutional in the physical world."
The brief also cautioned the Court not to authorize future geofence warrants based on the facts of the Chatrie case, which reflect how such searches were conducted in 2019. Since July 2025, mass geofence searches of Google users’ location data have not been possible. However, Google is not the only company collecting location data, nor the only way for police to access mass amounts of data on people with no connection to a crime. All suspicionless searches drag a net through vast swaths of information in hopes of identifying previously unknown suspects—ensnaring innocent bystanders along the way.
"To courts, to lawmakers, and to tech companies themselves, EFF has repeatedly argued that these high-tech efforts to pull suspects out of thin air cannot be constitutional, even with a warrant," said EFF Surveillance Litigation Director Andrew Crocker. "The Supreme Court should find once and for all that geofence searches are just the kind of impermissible general warrants that the Framers of the Constitution so reviled."
For the brief: https://www.eff.org/document/chatrie-v-united-states-eff-supreme-court-amicus-brief
Tags: geofence warrants
Contact: Andrew Crocker, Surveillance Litigation Director, andrew@eff.org

LAB14 joins the MIT.nano Consortium
LAB14 GmbH, a corporate network based in Germany that unites eight high-tech companies focused on nanofabrication, microfabrication, and surface analysis, has joined the MIT.nano Consortium.
“The addition of LAB14 to the MIT.nano Consortium reinforces the importance of collaboration to advance the next set of great ideas,” says Vladimir Bulović, the founding faculty director of MIT.nano and the Fariborz Maseeh (1990) Professor of Emerging Technologies at MIT. “At MIT.nano, we are thrilled when our shared-access facility leads to cross-disciplinary discoveries. LAB14 carries this same motivation by assembling the constellation of remarkable interconnected industry partners.”
Comprising eight companies — Heidelberg Instruments, Nanoscribe, GenISys, Notion Systems, 40-30, Amcoss, SPECSGROUP, and Nanosurf — LAB14 is focused on developing products and services that are fundamental to micro- and nanofabrication technologies, supporting industrial and research-driven applications with complex manufacturing and analysis requirements.
The companies of LAB14 operate under a shared organizational structure that enables closer coordination in technology development. This setup allows for faster research progress and more efficient manufacturing workflows.
“Joining the MIT.nano Consortium marks a significant milestone for LAB14 and our companies,” says Martin Wynaendts van Resandt, CEO of LAB14. “This participation allows our network to collaborate directly with world-leading researchers, accelerating innovation in micro- and nanotechnology."
As part of this engagement, LAB14 will provide two pieces of equipment to be installed at MIT.nano within the coming year. The VPG 300 DI maskless stepper, a high-performance, direct-write system from Heidelberg Instruments, will be positioned inside MIT.nano’s cleanroom. This tool will allow MIT.nano users to pattern structures smaller than 500 nanometers directly onto wafers with accuracy and uniformity comparable to typical high-resolution i-line lithography. Equipped with advanced multi-layer alignment and mix‑and‑match functions, the VPG creates a seamless link between laser direct writing and e‑beam lithography.
The EnviroMETROS X-ray photoelectron spectroscopy (XPS/HAXPES) tool by SPECSGROUP will join the suite of Characterization.nano instruments. This system specializes in nondestructive depth-profile measurements, using multiple X-ray energies to determine the thickness of thin-film samples and their chemical compositions with high precision. It supports various analyses across a wide pressure range, allowing MIT.nano users to examine thin‑film materials under more realistic environmental conditions and to observe how they change during operation.
The MIT.nano Consortium is a platform for academia-industry collaboration, fostering research and innovation in nanoscale science and engineering. Consortium members gain unparalleled access to MIT.nano and its dynamic user community, providing opportunities to share expertise and guide advances in nanoscale technology.
MIT.nano continues to welcome new companies as sustaining members. For details, and to see a list of current members, visit the MIT.nano Consortium page.
On Moltbook
The MIT Technology Review has a good article on Moltbook, the supposed AI-only social network:
Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.
“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”...
The oil island that could break Iran
Trump hates renewables. The Iran war may help them.
War insurers cancel ship coverage as Iran conflict expands
Supreme Court fight over HFCs takes aim at power of Congress
Virginia Democrats follow through on RGGI return
Key NY lawmakers briefed on state climate law concerns
How NY heat pump users got stuck with higher bills
Shrinking North American bird population is getting worse faster
Deep freeze and drought are fueling a massive Florida wildfire
Author Correction: The hard road back from overshoot
Nature Climate Change, Published online: 03 March 2026; doi:10.1038/s41558-026-02604-1
Engineering confidence to navigate uncertainty
Flying on Mars — or any other world — is an extraordinary challenge. An autonomous spacecraft, operating millions of miles from pilots or engineers who could intervene on Earth, must be able to navigate unfamiliar and changing environments, avoid obstacles, land on uncertain terrain, and make decisions entirely on its own. Every maneuver depends on careful perception, planning, and control systems that are fault-tolerant, allowing the craft to recover if something goes wrong. A single miscalculation can leave a multi-million dollar spacecraft face-down on the surface, ending the mission before it even begins.
“This problem is in no way solved, in industry or even in research settings,” says Nicholas Roy, the Jerome C. Hunsaker Professor in the MIT Department of Aeronautics and Astronautics (AeroAstro). “You’ve got to bring together a lot of pieces of code, software, and integrate multiple pieces of hardware. Putting those together is not trivial.”
Not trivial, but for students nearing the culmination of their Course 16 undergraduate careers, far from impossible. In class 16.85 Autonomy Capstone (Design and Testing of Autonomous Vehicles), students design, implement, deploy, and test a full software architecture for flying autonomous systems. These systems have wide-ranging applications, from urban air-mobility and reusable launch vehicles to extraterrestrial exploration. With robust autonomous technology, vehicles can operate far from home while engineers watch from mission control centers not too different from the high bay in AeroAstro’s Kresa Center for Autonomous Systems.
Roy and Jonathan How, Ford Professor of Engineering, developed the new course to build on the foundations of class 16.405 (Robotics: Science and Systems), which introduces students to working with complex robotic platforms and autonomous navigation through ground vehicles with pre-built software. 16.85 applies those same principles to flight: students get a basic quadrotor drone and an entirely blank slate on which to build their own navigation systems. The vehicles are then tested on an obstacle course featuring dubious landing pads and uncertain terrain. Students work in large teams (for this first run, two teams of seven — the SLAMdunkers and the Spelunkers) designed to mirror real-world missions where coordination across roles is essential.
“The vehicles need to be able to differentiate between all these hidden risks that are in the mission and the environment that they’re in and still survive,” says How. “We really want the students to learn how to make a system that they have confidence in.”
Mission: Figure it out, together
“The specific mission we gave them this semester is to imagine that you are an aircraft of some kind, and you’ve got to go and explore the surface of an extraterrestrial body like Mars or the moon,” Roy explains. “You need to use onboard sensors to fly around and explore, build a map, identify interesting objects, and then land safely on what is probably not a flat surface, or not a perfectly horizontal surface.”
A mission of this magnitude is far too complex for any one engineer to tackle alone, but that too poses a challenge for a large team. “The hardest problems these days are coordination problems,” says Andrew Fishberg, a graduate student in the Aerospace Controls Laboratory and one of three teaching assistants (TAs) for the course. “To use the robotics term, a team of this size is something of a heterogeneous swarm. Not everyone has the same skill set, but everyone shows up with something to contribute, and managing that together is a challenge.”
The challenge asks students to apply multiple types of “systems thinking” to the task. Relationships, interdependencies, and feedback loops are critical to their software architecture, and equally important in how students communicate and coordinate with their teammates. “Writing the reports and communicating with a team feels like overhead sometimes, but if you don’t communicate, you have a team of one,” says Fishberg. “We don’t have these ‘solo inventor’ situations where one person figures everything out anymore — it’s hundreds of people building this huge thing.”
The new faces of flight
Students in the class say they are eager to enter the rapidly evolving field, working with unconventional tools and vehicles that go beyond traditional applications.
“We continue to send rovers to extraterrestrial bodies. But there is an increasing interest in deploying unmanned systems to explore Earth,” says Roy. “There’s lots of places on Earth where we want to send robots to go and explore, places where it’s hazardous for humans to go.” That expanding set of applications is exactly what draws students to the field.
“I was really excited for the idea of a new class, especially one that was focused on autonomy, because that’s where I see my career going,” says senior Norah Miller. “This class has given me a really great experience in what it feels like to develop software from zero to a full flying mission.”
The Design and Testing of Autonomous Vehicles course offers a unique perspective for instructors and TAs who have known many of the students throughout their undergraduate careers. As a capstone, it provides an opportunity to see that growth come full circle. “A couple years ago we’re solving differential equations, and now they’re implementing software they wrote on a quadrotor in the high bay,” says How.
After weeks of learning, building, testing, refinement, and finally, flight, the results reflected the goals of the course. “It was exactly what we wanted to see happen,” says Roy. “We gave them a pretty challenging mission. We gave them hardware that should be capable of completing the mission, but not guaranteed. And the students have put in a tremendous amount of effort and have really risen to the challenge.”
EFF to Court: Don’t Make Embedding Illegal
Who should be directly liable for online infringement – the entity that serves it up or a user who embeds a link to it? For almost two decades, most U.S. courts have held that the former is responsible, applying a rule called the server test. Under the server test, whoever controls the server that hosts a copyrighted work—and therefore determines who has access to what and how—can be directly liable if that content turns out to be infringing. Anyone else who merely links to it can be secondarily liable in some circumstances (for example, if that third party promotes the infringement), but otherwise isn’t on the hook.
The test just makes sense. In the analog world, a person is free to tell others where they may view a third party’s display of a copyrighted work, without being directly liable for infringement if that display turns out to be unlawful. The server test is the straightforward application of the same principle in the online context. A user that links to a picture, video, or article isn’t in charge of transmitting that content to the world, nor are they in a good position to know whether that content violates copyright. In fact, the user doesn’t even control what’s located on the other end of the link—the person that controls the server can change what’s on it at any time, such as swapping in different images, re-editing a video or rewriting an article.
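The distinction is easy to see in markup. In the hypothetical snippet below (all paths and URLs invented for illustration), the first image is stored and transmitted by the site’s own server, while the second is merely embedded: the reader’s browser fetches it directly from a third party’s server, and that third party can change or remove the file at any time.

```html
<!-- Hosted: the site's own server stores and transmits the file,
     so under the server test the site could be directly liable
     if the image infringes. -->
<img src="/images/photo.jpg" alt="Locally hosted photo">

<!-- Embedded: the browser requests the file from another server.
     The embedding page never transmits the image itself and has
     no control over what the remote server returns for this URL. -->
<img src="https://third-party.example/photo.jpg" alt="Embedded photo">
```

In both cases the image appears on the page identically; the server test turns on who actually delivers the bytes, not on what the reader sees.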
But a news publisher, Emmerich Newspapers, wants the Fifth Circuit to reject the server test, arguing that the entity that embeds links to the content is responsible for “displaying” it and, therefore, can be directly liable if the content turns out to be infringing. If they are right, the common act of embedding is a legally fraught activity and a trap for the unwary.
The Court should decline, or it risks destabilizing fundamental and useful online activities. As we explain in an amicus brief filed with several public interest and trade organizations, linking and embedding are not unusual, nefarious, or misleading practices. Rather, the ability to embed external content and code is a crucial design feature of internet architecture, responsible for many of the internet’s most useful functions. Millions of websites—including EFF’s—embed external content or code for everything from selecting fonts and streaming music to providing services like customer support and legal compliance. The server test provides legal certainty for internet users by assigning primary responsibility to the person with the best ability to prevent infringement. Emmerich’s approach, by contrast, invites legal chaos.
Emmerich also claims that altering a URL violates the Digital Millennium Copyright Act’s prohibition on changing or deleting copyright management information. If that is correct, using a link shortener could put users at risk of statutory penalties—an outcome Congress surely did not intend.
Both of these theories would make common internet activities legally risky and undermine copyright’s Constitutional purpose: to promote the creation of and access to knowledge. The district court recognized as much and we hope the appeals court agrees.
Related Cases: Emmerich Newspapers v. Particle MediaEFF to Court: Don’t Make Embedding Illegal
Who should be directly liable for online infringement – the entity that serves it up or a user who embeds a link to it? For almost two decades, most U.S. courts have held that the former is responsible, applying a rule called the server test. Under the server test, whomever controls the server that hosts a copyrighted work—and therefore determines who has access to what and how—can be directly liable if that content turns out to be infringing. Anyone else who merely links to it can be secondarily liable in some circumstances (for example, if that third party promotes the infringement), but isn’t on the hook under most circumstances.
The test just makes sense. In the analog world, a person is free to tell others where they may view a third party’s display of a copyrighted work, without being directly liable for infringement if that display turns out to be unlawful. The server test is the straightforward application of the same principle in the online context. A user that links to a picture, video, or article isn’t in charge of transmitting that content to the world, nor are they in a good position to know whether that content violates copyright. In fact, the user doesn’t even control what’s located on the other end of the link—the person that controls the server can change what’s on it at any time, such as swapping in different images, re-editing a video or rewriting an article.
But a news publisher, Emmerich Newspapers, wants the Fifth Circuit to reject the server test, arguing that the entity that embeds links to the content is responsible for “displaying” it and, therefore, can be directly liable if the content turns out to be infringing. If they are right, the common act of embedding is a legally fraught activity and a trap for the unwary.
The Court should decline, or risk destabilizing fundamental, and useful, online activities. As we explain in an amicus brief filed with several public interest and trade organizations, linking and embedding are not unusual, nefarious, or misleading practices. Rather, the ability to embed external content and code is a crucial design feature of internet architecture, responsible for many of the internet’s most useful functions. Millions of websites—including EFF’s—embed external content or code for everything from selecting fonts and streaming music to providing services like customer support and legal compliance. The server test provides legal certainty for internet users by assigning primary responsibility to the person with the best ability to prevent infringement. Emmerich’s approach, by contrast, invites legal chaos.
Emmerich also claims that altering a URL violates the Digital Millennium Copyright Act’s prohibition on changing or deleting copyright management information. If they are correct, using a link shortener could put users at risk of statutory penalties—an outcome Congress surely did not intend.
Both of these theories would make common internet activities legally risky and undermine copyright’s Constitutional purpose: to promote the creation of and access to knowledge. The district court recognized as much and we hope the appeals court agrees.
Related Cases: Emmerich Newspapers v. Particle Media
W.M. Keck Foundation to support research on healthy aging at MIT
A prestigious grant from the W.M. Keck Foundation to Alison E. Ringel, an MIT assistant professor of biology, will support groundbreaking healthy aging research at the Institute.
Ringel, who is also a core member of the Ragon Institute of Mass General Brigham, MIT, and Harvard, will draw on her background in cancer immunology to create a more comprehensive biomedical understanding of the cause and possible treatments for aging-related decline.
“It is such an honor to receive this grant,” Ringel says. “This support will enable us to draw new connections between immunology and aging biology. As the U.S. population grows older, advancing this research is increasingly important, and this line of inquiry is only possible because of the W.M. Keck Foundation.”
Understanding how to extend healthy years of life is a fundamental question of biomedical research with wide-ranging societal implications. Although modern science and medicine have greatly expanded global life expectancy, it remains unclear why everyone ages differently; some maintain physical and cognitive fitness well into old age, while others become debilitatingly frail later in life.
Our immune systems are adaptable, but they do naturally decline as we get older. One critical component of our immune system is CD8+ T cells, which are known to target and destroy cancerous or damaged cells. As we age, our tissues accumulate cells that can no longer divide. These senescent cells are present throughout our lives, but reach seemingly harmful levels as a normal part of aging, causing tissue damage and diminished resilience under stress.
There is now compelling evidence that the immune system plays a more active role in aging than previously thought.
“Decades of research have revealed that T cells can eliminate cancer cells, and studies of how they do so have led directly to the development of cancer immunotherapy,” Ringel says. “Building on these discoveries, we can now ask what roles T cells play in normal aging, where the accumulation of senescent cells, which are remarkably similar to cancer cells in some respects, may cause health problems later in life.”
In animal models, reconstituting elements of a young immune system has been shown to improve age-related decline, potentially due to CD8+ T cells selectively eliminating senescent cells. A progressive loss of CD8+ T cells’ ability to cull senescent cells could explain some age-related pathology.
Ringel aims to build models for the express purpose of tracking and manipulating T cells in the context of aging and to evaluate how T cell behavior changes over a lifespan.
“By defining the protective processes that slow aging when we are young and healthy, and defining how these go awry in older adults, our goal is to generate knowledge that can be applied to extend healthy years of life,” Ringel says. “I’m really excited about where this research can take us.”
The W.M. Keck Foundation was established in 1954 in Los Angeles by William Myron Keck, founder of The Superior Oil Co. One of the nation’s largest philanthropic organizations, the W.M. Keck Foundation supports outstanding science, engineering, and medical research. The foundation also supports undergraduate education and maintains a program within Southern California to support arts and culture, education, health, and community service projects.
