Feed aggregator
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Global trends in ocean fronts and impacts on the air–sea CO2 flux and chlorophyll concentrations
Nature Climate Change, Published online: 22 January 2026; doi:10.1038/s41558-025-02538-0
Changes in ocean fronts could impact biological productivity and carbon exchange. By analysing satellite and reanalysis data, the authors identify areas with active frontal activity and rapidly changing properties, and highlight their correspondence with surface productivity and CO2 uptake.
Copyright Kills Competition
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
Copyright owners increasingly claim more draconian copyright law and policy will fight back against big tech companies. In reality, copyright gives the most powerful companies even more control over creators and competitors. Today’s copyright policy concentrates power among a handful of corporate gatekeepers—at everyone else’s expense. We need a system that supports grassroots innovation and emerging creators by lowering barriers to entry—ultimately offering all of us a wider variety of choices.
Pro-monopoly regulation through copyright won’t provide any meaningful economic support for vulnerable artists and creators. Because of the imbalance in bargaining power between creators and publishing gatekeepers, trying to help creators by giving them new rights under copyright law is like trying to help a bullied kid by giving them more lunch money for the bully to take.
Entertainment companies’ historical practices bear out this concern. For example, from the late 2000s to the mid-2010s, music publishers and recording companies struck multimillion-dollar direct licensing deals with music streaming companies and video-sharing platforms. Google reportedly paid more than $400 million to a single music label, and Spotify gave the major record labels a combined 18 percent ownership interest in its now-$100 billion company. Yet music labels and publishers frequently fail to share these payments with artists, and artists rarely benefit from these equity arrangements. There’s no reason to think that these same companies would treat their artists more fairly now.
AI Training
In the AI era, copyright may seem like a good way to prevent big tech from profiting from AI at individual creators’ expense—it’s not. In fact, the opposite is true. Developing a large language model requires developers to train the model on millions of works. Requiring developers to license enough AI training data to build a large language model would limit the field to the largest corporations—those that either have their own trove of training data or can afford to strike a deal with one that does. This would result in all the usual harms of limited competition, like higher costs, worse service, and heightened security risks. New, beneficial AI tools that allow people to express themselves or access information might never be built at all.
Legacy gatekeepers have already used copyright to stifle access to information and the creation of new tools for understanding it. Consider, for example, Thomson Reuters v. Ross Intelligence, the first of many copyright lawsuits over the use of works to train AI. ROSS Intelligence was a legal research startup that built an AI-based tool to compete with ubiquitous legal research platforms like Lexis and Thomson Reuters’ Westlaw. ROSS trained its tool using the “West headnotes” that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call “holdings”) that the headnotes identified. The tool didn’t output any of the headnotes, but Thomson Reuters sued ROSS anyway. A federal appeals court is still considering the key copyright issues in the case—which EFF weighed in on last year. EFF hopes that the appeals court will rein in this overbroad interpretation of copyright law. But in the meantime, the case has already forced the startup out of business, eliminating a would-be competitor that might have helped increase access to the law.
Requiring developers to license AI training materials benefits tech monopolists as well. For giant tech companies that can afford to pay, pricey licensing deals offer a way to lock in their dominant positions in the generative AI market by creating prohibitive barriers to entry. The cost of licensing enough works to train an LLM would be prohibitively expensive for most would-be competitors.
The DMCA’s “Anti-Circumvention” Provision
The Digital Millennium Copyright Act’s “anti-circumvention” provision is another case in point. Congress ostensibly passed the DMCA to discourage would-be infringers from defeating Digital Rights Management (DRM) and other access controls and copy restrictions on creative works.
In practice, it’s done little to deter infringement—after all, large-scale infringement already invites massive legal penalties. Instead, Section 1201 has been used to block competition and innovation in everything from printer cartridges to garage door openers, videogame console accessories, and computer maintenance services. It’s been used to threaten hobbyists who wanted to make their devices and games work better. And the problem only gets worse as software shows up in more and more places, from phones to cars to refrigerators to farm equipment. If that software is locked up behind DRM, interoperating with it so you can offer add-on services may require circumvention. As a result, manufacturers get complete control over their products, long after they are purchased, and can even shut down secondary markets (as Lexmark did for printer ink, and Microsoft tried to do for Xbox memory cards).
Giving rights holders a veto on new competition and innovation hurts consumers. Instead, we need balanced copyright policy that rewards creators without impeding competition.
Professor of the practice Robert Liebeck, leading expert on aircraft design, dies at 87
Robert Liebeck, a professor of the practice in the MIT Department of Aeronautics and Astronautics and one of the world’s leading experts on aircraft design, aerodynamics, and hydrodynamics, died on Jan. 12 at age 87.
“Bob was a mentor and dear friend to so many faculty, alumni, and researchers at AeroAstro over the course of 25 years,” says Julie Shah, department head and the H.N. Slater Professor of Aeronautics and Astronautics at MIT. “He’ll be deeply missed by all who were fortunate enough to know him.”
Liebeck’s long and distinguished career in aerospace engineering included a number of foundational contributions to aerodynamics and aircraft design, beginning with his graduate research into high-lift airfoils. His novel designs came to be known as “Liebeck airfoils” and are used primarily for high-altitude reconnaissance airplanes; Liebeck airfoils have also been adapted for use in Formula One racing cars, racing sailboats, and even a flying replica of a giant pterosaur.
He was perhaps best known for his groundbreaking work on blended wing body (BWB) aircraft. He oversaw the BWB project at Boeing during his celebrated five-decade tenure at the company, working closely with NASA on the X-48 experimental aircraft. After retiring as senior technical fellow at Boeing in 2020, Liebeck remained active in BWB research. He served as technical advisor at BWB startup JetZero, which is aiming to build a more fuel-efficient aircraft for both military and commercial use and has set a target date of 2027 for its demonstration flight.
Liebeck was appointed a professor of the practice at MIT in 2000, and taught classes on aircraft design and aerodynamics.
“Bob contributed to the department both in aircraft capstones and also in advising students and mentoring faculty, including myself,” says John Hansman, the T. Wilson Professor of Aeronautics and Astronautics. “In addition to aviation, Bob was very significant in car racing and developed the downforce wing and flap system which has become standard on F1, IndyCar, and NASCAR cars.”
He was a major contributor to the Silent Aircraft Project, a collaboration between MIT and Cambridge University led by Dame Ann Dowling. Liebeck also worked closely with Professor Woody Hoburg ’08 and his research group, advising on students’ research into efficient methods for designing aerospace vehicles. Before Hoburg was accepted into the NASA astronaut corps in 2017, the group produced an open-source Python package, GPkit, for geometric programming, which was used to design a five-day endurance unmanned aerial vehicle for the U.S. Air Force.
“Bob was universally respected in aviation and he was a good friend to the department,” remembers Professor Ed Greitzer.
Liebeck was an AIAA honorary fellow and Boeing senior technical fellow, as well as a member of the National Academy of Engineering, Royal Aeronautical Society, and Academy of Model Aeronautics. He was a recipient of the Guggenheim Medal and ASME Spirit of St. Louis Medal, among many other awards, and was inducted into the International Air and Space Hall of Fame.
An avid runner and motorcyclist, Liebeck is remembered by friends and colleagues for his adventurous nature and generosity of spirit. Throughout a career punctuated by honors and achievements, Liebeck found his greatest satisfaction in teaching. In addition to his role at MIT, he was an adjunct faculty member at the University of California at Irvine and served as a faculty advisor for that university’s Design/Build/Fly and Human-Powered Airplane teams.
“It is the one job where I feel I have done some good — even after a bad lecture,” he told AeroAstro Magazine in 2007. “I have decided that I am finally beginning to understand aeronautical engineering, and I want to share that understanding with our youth.”
Copyright Should Not Enable Monopoly
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
There’s a crisis of creativity in mainstream American culture. We have fewer and fewer studios and record labels and fewer and fewer platforms online that serve independent artists and creators.
At its core, copyright is a monopoly right on creative output and expression. It’s intended to allow people who make things to make a living through those things, to incentivize creativity. To square the circle that is “exclusive control over expression” and “free speech,” we have fair use.
However, we aren’t just seeing artists having a time-limited ability to make money off their creations. We are also seeing large corporations turn into megacorporations, consolidating huge stores of copyrights under one umbrella. When the monopoly right granted by copyright is compounded by the speed and scale of media company mergers, we end up with a crisis in creativity.
People have been complaining about the lack of originality in Hollywood for a long time. What is interesting is that the response from the major studios has rarely been, especially recently, to invest in original programming. Instead, they have increased their copyright holdings through mergers and acquisitions. In today’s consolidated media world, copyright is doing the opposite of its intended purpose: instead of encouraging creativity, it’s discouraging it. The drive to snap up media franchises (or “intellectual properties”) that can generate sequels, reboots, spinoffs, and series for years to come has crowded out truly original and fresh creativity in many sectors. And since copyright terms last so long, there isn’t even a ticking clock to force these corporations to seek out new original creations.
In theory, the internet should provide a counterweight to this problem by lowering barriers to entry for independent creators. But as online platforms for creativity likewise shrink in number and grow in scale, they have closed ranks with the major studios.
It’s a betrayal of the promise of the internet: that it should be a level playing field where you get to decide what you want to do, watch, listen to, read. And our government should be ashamed for letting it happen.
Internet Voting is Too Insecure for Use in Elections
No matter how many times we say it, the idea comes back again and again. Hopefully, this letter will hold back the tide for at least a while longer.
Executive summary: Scientists have understood for many years that internet voting is insecure and that there is no known or foreseeable technology that can make it secure. Still, vendors of internet voting keep claiming that, somehow, their new system is different, or the insecurity doesn’t matter. Bradley Tusk and his Mobile Voting Foundation keep touting internet voting to journalists and election administrators; this whole effort is misleading and dangerous...
New Jersey governor leans on climate funds for ‘affordability’ push
EPA thwarts Musk’s use of diesel turbines for AI
Budget plan would stymie Trump’s FEMA cuts
Former Biden officials go to bat for kids’ climate case
Red states back EPA freeze of $20B in climate grants
Four-bill ‘minibus’: EV chargers, energy aid, disaster mitigation
Climate activist predicts Trump’s attacks on green energy will hurt GOP
Italy unveils Arctic strategy as polar race heats up
Mozambique floods impacting over 600,000 people, official says
Researchers find Antarctic penguin breeding starts sooner
Broadening climate migration research across impacts, adaptation and mitigation
Nature Climate Change, Published online: 21 January 2026; doi:10.1038/s41558-025-02545-1
Current climate migration literature focuses on quantifying the link between climate drivers and migration, yet overlooks its broader and more complex interactions with mitigation, adaptation and climate impacts. This Perspective highlights key gaps and offers concrete solutions.
Electrifying boilers to decarbonize industry
More than 200 years ago, the steam boiler helped spark the Industrial Revolution. Since then, steam has been the lifeblood of industrial activity around the world. Today the production of steam — created by burning gas, oil, or coal to boil water — accounts for a significant percentage of global energy use in manufacturing, powering the creation of paper, chemicals, pharmaceuticals, food, and more.
Now, the startup AtmosZero, founded by Addison Stark SM ’10, PhD ’14; Todd Bandhauer; and Ashwin Salvi, is taking a new approach to electrify the centuries-old steam boiler. The company has developed a modular heat pump capable of delivering industrial steam at temperatures up to 150 degrees Celsius to serve as a drop-in replacement for combustion boilers.
The company says its first 1-megawatt steam system is far cheaper to operate than commercially available electric solutions thanks to ultra-efficient compressor technology, which uses 50 percent less electricity than electric resistive boilers. The founders are hoping that’s enough to make decarbonized steam boilers drive the next industrial revolution.
“Steam is the most important working fluid ever,” says Stark, who serves as AtmosZero’s CEO. “Today everything is built around the ubiquitous availability of steam. Cost-effectively electrifying that requires innovation that can scale. In other words, it requires a mass-produced product — not one-off projects.”
Tapping into steam
Stark joined the Technology and Policy Program when he came to MIT in 2007. He ultimately completed a dual master’s degree by adding mechanical engineering to his studies.
“I was interested in the energy transition and in accelerating solutions to enable that,” Stark says. “The transition isn’t happening in a vacuum. You need to align economics, policy, and technology to drive that change.”
Stark stayed at MIT to earn his PhD in mechanical engineering, studying thermochemical biofuels.
After MIT, Stark began working on early-stage energy technologies with the Department of Energy’s Advanced Research Projects Agency–Energy (ARPA-E), with a focus on manufacturing efficiency, the energy-water nexus, and electrification.
“Part of that work involved applying my training at MIT to things that hadn’t really been innovated on for 50 years,” Stark says. “I was looking at the heat exchanger. It’s so fundamental. I thought, ‘How might we reimagine it in the context of modern advances in manufacturing technology?’”
The problem is as difficult as it is consequential, touching nearly every corner of the global industrial economy. More than 2.2 gigatons of CO2 emissions are generated each year to turn water into steam — accounting for more than 5 percent of global energy-related emissions.
In 2020, Stark co-authored an article in the journal Joule with Gregory Thiel SM ’12, PhD ’15 titled, “To decarbonize industry, we must decarbonize heat.” The article examined opportunities for industrial heat decarbonization, and it got Stark excited about the potential impact of a standardized, scalable electric heat pump.
Most electric boiler options today bring huge increases in operating costs. Many also make use of factory waste heat, which requires pricey retrofits. Stark never imagined he’d become an entrepreneur, but he soon realized no one was going to act on his findings for him.
“The only path to seeing this invention brought out into the world was to found and run the company,” Stark says. “It’s something I didn’t anticipate or necessarily want, but here I am.”
Stark partnered with former ARPA-E awardee Todd Bandhauer, who had been inventing new refrigerant compressor technology in his lab at Colorado State University, and former ARPA-E colleague Ashwin Salvi. The team officially founded AtmosZero in 2022.
“The compressor is the engine of the heat pump and defines the efficiency, cost, and performance,” Stark says. “The fundamental challenge of delivering heat is that the higher your heat pump is raising the air temperature, the lower your maximum efficiency. It runs into thermodynamic limitations. By designing for optimum efficiency in the operational windows that matter for the refrigerants we’re using, and for the precision manufacturing of our compressors, we’re able to maximize the individual stages of compression to maximize operational efficiency.”
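To make the thermodynamic limitation Stark describes concrete, here is a minimal sketch, in Python, of the ideal (Carnot) heating coefficient of performance for a heat pump delivering 150-degree-Celsius steam. The source temperatures and the resistive-boiler comparison are illustrative assumptions, not AtmosZero design figures.

```python
def carnot_cop_heating(t_hot_c: float, t_cold_c: float) -> float:
    """Ideal (Carnot) heating COP for a heat pump lifting heat
    from t_cold_c up to t_hot_c (temperatures in degrees Celsius)."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return t_hot_k / (t_hot_k - t_cold_k)

# Illustrative numbers only: delivering 150 C steam from 20 C ambient air.
print(f"From 20 C air:        ideal COP = {carnot_cop_heating(150, 20):.2f}")  # ~3.3
# The same delivery temperature from a warmer source needs a smaller lift,
# so the efficiency ceiling rises sharply.
print(f"From 90 C waste heat: ideal COP = {carnot_cop_heating(150, 90):.2f}")  # ~7.1
# A resistive boiler delivers at most 1 unit of heat per unit of electricity (COP = 1).
```

Real machines reach only a fraction of this ideal limit, but even a coefficient of performance of 2 delivers twice as much heat per unit of electricity as a resistive boiler, consistent with the 50 percent electricity savings cited above.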
The system can work with waste heat from air or water, but it doesn’t need waste heat to work. Many other electric boilers rely on waste heat, but Stark thinks that adds too much complexity to installation and operations.
Instead, in AtmosZero’s novel heat pump cycle, heat from ambient-temperature air warms a liquid heat-transfer material, which evaporates a refrigerant. The refrigerant then flows through the system’s series of compressors and heat exchangers, reaching temperatures high enough to boil water, and the system recovers heat from the refrigerant once it returns to lower temperatures. The system can be ramped up and down to fit seamlessly into existing industrial processes.
“We can work just like a combustion boiler,” Stark says. “At the end of the day, customers don’t want to change how their manufacturing facilities operate in order to electrify. You can’t change or increase complexity on-site.”
That approach means the boiler can be deployed in a range of industrial contexts without unique project costs or other changes.
“What we really offer is flexibility and something that can drop in with ease and minimize total capital costs,” Stark says.
From 1 to 1,000
AtmosZero already has a pilot 650-kilowatt system operating at a customer facility near its headquarters in Loveland, Colorado. The company is currently focused on demonstrating year-round durability and reliability of the system as it works to build out its backlog of orders and prepares to scale.
Stark says once the system is brought to a customer’s facility, it can be installed in an afternoon and deployed in a matter of days, with zero downtime.
AtmosZero is aiming to deliver a handful of units to customers over the next year or two, with plans to deploy hundreds of units a year after that. The company is currently targeting manufacturing plants using under 10 megawatts of thermal energy at peak demand, which represents most U.S. manufacturing facilities.
Stark is proud to be part of a growing group of MIT-affiliated decarbonization startups, some of which are targeting specific verticals, like Boston Metal for steel and Sublime Systems for cement. But he says beyond the most common materials, the industry gets very fragmented, with one of the only common threads being the use of steam.
“If we look across industrial segments, we see the ubiquity of steam,” Stark says. “It’s a tremendously ripe opportunity to have impact at scale. Steam cannot be removed from industry. So much of every industrial process that we’ve designed over the last 160 years has been around the availability of steam. So, we need to focus on ways to deliver low-emissions steam rather than removing it from the equation.”
Why it’s critical to move beyond overly aggregated machine-learning metrics
MIT researchers have identified significant examples of machine-learning models failing when applied to data other than what they were trained on, underscoring the need to test a model whenever it is deployed in a new setting.
“We demonstrate that even when you train models on large amounts of data, and choose the best average model, in a new setting this ‘best model’ could be the worst model for 6-75 percent of the new data,” says Marzyeh Ghassemi, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), a member of the Institute for Medical Engineering and Science, and principal investigator at the Laboratory for Information and Decision Systems.
In a paper presented at the Neural Information Processing Systems (NeurIPS 2025) conference in December, the researchers point out that models trained to effectively diagnose illness in chest X-rays at one hospital, for example, may appear effective at a different hospital on average. Their performance assessment, however, revealed that some of the best-performing models at the first hospital were the worst-performing on up to 75 percent of patients at the second hospital; when all of the second hospital’s patients are aggregated, the high average performance hides this failure.
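As a purely illustrative example of how aggregation can hide this kind of subgroup failure (the numbers below are invented for this sketch and do not come from the study), consider a test set in which a model performs well on a large majority subgroup and badly on a small one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical second-hospital test set: 800 patients from a subgroup the
# model handles well, 200 from a subgroup it handles poorly.
correct_majority = rng.random(800) < 0.95   # ~95 percent accuracy
correct_minority = rng.random(200) < 0.40   # ~40 percent accuracy

aggregate = np.concatenate([correct_majority, correct_minority]).mean()
print(f"Aggregate accuracy:         {aggregate:.1%}")                # ~84 percent, looks fine
print(f"Minority-subgroup accuracy: {correct_minority.mean():.1%}")  # ~40 percent, a hidden failure
```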
Their findings demonstrate that although spurious correlations — a simple example of which is when a machine-learning system, not having “seen” many cows pictured at the beach, classifies a photo of a beach-going cow as an orca simply because of its background — are thought to be mitigated by just improving model performance on observed data, they actually still occur and remain a risk to a model’s trustworthiness in new settings. In many instances — including areas examined by the researchers such as chest X-rays, cancer histopathology images, and hate speech detection — such spurious correlations are much harder to detect.
In the case of a medical diagnosis model trained on chest X-rays, for example, the model may have learned to correlate a specific and irrelevant marking on one hospital’s X-rays with a certain pathology. At another hospital where the marking is not used, that pathology could be missed.
Previous research by Ghassemi’s group has shown that models can spuriously correlate such factors as age, gender, and race with medical findings. If, for instance, a model has been trained on more older people’s chest X-rays that have pneumonia and hasn’t “seen” as many X-rays belonging to younger people, it might predict that only older patients have pneumonia.
“We want models to learn how to look at the anatomical features of the patient and then make a decision based on that,” says Olawale Salaudeen, an MIT postdoc and the lead author of the paper, “but really anything that’s in the data that’s correlated with a decision can be used by the model. And those correlations might not actually be robust with changes in the environment, making the model predictions unreliable sources of decision-making.”
Spurious correlations contribute to the risks of biased decision-making. In the NeurIPS conference paper, the researchers showed that, for example, chest X-ray models that improved overall diagnosis performance actually performed worse on patients with pleural conditions or enlarged cardiomediastinum, meaning enlargement of the heart or central chest cavity.
Other authors of the paper included PhD students Haoran Zhang and Kumail Alhamoud, EECS Assistant Professor Sara Beery, and Ghassemi.
While previous work has generally accepted that models ordered best-to-worst by performance will preserve that order when applied in new settings, a phenomenon called accuracy-on-the-line, the researchers demonstrated cases in which the best-performing models in one setting were the worst-performing in another.
Salaudeen devised an algorithm called OODSelect to find examples where accuracy-on-the-line breaks down. He trained thousands of models on in-distribution data, meaning data from the first setting, and calculated their accuracy. He then applied those models to data from the second setting; the second-setting examples that the most accurate first-setting models got wrong most often formed the problem subsets, or subpopulations. Salaudeen also emphasizes the dangers of aggregate statistics for evaluation, which can obscure more granular and consequential information about model performance.
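The released OODSelect code is the authoritative reference; what follows is only a minimal sketch of the selection idea described above, assuming each model’s predictions and the true labels for both settings are already available as NumPy arrays (all function and variable names here are hypothetical).

```python
import numpy as np

def select_problem_subset(preds_in, y_in, preds_out, y_out, top_k=100):
    """Sketch of the selection idea: find second-setting examples on which
    the models that look best in the first setting fail most often.

    preds_in:  (n_models, n_in)  predictions of many trained models on first-setting data
    preds_out: (n_models, n_out) predictions of the same models on second-setting data
    y_in, y_out: ground-truth labels for each setting
    """
    # Per-model accuracy on the first (in-distribution) setting.
    in_acc = (preds_in == y_in).mean(axis=1)

    # Keep the top 10 percent of models by in-distribution accuracy.
    n_best = max(1, len(in_acc) // 10)
    best = np.argsort(in_acc)[-n_best:]

    # For each second-setting example, how often do these "best" models get it wrong?
    err_rate = (preds_out[best] != y_out).mean(axis=0)

    # The examples they miss most often form the problem subset (subpopulation).
    return np.argsort(err_rate)[-top_k:]
```

A fuller implementation would also set aside examples that every model misses, so that genuinely hard cases are not conflated with spurious correlations, a step the researchers describe below.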
In the course of their work, the researchers separated out the “most misclassified examples” so as not to conflate spurious correlations within a dataset with situations that are simply difficult to classify.
Alongside the NeurIPS paper, the researchers are releasing their code and some of the identified subsets for future work.
Once a hospital, or any organization employing machine learning, identifies subsets on which a model is performing poorly, that information can be used to improve the model for its particular task and setting. The researchers recommend that future work adopt OODSelect in order to highlight targets for evaluation and design approaches to improving performance more consistently.
“We hope the released code and OODSelect subsets become a steppingstone,” the researchers write, “toward benchmarks and models that confront the adverse effects of spurious correlations.”
Statutory Damages: The Fuel of Copyright-based Censorship
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
Imagine every post online came with a bounty of up to $150,000 paid to anyone who finds it violates opaque government rules—all out of the pocket of the platform. Smaller sites could be snuffed out, and big platforms would avoid crippling liability by aggressively blocking, taking down, and penalizing speech that even possibly violates these rules. In turn, users would self-censor, and opportunists would turn accusations into a profitable business.
This dystopia isn’t a fantasy; it’s close to how U.S. copyright’s broken statutory damages regime actually works.
Copyright includes “statutory damages,” which means letting a jury decide how big of a penalty the defendant will have to pay—anywhere from $200 to $150,000 per work—without the jury necessarily seeing any evidence of actual financial losses or illicit profits. In fact, the law gives judges and juries almost no guidelines on how to set damages. This is a huge problem for online speech.
One way or another, everyone builds on the speech of others when expressing themselves online: quoting posts, reposting memes, sharing images from the news. For some users, re-use is central to their online expression: parodists, journalists, researchers, and artists use others’ words, sounds, and images as part of making something new every day. Both these users and the online platforms they rely on risk unpredictable, potentially devastating penalties if a copyright holder objects to some re-use and a court disagrees with the user’s well-intentioned efforts.
On Copyright Week, we like to talk about ways to improve copyright law. One of the most important would be to fix U.S. copyright’s broken statutory damages regime. In other areas of civil law, the courts have limited jury-awarded punitive damages so that they can’t be far higher than the amount of harm caused. Extremely large jury awards for fraud, for example, have been found to offend the Constitution’s Due Process Clause. But somehow, that’s not the case in copyright—some courts have ruled that Congress can set damages that are potentially hundreds of times greater than actual harm.
Massive, unpredictable damages awards for copyright infringement, such as a $222,000 penalty for sharing 24 music tracks online, are the fuel that drives overzealous or downright abusive takedowns of creative material from online platforms. Capricious and error-prone copyright enforcement bots, like YouTube’s Content ID, were created in part to avoid the threat of massive statutory damages against the platform. Those same damages create an ever-present bias in favor of major rightsholders and against innocent users in the platforms’ enforcement decisions. And they stop platforms from addressing the serious problems of careless and downright abusive copyright takedowns.
By turning litigation into a game of financial Russian roulette, statutory damages also discourage artistic and technological experimentation at the boundaries of fair use. None but the largest corporations can risk ruinous damages if a well-intentioned fair use crosses the fuzzy line into infringement.
“But wait,” you might say, “don’t legal protections like fair use and the safe harbors of the Digital Millennium Copyright Act protect users and platforms?” They do—but the threat of statutory damages makes that protection brittle. Fair use allows for many important re-uses of copyrighted works without permission. But fair use is heavily dependent on circumstances and can sometimes be difficult to predict when copyright is applied to new uses. Even well-intentioned and well-resourced users avoid experimenting at the boundaries of fair use when the cost of a court disagreeing is so high and unpredictable.
Many reforms are possible. Congress could limit statutory damages to a multiple of actual harm. That would bring U.S. copyright in line with other countries, and with other civil laws like patent and antitrust. Congress could also make statutory damages unavailable in cases where the defendant has a good-faith claim of fair use, which would encourage creative experimentation. Fixing statutory damages would make many of the other problems in copyright law more easily solvable, and create a fairer system for creators and users alike.
