Feed aggregator
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
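The limit the article alludes to is commonly identified with the thermionic, or “Boltzmann,” bound on subthreshold swing: at room temperature a conventional transistor needs roughly 60 millivolts of gate-voltage change for every tenfold change in current. The paper is not quoted on this figure, so the quick Python calculation below is only an illustrative sketch of that conventional bound.

import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # room temperature, K

# Minimum gate-voltage change per decade of drain current for a
# conventional thermionic transistor.
swing = math.log(10) * k_B * T / q
print(f"{swing * 1e3:.1f} mV/decade")  # ~59.5 mV/decade

Any device that must swing its gate by several multiples of this value to switch reliably has a floor on its operating voltage, which is the efficiency ceiling magnetic transistors aim to move past.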
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Friday Squid Blogging: Squid Cartoon
I like this one.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Exploring materials at the atomic scale
MIT.nano has added a new X-ray diffraction (XRD) instrument to its characterization toolset, enhancing facility users’ ability to analyze materials at the nanoscale. While many XRD systems exist across MIT’s campus, this new instrument, the Bruker D8 Discover Plus, is unique in that it features a high-brilliance micro-focus copper X-ray source — ideal for measuring small areas of thin film samples using a large area detector.
The new system is positioned within Characterization.nano’s X-ray diffraction and imaging shared experimental facility (SEF), where advanced instrumentation allows researchers to “see inside” materials at very small scales. Here, scientists and engineers can examine surfaces, layers, and internal structures without damaging the material, and create detailed 3D images to map composition and organization. The information gathered is supporting materials research for applications ranging from electronics and energy storage to health care and nanotechnology.
“The Bruker instrument is an important addition to MIT.nano that will help researchers efficiently gain insights into their materials’ structure and properties,” says Charlie Settens, research specialist and operations manager in the Characterization.nano X-ray diffraction and imaging SEF. “It brings high-performance diffraction capabilities to our lab, supporting everything from routine phase identification to complex thin film microstructural analysis and high-temperature studies.”
What is X-ray diffraction?
When people think of X-rays, they often picture medical imaging, where dense structures like bones appear in contrast to soft tissue. X-ray diffraction takes that concept further, revealing the crystalline structure of materials by measuring the interference patterns that form when X-rays interact with atomic planes. These diffraction patterns provide detailed information about a material’s crystalline phase, grain size, grain orientation, defects, and other structural properties.
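As a concrete illustration (ours, not the article's), the geometry behind those interference patterns is Bragg's law, n·λ = 2d·sin θ. With the roughly 1.5406-angstrom wavelength of a copper Kα source like the one in the new instrument, a measured peak angle converts directly into an atomic plane spacing:

import math

wavelength = 1.5406      # Cu K-alpha wavelength, in angstroms
two_theta = 38.4         # example peak position, in degrees
theta = math.radians(two_theta / 2)

# First-order (n = 1) Bragg condition: lambda = 2 * d * sin(theta)
d_spacing = wavelength / (2 * math.sin(theta))
print(f"d = {d_spacing:.3f} angstroms")  # ~2.34 angstroms, close to the Al (111) spacing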
XRD is essential across many fields. Civil engineers use it to analyze the components of concrete mixtures and monitor material changes over time. Materials scientists engineer new microstructures and track how atomic arrangements shift with different element combinations. Electrical engineers study crystalline thin film deposition on substrates — critical for semiconductor manufacturing. MIT.nano’s new X-ray diffractometer will support all of these applications, and more.
“The addition of another high-resolution XRD will make it a lot easier to get time on these very popular tools,” says Fred Tutt, PhD student in the MIT Department of Materials Science and Engineering. “The wide variety of options on the new Bruker will also make it easier for myself and my group members to take some of the more atypical measurements that aren't readily accessible with the current XRD tools.”
A closer, clearer look
Replacing two older systems, the Bruker D8 Discover Plus introduces the latest in X-ray diffraction technology to MIT.nano, along with several major upgrades for the Characterization.nano facility. One key feature is the high-brilliance micro-focus copper X-ray source, capable of producing intense X-rays from a small spot size — ranging from 2 mm down to 200 microns.
“It’s invaluable to have the flexibility to measure distinct regions of a sample with high flux and fine spatial resolution,” says Jordan Cox, research specialist in the MIT.nano X-ray diffraction and imaging facility.
Another highlight is in-plane XRD, a technique that enables surface diffraction studies of thin films with non-uniform grain orientations.
“In-plane XRD pairs well with many thin film projects that start in the fab,” says Settens. After researchers deposit thin film coatings in MIT.nano’s cleanroom, they can selectively measure the top 100 nanometers of the surface, he explains.
But it’s not just about collecting diffraction patterns. The new system includes a powerful software suite for advanced data analysis. Cox and Settens are now training users to operate the diffractometer, as well as to analyze and interpret the valuable structural data it provides.
Visit Characterization.nano for more information about this and other tools.
3 Questions: Exploring the mechanisms underlying changes during infection
With respiratory illness season in full swing, a bad night’s sleep, sore throat, and desire to cancel dinner plans could all be considered hallmark symptoms of the flu, Covid-19, or other illnesses. Although everyone has, at some point, experienced illness and these stereotypical symptoms, the mechanisms that generate them are not well understood.
Zuri Sullivan, a new assistant professor in the MIT Department of Biology and core member of the Whitehead Institute for Biomedical Research, works at the interface of neuroscience, microbiology, physiology, and immunology to study the biological workings underlying illness. In this interview, she describes her work on immunity thus far as well as research avenues — and professional collaborations — she’s excited to explore at MIT.
Q: What is immunity, and why do we get sick in the first place?
A: We can think of immunity in two ways: the antimicrobial programs that defend against a pathogen directly, and sickness, the altered organismal state that happens when we get an infection.
Sickness itself arises from brain-immune system interaction. The immune system is talking to the brain, and then the brain has a system-wide impact on host defense via its ability to have top-down control of physiologic systems and behavior. People might assume that sickness is an unintended consequence of infection, that it happens because your immune system is active, but we hypothesize that it’s likely an adaptive process that contributes to host defense.
If we consider sickness as immunity at the organismal scale, I think of my work as bridging the dynamic immunological processes that occur at the cellular scale, the tissue scale, and the organismal scale. I’m interested in the molecular and cellular mechanisms by which the immune system communicates with the brain to generate changes in behavior and physiology, such as fever, loss of appetite, and changes in social interaction.
Q: What sickness behaviors fascinate you?
A: During my thesis work at Yale University, I studied how the gut processes different nutrients and the role of the immune system in regulating gut homeostasis in response to different kinds of food. I’m especially interested in the interaction between food, the immune system, and the brain. One of the things I’m most excited about is the reduction in appetite, or changes in food choice, because we have what I would consider pretty strong evidence that these may be adaptive.
Sleep is another area we’re interested in exploring. From their own subjective experience, everyone knows that sleep is often altered during infection.
I also don’t just want to examine snapshots in time. I want to characterize changes over the course of an infection. There’s probably going to be individual variability, which I think may be in part because pathogens are also changing over the course of an illness — we’re studying two different biological systems interacting with each other.
Q: What sorts of expertise are you hoping to recruit to your lab, and what collaborations are you excited about pursuing?
A: I really want to bring together different areas of biology to think about organism-wide questions. The thing that’s most important to me is people who are creative — I’d rather trainees come in with an interesting idea than a perfectly formed question within the bounds of what we already believe to be true. I’m also interested in people who would complement my expertise; I’m fascinated by microbiology, but I don’t have any formal training.
The Whitehead Institute is really invested in interdisciplinary work, and there’s a natural synergy between my work and the other labs in this small community at the Whitehead Institute.
I’ve been collaborating with Sebastian Lourido’s lab for a few years, looking at how Toxoplasma gondii influences social behavior, and I’m excited to invest more time in that project. I’m also interested in molecular neuroscience, which is a focus of Siniša Hrvatin’s lab. That lab is interested in the hypothalamus, and trying to understand the mechanisms that generate torpor. My work also focuses on the hypothalamus because it regulates homeostatic behaviors that change during sickness, such as appetite, sleep, social behavior, and body temperature.
By studying different sickness states generated by different kinds of pathogens — parasites, viruses, bacteria — we can ask really interesting questions about how and why we get sick.
Fragile X study uncovers brain wave biomarker bridging humans and mice
Numerous potential treatments for neurological conditions, including autism spectrum disorders, have worked well in mice but then disappointed in humans. What would help is a non-invasive, objective readout of treatment efficacy that is shared in both species.
In a new open-access study in Nature Communications, a team of MIT researchers, backed by collaborators across the United States and in the United Kingdom, identifies such a biomarker in fragile X syndrome, the most common inherited form of autism.
Led by postdoc Sara Kornfeld-Sylla and Picower Professor Mark Bear, the team measured the brain waves of human boys and men, with or without fragile X syndrome, and comparably aged male mice, with or without the genetic alteration that models the disorder. The novel approach Kornfeld-Sylla used for analysis enabled her to uncover specific and robust patterns of differences in low-frequency brain waves between typical and fragile X brains shared between species at each age range. In further experiments, the researchers related the brain waves to specific inhibitory neural activity in the mice and showed that the biomarker was able to indicate the effects of even single doses of a candidate treatment for fragile X called arbaclofen, which enhances inhibition in the brain.
Both Kornfeld-Sylla and Bear praised and thanked colleagues at Boston Children’s Hospital, the Phelan-McDermid Syndrome Foundation, Cincinnati Children’s Hospital, the University of Oklahoma, and King’s College London for gathering and sharing data for the study.
“This research weaves together these different datasets and finds the connection between the brain wave activity that’s happening in fragile X humans that is different from typically developed humans, and in the fragile X mouse model that is different than the ‘wild-type’ mice,” says Kornfeld-Sylla, who earned her PhD in Bear’s lab in 2024 and continued the research as a FRAXA postdoc. “The cross-species connection and the collaboration really makes this paper exciting.”
Bear, a faculty member in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT, says having a way to directly compare brain waves can advance treatment studies.
“Because that is something we can measure in mice and humans minimally invasively, you can pose the question: If drug treatment X affects this signature in the mouse, at what dose does that same drug treatment change that same signature in a human?” Bear says. “Then you have a mapping of physiological effects onto measures of behavior. And the mapping can go both ways.”
Peaks and powers
In the study, the researchers measured EEG over the occipital lobe of humans and on the surface of the visual cortex of the mice. They measured power across the frequency spectrum, replicating previous reports of altered low-frequency brain waves in adult humans with fragile X and showing for the first time how these disruptions differ in children with fragile X.
To enable comparisons with mice, Kornfeld-Sylla subtracted out background activity to isolate only the “periodic” fluctuations in power (i.e., the brain waves) at each frequency. She also disregarded the typical way brain waves are grouped by frequency (into distinct bands with Greek letter designations delta, theta, alpha, beta, and gamma) so that she could simply juxtapose the periodic power spectra of the humans and mice without trying to match them band by band (e.g., trying to compare the mouse “alpha” band to the human one). This turned out to be crucial because the significant, similar patterns exhibited by the mice actually occurred in a different low-frequency band than in the humans (theta vs. alpha). Both species also had alterations in higher-frequency bands in fragile X, but Kornfeld-Sylla noted that the differences in the low-frequency brain waves are easier to measure and more reliable in humans, making them a more promising biomarker.
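That separation step can be sketched roughly as follows; this is a minimal illustration rather than the study's actual pipeline, and it assumes a power spectrum has already been estimated and that a straight-line fit in log-log space is an adequate stand-in for the aperiodic background.

import numpy as np

def periodic_power(freqs, psd):
    """Subtract a crude 1/f^x background so only oscillatory (periodic) power remains."""
    log_f, log_p = np.log10(freqs), np.log10(psd)
    slope, intercept = np.polyfit(log_f, log_p, 1)   # straight line in log-log space
    aperiodic = 10.0 ** (intercept + slope * log_f)  # background estimate at each frequency
    return psd - aperiodic                           # residual peaks are the brain waves

Peaks in the residual spectrum can then be compared across species by their shape and location, without forcing mouse and human data into the same named frequency bands.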
So what patterns constitute the biomarker? In adult men and mice alike, a peak in the power of low-frequency waves is shifted to a significantly slower frequency in fragile X cases compared with neurotypical cases. Meanwhile, in fragile X boys and juvenile mice, while the peak is somewhat shifted to a slower frequency, what is really significant is a reduced power in that same peak.
The researchers were also able to discern that the peak in question is actually made of two distinct subpeaks, and that the lower-frequency subpeak is the one that varies specifically with fragile X syndrome.
Curious about the neural activity underlying the measurements, the researchers engaged in experiments in which they turned off activity of two different kinds of inhibitory neurons that are known to help produce and shape brain wave patterns: somatostatin-expressing and parvalbumin-expressing interneurons. Manipulating the somatostatin neurons specifically affected the lower-frequency subpeak that contained the newly discovered biomarker in fragile X model mice.
Drug testing
Somatostatin interneurons exert their effects on the neurons they connect to via the neurotransmitter chemical GABA, and evidence from prior studies suggests that GABA receptivity is reduced in fragile X syndrome. A therapeutic approach pioneered by Bear and others has been to give the drug arbaclofen, which enhances GABA activity. In the new study, the researchers treated both control and fragile X model mice with arbaclofen to see how it affected the low-frequency biomarker.
Even the lowest administered single dose made a significant difference in the neurotypical mice, which is consistent with those mice having normal GABA responsiveness. Fragile X mice needed a higher dose, but after one was administered, there was a notable increase in the power of the key subpeak, reducing the deficit exhibited by juvenile mice.
The arbaclofen experiments therefore demonstrated that the biomarker provides a significant readout of an underlying pathophysiology of fragile X: the reduced GABA responsiveness. Bear also noted that it helped to identify a dose at which arbaclofen exerted a corrective effect, even though the drug was only administered acutely, rather than chronically. An arbaclofen therapy would, of course, be given over a long time frame, not just once.
“This is a proof of concept that a drug treatment could move this phenotype acutely in a direction that makes it closer to wild-type,” Bear says. “This effort reveals that we have readouts that can be sensitive to drug treatments.”
Meanwhile, Kornfeld-Sylla notes, there is a broad spectrum of brain disorders in which human patients exhibit significant differences in low-frequency (alpha) brain waves compared to neurotypical peers.
“Disruptions akin to the biomarker we found in this fragile X study might prove to be evident in mouse models of those other disorders, too,” she says. “Identifying this biomarker could broadly impact future translational neuroscience research.”
The paper’s other authors are Cigdem Gelegen, Jordan Norris, Francesca Chaloner, Maia Lee, Michael Khela, Maxwell Heinrich, Peter Finnie, Lauren Ethridge, Craig Erickson, Lauren Schmitt, Sam Cooke, and Carol Wilkinson.
The National Institutes of Health, the National Science Foundation, the FRAXA Foundation, the Pierce Family Fragile X Foundation, the Autism Science Foundation, the Thrasher Research Fund, Harvard University, the Simons Foundation, Wellcome, the Biotechnology and Biological Sciences Research Council, and the Freedom Together Foundation provided support for the research.
EPA endangerment repeal could expose industry to legal blowback
A quiet climate retreat at IEA
Mikie Sherrill uses New Jersey’s RGGI funds for affordability
Republican AGs to National Academies: Ditch the climate chapter
Enviro lawyer spars with ex-Trump official over endangerment finding
Pritzker cites property insurance ‘crisis’ to urge new regulation
Alabama sets limits on science used for regulations
New bill would let California drivers modify vehicles for cheaper ethanol fuel
Hillary Clinton says 500,000 Indian women have heat insurance
UK floods raise specter of ‘mortgage prisoners’ across banks
Mauritius needs $5.6B to help with climate funding, World Bank says
Emergence of Antarctic mineral resources in a warming world
Nature Climate Change, Published online: 20 February 2026; doi:10.1038/s41558-026-02569-1
Melting ice and associated sea-level change will expose new land in Antarctica. Here the authors quantify this change and combine it with our understanding of known Antarctic mineral occurrences, showing that substantial mineral deposits may become accessible over the next few centuries in Antarctica.
Chip-processing method could assist cryptography schemes to keep data secure
Just like each person has unique fingerprints, every CMOS chip has a distinctive “fingerprint” caused by tiny, random manufacturing variations. Engineers can leverage this unforgeable ID for authentication, to safeguard a device from attackers trying to steal private data.
But these cryptographic schemes typically require secret information about a chip’s fingerprint to be stored on a third-party server. This creates security vulnerabilities and requires additional memory and computation.
To overcome this limitation, MIT engineers developed a manufacturing method that enables secure, fingerprint-based authentication, without the need to store secret information outside the chip.
They split a specially designed chip during fabrication in such a way that each half has an identical, shared fingerprint that is unique to these two chips. Each chip can be used to directly authenticate the other. This low-cost fingerprint fabrication method is compatible with standard CMOS foundry processes and requires no special materials.
The technique could be useful in power-constrained electronic systems with non-interchangeable device pairs, like an ingestible sensor pill and its paired wearable patch that monitor gastrointestinal health conditions. Using a shared fingerprint, the pill and patch can authenticate each other without a device in between to mediate.
“The biggest advantage of this security method is that we don’t need to store any information. All the secrets will always remain safe inside the silicon. This can give a higher level of security. As long as you have this digital key, you can always unlock the door,” says Eunseok Lee, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this security method.
Lee is joined on the paper by EECS graduate students Jaehong Jung and Maitreyi Ashok; as well as co-senior authors Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor of Electrical Engineering and Computer Science, and Ruonan Han, a professor of EECS and a member of the MIT Research Laboratory of Electronics. The research was recently presented at the IEEE International Solid-State Circuits Conference.
“Creation of shared encryption keys in trusted semiconductor foundries could help break the tradeoffs between being more secure and more convenient to use for protection of data transmission,” Han says. “This work, which is digital-based, is still a preliminary trial in this direction; we are exploring how more complex, analog-based secrecy can be duplicated — and only duplicated once.”
Leveraging variations
Even though they are intended to be identical, each CMOS chip is slightly different due to unavoidable microscopic variations during fabrication. These randomizations give each chip a unique identifier, known as a physical unclonable function (PUF), that is nearly impossible to replicate.
A chip’s PUF can be used to provide security just like the human fingerprint identification system on a laptop or door panel.
For authentication, a server sends a request to the device, which responds with a secret key based on its unique physical structure. If the key matches an expected value, the server authenticates the device.
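In pseudocode terms, that conventional arrangement looks something like the sketch below. The names are hypothetical, and the device_puf callable stands in for the chip's physical response, which cannot be recomputed anywhere else.

enrolled_responses = {}  # challenge -> expected response, held by the server

def enroll(device_puf, challenge):
    # Done once, in a trusted setting, before the device is deployed.
    enrolled_responses[challenge] = device_puf(challenge)

def authenticate(device_puf, challenge):
    # The device recomputes its response from its physical structure;
    # the server compares it against the stored copy.
    return device_puf(challenge) == enrolled_responses.get(challenge)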
But the PUF authentication data must be registered and stored in a server for access later, creating a potential security vulnerability.
“If we don’t need to store information on these unique randomizations, then the PUF becomes even more secure,” Lee says.
The researchers wanted to accomplish this by developing a matched PUF pair on two chips. One could authenticate the other directly, without the need to store PUF data on third-party servers.
As an analogy, consider a sheet of paper torn in half. The torn edges are random and unique, but the pieces have a shared randomness because they fit back together perfectly along the torn edge.
While CMOS chips aren’t torn in half like paper, many are fabricated at once on a silicon wafer, which is then diced to separate the individual chips.
By incorporating shared randomness at the edge of two chips before they are diced to separate them, the researchers could create a twin PUF that is unique to these two chips.
“We needed to find a way to do this before the chip leaves the foundry, for added security. Once the fabricated chip enters the supply chain, we won’t know what might happen to it,” Lee explains.
Sharing randomness
To create the twin PUF, the researchers change the properties of a set of transistors fabricated along the edge of two chips, using a process called gate oxide breakdown.
Essentially, they pump a high voltage into a pair of transistors by shining light from a low-cost LED, continuing until the first transistor breaks down. Because of tiny manufacturing variations, each transistor has a slightly different breakdown time. The researchers can use this unique breakdown state as the basis for a PUF.
To enable a twin PUF, the MIT researchers fabricate two pairs of transistors along the edge of two chips before they are diced to separate them. By connecting the transistors with metal layers, they create paired structures that have correlated breakdown states. In this way, they enable a unique PUF to be shared by each pair of transistors.
After shining LED light to create the PUF, they dice the chips between the transistors so there is one pair on each device, giving each separate chip a shared PUF.
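One way to picture the outcome is the toy model below; it is our illustration, not the paper's circuit, and it assumes for simplicity that each bit records which transistor in a pair breaks down first. Because the paired structures on the two chips are tied to the same physical randomness before dicing, both chips read out the same bit.

import random

def breakdown_bit(shared_randomness):
    # The seed stands in for the shared physical randomness of one transistor pair.
    rng = random.Random(shared_randomness)
    t_break_a = rng.gauss(1.0, 0.1)    # breakdown time of transistor A (arbitrary units)
    t_break_b = rng.gauss(1.0, 0.1)    # breakdown time of transistor B
    return int(t_break_a < t_break_b)  # one PUF bit

# The two halves of a twin pair see the same randomness, so they agree.
assert breakdown_bit(42) == breakdown_bit(42)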
“In our case, transistor breakdown has not been modeled well in many of the simulations we had, so there was a lot of uncertainty about how the process would work. Figuring out all the steps, and the order they needed to happen, to generate this shared randomness is the novelty of this work,” Lee says.
After fine-tuning their PUF generation process, the researchers developed a prototype pair of twin PUF chips in which the randomization was matched with more than 98 percent reliability. This would ensure the generated PUF key matches consistently, enabling secure authentication.
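At that reliability, direct chip-to-chip matching only needs to tolerate a small fraction of flipped bits. A minimal sketch of the comparison (ours, standing in for the error correction a production design would use):

def hamming_distance(bits_a, bits_b):
    return sum(a != b for a, b in zip(bits_a, bits_b))

def twins_match(bits_a, bits_b, max_error_rate=0.02):
    # Accept the pairing if at most ~2 percent of bits disagree,
    # mirroring the roughly 98 percent reliability reported.
    return hamming_distance(bits_a, bits_b) <= max_error_rate * len(bits_a)

# Example: 64-bit readouts that differ in one position (about 1.6 percent) still match.
chip_a = "1011001110001101" * 4
chip_b = chip_a[:-1] + "0"
print(twins_match(chip_a, chip_b))  # True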
Because they generated this twin PUF using circuit techniques and low-cost LEDs, the process would be easier to implement at scale than other methods that are more complicated or not compatible with standard CMOS fabrication.
“In the current design, shared randomness generated by transistor breakdown is immediately converted into digital data. Future versions could preserve this shared randomness directly within the transistors, strengthening security at the most fundamental physical level of the chip,” Lee says.
“There is a rapidly increasing demand for physical-layer security for edge devices, such as between medical sensors and devices on a body, which often operate under strict energy constraints. A twin-paired PUF approach enables secure communication between nodes without the burden of heavy protocol overhead, thereby delivering both energy efficiency and strong security. This initial demonstration paves the way for innovative advancements in secure hardware design,” Chandrakasan adds.
This work is funded by Lockheed Martin, the MIT School of Engineering MathWorks Fellowship, and the Korea Foundation for Advanced Studies Fellowship.
EFF’s Policy on LLM-Assisted Contributions to Our Open-Source Projects
We recently introduced a policy governing large language model (LLM) assisted contributions to EFF's open-source projects. At EFF, we strive to produce high-quality software tools, rather than simply generating more lines of code in less time. We now explicitly require that contributors understand the code they submit to us and that comments and documentation be authored by a human.
LLMs excel at producing code that looks mostly human-generated but can carry underlying bugs replicated at scale. This makes LLM-generated code exhausting to review, especially for smaller, less-resourced teams. LLMs also make it easy for well-intentioned people to submit code that suffers from hallucination, omission, exaggeration, or misrepresentation.
It is with this in mind that we introduce a new policy on submitting LLM-assisted contributions to our open-source projects. We want to ensure that our maintainers spend their time reviewing well-thought-out submissions. We do not outright ban LLMs, as their use has become so pervasive that a blanket ban would be impractical to enforce.
Banning a tool is against our general ethos, but this class of tools comes with an ecosystem of problems. These include code reviews that turn into code refactors for our maintainers when a contributor doesn’t understand the code they submitted, and the sheer scale of AI-generated contributions that may be only marginally useful or outright unreviewable. By disclosing when you use LLM tools, you help us spend our time wisely.
EFF has described how extending copyright is an impractical solution to the problem of AI-generated content, but it is worth mentioning that these tools raise privacy, censorship, ethical, and climate concerns for many. These issues are largely a continuation of the harmful practices by tech companies that led us to this point. LLM-generated code isn’t written on a clean slate, but born out of a climate of companies speedrunning their profits over people. We are once again in “just trust us” territory, with Big Tech being obtuse about the power it wields. We are strong advocates of using tools to innovate and come up with new ideas. However, we ask you to come to our projects knowing how to use them safely.
