Feed aggregator
North Carolina governor seeks more hurricane aid in legislative address
Estonia joins calls on EU to delay new carbon market for fuels
UK-backed FSD Africa sets $300M goal for climate projects
Publisher Correction: Climate-driven connectivity loss impedes species adaptation to warming in the deep ocean
Nature Climate Change, Published online: 14 March 2025; doi:10.1038/s41558-025-02313-1
When did human language emerge?
It is a deep question, from deep in our history: When did human language as we know it emerge? A new survey of genomic evidence suggests our unique language capacity was present at least 135,000 years ago. Subsequently, language might have entered social use 100,000 years ago.
Our species, Homo sapiens, is about 230,000 years old. Estimates of when language originated vary widely, based on different forms of evidence, from fossils to cultural artifacts. The authors of the new analysis took a different approach. They reasoned that since all human languages likely have a common origin — as the researchers strongly think — the key question is how far back in time regional groups began spreading around the world.
“The logic is very simple,” says Shigeru Miyagawa, an MIT professor and co-author of a new paper summarizing the results. “Every population branching across the globe has human language, and all languages are related.” Based on what the genomics data indicate about the geographic divergence of early human populations, he adds, “I think we can say with a fair amount of certainty that the first split occurred about 135,000 years ago, so human language capacity must have been present by then, or before.”
The paper, “Linguistic capacity was present in the Homo sapiens population 135 thousand years ago,” appears in Frontiers in Psychology. The co-authors are Miyagawa, who is a professor emeritus of linguistics and the Kochi-Manjiro Professor of Japanese Language and Culture at MIT; Rob DeSalle, a principal investigator at the American Museum of Natural History’s Institute for Comparative Genomics; Vitor Augusto Nóbrega, a faculty member in linguistics at the University of São Paulo; Remo Nitschke, of the University of Zurich, who worked on the project while at the University of Arizona linguistics department; Mercedes Okumura of the Department of Genetics and Evolutionary Biology at the University of São Paulo; and Ian Tattersall, curator emeritus of human origins at the American Museum of Natural History.
The new paper examines 15 genetic studies of different varieties, published over the past 18 years: Three used data about the inherited Y chromosome, three examined mitochondrial DNA, and nine were whole-genome studies.
All told, the data from these studies suggest an initial regional branching of humans about 135,000 years ago. That is, after the emergence of Homo sapiens, groups of people subsequently moved apart geographically, and some resulting genetic variations have developed, over time, among the different regional subpopulations. The amount of genetic variation shown in the studies allows researchers to estimate the point in time at which Homo sapiens was still one regionally undivided group.
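The underlying clock logic — more accumulated variation implies an earlier split — can be sketched as a toy calculation. The numbers below are illustrative placeholders, not values from the studies surveyed, and real analyses use far more sophisticated coalescent models:

```python
# Toy molecular-clock estimate of time since two populations split,
# assuming mutations accumulate at a constant, known rate.
# All numbers here are illustrative, not real data from the paper.

def divergence_time(diff_per_site: float, mu_per_site_per_year: float) -> float:
    """Estimate years since divergence from pairwise sequence differences.

    After a split, each lineage accumulates mutations independently,
    so observed differences ~ 2 * mu * t, giving t = d / (2 * mu).
    """
    return diff_per_site / (2.0 * mu_per_site_per_year)

# e.g. a 1.35e-4 per-site difference at a rate of 5e-10 per site per year
t = divergence_time(1.35e-4, 5e-10)
print(round(t))  # prints 135000
```

In practice, uncertainty in the mutation rate and in generation times is what makes the window wide; the article's point is that many independent studies now converge on roughly the same date.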
Miyagawa says the studies collectively provide increasingly converging evidence about when these geographic splits started taking place. The first survey of this type was performed by other scholars in 2017, but they had fewer existing genetic studies to draw upon. Now, much more published data are available, which, when considered together, point to 135,000 years ago as the likely time of the first split.
The new meta-analysis was possible because “quantity-wise we have more studies, and quality-wise, it’s a narrower window [of time],” says Miyagawa, who also holds an appointment at the University of São Paulo.
Like many linguists, Miyagawa believes all human languages are demonstrably related to each other, something he has examined in his own work. For instance, in his 2010 book, “Why Agree? Why Move?” he analyzed previously unexplored similarities between English, Japanese, and some of the Bantu languages. There are more than 7,000 identified human languages around the globe.
Some scholars have proposed that language capacity dates back a couple of million years, based on the physiological characteristics of other primates. But to Miyagawa, the question is not when primates could utter certain sounds; it is when humans had the cognitive ability to develop language as we know it, combining vocabulary and grammar into a system generating an infinite variety of rules-based expressions.
“Human language is qualitatively different because there are two things, words and syntax, working together to create this very complex system,” Miyagawa says. “No other animal has a parallel structure in their communication system. And that gives us the ability to generate very sophisticated thoughts and to communicate them to others.”
This conception of human language origins also holds that humans had the cognitive capacity for language for some period of time before we constructed our first languages.
“Language is both a cognitive system and a communication system,” Miyagawa says. “My guess is prior to 135,000 years ago, it did start out as a private cognitive system, but relatively quickly that turned into a communications system.”
So, how can we know when distinctively human language was first used? The archaeological record is invaluable in this regard. Roughly 100,000 years ago, the evidence shows, there was a widespread appearance of symbolic activity, from meaningful markings on objects to the use of fire to produce ochre, a decorative red color.
Like our complex, highly generative language, these symbolic activities are unique to people; no other creatures engage in them. As the paper notes, “behaviors compatible with language and the consistent exercise of symbolic thinking are detectable only in the archaeological record of H. sapiens.”
Among the co-authors, Tattersall has most prominently propounded the view that language served as a kind of ignition for symbolic thinking and other organized activities.
“Language was the trigger for modern human behavior,” Miyagawa says. “Somehow it stimulated human thinking and helped create these kinds of behaviors. If we are right, people were learning from each other [due to language] and encouraging innovations of the types we saw 100,000 years ago.”
To be sure, as the authors acknowledge in the paper, other scholars believe there was a more incremental and broad-based development of new activities around 100,000 years ago, involving materials, tools, and social coordination, with language playing a role in this, but not necessarily being the central force.
For his part, Miyagawa recognizes that there is considerable room for further progress in this area of research, but thinks efforts like the current paper are at least steps toward filling out a more detailed picture of language’s emergence.
“Our approach is very empirically based, grounded in the latest genetic understanding of early Homo sapiens,” Miyagawa says. “I think we are on a good research arc, and I hope this will encourage people to look more at human language and evolution.”
This research was, in part, supported by the São Paulo Excellence Chair awarded to Miyagawa by the São Paulo Research Foundation.
Small step funding models fit better for climate research
Nature Climate Change, Published online: 14 March 2025; doi:10.1038/s41558-025-02281-6
EFF to NSF: AI Action Plan Must Put People First
This past January, the new administration issued an executive order on artificial intelligence (AI), replacing the now-rescinded Biden-era order and calling for a new AI Action Plan tasked with “unburdening” the AI industry to stoke innovation and removing “engineered social agendas” from the industry. That action plan is now being developed, and the National Science Foundation (NSF) is collecting public comments on it.
EFF answered with a few clear points: First, government procurement of automated decision-making (ADM) technologies must be done with transparency and public accountability—no secret and untested algorithms should decide who keeps their job or who is denied safe haven in the United States. Second, generative AI policy rules must be narrowly focused and proportionate to actual harms, with an eye on protecting other public interests. And finally, we shouldn't entrench the biggest companies and gatekeepers with AI licensing schemes.
Government Automated Decision Making
US procurement of AI has moved with remarkable speed and an alarming lack of transparency. By wasting money on systems with no proven track record, this procurement not only entrenches the largest AI companies, but also risks infringing the civil liberties of everyone subject to these automated decisions.
These harms aren’t theoretical: we have already seen a move to adopt experimental AI tools in policing and national security, including immigration enforcement. Recent reports also indicate the Department of Government Efficiency (DOGE) intends to apply AI to evaluate federal workers, and use the results to make decisions about their continued employment.
Automating important decisions about people is reckless and dangerous. At best, these new AI tools are ineffective nonsense machines that require more labor to correct their inaccuracies; at worst, they produce irrational and discriminatory outcomes obscured by the black-box nature of the technology.
Instead, the adoption of such tools must be done with a robust public notice-and-comment practice as required by the Administrative Procedure Act. This process helps weed out wasteful spending on AI snake oil and identifies when the use of such AI tools is inappropriate or harmful.
Additionally, the AI action plan should favor tools developed under the principles of free and open-source software. These principles are essential for evaluating the efficacy of these models, and ensure they uphold a more fair and scientific development process. Furthermore, more open development stokes innovation and ensures public spending ultimately benefits the public—not just the most established companies.
Spurred by the general anxiety about Generative AI, lawmakers have drafted sweeping regulations based on speculation, and with little regard for the multiple public interests at stake. Though there are legitimate concerns, this reactionary approach to policy is exactly what we warned against back in 2023.
For example, bills like NO FAKES and NO AI Fraud expand copyright laws to favor corporate giants over everyone else’s expression. NO FAKES even includes a scheme for a DMCA-like notice takedown process, long bemoaned by creatives online for encouraging broader and automated online censorship. Other policymakers propose technical requirements like watermarking that are riddled with practical points of failure.
Among these dubious solutions is the growing prominence of AI licensing schemes which limit the potential of AI development to the highest bidders. This intrusion on fair use creates a paywall protecting only the biggest tech and media publishing companies—cutting out the actual creators these licenses nominally protect. It’s like helping a bullied kid by giving them more lunch money to give their bully.
This is the wrong approach. Easy solutions like expanding copyright hurt everyone, particularly the smaller artists, researchers, and businesses who cannot compete with the big gatekeepers of the industry. AI has threatened the fair pay and treatment of creative labor, but sacrificing secondary use doesn’t remedy the underlying imbalance of power between labor and oligopolies.
People have a right to engage with culture and express themselves unburdened by private cartels. Policymakers should focus on narrowly crafted policies to preserve these rights, and keep rulemaking constrained to tested solutions addressing actual harms.
You can read our comments here.
EFF Thanks Fastly for Donated Tools to Help Keep Our Website Secure
EFF’s most important platform for welcoming everyone to join us in our fight for a better digital future is our website, eff.org. We thank Fastly for their generous in-kind contribution of services helping keep EFF’s website online.
Eff.org was first registered in 1990, just three months after the organization was founded, and long before the web was an essential part of daily life. Our website and the fight for digital rights grew rapidly alongside each other. However, along with rising threats to our freedoms online, threats to our site have also grown.
It takes a village to keep eff.org online in 2025. Every day our staff work tirelessly to protect the site from everything from DDoS attacks to automated hacking attempts, and everything in between. As AI has taken off, so have crawlers and bots that scrape content to train LLMs, sometimes without respecting rate limits we’ve asked them to observe. Newly donated security add-ons from Fastly help us automate DDoS prevention and rate limiting, preventing our servers from getting overloaded when misbehaving visitors abuse our sites. Fastly also caches the content from our site around the globe, meaning that visitors from all over the world can access eff.org and our other sites quickly and easily.
EFF is member-supported by people who share our vision for a better digital future. We thank Fastly for showing their support for our mission to ensure that technology supports freedom, justice, and innovation for all people of the world with an in-kind gift of their full suite of services.
A collaboration across continents to solve a plastics problem
More than 60,000 tons of plastic makes the journey down the Amazon River to the Atlantic Ocean every year. And that doesn’t include what finds its way to the river’s banks, or the microplastics ingested by the region’s abundant and diverse wildlife.
It’s easy to demonize plastic, but it has been crucial in developing the society we live in today. Creating materials that have the benefits of plastics while reducing the harms of traditional production methods is a goal of chemical engineering and materials science labs the world over, including that of Bradley Olsen, the Alexander and I. Michael Kasser (1960) Professor of Chemical Engineering at MIT.
Olsen, a Fulbright Amazonia scholar and the faculty lead of MIT-Brazil, works with communities to develop alternative plastics solutions that can be derived from resources within their own environments.
“The word that we use for this is co-design,” says Olsen. “The idea is, instead of engineers just designing something independently, they engage and jointly design the solution with the stakeholders.”
In this case, the stakeholders were small businesses around Manaus in the Brazilian state of Amazonas curious about the feasibility of bioplastics and other alternative packaging.
“Plastics are inherent to modern life and actually perform key functions and have a really beautiful chemistry that we want to be able to continue to leverage, but we want to do it in a way that is more earth-compatible,” says Desirée Plata, MIT associate professor of civil and environmental engineering.
That’s why Plata joined Olsen in creating the course 1.096/10.496 (Design of Sustainable Polymer Systems) in 2021. Now, as a Global Classroom offering under the umbrella of MISTI since 2023, the class brings MIT students to Manaus during the three weeks of Independent Activities Period (IAP).
“In my work running the Global Teaching Labs in Brazil since 2016, MIT students collaborate closely with Brazilian undergraduates,” says Rosabelli Coelho-Keyssar, managing director of MIT-Brazil and MIT-Amazonia. “This peer-learning model was incorporated into the Global Classroom in Manaus, ensuring that MIT and Brazilian students worked together throughout the course.”
The class leadership worked with climate scientist and MIT alumnus Carlos Nobre PhD ’83, who facilitated introductions to faculty at the Universidade do Estado do Amazonas (UEA), the state university of Amazonas. The group then scouted businesses in the Amazonas region who would be interested in partnering with the students.
“In the first year, it was Comunidade Julião, a community of people living on the edge of the Tarumã Mirim River west of Manaus,” says Olsen. “This year, we worked with Comunidade Para Maravilha, a community living in the dry land forest east of Manaus.”
A tailored solution
Plastic, by definition, is made up of many small carbon-based molecules, called monomers, linked by strong bonds into larger molecules called polymers. Linking different monomers and polymers in different ways creates different plastics — from trash bags to a swimming pool float to the dashboard of a car. Plastics are traditionally made from petroleum byproducts that are easy to link together, stable, and plentiful.
But there are ways to reduce the use of petroleum-based plastics. Packaging can be made from materials found within the local ecosystem, as was the focus of the 2024 class. Or carbon-based monomers can be extracted from high-starch plant matter through a number of techniques, the goal of the 2025 cohort. But plants that grow well in one location might not in another. And bioplastic production facilities can be tricky to install if the necessary resources aren’t immediately available.
“We can design a whole bunch of new sustainable chemical processes, use brand new top-of-the-line catalysts, but if you can’t actually implement them sustainably inside an environment, it falls short on a lot of the overall goals,” says Brian Carrick, a PhD candidate in the Olsen lab and a teaching assistant for the 2025 course offering.
So, identifying local candidates and tailoring the process is key. The 2025 MIT cohort collaborated with students from throughout the Amazonas state to explore the local flora, study its starch content in the lab, and develop a new plastic-making process — all within the three weeks of IAP.
“It’s easy when you have projects like this to get really locked into the MIT vacuum of just doing what sounds really cool, which isn’t always effective or constructive for people actually living in that environment,” says Claire Underwood, a junior chemical-biological engineering major who took the class. “That’s what really drew me into the project, being able to work with people in Brazil.”
The 31 students visited a protected area of the Amazon rainforest on Day One. They also had chances throughout IAP to visit the Amazon River, where the potential impact of their work became clear as they saw plastic waste collecting on its banks.
“That was a really cool aspect to the class, for sure, being able to actually see what we were working towards protecting and what the goal was,” says Underwood.
They interviewed stakeholders, such as farmers who could provide the feedstock and plastics manufacturers who could incorporate new techniques. Then, they got into the classroom, where massive intellectual ground was covered in a crash course on the sustainable design process, the nitty gritty of plastic production, and the Brazilian cultural context on how building such an industry would affect the community. For the final project, they separated into teams to craft preliminary designs of process and plant using a simplified model of these systems.
Connecting across boundaries
Working in another country brought to the fore how interlinked policy, culture, and technical solutions are.
“I know nothing about economics, and especially not Brazilian economics and politics,” says Underwood. But one of the Brazilian students in her group was a management and finance major. “He was super helpful when we were trying to source things and account for inflation and things like that — knowing what was feasible, and not just academically feasible.”
Before they parted at the end of IAP, each team presented their proposals to a panel of company representatives and Brazilian MIT alumni who chose first-, second-, and third-place winners. While more research is needed before comfortably implementing the ideas, the experience seemed to generate legitimate interest in creating a local bioplastics production facility.
Understanding sustainable design concepts and how to do interdisciplinary work is an important skill to learn. Even if these students don’t wind up working on bioplastics in the heart of the Amazon, being able to work with people of different perspectives — be it a different discipline or a different culture — is valuable in virtually every field.
“The exchange of knowledge across different fields and cultures is essential for developing innovative and sustainable solutions to global challenges such as climate change, waste management, and the development of eco-friendly materials,” says Taisa Sampaio, a PhD candidate in materials chemistry at UEA and a co-instructor for the course. “Programs like this are crucial in preparing professionals who are more aware and better equipped to tackle future challenges.”
Right now, Olsen and Plata are focused on harnessing the deep well of connections and resources they have around Manaus, but they hope to develop that kind of network elsewhere to expand this sustainable design exploration to other regions of the world.
“A lot of sustainability solutions are hyperlocal,” says Plata. “Understanding that not all locales are exactly the same is really powerful and important when we’re thinking about sustainability challenges. And it’s probably where we've gone wrong with the one-size-fits-all or silver-bullet solution-seeking that we’ve been doing for the past many decades.”
Collaborations for the 2026 trip are still in development but, as Olsen says, “we hope this is an experience we can continue to offer long into the future, based on how positive it has been for our students and our Brazilian partners.”
High-performance computing, with much less code
Many companies invest heavily in hiring talent to create the high-performance library code that underpins modern artificial intelligence systems. NVIDIA, for instance, developed some of the most advanced high-performance computing (HPC) libraries, creating a competitive moat that has proven difficult for others to breach.
But what if a couple of students could, within a few months, match state-of-the-art HPC libraries using a few hundred lines of code, instead of tens or hundreds of thousands?
That’s what researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have shown with a new programming language called Exo 2.
Exo 2 belongs to a new category of programming languages that MIT Professor Jonathan Ragan-Kelley calls “user-schedulable languages” (USLs). Instead of hoping that an opaque compiler will auto-generate the fastest possible code, USLs put programmers in the driver's seat, allowing them to write “schedules” that explicitly control how the compiler generates code. This enables performance engineers to transform simple programs that specify what they want to compute into complex programs that do the same thing as the original specification, but much, much faster.
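The schedule-as-explicit-rewrite idea can be illustrated with a toy sketch in plain Python. This is not Exo syntax; loop tiling here merely stands in for the kinds of transformations a USL schedule would let the programmer apply while the compiler checks they preserve the specification's meaning:

```python
# A toy "specification": the straightforward way to sum a matrix.
def sum_spec(a):
    total = 0
    for i in range(len(a)):
        for j in range(len(a[0])):
            total += a[i][j]
    return total

# The same computation after a "scheduling" transformation: loop tiling,
# which reorders the iteration space into cache-friendly blocks. In a USL,
# the programmer requests this rewrite explicitly instead of hoping an
# opaque compiler discovers it.
def sum_tiled(a, tile=2):
    n, m = len(a), len(a[0])
    total = 0
    for ii in range(0, n, tile):                      # outer loops walk tiles
        for jj in range(0, m, tile):
            for i in range(ii, min(ii + tile, n)):    # inner loops walk
                for j in range(jj, min(jj + tile, m)):  # elements of a tile
                    total += a[i][j]
    return total

a = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert sum_spec(a) == sum_tiled(a) == 45  # same result, different loop order
```

The key property is that the transformed program is provably equivalent to the specification; only the order of work, and therefore the performance, changes.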
One of the limitations of existing USLs (like the original Exo) is their relatively fixed set of scheduling operations, which makes it difficult to reuse scheduling code across different “kernels” (the individual components in a high-performance library).
In contrast, Exo 2 enables users to define new scheduling operations externally to the compiler, facilitating the creation of reusable scheduling libraries. Lead author Yuka Ikarashi, an MIT PhD student in electrical engineering and computer science and CSAIL affiliate, says that Exo 2 can reduce total schedule code by a factor of 100 and deliver performance competitive with state-of-the-art implementations on multiple different platforms, including Basic Linear Algebra Subprograms (BLAS) that power many machine learning applications. This makes it an attractive option for engineers in HPC focused on optimizing kernels across different operations, data types, and target architectures.
“It’s a bottom-up approach to automation, rather than doing an ML/AI search over high-performance code,” says Ikarashi. “What that means is that performance engineers and hardware implementers can write their own scheduling library, which is a set of optimization techniques to apply on their hardware to reach the peak performance.”
One major advantage of Exo 2 is that it reduces the amount of coding effort needed at any one time by reusing the scheduling code across applications and hardware targets. The researchers implemented a scheduling library with roughly 2,000 lines of code in Exo 2, encapsulating reusable optimizations that are linear-algebra specific and target-specific (AVX512, AVX2, Neon, and Gemmini hardware accelerators). This library consolidates scheduling efforts across more than 80 high-performance kernels with up to a dozen lines of code each, delivering performance comparable to, or better than, MKL, OpenBLAS, BLIS, and Halide.
Exo 2 includes a novel mechanism called “Cursors” that provides what the researchers call a “stable reference” for pointing at the object code throughout the scheduling process. Ikarashi says that a stable reference is essential for users to encapsulate schedules within a library function, as it renders the scheduling code independent of object-code transformations.
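To see why a stable reference matters, consider this toy Python sketch (not Exo 2's actual Cursors implementation): a plain positional index into the program is invalidated as soon as a transformation inserts code, whereas a reference that tracks statements by identity keeps resolving correctly across rewrites.

```python
# Illustrative sketch of a "stable reference": a cursor identifies a
# statement by a persistent id rather than by its position, so rewrites
# that insert or reorder code do not invalidate it.

class Program:
    def __init__(self, stmts):
        # assign each statement a persistent id on construction
        self.stmts = [(i, s) for i, s in enumerate(stmts)]

    def insert_before(self, sid, new_stmt):
        """A transformation: insert a statement; existing ids are unchanged."""
        pos = next(k for k, (i, _) in enumerate(self.stmts) if i == sid)
        new_id = max(i for i, _ in self.stmts) + 1
        self.stmts.insert(pos, (new_id, new_stmt))

    def resolve(self, sid):
        """A cursor lookup: find a statement by id, stable across rewrites."""
        return next(s for i, s in self.stmts if i == sid)

p = Program(["load x", "compute y", "store y"])
cursor = 1                        # points at "compute y" by id, not by index
p.insert_before(1, "prefetch x")  # the rewrite shifts every position...
assert p.resolve(cursor) == "compute y"  # ...but the cursor still resolves
```

A scheduling library built on such references can be written once and applied to many kernels, because its handles survive whatever earlier transformations have already reshaped the code.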
“We believe that USLs should be designed to be user-extensible, rather than having a fixed set of operations,” says Ikarashi. “In this way, a language can grow to support large projects through the implementation of libraries that accommodate diverse optimization requirements and application domains.”
Exo 2’s design allows performance engineers to focus on high-level optimization strategies while ensuring that the underlying object code remains functionally equivalent through the use of safe primitives. In the future, the team hopes to expand Exo 2’s support for different types of hardware accelerators, like GPUs. Several ongoing projects aim to improve the compiler analysis itself, in terms of correctness, compilation time, and expressivity.
Ikarashi and Ragan-Kelley co-authored the paper with graduate students Kevin Qian and Samir Droubi, Alex Reinking of Adobe, and former CSAIL postdoc Gilbert Bernstein, now a professor at the University of Washington. This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and the U.S. National Science Foundation, while the first author was also supported by Masason, Funai, and Quad Fellowships.
EFFecting Change: Is There Hope for Social Media?
Please join EFF for the next segment of EFFecting Change, our livestream series covering digital privacy and free speech.
EFFecting Change Livestream Series:Is There Hope for Social Media?
Thursday, March 20th
12:00 PM - 1:00 PM Pacific - Check Local Time
This event is LIVE and FREE!
Users are frustrated with legacy social media companies. Is it possible to effectively build the kinds of communities we want online while avoiding the pitfalls that have driven people away?
Join our panel featuring EFF Civil Liberties Director David Greene, EFF Director for International Freedom of Expression Jillian York, Mastodon's Felix Hlatky, Bluesky's Emily Liu, and Spill's Kenya Parham as they explore the future of free expression online and why social media might still be worth saving.
We hope you and your friends can join us live! Be sure to spread the word, and share our past livestreams. Please note that all events will be recorded for later viewing on our YouTube page.
Want to make sure you don’t miss our next livestream? Here’s a link to sign up for updates about this series: eff.org/ECUpdates.
EFF Joins AllOut’s Campaign Calling for Meta to Stop Hate Speech Against LGBTQ+ Community
In January, Meta made targeted changes to its hateful conduct policy that would allow dehumanizing statements to be made about certain vulnerable groups. More specifically, Meta’s hateful conduct policy now contains the following text:
People sometimes use sex- or gender-exclusive language when discussing access to spaces often limited by sex or gender, such as access to bathrooms, specific schools, specific military, law enforcement, or teaching roles, and health or support groups. Other times, they call for exclusion or use insulting language in the context of discussing political or religious topics, such as when discussing transgender rights, immigration, or homosexuality. Finally, sometimes people curse at a gender in the context of a romantic break-up. Our policies are designed to allow room for these types of speech.
The revision of this policy timed to Trump’s second election demonstrates that the company is focused on allowing more hateful speech against specific groups, with a noticeable and particular focus on enabling more speech challenging LGBTQ+ rights. For example, the revised policy removed previous prohibitions on comparing people to inanimate objects, feces, and filth based on their protected characteristics, such as sexual identity.
In response, LGBTQ+ rights organization AllOut gathered social justice groups and civil society organizations, including EFF, to demand that Meta immediately reverse the policy changes. By normalizing such speech, Meta risks increasing hate and discrimination against LGBTQ+ people on Facebook, Instagram and Threads.
The campaign is supported by the following partners: All Out, Global Project Against Hate and Extremism (GPAHE), Electronic Frontier Foundation (EFF), EDRi - European Digital Rights, Bits of Freedom, SUPERRR Lab, Danes je nov dan, Corporación Caribe Afirmativo, Fundación Polari, Asociación Red Nacional de Consejeros, Consejeras y Consejeres de Paz LGBTIQ+, La Junta Marica, Asociación por las Infancias Transgénero, Coletivo LGBTQIAPN+ Somar, Coletivo Viveração, ADT - Associação da Diversidade Tabuleirense, Casa Marielle Franco Brasil, Articulação Brasileira de Gays - ARTGAY, Centro de Defesa dos Direitos da Criança e do Adolescente Padre Marcos Passerini (CDMP), Agência Ambiental Pick-upau, Núcleo Ypykuéra, Kurytiba Metropole, and ITTC - Instituto Terra, Trabalho e Cidadania.
Sign the AllOut petition (external link) and tell Meta: Stop hate speech against LGBT+ people!
If Meta truly values freedom of expression, we urge it to redirect its focus to empowering some of its most marginalized speakers, rather than empowering only their detractors and oppressive voices.
RIP Mark Klein
2006 AT&T whistleblower Mark Klein has died.
MIT engineers turn skin cells directly into neurons for cell therapy
Converting one type of cell to another — for example, a skin cell to a neuron — can be done through a process that requires the skin cell to be induced into a “pluripotent” stem cell, then differentiated into a neuron. Researchers at MIT have now devised a simplified process that bypasses the stem cell stage, converting a skin cell directly into a neuron.
Working with mouse cells, the researchers developed a conversion method that is highly efficient and can produce more than 10 neurons from a single skin cell. If replicated in human cells, this approach could enable the generation of large quantities of motor neurons, which could potentially be used to treat patients with spinal cord injuries or diseases that impair mobility.
“We were able to get to yields where we could ask questions about whether these cells can be viable candidates for the cell replacement therapies, which we hope they could be. That’s where these types of reprogramming technologies can take us,” says Katie Galloway, the W. M. Keck Career Development Professor in Biomedical Engineering and Chemical Engineering.
As a first step toward developing these cells as a therapy, the researchers showed that they could generate motor neurons and engraft them into the brains of mice, where they integrated with host tissue.
Galloway is the senior author of two papers describing the new method, which appear today in Cell Systems. MIT graduate student Nathan Wang is the lead author of both papers.
From skin to neurons
Nearly 20 years ago, scientists in Japan showed that by delivering four transcription factors to skin cells, they could coax them to become induced pluripotent stem cells (iPSCs). Similar to embryonic stem cells, iPSCs can be differentiated into many other cell types. This technique works well, but it takes several weeks, and many of the cells don’t end up fully transitioning to mature cell types.
“Oftentimes, one of the challenges in reprogramming is that cells can get stuck in intermediate states,” Galloway says. “So, we’re using direct conversion, where instead of going through an iPSC intermediate, we’re going directly from a somatic cell to a motor neuron.”
Galloway’s research group and others have demonstrated this type of direct conversion before, but with very low yields — less than 1 percent. In Galloway’s previous work, she used a combination of six transcription factors plus two other proteins that stimulate cell proliferation. Each of those eight genes was delivered using a separate viral vector, making it difficult to ensure that each was expressed at the correct level in each cell.
In the first of the new Cell Systems papers, Galloway and her students reported a way to streamline the process so that skin cells can be converted to motor neurons using just three transcription factors, plus the two genes that drive cells into a highly proliferative state.
Using mouse cells, the researchers started with the original six transcription factors and experimented with dropping them out, one at a time, until they reached a combination of three — NGN2, ISL1, and LHX3 — that could successfully complete the conversion to neurons.
Once the number of genes was down to three, the researchers could use a single modified virus to deliver all three of them, allowing them to ensure that each cell expresses each gene at the correct levels.
Using a separate virus, the researchers also delivered genes encoding p53DD and a mutated version of HRAS. These genes drive the skin cells to divide many times before they start converting to neurons, allowing for a much higher yield of neurons, about 1,100 percent.
“If you were to express the transcription factors at really high levels in nonproliferative cells, the reprogramming rates would be really low, but hyperproliferative cells are more receptive. It’s like they’ve been potentiated for conversion, and then they become much more receptive to the levels of the transcription factors,” Galloway says.
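The arithmetic behind these yields is straightforward: because the hyperproliferative cells divide several times before converting, the final neuron count can exceed the number of starting cells even when only a fraction of the expanded population converts. A minimal toy model sketches this (the division count and conversion fraction below are illustrative assumptions, not figures reported in the papers):

```python
# Toy model of yield from proliferation followed by direct conversion.
# The specific numbers here are illustrative assumptions, not values
# reported in the Cell Systems papers.

def conversion_yield(starting_cells: int, divisions: int,
                     conversion_fraction: float) -> float:
    """Percent yield: neurons produced per 100 starting skin cells."""
    expanded = starting_cells * 2 ** divisions   # cells after proliferation
    neurons = expanded * conversion_fraction     # cells that become neurons
    return 100 * neurons / starting_cells

# For example, 4 doublings with a 70% conversion fraction would give a
# yield above 1,000 percent, i.e. more than 10 neurons per starting cell.
print(conversion_yield(1000, divisions=4, conversion_fraction=0.7))  # 1120.0
```

This illustrates why proliferation before conversion matters: without the expansion step (zero divisions), the same conversion fraction could never exceed 100 percent yield.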
The researchers also developed a slightly different combination of transcription factors that allowed them to perform the same direct conversion using human cells, but with a lower efficiency rate — between 10 and 30 percent, the researchers estimate. This process takes about five weeks, which is slightly faster than converting the cells to iPSCs first and then turning them into neurons.
Implanting cells
Once the researchers identified the optimal combination of genes to deliver, they began working on the best ways to deliver them, which was the focus of the second Cell Systems paper.
They tried out three different delivery viruses and found that a retrovirus achieved the most efficient rate of conversion. Reducing the density of cells grown in the dish also helped to improve the overall yield of motor neurons. This optimized process, which takes about two weeks in mouse cells, achieved a yield of more than 1,000 percent.
Working with colleagues at Boston University, the researchers then tested whether these motor neurons could be successfully engrafted into mice. They delivered the cells to a part of the brain known as the striatum, which is involved in motor control and other functions.
After two weeks, the researchers found that many of the neurons had survived and seemed to be forming connections with other brain cells. When grown in a dish, these cells showed measurable electrical activity and calcium signaling, suggesting the ability to communicate with other neurons. The researchers now hope to explore the possibility of implanting these neurons into the spinal cord.
The MIT team also hopes to increase the efficiency of this process for human cell conversion, which could allow for the generation of large quantities of neurons that could be used to treat spinal cord injuries or diseases that affect motor control, such as ALS. Clinical trials using neurons derived from iPSCs to treat ALS are now underway, but expanding the number of cells available for such treatments could make it easier to test and develop them for more widespread use in humans, Galloway says.
The research was funded by the National Institute of General Medical Sciences and the National Science Foundation Graduate Research Fellowship Program.