Feed aggregator
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
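The limit in question is the thermionic, or "Boltzmann," limit on a transistor's subthreshold swing, a standard result in device physics (our gloss, not the researchers' wording):

```latex
% Subthreshold swing: the gate voltage required to change the drain
% current by a factor of ten; thermal carrier injection sets a floor.
SS = \frac{dV_G}{d(\log_{10} I_D)}
   \;\ge\; \frac{k_B T}{q}\ln 10
   \;\approx\; 60~\mathrm{mV/decade}
   \qquad (T = 300~\mathrm{K})
```

Because carriers must be thermally injected over an energy barrier, no conventional transistor can switch with less than about 60 millivolts of gate voltage per tenfold change in current at room temperature, which sets a floor on operating voltage and hence on the energy consumed per switching event.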
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
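For context, magnetoresistive devices are commonly characterized by the relative resistance change between their two magnetic states; the framing below uses that standard convention and is not a figure reported by the team:

```latex
% Magnetoresistance ratio between the antiparallel (AP) and
% parallel (P) magnetic states
\mathrm{MR} = \frac{R_{AP} - R_{P}}{R_{P}} \times 100\%
```

In this convention, a few-percent change in current corresponds to an MR of a few percent, while a factor-of-10 change in current corresponds to an MR of roughly 900 percent.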
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. The researchers' approach can combine both functions into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Mapping the ocean with autonomous sensors
In late October 2025, Tropical Storm Melissa moved through the Caribbean Sea with moderate winds, drawing little attention. But on Oct. 25, aided by a patch of warm ocean, the storm rapidly intensified. By the time it made landfall in Jamaica, it was one of the strongest Atlantic hurricanes on record, uprooting trees, tearing the roofs from buildings, and causing catastrophic flooding and power outages.
Ravi Pappu SM ’95, PhD ’01 blames the surprise on our inability to gather high-quality ocean data.
“The storm intensified because of a small pool of hot water in the Caribbean Sea that fed it energy,” Pappu explains. “These pools are everywhere. They can be hundreds of kilometers wide and are literally invisible to us. If we knew about that pool, we could say very precisely how the hurricane would intensify and better deal with it.”
Pappu thinks he has a way to solve that problem. He is the founder of Apeiron Labs, a company deploying low-cost autonomous ocean sensors to capture more data, in more places, and at a lower cost than is possible today. The company’s devices roam the ocean up to a quarter mile below the surface and continuously gather data on temperature, acoustics, salinity, and more, providing a real-time look at one of the planet’s last great mysteries. He says the sensors can do for the ocean what small, modular CubeSat satellites did for Earth observation from space.
When the devices are ready to be recharged, trackers make it easy to scoop them from the ocean surface. Pappu envisions the recovery process being done by autonomous boats in the future.
“Humanity needs ocean measurements, and we need them at a scale that has never been attempted before,” Pappu says. “It’s a massively hard problem. In the last century, oceanographers resigned themselves to calling it the century of undersampling. If we are successful, we will have a much more fine-grained understanding of our oceans and how they impact humans. That’s what drives us.”
Homework
Pappu came to MIT after completing a 10-year homework assignment. It started when he was a child in India in the 1980s, when he saw a hologram on the cover of National Geographic for the first time.
“I was so taken by it that I decided I needed to learn how to make those three-dimensional images,” Pappu recalls. “I learned what I could by reading books and papers. I didn’t know who invented the hologram until I read a book about MIT’s Media Lab. The book named the person who invented the rainbow hologram, so I wrote him a letter. I didn’t know his address, so I just wrote on the envelope, ‘Steve Benton, holography researcher, MIT, USA.’”
To Pappu’s surprise, the letter reached Benton, and the former Media Lab professor even wrote back, suggesting further topics for him to study.
Pappu never forgot that. He earned a bachelor’s degree in electrical engineering in India, then earned his master’s degree at Villanova University, taking all the optics classes he could.
“Eventually, about 10 years after I saw my first hologram, I wrote to Steve and I said, ‘I did all these things you asked me, now I want to study with you,’” Pappu says. “That’s how I got into MIT.”
Pappu studied under Benton for the next three years. He also studied under Professor Neil Gershenfeld as part of his PhD. Following graduation, Pappu and four classmates started ThingMagic, a consulting company that eventually sold RFID readers. ThingMagic was acquired in 2010. Pappu returned to MIT for two years as a visiting scientist around the time of the acquisition.
Following that experience, Pappu worked at In-Q-Tel, an organization that invested in ThingMagic and other companies with potential to advance national security. It was there that Pappu realized how badly the world needed large-scale, inexpensive ocean sensing.
“All of the ocean sensing up to that point, and even today, was about making a really expensive thing that costs $20 million, goes to the bottom of the ocean, and stays there for five years,” Pappu says. “We needed things that are cheap and scalable to deploy wherever you need them for as long as you want.”
Pappu officially founded Apeiron Labs in 2022.
“What we’re focused on is figuring out how the ocean works,” Pappu says. “How warm is it? What is the pH? How salty is it? These things vary from place to place every 10 kilometers or so. It varies over time, and it varies by season. If we knew the details of the ocean with the same fidelity we have for the atmosphere, we would be able to tell exactly when and where hurricanes hit. It would mean less uncertainty.”
Apeiron’s ocean-sensing devices are each 3 feet long and weigh about 20 pounds. They’re designed to be dropped off a boat or plane with biodegradable parachutes and stay in the ocean for six months. Each device continuously sends data to the cloud, is controllable through a cloud-based ocean operating system, and is accessible on a mobile phone.
“We lower the carbon footprint and cost of gathering ocean data because everything else needs a diesel ship — and a fully crewed ship costs $100,000 a day,” Pappu says. “By the time you collect the first data in the old model, you’ve already committed a lot of money, in addition to millions of dollars for the sensors.”
The company’s devices currently have two types of sensors: one for measuring salinity, temperature, and depth, and the other that uses a hydrophone to passively listen for things like submarines and whales.
That could be used to detect the low-frequency calls and clicks of endangered whales and other marine species. Currently, fishermen must look for whales manually with spotters on ships or planes. The data could also be used to improve weather forecasts, monitor noise from offshore energy projects, and track currents.
“Currents are determined by temperature and salinity, so if there’s an oil spill, our data could help determine where that spill is going,” Pappu says. “Or if you’re a fisherman, knowing where the water changes from warm to cold, which is where the fish hang out, is very useful.”
An ocean of possibilities
Apeiron Labs has worked with government defense agencies including the U.S. Navy over the last two years. The company has also tested its devices off the coast of California and in Boston Harbor.
“The most important thing is, when we show people our approach and what we’ve demonstrated so far, they are no longer asking, ‘Can it be done?’ they’re asking, ‘What can we do with it?’” Pappu says. “Our customers have spent decades working in the ocean and they understand how novel these capabilities are.”
Of all the possibilities, improved storm forecasting could be the one Pappu is most excited about.
“Our mission is to lower the barriers to ocean data,” Pappu says. “The ocean is a huge determinant of weather, climate, and short-term forecasting. Despite our best efforts to predict the intensity of storms, sudden changes are still the norm, and much of that comes down to a lack of understanding of our oceans. If we were monitoring these things over long periods of time and finer spatial scales, we could see these storms coming much earlier with more certainty.”
MIT student Jack Carson named 2026 Udall Scholar
Jack Carson, a second-year undergraduate at MIT majoring in electrical engineering and computer science, has been named a 2026 Udall Scholar, one of up to 65 undergraduates nationally to receive the prestigious $7,500 award.
The Udall Scholarship honors students who have demonstrated a commitment to the environment, Indigenous health care, or tribal public policy. Carson is only the third MIT student to win this award, and the first to win for tribal policy.
Carson, a member of the Cherokee Nation and resident of Oklahoma, exemplifies the multidisciplinary approach to problem-solving that the Udall Scholarship seeks to honor. His work spans artificial intelligence, biomedical research, Indigenous community development, and ethics.
"Jack is the type of leader the Udall Foundation exists to support," says Kim Benard, associate dean for distinguished fellowships. "He's not only conducting cutting-edge research, but he's actively creating opportunities for Indigenous students to enter tech fields."
At MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Carson works in the Barzilay Lab, developing multiomics models for personalized therapeutic target identification. His work on deep learning and statistical physics has resulted in a sole-author paper published at the International Conference on Machine Learning (ICML).
Carson founded Code.Tulsa, a summer technology program designed to introduce Indigenous high school students to computer science and tech careers. The initiative addresses a significant gap: Indigenous communities remain highly underrepresented in technology fields, despite the potential for tech to advance tribal sovereignty and economic development.
This year, Carson won the Elie Wiesel Prize in Ethics Essay Contest. He is an accomplished musician who has performed at Carnegie Hall and with the National Opera, a motorcycle racer, and a self-described philosopher deeply committed to questions of justice and responsibility.
MIT School of Engineering faculty receive awards in winter 2026
Each year, faculty and researchers across the MIT School of Engineering are recognized with prestigious awards for their contributions to research, technology, society, and education. To celebrate these achievements, the school periodically highlights select honors received by members of its departments, institutes, labs, and centers. The following individuals were recognized in winter 2026:
Arup K. Chakraborty, the John M. Deutch (1961) Institute Professor in the departments of Chemical Engineering, Chemistry, and Physics, and the founding director of the Institute for Medical Engineering and Science, as well as James J. Collins, the Termeer Professor of Medical Engineering and Science in the Department of Biological Engineering and IMES, were named 2026 Laureates of the Tel Aviv University International Prize in Biophysics. The prize recognizes outstanding scientists whose work has significantly advanced the understanding of biological systems through physical principles.
Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor in the Department of Electrical Engineering and Computer Science, received the 2025 IEEE Journal of Solid-State Circuits Test of Time Award. The award recognizes an outstanding paper published in the IEEE Journal of Solid-State Circuits at least 10 years prior that has had significant impact on its field.
Charles Harvey, a professor in the Department of Civil and Environmental Engineering; Piotr Indyk, the Thomas D. and Virginia W. Cabot Professor in the Department of Electrical Engineering and Computer Science; John Henry Lienhard, the Abdul Latif Jameel Professor of Water and Mechanical Engineering in the Department of Mechanical Engineering; Frances Ross, the TDK Professor in Materials Science and Engineering; Zoltán Sandor Spakovszky, the T. Wilson (1953) Professor in Aeronautics; and Ram Sasisekharan, the Alfred H. Caspary Professor of Biological Physics and Physics in the Department of Biological Engineering, were elected to the National Academy of Engineering for 2026. One of the highest professional distinctions for engineers, membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education,” and to “the pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education.”
Michael Howland, the Jeffrey Cheah Career Development Professor and assistant professor in the Department of Civil and Environmental Engineering, received a 2026 Faculty Early Career Development (CAREER) Award from the National Science Foundation. The award supports early-career faculty who have the potential to serve as academic role models in research and education and to lead advances in the mission of their department or organization.
Yoon Kim, associate professor in the Department of Electrical Engineering and Computer Science; Anand Natarajan, an associate professor in the Department of Electrical Engineering and Computer Science; and Mengjia Yan, ITT Career Development Professor in Computer Technology and associate professor in the Department of Electrical Engineering and Computer Science, were named 2026 Sloan Research Fellows. Sloan Research Fellowships support fundamental research conducted by early-career scientists, and they are awarded annually to early-career researchers whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders.
Carlos Portela, the Robert N. Noyce Career Development Professor and associate professor in the Department of Mechanical Engineering, has received a 2026 Young Investigator Award from the Office of Naval Research. The Young Investigator Program seeks to identify and support academic scientists and engineers who are in their first or second full-time tenure-track or tenure-track-equivalent academic appointment, who have received their doctorate or equivalent degree in the past seven years, and who show exceptional promise for doing creative research.
Ellen Roche, the Abby Rockefeller Mauzé Professor and associate department head for research in the Department of Mechanical Engineering and a professor in the Institute for Medical Engineering and Science, received the 2026 Sony Women in Technology Award with Nature. The award recognizes exceptional early- to mid-career women researchers in technology who through their research are driving a positive impact on society and the planet.
Tess Smidt, an associate professor in the Department of EECS, was named co–principal investigator on a National Science Foundation (NSF) AI Research Institute award and also received a 2025 Department of Energy Office of Science Early Career Research Program Award. The NSF AI Materials Institute (AI-MI) aims to propel foundational AI research past the limitations of existing AI algorithms by pursuing materials discovery and conquering knowledge- and data-centric challenges. The DoE Early Career Research Program provides five-year awards to exceptional early career researchers at U.S. academic institutions, DoE National Laboratories, and Office of Science User Facilities to stimulate new research directions in mission critical areas supported by DoE’s Office of Science.
Antonio Torralba, the Delta Electronics Professor and faculty head of AI+D in the Department of EECS, was elected to the 2025 cohort of Association for Computing Machinery Fellows. ACM Fellows, the highest honor bestowed by the professional organization, are registered members of the society selected by their peers for outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.
Harry L. Tuller, a professor in the Department of Materials Science and Engineering, received The Senior Scientist Award from the International Society for Solid State Ionics. The Senior Scientist Award, the most prestigious award of the International Society for Solid State Ionics, is presented to a senior solid-state ionics researcher who has made outstanding contributions to the science and engineering of solid-state ionics.
Vinod Vaikuntanathan, the Ford Foundation Professor of Engineering in the Department of Electrical Engineering and Computer Science, was named a 2026 fellow of the International Association for Cryptologic Research. The IACR established the IACR Fellows Program to recognize outstanding IACR members for technical and professional contributions.
Celebrating dorm-to-market social entrepreneurship at MIT
Over 200 students, alumni, faculty, staff, funders, and community collaborators gathered at the MIT Media Lab on April 15 for the 25th annual IDEAS Social Innovation Incubator Showcase and Awards, hosted by the Priscilla King Gray (PKG) Center for Social Impact.
Since its founding in 2001, the PKG Center’s IDEAS Incubator has launched hundreds of social ventures in over 60 countries, guiding MIT’s technical talent toward urgent social challenges — from energy and climate to health care, education, and economic development.
“Global and local challenges are increasingly complex and interconnected,” said Lauren Tyger, assistant dean for social innovation at the PKG Center and director of IDEAS. “IDEAS educates technical founders in systems thinking and community-based innovation, helping students develop business models that achieve both measurable social outcomes and financial sustainability.”
IDEAS alumni celebrated
The event celebrated the many successful social ventures launched by IDEAS alumni with a 25-Year Impact Report and a keynote speech from IDEAS alumnus Bill Thies ’01, ’02, MNG ’02, PhD ’09.
Thies traced his tuberculosis medication adherence work in India from a low-cost electronic pillbox through multiple iterations that helped shift India’s treatment policies toward patient autonomy. Ultimately, his work led to Nikshay, a national electronic medical records platform now supporting 150 million people, which recently transitioned to full government control.
“Innovations can open doors for much more important changes than the innovations themselves,” Thies said. Limitations to technical interventions surface important questions, such as “what policies do we want to change, to become more supportive and human-centered? And how can technology be a bridge to that new world we would envision?”
Thinking back on the influence of IDEAS on his own path, Thies reflected: “I always assumed that in IDEAS we were incubating projects. But what I’ve come to realize is that it’s actually the other way around: the projects are incubating us. We are the ones who will ultimately drive the change we hope to see in the world.”
Vision for scaling social entrepreneurship at MIT and catalytic gift announced
Thies’ message was echoed by Chancellor for Academic Advancement Eric Grimson, who explained how IDEAS aligns with MIT’s strategic initiatives, including MIT’s Generative AI Impact Consortium (MGAIC), Health and Life Sciences Collaborative (MIT HEALS), and the Climate Project, as well as President Sally Kornbluth’s and Provost Anantha Chandrakasan’s recent call to accelerate entrepreneurship. “Many of the current presidential initiatives naturally include an opportunity for social entrepreneurship,” said Grimson, who applauded IDEAS alumni pursuing ventures in climate, health, and AI-powered social enterprises.
The PKG Center’s director, Alison Badgett, shared the center’s vision for the future of IDEAS. “As MIT’s only student entrepreneurship program focused solely on social impact,” said Badgett, “we recognize the need to both scale social entrepreneurship programming at MIT and to better position our student founders for scale after graduating.”
Badgett announced a first-in gift of $150,000 from the Morgridge Family Foundation to help realize the center’s vision. The foundation’s gift will enable the PKG Center to develop a robust social impact investor ecosystem at MIT, connecting student- and alumni-led ventures with potential funders and helping more aspiring entrepreneurs see social impact as a viable path.
This year’s award-winning social ventures
This year’s top $20,000 award winner was Beyond Words, an assistive application for iPhone and Apple Watch that gives nonverbal individuals a layer of support by passively capturing biometrics, audio, and location, and communicating that information to caregivers.
Other award winners were:
- AyuConnect ($10,000) uses WhatsApp-native, voice-first electronic health records to enhance care access while reducing clinician burnout in India.
- PEAR ($7,500) offers a hands-on STEM research program for Nigerian and other African students, equipping them with technical skills to solve community problems.
- CommonGround ($5,000) connects Bostonians to tailored and hyper-local climate actions through an online platform, replacing eco-anxiety with collective resilience.
- Sehat Screen ($5,000) is an AI-powered cervical cancer screening device for women in Afghanistan and other resource-constrained countries.
- Breakthrough Health ($2,500) is a care coordination platform that links hepatitis C patients in recovery centers to health care.
- Sero ($2,500) is a voice-first AI tool that helps rural borrowers in Nepal understand loan contracts and access fair credit in their own language, with no dependency on literacy.
During the event, Shane Kosinski, executive director of the Office of the Vice President for Energy and Climate, announced inaugural Climate Student Innovators awards, funded by the MIT Climate Project. Four IDEAS teams received this award, which will be presented annually.
“The MIT Climate Project is an all-of-MIT initiative with the ambitious goal to make a measurable difference on climate change within a decade. We reach this global impact not by top-down mandates, but by testing good ideas where they are needed most and supporting them to succeed,” explained Kosinski. “This vision is also hardwired into the character, history, and purpose of PKG IDEAS.”
This year’s IDEAS teams awarded by the MIT Climate Project were:
- Q’ochas Resilientes ($15,000) co-designs climate-resilient water technology in the Peruvian Andes to uplift ancestral knowledge and support agricultural livelihoods.
- NECTICA ($15,000) tackles urban flooding in Lagos by empowering women-led cooperatives with a low-tech sorter bin to separate and monetize composite waste.
- MittiNav ($15,000) designs production and supply-chain systems to scale biochar technology that restores soil and stores carbon.
- Resilient Grid ($5,000) collects and processes food waste through anaerobic digestion on skid platforms to produce biogas for electricity and heat in Caribbean island nations.
“The Climate Project is thrilled to present the first-ever Climate Student Innovators Awards to these teams,” said Vice President for Energy and Climate Evelyn Wang. “We applaud this year’s IDEAS winners for developing systemic interventions in partnership with affected communities.”
Several additional teams received $1,000 awards:
- 1for1Health is a fertility platform offering education, testing, and insights to expand access and reduce disparities in reproductive health decisions.
- Ceed CRM brings cutting-edge AI to mission-driven organizations that have been stuck with tools built for sales teams, not social impact.
- CerviSeal created a medical device that reduces pain, tissue trauma, and risk during cervical manipulation for women undergoing hysteroscopy.
- FoodLoop connects farms and restaurants through matchmaking, demand forecasting, and forward contracts to strengthen local food systems.
- Homeroom Hero is an AI tool for teachers that instantly grades short-form assessments, reducing workload and improving student learning without putting tech in front of kids.
- Gees Health is a noninvasive, at-home hormone monitor that helps women with polycystic ovary syndrome track and manage their health with continuous insights.
- Illume makes discreet wearables that are a safe way for recovering victims of human trafficking to contact trusted people, building their support network.
- Longevia is an AI-powered platform that translates complex medical data into personalized, actionable insights for chronic kidney disease patients.
- Opta is an AI talent refinery upskilling Brazil's low-socioeconomic status students for small and medium business jobs, driving economic mobility.
- Recover Hospitality scales recovery-informed wellness coaching for hospitality workers through AI-powered motivational interviewing and benefits navigation.
The event closed with Tyger thanking the vast network of alumni, mentors, funders, and campus partners who make IDEAS possible, and the 104 volunteers who supported this year’s incubator challenge. “IDEAS builds more than social enterprises — we’re building the infrastructure and community needed for alumni and their ventures to achieve long-lasting impact. Our vision is a future where MIT entrepreneurship is not only groundbreaking, but fundamentally grounded in social impact.”
Rethinking how our brains use categories to make sense of the world
In a new review article, “Categorization is Baked into the Brain,” cognitive scientists Earl K. Miller, Picower Professor of Neuroscience at MIT, and Lisa Feldman Barrett, University Distinguished Professor at Northeastern University, contend that categorization is part of a predictive process the brain uses to efficiently meet the body’s needs in a fast-paced, otherwise overwhelming sensory world. In that sense, their paper in Nature Reviews Neuroscience challenges decades of dogma about how and why the brain boils down what it sees, hears, smells, tastes, and feels.
Categories are groups of things that are similar enough to be considered functionally equivalent. When you walk through a neighborhood, you’ll naturally experience the furry, four-legged, barking animal ahead of you as a “dog.” In the classic view of cognition, your brain arrives at that categorization by soaking in lots of basic sensory features of the hound — its shape, its size, the sounds it makes, its behavior — and compares that to some prototype “dog” stored in your memory. Hundreds of milliseconds after the first sensory inputs, you can then decide what you might want to do about the dog.
Barrett and Miller argue that that’s wrong. Instead, they propose that your brain comes prepared for sensory patterns with predictions of the motor action plans that are most likely to achieve the needs and goals you bring to the moment. Those prediction signals can be described as a momentary category that the brain constructs to shape the processing of sensory signals.
From the very start, incoming sensory signals are compressed and abstracted into that category to efficiently select the best predicted plan. If you are in an unfamiliar neighborhood, your brain might construct the category “dog” to avoid being bitten, resulting in: “Back away slowly while saying nice doggie.” If you are on your own block and encounter a familiar dog, your brain might construct a category that leads you to kneel and open your arms, summoning your neighbor’s adorable pup for a satisfying petting.
In either case, the category “dog” arises in the context of your needs and your prediction, drawn from a menu of learned action plans for similar situations — not from an intellectual exercise of neutrally registering sensory inputs, comparing them to a fixed prototype, and then planning from there. If the brain really worked the way the classic view describes, you’d be on the back foot when the unfamiliar dog lunged at you.
“One of the main things your brain has to do is predict the world,” says Miller, a faculty member of The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. “It takes several hundred milliseconds to process things, and meanwhile the world is moving on. Your brain has to anticipate things.”
The most pragmatic and efficient way to survive and thrive in such a world, Barrett says, is to have your needs and potential plans ready for the sensory situation. If your predictions are right, you’re prepared in time. If they are wrong, you adjust and learn from it.
“The stimulus, cognition, response model of the brain is wrong,” says Barrett, a faculty member in Northeastern’s Department of Psychology and co-director of the Interdisciplinary Affective Science Laboratory. “The brain prepares for a response and then perceives a stimulus. A brain is not reactive. It’s predictive. Action planning comes first. Perception comes second, as a function of the action plan.”
Anatomical and functional evidence
Throughout the review, Barrett and Miller ground the provocative proposal in copious anatomical, electrophysiological, and imaging evidence about the brain. They cite numerous experimental results that show how the brain is structured to broadcast memories to create motor plans that flow back toward signals that arrive from the body’s sensory surfaces, actively whittling them down and shaping them to give them meaning.
“The capacity to create similarities from differences — to abstract — is embedded in the architecture of the nervous system, and you can see that by looking at what is connected to what and by observing signal flow,” Barrett says.
For example, as circuits feed signals “forward” from sensory surfaces (such as the retina) to regions of the cerebral cortex that are focused on sensory processing (such as the visual cortex), and onward to areas important for executive control (the prefrontal cortex) and control of the body (limbic cortex), information passes from many small, sparsely connected neurons to fewer, larger, and better-connected neurons. Such an architecture compresses sensory details into increasingly abstract representations that group many different features into smaller groups of similar features, and in doing so helps to select a predicted action plan from the broader category that’s already there.
“Your brain is a big funnel to take the outside world and turn it into an output,” Miller says.
Moreover, anatomical evidence shows that neurons in the cortex maintain many more connections that provide feedback from memory to control sensory regions than connections that feed sensory information forward. As much as 90 percent of synapses in the visual cortex are “feedback” rather than “feedforward,” Barrett and Miller wrote. In other words, the brain is built to use memory to filter incoming sensory signals, consistent with imposing needs and goals on what would otherwise be a deluge of sights, sounds, and other sensations.
Yet another line of evidence comes from numerous studies in Miller’s own lab showing that, at the broad network level of information flow in the cortex, the brain uses beta-frequency waves, which carry information about goals and plans, to constrain the expression of gamma-frequency waves, which carry information about specific sensory inputs.
Finally, the dominance of “feedback” over “feedforward” signals in the cortical architecture allows for the possibility that sensory signals are made meaningful in terms of predicted plans. When these plans are wrong, the resulting surprise can be integrated for future use.
“In science, there is a special name for that: learning,” Barrett says.
Implications for human thought and disease
In the end, Barrett and Miller’s proposal completely changes the idea of categorization, shifting it from being a particular intellectual skill to being a fundamental function for predictively meeting the body’s needs (or, “allostasis”).
“A category may not be a representation that an animal has, but a signal processing event that an animal does, predictively, to constrain the meaning of a high-dimensional ensemble of signals in a particular situation,” the authors wrote. “Categorization renders these signals meaningful — similar to one another and to past allostatic events — in terms of some goal or function.”
Humans, Barrett says, have a relatively massive amount of the neural network architecture to perform these pragmatic abstractions, and therefore can make categorizations that seem outright metaphorical (e.g., a functional similarity between “climbing the career ladder” and climbing a literal physical ladder).
But these processes can also go awry in disease, Barrett and Miller note. Depression can be seen as a disorder in which the brain imposes overly broad categories, such as “threat” or “criticism,” on sensory episodes that don’t have to be perceived that way. By contrast, autism can manifest with features of inadequate compression of incoming sensory signals — not generalizing enough to recognize when a situation is similar enough to a prior one to select the appropriate plan.
Funding to support the paper came from the National Institutes of Health, The U.S. Army Research Institute for the Behavioral and Social Sciences, the Office of Naval Research, the Unlikely Collaborators Foundation, The Freedom Together Foundation, and The Picower Institute for Learning and Memory.
Smart Glasses for the Authorities
ICE is developing its own version of smart glasses, with facial recognition tied to various databases.
