Feed aggregator
Study: AI chatbots provide less-accurate information to vulnerable users
Large language models (LLMs) have been championed as tools that could democratize access to information worldwide, offering knowledge in a user-friendly interface regardless of a person’s background or location. However, new research from MIT’s Center for Constructive Communication (CCC) suggests these artificial intelligence systems may actually perform worse for the very users who could most benefit from them.
A study conducted by researchers at CCC, which is based at the MIT Media Lab, found that state-of-the-art AI chatbots — including OpenAI’s GPT-4, Anthropic’s Claude 3 Opus, and Meta’s Llama 3 — sometimes provide less-accurate and less-truthful responses to users who have lower English proficiency, less formal education, or who originate from outside the United States. The models also refuse to answer questions at higher rates for these users, and in some cases, respond with condescending or patronizing language.
“We were motivated by the prospect of LLMs helping to address inequitable information accessibility worldwide,” says lead author Elinor Poole-Dayan SM ’25, a technical associate in the MIT Sloan School of Management who led the research as a CCC affiliate and master’s student in media arts and sciences. “But that vision cannot become a reality without ensuring that model biases and harmful tendencies are safely mitigated for all users, regardless of language, nationality, or other demographics.”
A paper describing the work, “LLM Targeted Underperformance Disproportionately Impacts Vulnerable Users,” was presented at the AAAI Conference on Artificial Intelligence in January.
Systematic underperformance across multiple dimensions
For this research, the team tested how the three LLMs responded to questions from two datasets: TruthfulQA and SciQ. TruthfulQA is designed to measure a model’s truthfulness (by relying on common misconceptions and literal truths about the real world), while SciQ contains science exam questions testing factual accuracy. The researchers prepended short user biographies to each question, varying three traits: education level, English proficiency, and country of origin.
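The evaluation setup lends itself to a short illustration. The sketch below is a minimal, hypothetical version of that loop, not the authors' code: the biography wording, the `query_model` callable, and the substring-match scoring are all placeholder assumptions.

```python
# Illustrative sketch only (not the study's evaluation code): prepend short
# user biographies to benchmark questions and compare accuracy per persona.
# Biography wording, `query_model`, and the scoring rule are placeholders.

BIOGRAPHIES = {
    "control": "",
    "less_educated_non_native": (
        "I didn't go to college, and English isn't my first language. "
    ),
    "highly_educated_native": (
        "I hold a PhD, and English is my first language. "
    ),
}

def evaluate(query_model, questions):
    """questions: items with 'question' and 'correct_answer' fields,
    e.g., drawn from TruthfulQA or SciQ."""
    accuracy = {}
    for persona, bio in BIOGRAPHIES.items():
        correct = 0
        for item in questions:
            answer = query_model(bio + item["question"])  # hypothetical call
            correct += int(item["correct_answer"].lower() in answer.lower())
        accuracy[persona] = correct / len(questions)
    return accuracy
```

Refusal rates per persona could be tallied in the same loop by checking whether a response declines to answer instead of matching the answer key.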
Across all three models and both datasets, the researchers found significant drops in accuracy when questions came from users described as having less formal education or being non-native English speakers. The effects were most pronounced for users at the intersection of these categories: those with less formal education who were also non-native English speakers saw the largest declines in response quality.
The research also examined how country of origin affected model performance. Testing users from the United States, Iran, and China with equivalent educational backgrounds, the researchers found that Claude 3 Opus in particular performed significantly worse for users from Iran on both datasets.
“We see the largest drop in accuracy for the user who is both a non-native English speaker and less educated,” says Jad Kabbara, a research scientist at CCC and a co-author on the paper. “These results show that the negative effects of model behavior with respect to these user traits compound in concerning ways, thus suggesting that such models deployed at scale risk spreading harmful behavior or misinformation downstream to those who are least able to identify it.”
Refusals and condescending language
Perhaps most striking were the differences in how often the models refused to answer questions altogether. For example, Claude 3 Opus refused to answer nearly 11 percent of questions for less educated, non-native English-speaking users — compared to just 3.6 percent for the control condition with no user biography.
When the researchers manually analyzed these refusals, they found that Claude responded with condescending, patronizing, or mocking language 43.7 percent of the time for less-educated users, compared to less than 1 percent for highly educated users. In some cases, the model mimicked broken English or adopted an exaggerated dialect.
The model also refused to provide information on certain topics specifically for less-educated users from Iran or Russia, including questions about nuclear power, anatomy, and historical events — even though it answered the same questions correctly for other users.
“This is another indicator suggesting that the alignment process might incentivize models to withhold information from certain users to avoid potentially misinforming them, although the model clearly knows the correct answer and provides it to other users,” says Kabbara.
Echoes of human bias
The findings mirror documented patterns of human sociocognitive bias. Research in the social sciences has shown that native English speakers often perceive non-native speakers as less educated, intelligent, and competent, regardless of their actual expertise. Similar biased perceptions have been documented among teachers evaluating non-native English-speaking students.
“The value of large language models is evident in their extraordinary uptake by individuals and the massive investment flowing into the technology,” says Deb Roy, professor of media arts and sciences, CCC director, and a co-author on the paper. “This study is a reminder of how important it is to continually assess systematic biases that can quietly slip into these systems, creating unfair harms for certain groups without any of us being fully aware.”
The implications are particularly concerning given that personalization features — like ChatGPT’s Memory, which tracks user information across conversations — are becoming increasingly common. Such features risk differentially treating already-marginalized groups.
“LLMs have been marketed as tools that will foster more equitable access to information and revolutionize personalized learning,” says Poole-Dayan. “But our findings suggest they may actually exacerbate existing inequities by systematically providing misinformation or refusing to answer queries to certain users. The people who may rely on these tools the most could receive subpar, false, or even harmful information.”
MIT faculty, alumni named 2026 Sloan Research Fellows
Eight MIT faculty and 22 additional MIT alumni are among 126 early-career researchers honored with 2026 Sloan Research Fellowships by the Alfred P. Sloan Foundation.
The fellowships honor exceptional researchers at U.S. and Canadian educational institutions, whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders. Winners receive a two-year, $75,000 fellowship that can be used flexibly to advance the fellow’s research.
"The Sloan Research Fellows are among the most promising early-career researchers in the U.S. and Canada, already driving meaningful progress in their respective disciplines," says Stacie Bloom, president and chief executive officer of the Alfred P. Sloan Foundation. "We look forward to seeing how these exceptional scholars continue to unlock new scientific advancements, redefine their fields, and foster the well-being and knowledge of all."
Including this year’s recipients, a total of 341 MIT faculty have received Sloan Research Fellowships since the program’s inception in 1955. The MIT recipients are:
Jacopo Borga is interested in probability theory and its connections to combinatorics, and in mathematical physics. He studies various random combinatorial structures — mathematical objects such as graphs or permutations — and their patterns and behavior at a large scale. This research includes random permutons, meanders, multidimensional constrained Brownian motions, Schramm-Loewner evolutions, and Liouville quantum gravity. Borga earned bachelor’s and master’s degrees in mathematics from the Università degli Studi di Padova in Italy, and a master’s degree in mathematics from Université Sorbonne Paris Cité in France, then completed a PhD in mathematics at the Institut für Mathematik at the Universität Zürich in Switzerland. Borga was an assistant professor at Stanford University before joining MIT as an assistant professor of mathematics in 2024.
Anna-Christina Eilers is an astrophysicist and assistant professor at MIT’s Department of Physics. Her research explores how black holes form and evolve across cosmic time, studying their origins and the role they play in shaping our universe. She leverages multi-wavelength data from telescopes all around the world and in space to study how the first galaxies, black holes, and quasars emerged during an epoch known as the Cosmic Dawn of our universe. She grew up in Germany and completed her PhD at the Max Planck Institute for Astronomy in Heidelberg. Subsequently, she was awarded a NASA Hubble Fellowship and a Pappalardo Fellowship to continue her research at MIT, where she joined the faculty in 2023. Her work has been recognized with several honors, including the PhD Prize of the International Astronomical Union, the Otto Hahn Medal of the Max Planck Society, and the Ludwig Biermann Prize of the German Astronomical Society.
Linlin Fan is the Samuel A. Goldblith Career Development Assistant Professor of Applied Biology in the Department of Brain and Cognitive Sciences and the Picower Institute for Learning and Memory at MIT. Her lab focuses on the development and application of advanced all-optical physiological techniques to understand the plasticity mechanisms underlying learning and memory. She has developed and applied high-speed, cellular-precision all-optical physiological techniques for simultaneously mapping and controlling membrane potential in specific neurons in behaving mammals. Prior to joining MIT, Fan was a Helen Hay Whitney Postdoctoral Fellow in Karl Deisseroth’s laboratory at Stanford University. She obtained her PhD in chemical biology from Harvard University in 2019 with Adam Cohen. Her work has been recognized by several awards, including the Larry Katz Memorial Lecture Award from the Cold Spring Harbor Laboratory, Helen Hay Whitney Fellowship, Career Award at the Scientific Interface from the Burroughs Wellcome Fund, Klingenstein-Simons Fellowship Award, Searle Scholar Award, and NARSAD Young Investigator Award.
Yoon Kim is an associate professor in the Department of EECS and a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT-IBM Watson AI Lab, where he works on natural language processing and machine learning. Kim earned a PhD in computer science at Harvard University, an MS in data science from New York University, an MA in statistics from Columbia University, and a BA in both math and economics from Cornell University. He joined EECS in 2021, after spending a year as a postdoc at the MIT-IBM Watson AI Lab.
Haihao Lu PhD ’19 is the Cecil and Ida Green Career Development Assistant Professor, and an assistant professor of operations research/statistics at the MIT Sloan School of Management. Lu’s research lies at the intersection of optimization, computation, and data science, with a focus on pushing the computational and mathematical frontiers of large-scale optimization. Much of his work is inspired by real-world challenges faced by leading technology companies and optimization software companies, spanning first-order methods, scalable solvers, and data-driven optimization for resource allocation. His research has had real-world impact, generating substantial revenue and advancing the state of practice in large-scale optimization, and has been recognized by several research awards. Before joining MIT Sloan, he was an assistant professor at the University of Chicago Booth School of Business and a faculty researcher on Google Research’s large-scale optimization team. He obtained his PhD in mathematics and operations research at MIT in 2019.
Brett McGuire is the Class of 1943 Career Development Associate Professor of Chemistry at MIT. He completed his undergraduate studies at the University of Illinois at Urbana-Champaign before earning an MS from Emory University and a PhD from Caltech, both in physical chemistry. After Jansky and Hubble postdoctoral fellowships at the National Radio Astronomy Observatory, he joined the MIT faculty in 2020 and was promoted to associate professor in 2025. The McGuire Group integrates physical chemistry, molecular spectroscopy, and observational astrophysics to explore how the chemical building blocks of life evolve alongside the formation of stars and planets.
Anand Natarajan PhD ’18 is an associate professor in EECS and a principal investigator in CSAIL and the MIT-IBM Watson AI Lab. His research is mainly in quantum complexity theory, with a focus on the power of interactive proofs and arguments in a quantum world. Essentially, his work attempts to assess the complexity of computational problems in a quantum setting, determining both the limits of quantum computers’ capability and the trustworthiness of their output. Natarajan earned his PhD in physics from MIT, and an MS in computer science and BS in physics from Stanford University. Prior to joining MIT in 2020, he spent time as a postdoc at the Institute for Quantum Information and Matter at Caltech.
Mengjia Yan is an associate professor in the Department of EECS and a principal investigator in CSAIL. She is a security computer architect whose research advances secure processor design by bridging computer architecture, systems security, and formal methods. Her work identifies critical blind spots in hardware threat models and improves the resilience of real-world systems against information leakage and exploitation. Several of her discoveries have influenced commercial processor designs and contributed to changes in how hardware security risks are evaluated in practice. In parallel, Yan develops architecture-driven techniques to improve the scalability of formal verification and introduces new design principles toward formally verifiable processors. She also designed the Secure Hardware Design (SHD) course, now widely adopted by universities worldwide to teach computer architecture security from both offensive and defensive perspectives.
The following MIT alumni also received fellowships:
Ashok Ajoy PhD ’16
Chibueze Amanchukwu PhD ’17
Annie M. Bauer PhD ’17
Kimberly K. Boddy ’07
danah boyd SM ’02
Yuan Cao SM ’16, PhD ’20
Aloni Cohen SM ’15, PhD ’19
Fei Dai PhD ’19
Madison M. Douglas ’16
Philip Engel ’10
Benjamin Eysenbach ’17
Tatsunori B. Hashimoto SM ’14, PhD ’16
Xin Jin ’10
Isaac Kim ’07
Christina Patterson PhD ’19
Katelin Schutz ’14
Karthik Shekhar PhD ’15
Shriya S. Srinivasan PhD ’20
Jerzy O. Szablowski ’09
Anna Wuttig PhD ’18
Zoe Yan PhD ’20
Lingfu Zhang ’18
Exposing biases, moods, personalities, and abstract concepts hidden in large language models
By now, ChatGPT, Claude, and other large language models have accumulated so much human knowledge that they’re far from simple answer-generators; they can also express abstract concepts, such as certain tones, personalities, biases, and moods. However, it’s not obvious exactly how these models come to represent such abstract concepts from the knowledge they contain.
Now a team from MIT and the University of California San Diego has developed a way to test whether a large language model (LLM) contains hidden biases, personalities, moods, or other abstract concepts. Their method can zero in on connections within a model that encode a concept of interest. What’s more, the method can then manipulate, or “steer,” these connections to strengthen or weaken the concept in any answer a model is prompted to give.
The team proved their method could quickly root out and steer more than 500 general concepts in some of the largest LLMs used today. For instance, the researchers could home in on a model’s representations for personalities such as “social influencer” and “conspiracy theorist,” and stances such as “fear of marriage” and “fan of Boston.” They could then tune these representations to enhance or minimize the concepts in any answers that a model generates.
In the case of the “conspiracy theorist” concept, the team successfully identified a representation of this concept within one of the largest vision language models available today. When they enhanced the representation, and then prompted the model to explain the origins of the famous “Blue Marble” image of Earth taken from Apollo 17, the model generated an answer with the tone and perspective of a conspiracy theorist.
The team acknowledges there are risks to extracting certain concepts, which they also illustrate (and caution against). Overall, however, they see the new approach as a way to illuminate hidden concepts and potential vulnerabilities in LLMs that could then be turned up or down to improve a model’s safety or enhance its performance.
“What this really says about LLMs is that they have these concepts in them, but they’re not all actively exposed,” says Adityanarayanan “Adit” Radhakrishnan, assistant professor of mathematics at MIT. “With our method, there’s ways to extract these different concepts and activate them in ways that prompting cannot give you answers to.”
The team published their findings today in a study appearing in the journal Science. The study’s co-authors include Radhakrishnan, Daniel Beaglehole and Mikhail Belkin of UC San Diego, and Enric Boix-Adserà of the University of Pennsylvania.
A fish in a black box
As use of OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and other artificial intelligence assistants has exploded, scientists are racing to understand how models represent certain abstract concepts such as “hallucination” and “deception.” In the context of an LLM, a hallucination is a response that is false or contains misleading information, which the model has “hallucinated,” or constructed erroneously as fact.
To find out whether a concept such as “hallucination” is encoded in an LLM, scientists have often taken an approach of “unsupervised learning” — a type of machine learning in which algorithms broadly trawl through unlabeled representations to find patterns that might relate to a concept such as “hallucination.” But to Radhakrishnan, such an approach can be too broad and computationally expensive.
“It’s like going fishing with a big net, trying to catch one species of fish. You’re gonna get a lot of fish that you have to look through to find the right one,” he says. “Instead, we’re going in with bait for the right species of fish.”
He and his colleagues had previously developed the beginnings of a more targeted approach with a type of predictive modeling algorithm known as a recursive feature machine (RFM). An RFM is designed to directly identify features or patterns within data by leveraging a mathematical mechanism that neural networks — a broad category of AI models that includes LLMs — implicitly use to learn features.
Since the algorithm was an effective, efficient approach for capturing features in general, the team wondered whether they could use it to root out representations of concepts in LLMs, which are by far the most widely used type of neural network and perhaps the least well understood.
“We wanted to apply our feature learning algorithms to LLMs to, in a targeted way, discover representations of concepts in these large and complex models,” Radhakrishnan says.
Converging on a concept
The team’s new approach identifies any concept of interest within an LLM and “steers” or guides a model’s response based on this concept. The researchers looked for 512 concepts within five classes: fears (such as of marriage, insects, and even buttons); experts (social influencer, medievalist); moods (boastful, detachedly amused); a preference for locations (Boston, Kuala Lumpur); and personas (Ada Lovelace, Neil deGrasse Tyson).
The researchers then searched for representations of each concept in several of today’s large language and vision models. They did so by training RFMs to recognize numerical patterns in an LLM that could represent a particular concept of interest.
A standard large language model is, broadly, a neural network that takes a natural language prompt, such as “Why is the sky blue?” and divides the prompt into individual words, each of which is encoded mathematically as a list, or vector, of numbers. The model takes these vectors through a series of computational layers, creating matrices of many numbers that, throughout each layer, are used to identify other words that are most likely to be used to respond to the original prompt. Eventually, the layers converge on a set of numbers that is decoded back into text, in the form of a natural language response.
The team’s approach trains RFMs to recognize numerical patterns in an LLM that could be associated with a specific concept. As an example, to see whether an LLM contains any representation of a “conspiracy theorist,” the researchers would first train the algorithm to identify patterns among LLM representations of 100 prompts that are clearly related to conspiracies, and 100 other prompts that are not. In this way, the algorithm would learn patterns associated with the conspiracy theorist concept. Then, the researchers can mathematically modulate the activity of the conspiracy theorist concept by perturbing LLM representations with these identified patterns.
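The paper’s concept-finding step uses recursive feature machines, which are too involved to excerpt here. The sketch below conveys the contrastive idea with a simpler, well-known stand-in: a mean-difference “steering vector” built from hidden states of concept-related versus unrelated prompts, added back into the model during generation. The small `gpt2` model, the layer choice, the two-prompts-per-class shortcut, and the steering strength are all illustrative assumptions rather than the authors’ setup.

```python
# Simplified illustration of the contrastive idea. The paper trains recursive
# feature machines (RFMs); here a mean-difference "steering vector" stands in
# for that step. Model, layer, prompt sets, and scale are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()
LAYER = 6  # which hidden layer to probe; an arbitrary choice here

def mean_hidden_state(prompts):
    """Average last-token hidden state at LAYER over a list of prompts."""
    states = []
    with torch.no_grad():
        for p in prompts:
            ids = tok(p, return_tensors="pt")
            out = model(**ids, output_hidden_states=True)
            states.append(out.hidden_states[LAYER][0, -1])
    return torch.stack(states).mean(dim=0)

# The study uses on the order of 100 concept-related and 100 unrelated
# prompts; two apiece here just to keep the sketch short.
concept_dir = mean_hidden_state([
    "The moon landing was staged and they are hiding the truth.",
    "Secret groups control what the public is allowed to know.",
]) - mean_hidden_state([
    "The sky is blue because air scatters short wavelengths of light.",
    "Water boils at 100 degrees Celsius at sea level.",
])

def steering_hook(module, inputs, output):
    # Push every token's representation along the concept direction.
    hidden = output[0] + 4.0 * concept_dir  # 4.0 is an arbitrary strength
    return (hidden,) + output[1:]

# hidden_states[LAYER] is the output of block LAYER - 1 (index 0 is the
# embedding layer), so hook the matching block.
handle = model.transformer.h[LAYER - 1].register_forward_hook(steering_hook)
ids = tok("Explain the origins of the Blue Marble photo of Earth.",
          return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=60)[0]))
handle.remove()
```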
The method can be applied to search for and manipulate any general concept in an LLM. Among many examples, the researchers identified a representation of the “conspiracy theorist” concept and manipulated an LLM to give answers in that tone and perspective. They also identified and enhanced the concept of “anti-refusal,” showing that a model that would normally refuse certain prompts instead answered them, for instance giving instructions on how to rob a bank.
Radhakrishnan says the approach can be used to quickly search for and minimize vulnerabilities in LLMs. It can also be used to enhance certain traits, personalities, moods, or preferences, such as emphasizing the concept of “brevity” or “reasoning” in any response an LLM generates. The team has made the method’s underlying code publicly available.
“LLMs clearly have a lot of these abstract concepts stored within them, in some representation,” Radhakrishnan says. “There are ways where, if we understand these representations well enough, we can build highly specialized LLMs that are still safe to use but really effective at certain tasks.”
This work was supported, in part, by the National Science Foundation, the Simons Foundation, the TILOS institute, and the U.S. Office of Naval Research.
A neural blueprint for human-like intelligence in soft robots
A new artificial intelligence control system enables soft robotic arms to learn a wide repertoire of motions and tasks once, then adjust to new scenarios on the fly, without needing retraining or sacrificing functionality.
This breakthrough brings soft robotics closer to human-like adaptability for real-world applications, such as in assistive robotics, rehabilitation robots, and wearable or medical soft robots, by making them more intelligent, versatile, and safe.
The work was led by the Mens, Manus and Machina (M3S) interdisciplinary research group — a play on the Latin MIT motto “mens et manus,” or “mind and hand,” with the addition of “machina” for “machine” — within the Singapore-MIT Alliance for Research and Technology. Co-leading the project are researchers from the National University of Singapore (NUS), alongside collaborators from MIT and Nanyang Technological University in Singapore (NTU Singapore).
Unlike regular robots that move using rigid motors and joints, soft robots are made from flexible materials such as soft rubber and move using special actuators — components that act like artificial muscles to produce physical motion. While their flexibility makes them ideal for delicate or adaptive tasks, controlling soft robots has always been a challenge because their shape changes in unpredictable ways. Real-world environments are often complicated and full of unexpected disturbances, and even small changes in conditions — like a shift in weight, a gust of wind, or a minor hardware fault — can throw off their movements.
Despite substantial progress in soft robotics, existing approaches often can only achieve one or two of the three capabilities needed for soft robots to operate intelligently in real-world environments: using what they’ve learned from one task to perform a different task, adapting quickly when the situation changes, and guaranteeing that the robot will stay stable and safe while adapting its movements. This lack of adaptability and reliability has been a major barrier to deploying soft robots in real-world applications until now.
In an open-access study titled “A general soft robotic controller inspired by neuronal structural and plastic synapses that adapts to diverse arms, tasks, and perturbations,” published Jan. 6 in Science Advances, the researchers describe how they developed a new AI control system that allows soft robots to adapt across diverse tasks and disturbances. The study takes inspiration from the way the human brain learns and adapts, and was built on extensive research in learning-based robotic control, embodied intelligence, soft robotics, and meta-learning.
The system uses two complementary sets of “synapses” — connections that adjust how the robot moves — working in tandem. The first set, known as “structural synapses,” is trained offline on a variety of foundational movements, such as bending or extending a soft arm smoothly. These form the robot’s built‑in skills and provide a strong, stable foundation. The second set, called “plastic synapses,” continually updates online as the robot operates, fine-tuning the arm’s behavior to respond to what is happening in the moment. A built-in stability measure acts like a safeguard, so even as the robot adjusts during online adaptation, its behavior remains smooth and controlled.
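As a rough mental model only, and not the published controller, the sketch below pairs a frozen offline-trained linear mapping (“structural synapses”) with an online-adapted correction (“plastic synapses”) whose norm is clamped, a crude stand-in for the paper’s stability safeguard. All shapes, gains, and update rules here are invented for illustration.

```python
# Schematic sketch (not the published controller) of the two-synapse idea:
# a fixed "structural" mapping plus a "plastic" correction updated online
# from tracking error, with a norm bound standing in for the stability
# safeguard. Shapes, gains, and the update rule are illustrative.
import numpy as np

class TwoSynapseController:
    def __init__(self, n_obs, n_act, lr=0.05, plastic_bound=1.0):
        rng = np.random.default_rng(0)
        # "Structural synapses": weights learned offline, then frozen
        # (random here as a stand-in for offline training).
        self.W_struct = rng.normal(scale=0.1, size=(n_act, n_obs))
        # "Plastic synapses": start at zero, adapt online.
        self.W_plastic = np.zeros((n_act, n_obs))
        self.lr = lr
        self.plastic_bound = plastic_bound  # stability safeguard

    def act(self, obs):
        """Actuator command = frozen skill + online correction."""
        return (self.W_struct + self.W_plastic) @ obs

    def adapt(self, obs, error):
        """Update plastic synapses from the current tracking error
        (error has length n_act, obs has length n_obs)."""
        self.W_plastic += self.lr * np.outer(error, obs)
        # Keep the correction small so behavior stays smooth and bounded.
        norm = np.linalg.norm(self.W_plastic)
        if norm > self.plastic_bound:
            self.W_plastic *= self.plastic_bound / norm
```

Each control step would call `act` with the current observation, compare the commanded and measured arm state, and feed the error to `adapt`, so a payload change or a failed actuator shows up as a persistent error that the plastic term gradually absorbs.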
“Soft robots hold immense potential to take on tasks that conventional machines simply cannot, but true adoption requires control systems that are both highly capable and reliably safe. By combining structural learning with real-time adaptiveness, we’ve created a system that can handle the complexity of soft materials in unpredictable environments,” says MIT Professor Daniela Rus, co-lead principal investigator at M3S, director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and co-corresponding author of the paper. “It’s a step closer to a future where versatile soft robots can operate safely and intelligently alongside people — in clinics, factories, or everyday lives.”
“This new AI control system is one of the first general soft-robot controllers that can achieve all three key aspects needed for soft robots to be used in society and various industries. It can apply what it learned offline across different tasks, adapt instantly to new conditions, and remain stable throughout — all within one control framework,” says Zhiqiang Tang, first author and co-corresponding author of the paper, who carried out the research as a postdoc at M3S and NUS and is now an associate professor at Southeast University in China (SEU China).
The system supports multiple task types, enabling soft robotic arms to execute trajectory tracking, object placement, and whole-body shape regulation within one unified approach. The method also generalizes across different soft-arm platforms, demonstrating cross-platform applicability.
The system was tested and validated on two physical platforms — a cable-driven soft arm and a shape-memory-alloy–actuated soft arm — and delivered impressive results. It achieved a 44–55 percent reduction in tracking error under heavy disturbances; over 92 percent shape accuracy under payload changes, airflow disturbances, and actuator failures; and stable performance even when up to half of the actuators failed.
“This work redefines what’s possible in soft robotics. We’ve shifted the paradigm from task-specific tuning and capabilities toward a truly generalizable framework with human-like intelligence. It is a breakthrough that opens the door to scalable, intelligent soft machines capable of operating in real-world environments,” says Professor Cecilia Laschi, co-corresponding author and principal investigator at M3S, Provost’s Chair Professor in the NUS Department of Mechanical Engineering at the College of Design and Engineering, and director of the NUS Advanced Robotics Centre.
This breakthrough opens the door to more robust soft robotic systems for manufacturing, logistics, inspection, and medical robotics without the need for constant reprogramming — reducing downtime and costs. In health care, assistive and rehabilitation devices can automatically tailor their movements to a patient’s changing strength or posture, while wearable or medical soft robots can respond more sensitively to individual needs, improving safety and patient outcomes.
The researchers plan to extend this technology to robotic systems or components that can operate at higher speeds and in more complex environments, with potential applications in assistive robotics, medical devices, and industrial soft manipulators, as well as integration into real-world autonomous systems.
The research conducted at SMART was supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.
Malicious AI
Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into accepting its changes into a mainstream Python library. This represents a first-of-its-kind case study of misaligned AI behavior in the wild, and raises serious concerns about currently deployed AI agents executing blackmail threats.
Europe defies Trump team over IEA climate fight
Alabama echoes Trump with bid to limit regulatory science
Nonprofit throws its weight behind Arctic geoengineering
Tech companies overstate AI’s climate benefits, report says
States sue Trump admin for revoked energy funds
Enviros, health groups are first to sue over Trump’s big climate rollback
Calif. lawmakers revive push to require coverage for wildfire-ready properties
Olympic skiers voice concern over receding glaciers
Reform UK vows to scrap Britain’s carbon border tax
EV sales boom as Ethiopia bans gas-powered car imports
Parking-aware navigation system could prevent frustration and emissions
It happens every day — a motorist heading across town checks a navigation app to see how long the trip will take, only to find no parking spots available upon reaching the destination. By the time they finally park and walk to their destination, they’re significantly later than they expected to be.
Most popular navigation systems send drivers to a location without considering the extra time that could be needed to find parking. This causes more than just a headache for drivers. It can worsen congestion and increase emissions by causing motorists to cruise around looking for a parking spot. This underestimation could also discourage people from taking mass transit because they don’t realize it might be faster than driving and parking.
MIT researchers tackled this problem by developing a system that can be used to identify parking lots that offer the best balance of proximity to the desired location and likelihood of parking availability. Their adaptable method points users to the ideal parking area rather than their destination.
In simulated tests with real-world traffic data from Seattle, this technique achieved time savings of up to 66 percent in the most congested settings. For a motorist, this would reduce travel time by about 35 minutes, compared to waiting for a spot to open in the closest parking lot.
While the system is not yet ready for real-world deployment, the demonstrations show the viability of the approach and indicate how it could be implemented.
“This frustration is real and felt by a lot of people, and the bigger issue here is that systematically underestimating these drive times prevents people from making informed choices. It makes it that much harder for people to make shifts to public transit, bikes, or alternative forms of transportation,” says MIT graduate student Cameron Hickert, lead author on a paper describing the work.
Hickert is joined on the paper by Sirui Li PhD ’25; Zhengbing He, a research scientist in the Laboratory for Information and Decision Systems (LIDS); and senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of LIDS. The research appears today in Transactions on Intelligent Transportation Systems.
Probable parking
To solve the parking problem, the researchers developed a probability-aware approach that considers all possible public parking lots near a destination, the distance to drive there from a point of origin, the distance to walk from each lot to the destination, and the likelihood of parking success.
The approach, based on dynamic programming, works backward from good outcomes to calculate the best route for the user.
Their method also considers the case where a user arrives at the ideal parking lot but can’t find a space. It takes into account the distance to other parking lots and the probability of successfully parking at each.
“If there are several lots nearby that have slightly lower probabilities of success, but are very close to each other, it might be a smarter play to drive there rather than going to the higher-probability lot and hoping to find an opening. Our framework can account for that,” Hickert says.
In the end, their system can identify the optimal lot that has the lowest expected time required to drive, park, and walk to the destination.
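The flavor of that calculation can be conveyed with a toy example. This is a simplification of the idea, not the paper’s backward dynamic program, and every lot name, time, and probability below is made up: for each ordering of candidate lots, sum the drive time, the walk time weighted by the chance of finding a space, and the cost of rerouting when a lot turns out to be full, then pick the plan with the lowest expected total.

```python
# Toy illustration (not the paper's dynamic program) of choosing a parking
# plan by expected door-to-door time. All numbers are invented.
from dataclasses import dataclass
from itertools import permutations

@dataclass
class Lot:
    name: str
    drive_min: float    # minutes to drive from the origin to this lot
    walk_min: float     # minutes to walk from this lot to the destination
    p_success: float    # estimated probability a space is free on arrival
    reroute_min: float  # minutes to drive on to the next lot if this one is full

def expected_time(order, fail_penalty=30.0):
    """Expected total minutes when trying lots in the given order."""
    total, p_searching = 0.0, 1.0
    for i, lot in enumerate(order):
        drive = lot.drive_min if i == 0 else order[i - 1].reroute_min
        total += p_searching * (drive + lot.p_success * lot.walk_min)
        p_searching *= 1.0 - lot.p_success
    return total + p_searching * fail_penalty  # every lot in the plan was full

lots = [
    Lot("closest_lot", drive_min=10, walk_min=2, p_success=0.15, reroute_min=6),
    Lot("garage_b", drive_min=12, walk_min=4, p_success=0.85, reroute_min=3),
    Lot("garage_c", drive_min=13, walk_min=6, p_success=0.95, reroute_min=3),
]

best = min(permutations(lots), key=expected_time)
print([lot.name for lot in best], round(expected_time(best), 1), "expected minutes")
```

With these made-up numbers, heading first to a slightly farther garage with a much higher chance of an open space beats trying the closest lot and hoping, which is exactly the trade-off the framework is meant to surface.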
But no motorist expects to be the only one trying to park in a busy city center. So, this method also incorporates the actions of other drivers, which affect the user’s probability of parking success.
For instance, another driver may arrive at the user’s ideal lot first and take the last parking spot. Or another motorist could try parking in another lot but then park in the user’s ideal lot if unsuccessful. In addition, another motorist may park in a different lot and cause spillover effects that lower the user’s chances of success.
“With our framework, we show how you can model all those scenarios in a very clean and principled manner,” Hickert says.
Crowdsourced parking data
The data on parking availability could come from several sources. For example, some parking lots have magnetic detectors or gates that track the number of cars entering and exiting.
But such sensors aren’t widely used, so to make their system more feasible for real-world deployment, the researchers studied the effectiveness of using crowdsourced data instead.
For instance, users could indicate available parking using an app. Data could also be gathered by tracking the number of vehicles circling to find parking, or how many enter a lot and exit after being unsuccessful.
Someday, autonomous vehicles could even report on open parking spots they drive by.
“Right now, a lot of that information goes nowhere. But if we could capture it, even by having someone simply tap ‘no parking’ in an app, that could be an important source of information that allows people to make more informed decisions,” Hickert adds.
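One simple way such taps could be turned into a probability estimate, offered purely as an illustration and not as anything described in the paper, is to treat each report as a Bernoulli observation and keep a slowly decaying Beta-style count per lot.

```python
# Hedged illustration (not from the paper) of turning crowdsourced reports
# into a parking-success probability: each "found a spot" / "no parking"
# report is a Bernoulli observation, with older reports gradually forgotten
# so the estimate tracks the time of day.
class LotAvailabilityEstimator:
    def __init__(self, prior_success=1.0, prior_failure=1.0, decay=0.99):
        self.success = prior_success   # pseudo-counts of "found a spot"
        self.failure = prior_failure   # pseudo-counts of "no parking"
        self.decay = decay             # forget stale reports gradually

    def report(self, found_spot: bool):
        """Fold in one crowdsourced report (an app tap, a gate count, etc.)."""
        self.success *= self.decay
        self.failure *= self.decay
        if found_spot:
            self.success += 1.0
        else:
            self.failure += 1.0

    @property
    def p_success(self) -> float:
        """Current estimate of the probability a space is free."""
        return self.success / (self.success + self.failure)
```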
The researchers evaluated their system using real-world traffic data from the Seattle area, simulating different times of day in a congested urban setting and a suburban area. In congested settings, their approach cut total travel time by about 60 percent compared to sitting and waiting for a spot to open, and by about 20 percent compared to a strategy of continually driving to the next closest parking lot.
They also found that crowdsourced observations of parking availability would have an error rate of only about 7 percent, compared to actual parking availability. This indicates it could be an effective way to gather parking probability data.
In the future, the researchers want to conduct larger studies using real-time route information in an entire city. They also want to explore additional avenues for gathering data on parking availability, such as using satellite images, and estimate potential emissions reductions.
“Transportation systems are so large and complex that they are really hard to change. What we look for, and what we found with this approach, is small changes that can have a big impact to help people make better choices, reduce congestion, and reduce emissions,” says Wu.
This research was supported, in part, by Cintra, the MIT Energy Initiative, and the National Science Foundation.
How MIT OpenCourseWare is fueling one learner’s passion for education
Training for a clerical military role in France, Gustavo Barboza felt a spark he couldn’t ignore. He remembered his love of learning, which once guided him through two college semesters of mechanical engineering courses in his native Colombia, coupled with supplemental resources from MIT Open Learning’s OpenCourseWare. Now, thousands of miles away, he realized it was time to follow that spark again.
“I wasn’t ready to sit down in the classroom,” says Barboza, remembering his initial foray into higher education. “I left to try and figure out life. I realized I wanted more adventure.”
Joining the military in France in 2017 was his answer. For the first three years of service, he was very military-minded, only focused on his training and deployments. With more seniority, he took on more responsibilities, and eventually was sent to take a four-month training course on military correspondence and software.
“I reminded myself that I like to study,” he says. “I started to go back to OpenCourseWare because I knew in the back of my mind that these very complete courses were out there.”
At that point, Barboza realized that military service was only a chapter in his life, and the next would lead him back to learning. He was still interested in engineering, and knew that MIT OpenCourseWare could help prepare him for what was next.
He dove into OpenCourseWare’s free, online, open educational resources — which cover nearly the entire MIT curriculum — including classical mechanics, intro to electrical engineering, and single variable calculus with David Jerison, which he says was his most-visited resource. These allowed him to brush up on old skills and learn new ones, helping him tremendously in preparing for college entrance exams and his first-year courses.
Now in his third year at Grenoble-Alpes University, Barboza studies electrical engineering, a shift from his initial interest in mechanical engineering.
“There is an OpenCourseWare lecture that explains all the specializations you can get into with electrical engineering,” he says. “They go from very natural things to things like microprocessors. What interests me is that if someone says they are an electrical engineer, there are so many different things they could be doing.”
At this point in his academic career, Barboza is most interested in microelectronics and the study of radio frequencies and electromagnetic waves. But he admits he has more to learn and is open to where his studies may take him.
MIT OpenCourseWare remains a valuable resource, he says. When thinking about his future, he checks out graduate course listings and considers the different paths he might take. When he is having trouble with a certain concept, he looks for a lecture on the subject, undeterred by the differences between French and U.S. conventions.
“Of course, the science doesn't change, but the way you would write an equation or draw a circuit is different at my school in France versus what I see from MIT. So, you have to be careful,” he explains. “But it is still the first place I visit for problem sets, readings, and lecture notes. It’s amazing.”
The thoroughness and openness of MIT Open Learning’s courses and resources — like OpenCourseWare — stand out to Barboza. In the wide world of the internet, he has found resources from other universities, but he says their offerings are not as robust. And in a time of disinformation and questionable sources, he appreciates that MIT values transparency, accessibility, and knowledge.
“Human knowledge has never been more accessible,” he says. “MIT puts coursework online and says, ‘here’s what we do.’ As long as you have an internet connection, you can learn all of it.”
“I just feel like MIT OpenCourseWare is what the internet was originally for,” Barboza continues. “A network for sharing knowledge. I’m a big fan.”
Explore lifelong learning opportunities from MIT, including courses, resources, and professional programs, on MIT Learn.
AI Found Twelve New Vulnerabilities in OpenSSL
The title of the post is “What AI Security Research Looks Like When It Works,” and I agree:
In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the ...
