Feed aggregator
Scam USPS and E-Z Pass Texts and Websites
Google has filed a complaint in court that details the scam:
In a complaint filed Wednesday, the tech giant accused “a cybercriminal group in China” of selling “phishing for dummies” kits. The kits help unsavvy fraudsters easily “execute a large-scale phishing campaign,” tricking hordes of unsuspecting people into “disclosing sensitive information like passwords, credit card numbers, or banking information, often by impersonating well-known brands, government agencies, or even people the victim knows.”
These branded “Lighthouse” kits offer two versions of software, depending on whether bad actors want to launch SMS and e-commerce scams. “Members may subscribe to weekly, monthly, seasonal, annual, or permanent licenses,” Google alleged. Kits include “hundreds of templates for fake websites, domain set-up tools for those fake websites, and other features designed to dupe victims into believing they are entering sensitive information on a legitimate website.”...
EPA falls behind schedule for repealing endangerment finding
‘Drowning under paper’: Vulnerable countries push to slice red tape for climate aid
Gas exports may increase Americans’ heating bills, EIA says
Rising seas threaten thousands of hazardous US facilities
Alito is urged to back out of Louisiana coastal erosion case
Senate upholds Trump administration methane rule
New York Democrats split on climate law
Turkey to host 2026 climate summit, in defeat for Australia
EU missing from COP30 push to drop fossil fuels
EU strains to defend carbon levy as trade tensions engulf COP30
Rail project raises questions about Brazil’s effort to protect the Amazon
South Africa to urge rich nations to do more against climate change at G20
Misalignment between objective and perceived heat risks
Nature Climate Change, Published online: 20 November 2025; doi:10.1038/s41558-025-02505-9
Objective assessments indicate that extreme heat is increasing health risks; however, many of the most exposed populations do not perceive extreme heat as risky. This misperception may undermine public awareness of the need for effective cooling strategies, leaving a dangerous blind spot in adaptation and protection.
Scientists get a first look at the innermost region of a white dwarf system
Some 200 light years from Earth, the core of a dead star is circling a larger star in a macabre cosmic dance. The dead star is a type of white dwarf that exerts a powerful magnetic field as it pulls material from the larger star into a swirling, accreting disk. The spiraling pair is what’s known as an “intermediate polar” — a type of star system that gives off a complex pattern of intense radiation, including X-rays, as gas from the larger star falls onto the other one.
Now, MIT astronomers have used an X-ray telescope in space to identify key features in the system’s innermost region — an extremely energetic environment that has been inaccessible to most telescopes until now. In an open-access study published in the Astrophysical Journal, the team reports using NASA’s Imaging X-ray Polarimetry Explorer (IXPE) to observe the intermediate polar, known as EX Hydrae.
The team found a surprisingly high degree of X-ray polarization, which describes the direction of an X-ray wave’s electric field, as well as an unexpected direction of polarization in the X-rays coming from EX Hydrae. From these measurements, the researchers traced the X-rays back to their source in the system’s innermost region, close to the surface of the white dwarf.
What’s more, they determined that the system’s X-rays were emitted from a column of white-hot material that the white dwarf was pulling in from its companion star. They estimate that this column is about 2,000 miles high — about half the radius of the white dwarf itself and much taller than what physicists had predicted for such a system. They also determined that the X-rays are reflected off the white dwarf’s surface before scattering into space — an effect that physicists suspected but hadn’t confirmed until now.
The team’s results demonstrate that X-ray polarimetry can be an effective way to study extreme stellar environments such as the most energetic regions of an accreting white dwarf.
“We showed that X-ray polarimetry can be used to make detailed measurements of the white dwarf's accretion geometry,” says Sean Gunderson, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research, who is the study’s lead author. “It opens the window into the possibility of making similar measurements of other types of accreting white dwarfs that also have never had predicted X-ray polarization signals.”
Gunderson’s MIT Kavli co-authors include graduate student Swati Ravi and research scientists Herman Marshall and David Huenemoerder, along with Dustin Swarm of the University of Iowa, Richard Ignace of East Tennessee State University, Yael Nazé of the University of Liège, and Pragati Pradhan of Embry Riddle Aeronautical University.
A high-energy fountain
All forms of light, including X-rays, are influenced by electric and magnetic fields. Light travels in waves that wiggle, or oscillate, at right angles to the direction in which the light is traveling. External electric and magnetic fields can pull these oscillations in random directions. But when light interacts and bounces off a surface, it can become polarized, meaning that its vibrations tighten up in one direction. Polarized light, then, can be a way for scientists to trace the source of the light and discern some details about the source’s geometry.
The IXPE space observatory is NASA’s first mission designed to study polarized X-rays that are emitted by extreme astrophysical objects. The spacecraft, which launched in 2021, orbits the Earth and records these polarized X-rays. Since launch, it has primarily focused on supernovae, black holes, and neutron stars.
The new MIT study is the first to use IXPE to measure polarized X-rays from an intermediate polar — a smaller system than black holes and supernovae, but one known to be a strong emitter of X-rays.
“We started talking about how much polarization would be useful to get an idea of what’s happening in these types of systems, which most telescopes see as just a dot in their field of view,” Marshall says.
An intermediate polar gets its name from the strength of the central white dwarf’s magnetic field. When this field is strong, the material from the companion star is directly pulled toward the white dwarf’s magnetic poles. When the field is very weak, the stellar material instead swirls around the dwarf in an accretion disk that eventually deposits matter directly onto the dwarf’s surface.
In the case of an intermediate polar, physicists predict that material should fall in a complex sort of in-between pattern, forming an accretion disk that also gets pulled toward the white dwarf’s poles. The magnetic field should lift the disk of incoming material far upward, like a high-energy fountain, before the stellar debris falls toward the white dwarf’s magnetic poles, at speeds of millions of miles per hour, in what astronomers refer to as an “accretion curtain.” Physicists suspect that this falling material should run up against previously lifted material that is still falling toward the poles, creating a sort of traffic jam of gas. This pile-up of matter forms a column of colliding gas that is tens of millions of degrees Fahrenheit and should emit high-energy X-rays.
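A quick back-of-the-envelope calculation shows why infall speeds reach millions of miles per hour: material falling from far away strikes the white dwarf at roughly the escape velocity. The Python sketch below uses typical textbook values (an assumed mass of 0.8 solar masses and a radius of about 4,000 miles; these are illustrative, not figures from the study):

```python
import math

# Back-of-the-envelope infall speed onto a white dwarf.
# Mass and radius are typical illustrative values, not parameters
# taken from the EX Hydrae study.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg

m_wd = 0.8 * M_SUN     # assumed white dwarf mass
r_wd = 6.4e6           # assumed radius, meters (~4,000 miles)

# Free-fall speed from rest at large distance: v = sqrt(2GM/R)
v = math.sqrt(2 * G * m_wd / r_wd)

mph = v * 2.23694      # convert m/s to miles per hour
print(f"infall speed ~ {v / 1e3:,.0f} km/s ~ {mph / 1e6:.0f} million mph")
```

That works out to several thousand kilometers per second, comfortably in the millions of miles per hour.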
An innermost picture
By measuring any polarized X-rays emitted by EX Hydrae, the team aimed to test the picture of intermediate polars that physicists had hypothesized. In January 2025, IXPE took a total of about 600,000 seconds, or about seven days’ worth, of X-ray measurements from the system.
“With every X-ray that comes in from the source, you can measure the polarization direction,” Marshall explains. “You collect a lot of these, and they’re all at different angles and directions which you can average to get a preferred degree and direction of the polarization.”
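In practice, that averaging is often done through event-level Stokes parameters. Here is a minimal, self-contained sketch of the idea, assuming idealized per-photon angle measurements psi and a known detector modulation factor mu; the real IXPE pipeline involves weighting and calibration steps that this toy version omits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulation: per-photon polarization angles follow
# f(psi) proportional to 1 + mu * p * cos(2 * (psi - angle0)).
# All values below are illustrative, not from the EX Hydrae observation.
p_true = 0.08                  # e.g., an 8 percent polarization degree
angle0 = np.deg2rad(30.0)      # assumed source polarization angle
mu = 0.3                       # assumed detector modulation factor
n = 200_000                    # number of simulated X-ray events

# Sample photon angles by rejection sampling from the modulated distribution.
psi = rng.uniform(0.0, np.pi, n)
accept = rng.uniform(0.0, 1.0 + mu * p_true, n) < (
    1.0 + mu * p_true * np.cos(2.0 * (psi - angle0))
)
psi = psi[accept]

# Average the events into Stokes parameters, then recover degree and angle.
q = 2.0 * np.mean(np.cos(2.0 * psi))
u = 2.0 * np.mean(np.sin(2.0 * psi))
degree = np.hypot(q, u) / mu                 # polarization degree
angle = 0.5 * np.degrees(np.arctan2(u, q))   # polarization angle, degrees

print(f"recovered degree ~ {degree:.3f}, angle ~ {angle:.1f} deg")
```

With enough events, the recovered degree and angle converge on the inputs; real analyses also propagate uncertainties on both quantities.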
Their measurements revealed a polarization degree of 8 percent, much higher than some theoretical models had predicted. From there, the researchers confirmed that the X-rays were indeed coming from the system’s column, and that this column is about 2,000 miles high.
“If you were able to stand somewhat close to the white dwarf’s pole, you would see a column of gas stretching 2,000 miles into the sky, and then fanning outward,” Gunderson says.
The team also measured the direction of EX Hydrae’s X-ray polarization, which they determined to be perpendicular to the white dwarf’s column of incoming gas. This was a sign that the X-rays emitted by the column were then bouncing off the white dwarf’s surface before traveling into space, and eventually into IXPE’s telescopes.
“The thing that’s helpful about X-ray polarization is that it’s giving you a picture of the innermost, most energetic portion of this entire system,” Ravi says. “When we look through other telescopes, we don’t see any of this detail.”
The team plans to apply X-ray polarization to study other accreting white dwarf systems, which could help scientists get a grasp on much larger cosmic phenomena.
“There comes a point where so much material is falling onto the white dwarf from a companion star that the white dwarf can’t hold it anymore, the whole thing collapses and produces a type of supernova that’s observable throughout the universe, which can be used to figure out the size of the universe,” Marshall offers. “So understanding these white dwarf systems helps scientists understand the sources of those supernovae, and tells you about the ecology of the galaxy.”
This research was supported, in part, by NASA.
The cost of thinking
Large language models (LLMs) like ChatGPT can write an essay or plan a menu almost instantly. But until recently, it was also easy to stump them. The models, which rely on language patterns to respond to users’ queries, often failed at math problems and were not good at complex reasoning. Suddenly, however, they’ve gotten a lot better at these things.
A new generation of LLMs known as reasoning models is being trained to solve complex problems. Like humans, they need some time to think through problems like these — and remarkably, scientists at MIT’s McGovern Institute for Brain Research have found that the kinds of problems that require the most processing from reasoning models are the very same problems that people need to take their time with. In other words, they report today in the journal PNAS, the “cost of thinking” for a reasoning model is similar to the cost of thinking for a human.
The researchers, who were led by Evelina Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute, conclude that in at least one important way, reasoning models have a human-like approach to thinking. That, they note, is not by design. “People who build these models don’t care if they do it like humans. They just want a system that will robustly perform under all sorts of conditions and produce correct responses,” Fedorenko says. “The fact that there’s some convergence is really quite striking.”
Reasoning models
Like many forms of artificial intelligence, the new reasoning models are artificial neural networks: computational tools that learn how to process information when they are given data and a problem to solve. Artificial neural networks have been very successful at many of the tasks that the brain’s own neural networks do well — and in some cases, neuroscientists have discovered that the best-performing models share certain aspects of information processing with the brain. Still, some scientists argued that artificial intelligence was not ready to take on more sophisticated aspects of human intelligence.
“Up until recently, I was among the people saying, ‘These models are really good at things like perception and language, but it’s still going to be a long ways off until we have neural network models that can do reasoning,’” Fedorenko says. “Then these large reasoning models emerged and they seem to do much better at a lot of these thinking tasks, like solving math problems and writing pieces of computer code.”
Andrea Gregor de Varda, a K. Lisa Yang ICoN Center Fellow and a postdoc in Fedorenko’s lab, explains that reasoning models work out problems step by step. “At some point, people realized that models needed to have more space to perform the actual computations that are needed to solve complex problems,” he says. “The performance started becoming way, way stronger if you let the models break down the problems into parts.”
To encourage models to work through complex problems in steps that lead to correct solutions, engineers can use reinforcement learning. During their training, the models are rewarded for correct answers and penalized for wrong ones. “The models explore the problem space themselves,” de Varda says. “The actions that lead to positive rewards are reinforced, so that they produce correct solutions more often.”
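As a cartoon of that training signal, the sketch below runs a tiny REINFORCE-style loop in which a softmax policy chooses among three canned problem-solving strategies and is rewarded only when its attempt succeeds. It is a deliberately toy stand-in for reinforcement learning on an actual language model; the strategy names and success rates are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a model picking a problem-solving approach.
# Success probabilities are invented; stepwise decomposition wins most often.
strategies = ["answer immediately", "guess and check", "decompose step by step"]
success_prob = np.array([0.2, 0.4, 0.9])

prefs = np.zeros(3)   # policy preferences (logits)
lr = 0.1              # learning rate

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(2000):
    p = softmax(prefs)
    a = rng.choice(3, p=p)                            # sample a strategy
    reward = 1.0 if rng.random() < success_prob[a] else -1.0
    grad_log_pi = -p                                  # d log pi(a) / d prefs
    grad_log_pi[a] += 1.0
    prefs += lr * reward * grad_log_pi                # REINFORCE update

for name, prob in zip(strategies, softmax(prefs)):
    print(f"{name}: {prob:.3f}")   # mass shifts toward stepwise decomposition
```

Actions that earn positive reward become more probable over time, which is the mechanic de Varda describes, writ small.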
Models trained in this way are much more likely than their predecessors to arrive at the same answers a human would when they are given a reasoning task. Their stepwise problem-solving does mean reasoning models can take a bit longer to find an answer than the LLMs that came before — but since they’re getting right answers where the previous models would have failed, their responses are worth the wait.
The models’ need to take some time to work through complex problems already hints at a parallel to human thinking: if you demand that a person solve a hard problem instantaneously, they’d probably fail, too. De Varda wanted to examine this relationship more systematically. So he gave reasoning models and human volunteers the same set of problems, and tracked not just whether they got the answers right, but also how much time or effort it took them to get there.
Time versus tokens
This meant measuring how long it took people to respond to each question, down to the millisecond. For the models, de Varda used a different metric. It didn’t make sense to measure processing time, since that depends more on computer hardware than on the effort the model puts into solving a problem. So instead, he tracked tokens, which make up a model’s internal chain of thought. “They produce tokens that are not meant for the user to see and work on, but just to have some track of the internal computation that they’re doing,” de Varda explains. “It’s as if they were talking to themselves.”
Both humans and reasoning models were asked to solve seven different types of problems, like numeric arithmetic and intuitive reasoning. For each problem class, they were given many problems. The harder a given problem was, the longer it took people to solve it — and the longer it took people to solve a problem, the more tokens a reasoning model generated as it came to its own solution.
Likewise, the classes of problems that humans took longest to solve were the same classes of problems that required the most tokens for the models: arithmetic problems were the least demanding, whereas a group of problems called the “ARC challenge,” where pairs of colored grids represent a transformation that must be inferred and then applied to a new object, were the most costly for both people and models.
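Conceptually, the comparison reduces to correlating two cost measures across problems: human solving time and model token count. A minimal sketch with invented per-class averages (not the study’s data) looks like this:

```python
import numpy as np

# Hypothetical mean costs for seven problem classes, ordered from easy
# arithmetic to ARC-style puzzles. These numbers are invented for
# illustration; they are NOT the values reported in the PNAS paper.
human_seconds = np.array([3.1, 5.4, 8.2, 12.9, 21.7, 35.0, 60.3])
model_tokens = np.array([140.0, 260.0, 410.0, 650.0, 1100.0, 1900.0, 3400.0])

# Pearson correlation between the two cost measures.
r = np.corrcoef(human_seconds, model_tokens)[0, 1]
print(f"human time vs. model tokens: r = {r:.2f}")

# Costs spanning orders of magnitude are often compared on log scales.
r_log = np.corrcoef(np.log(human_seconds), np.log(model_tokens))[0, 1]
print(f"log-log correlation: r = {r_log:.2f}")
```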
De Varda and Fedorenko say the striking match in the costs of thinking demonstrates one way in which reasoning models are thinking like humans. That doesn’t mean the models are recreating human intelligence, though. The researchers still want to know whether the models use similar representations of information to the human brain, and how those representations are transformed into solutions to problems. They’re also curious whether the models will be able to handle problems that require world knowledge that is not spelled out in the texts that are used for model training.
The researchers point out that even though reasoning models generate internal monologues as they solve problems, they are not necessarily using language to think. “If you look at the output that these models produce while reasoning, it often contains errors or some nonsensical bits, even if the model ultimately arrives at a correct answer. So the actual internal computations likely take place in an abstract, non-linguistic representation space, similar to how humans don’t use language to think,” de Varda says.
How a building creates and defines a region
As an undergraduate majoring in architecture, Dong Nyung Lee ’21 wasn’t sure how to respond when friends asked him what the study of architecture was about.
“I was always confused about how to describe it myself,” he says with a laugh. “I would tell them that it wasn’t just about a building, or a city, or a community. It’s a balance across different scales, and it has to touch everything all at once.”
As a graduate student enrolled in a design studio course last spring — 4.154 (Territory as Interior) — Lee and his classmates had to design a building that would serve a specific community in a specific location. The course, says Lee, gave him clarity as to “what architecture is all about.”
Designed by Roi Salgueiro Barrio, a lecturer in the MIT School of Architecture and Planning’s Department of Architecture, the course combines ecological principles, architectural design, urban economics, and social considerations to address real-world problems in marginalized or degraded areas.
“When we build, we always impact economies, mostly by the different types of technologies we use and their dependence on different types of labor and materials,” says Salgueiro Barrio. “The intention here was to think at both levels: the activities that can be accommodated, and how we can actually build something.”
Research first
Students were tasked with repurposing an abandoned fishing industry building on the Barbanza Peninsula in Galicia, Spain, and proposing a new economic activity for the building that would help regenerate the local economy. Working in groups, they researched the region’s material resources and economic sectors and designed detailed maps. This approach to constructing a building was new for Vincent Jackow, a master’s student in architecture.
“Normally in architecture, we work at the scale of one-to-100 meters,” he says. “But this process allowed me to connect the dots between what the region offered and what could be built to support the economy.”
The aim of revitalizing this area is also a goal of Fundación RIA (FRIA), a nonprofit think tank established by Pritzker Prize-winning architect David Chipperfield. FRIA generates research and territorial planning with the goal of long-term sustainability of the built and natural environment in the Galicia region. During their spring break in March, the students traveled to Galicia, met with Chipperfield, business owners, fishermen, and farmers, and explored a variety of sites. They also consulted with the owner of the building they were to repurpose.
Returning to MIT, the students constructed nine detailed models. Master’s student Aleks Banaś says she took the studio because it required her to explore the variety of scales in an architectural project, from territorial analysis to building detail, all while keeping the socioeconomic aspects of design decisions in mind.
“I’m interested in how architecture can support local economies,” says Banaś. “Visiting Galicia was very special because of the communities we interacted with. We were no longer looking at articles and maps of the region; we were learning about day-to-day life. A lot of people shared with us the value of their work, which is not economically feasible.”
Banaś was impressed by the region’s strong maritime history and the generations of craftspeople working on timber boat-making. Inspired by the collective spirit of the region, she designed “House of Sea,” transforming the former cannery into a hub for community gathering and seafront activities. The reimagined building would accommodate a variety of functions including a boat-building workshop for the Ribeira carpenters’ association, a restaurant, and a large, covered section for local events such as the annual barnacle festival.
“I wanted to demonstrate how we can create space for an alternative economy that can host and support these skills and traditions,” says Banaś.
Jackow’s building — “La Nueva Cordelería,” or “New Rope Making” — was a facility using hemp to produce rope and hempcrete blocks (a construction material). The production of both “is very on-trend in the E.U.” and provides an alternative to petrochemical-based ropes for the region’s marine uses, says Jackow. The building would serve as a cultural hub, incorporating a café, worker housing, and offices. Even its structure would make use of the rope, joining timber with knots so that the interior spaces could be reconfigured.
Lee’s building was designed to engage with the forestry and agricultural industries.
“What intrigued me was that Galicia is heavily dependent on pulp production and wood harvesting,” he says. “I wanted to give value to the post-harvest residue.”
Lee designed a biochar plant using some of the concrete and terra cotta blocks on site. Biochar is made by heating the harvested wood residue through pyrolysis — thermal decomposition in an environment with little oxygen. The resulting biochar would be used by farmers for soil enhancement.
“The work demonstrated an understanding of the local resources and using them to benefit the revitalization of the area,” says Salgueiro Barrio, who was pleased with the results.
FRIA was so impressed with the work that they held an exhibition at their gallery in Santiago de Compostela in August and September to highlight the importance of connecting academic research with the territory through student projects. Banaś interned with FRIA over the summer working on multiple projects, including the plan and design for the exhibition. The challenge here, she says, was to design an exhibition of academic work for a general audience. The final presentation included maps, drawings, and photographs by the students.
For Lee, the course was more meaningful than any he has taken to date. Moving between the different scales of the project illustrated, for him, “the biggest challenge for a designer and an architect. Architecture is universal, and very specific. Keeping those dualities in focus was the biggest challenge and the most interesting part of this project. It hit at the core of what architecture is.”
Symposium examines the neural circuits that keep us alive and well
Taking an audience of hundreds on a tour around the body, seven speakers at The Picower Institute for Learning and Memory’s symposium “Circuits of Survival and Homeostasis” on Oct. 21 shared new research on some of the nervous system’s most evolutionarily ancient functions.
Introducing the symposium that she arranged with a picture of a man at a campfire on a frigid day, Sara Prescott, assistant professor in the Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences, pointed out that the brain and the body cooperate constantly just to keep us going, and that when the systems they maintain fail, the consequence is disease.
“[This man] is tightly regulating his blood pressure, glucose levels, his energy expenditure, inflammation and breathing rate, and he’s doing this in the face of a fluctuating external environment,” Prescott said. “Behind each of these processes there are networks of neurons that are working quietly in the background to maintain internal stability. And this is, of course, the brain’s oldest job.”
Indeed, although the discoveries they shared about the underlying neuroscience were new, the speakers each described experiences that are as timeless as they are familiar: the beating of the heart, the transition from hunger to satiety, and the healing of cuts on our skin.
Feeling warm and full
Li Ye, a scientist at Scripps Research, picked right up on the example of coping with the cold. Mammals need to maintain a consistent internal body temperature, and so they will increase metabolism in the cold and then, as energy supplies dwindle, seek out more food. His lab’s 2023 study identified the circuit, centered in the xiphoid nucleus of the brain’s thalamus, that regulates this behavior by sensing prolonged cold exposure and energy consumption. Ye described other feeding mechanisms his lab is studying as well, including searching out the circuitry that regulates how long an animal will feed at a time. For instance, if you’re worried about predators finding you, it’s a bad idea to linger for a leisurely lunch.
Physiologist Zachary Knight of the University of California at San Francisco also studies feeding and drinking behaviors. In particular, his lab asks how the brain knows when it’s time to stop. The conventional wisdom is that all that’s needed is a feeling of fullness coming from the gut, but his research shows there is more to the story. A 2023 study from his lab found a population of neurons in the caudal nucleus of the solitary tract in the brain stem that receive signals about ingestion and taste from the mouth, and that send that “stop eating” signal. They also found a separate neural population in the brain stem that indeed receives fullness signals from the gut, and teaches the brain over time how much food leads to satisfaction. Both neuron types work together to regulate the pace of eating. His lab has continued to study how brain stem circuits regulate feeding using these multiple inputs.
Energy balance depends not only on how many calories come in, but also on how much energy is spent. When food is truly scarce, many animals will engage in a state of radically lowered metabolism called torpor (like hibernation), where body temperature plummets. The brain circuits that exert control over body temperature are another area of active research. In his talk, Harvard University neurologist Clifford Saper described years of research in which his lab found neurons in the median preoptic nucleus that dictate this metabolic state. Recently, his lab demonstrated that the same neurons that regulate torpor also regulate fever during sickness. When the neurons are active, body temperature drops. When they are inhibited, fever ensues. Thus, the same neurons act as a two-way switch for body temperature in response to different threatening conditions.
Sickness, injury, and stress
As the idea of fever suggests, the body also has evolved circuits (that scientists are only now dissecting) to deal with sickness and injury.
Washington University neuroscientist Qin Liu described her research into the circuits governing coughing and sneezing, which, on one hand, can clear the upper airways of pathogens and obstructions but, on the other hand, can spread those pathogens to others in the community. She described her lab’s 2024 study in which her team pinpointed a population of neurons in the nasal passages that mediate sneezing and a different population of sensory neurons in the trachea that produce coughing. Identifying the specific cells and their unique characteristics makes them potentially viable drug targets.
While Liu tackled sickness, Harvard stem cell biologist Ya-Chieh Hsu discussed how neurons can reshape the body’s tissues during stress and injury, specifically the hair and skin. Though it is common lore that stress can make your hair gray and fall out, Hsu’s lab has uncovered the physiological mechanisms that make it so. In 2020 her team showed that bursts of noradrenaline from the hyperactivation of nerves in the sympathetic nervous system kill the melanocyte stem cells that give hair its color. She described newer research indicating that a similar mechanism may also make hair fall out by killing off cells at the base of hair follicles, releasing cellular debris and triggering autoimmunity. Her lab has also looked at how the nervous system influences skin healing after injury. For instance, while our skin may appear to heal after a cut because it closes up, many skin cell types actually don’t rebound (unless you’re still an embryo). By comparing embryos with post-birth mice, Hsu’s lab has traced the neural mechanisms that prevent fuller healing, identifying a role for cells called fibroblasts and the nervous system.
Continuing on the theme of stress, Caltech biologist Yuki Oka discussed a broad-scale project in his lab to develop a molecular and cellular atlas of the sympathetic nervous system, which innervates much of the body and famously produces its “fight or flight” responses. In work partly published last year, the project has charted cells and circuits involved in functions ranging from salivation to bile secretion. Oka and co-authors made the case for studying the system more deeply in a review paper earlier this year.
A new model to study human biology
In their search for the best ways to understand the circuits that govern survival and homeostasis, researchers often use rodents because they are genetically tractable, easy to house, and reproduce quickly. Stanford University biochemist Mark Krasnow has worked to develop a new model organism with many of those same traits but a closer genetic relationship to humans: the mouse lemur. In his talk, he described that work (which includes extensive field research in Madagascar) and focused on insights the mouse lemurs have yielded into heart arrhythmias. After studying the genes and health of hundreds of mouse lemurs, his lab identified a family with “sick sinus syndrome,” an arrhythmia also seen in humans. In a preprint study, his lab describes the specific molecular pathways at fault in disrupting the heart’s natural pacemaking.
By sharing some of the latest research into how the brain and body work to stay healthy, the symposium’s speakers highlighted the most current thinking about the nervous system’s most primal purposes.
Quantum modeling for breakthroughs in materials science and sustainable energy
Ernest Opoku knew he wanted to become a scientist when he was a little boy. But his school in Dadease, a small town in Ghana, offered no elective science courses — so Opoku created one for himself.
Even though they had neither a dedicated science classroom nor a lab, Opoku persuaded his principal to bring in someone to teach him and the five friends he had convinced to join him. With just a chalkboard and some imagination, they learned about chemical interactions through the formulas and diagrams they drew together.
“I grew up in a town where it was difficult to find a scientist,” he says.
Today, Opoku has become one himself, recently earning a PhD in quantum chemistry from Auburn University. This year, he joins MIT as part of the School of Science Dean’s Postdoctoral Fellowship program. Working with the Van Voorhis group in the Department of Chemistry, Opoku aims to advance computational methods for studying how electrons behave — fundamental research that underlies applications ranging from materials science to drug discovery.
“As a boy who wanted to satisfy my own curiosities at a young age, in addition to the fact that my parents had minimal formal education,” Opoku says, “I knew that the only way I would be able to accomplish my goal was to work hard.”
In pursuit of knowledge
When Opoku was 8 years old, he began learning English at school. He would come back with homework, but his parents were unable to help him, as neither of them could read or write in English. Frustrated, his mother asked an older student to tutor her son.
Every day, the boys would meet at 6 o’clock. With no electricity at either of their homes, they practiced new vocabulary and pronunciations together by a kerosene lamp.
As he entered junior high school, Opoku’s fascination with nature grew.
“I realized that chemistry was the central science that really offered the insight that I wanted to really understand Creation from the smallest level,” he says.
He studied diligently and was able to get into one of Ghana’s top high schools — but his parents couldn’t afford the tuition. He therefore enrolled in Dadease Agric Senior High School in his hometown. By growing tomatoes and maize, he saved up enough money to support his education.
In 2012, he got into Kwame Nkrumah University of Science and Technology (KNUST), one of the top-ranked universities in Ghana and the West Africa region. There, he was introduced to computational chemistry. Unlike many other branches of science, the field required only a laptop and an internet connection to study chemical reactions.
“Anything that comes to mind, anytime I can grab my computer and I’ll start exploring my curiosity. I don’t have to wait to go to the laboratory in order to interrogate nature,” he says.
Opoku worked from early morning to late night. None of it felt like work, though, thanks to his supervisor, the late quantum chemist Richard Tia, who was an associate professor of chemistry at KNUST.
“Every single day was a fun day,” he recalls of his time working with Tia. “I was being asked to do the things that I myself wanted to know, to satisfy my own curiosity, and by doing that I’ll be given a degree.”
In 2020, Opoku’s curiosity brought him even further, this time overseas to Auburn University in Alabama for his PhD. Under the guidance of his advisor, Professor J. V. Ortiz, Opoku contributed to the development of new computational methods to simulate how electrons bind to or detach from molecules, a process known as electron propagation.
What is new about Opoku’s approach is that it does not rely on any adjustable or empirical parameters. Unlike some earlier computational methods that require tuning to match experimental results, his technique uses advanced mathematical formulations to treat electron interactions directly from first principles. This makes the method more accurate — its results closely match those of lab experiments — while using less computational power.
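For context on the kind of quantity these methods target, the sketch below uses the open-source PySCF package to estimate water’s ionization energy via Koopmans’ theorem, the crude zeroth-order picture that electron propagator theories systematically correct. This is background illustration only, not Opoku’s method:

```python
# Zeroth-order ionization energy of water from Koopmans' theorem, using the
# open-source PySCF package. Electron propagator methods go beyond this
# picture by including electron correlation and relaxation; this sketch is
# background illustration, not the parameter-free method described above.
from pyscf import gto, scf

mol = gto.M(
    atom="""O  0.0000   0.0000   0.1173
            H  0.0000   0.7572  -0.4692
            H  0.0000  -0.7572  -0.4692""",
    basis="cc-pvdz",
)

mf = scf.RHF(mol)
mf.kernel()  # converge the Hartree-Fock self-consistent field

# Koopmans' theorem: ionization energy is roughly minus the HOMO energy.
homo_index = mol.nelectron // 2 - 1      # highest doubly occupied orbital
ip_hartree = -mf.mo_energy[homo_index]
print(f"Koopmans ionization energy ~ {ip_hartree * 27.2114:.2f} eV")
```

Propagator methods replace the bare orbital energy with a correlation-corrected value, typically landing much closer to the experimental ionization energy (about 12.6 eV for water).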
By streamlining the calculations and eliminating guesswork, Opoku’s work marks a major step toward faster, more trustworthy quantum simulations across a wide range of molecules, including those never studied before — laying the groundwork for breakthroughs in many areas such as materials science and sustainable energy.
For his postdoctoral research at MIT, Opoku aims to advance electron propagator methods to address larger and more complex molecules and materials by integrating quantum computing, machine learning, and bootstrap embedding — a technique that simplifies quantum chemistry calculations by dividing large molecules into smaller, overlapping fragments. He is collaborating with Troy Van Voorhis, the Haslam and Dewey Professor of Chemistry, whose expertise in these areas can help make Opoku’s advanced simulations more computationally efficient and scalable.
“His approach is different from any of the ways that we've pursued in the group in the past,” Van Voorhis says.
Passing along the opportunity to learn
Opoku thanks previous mentors who helped him overcome the “intellectual overhead required to make contributions to the field,” and believes Van Voorhis will offer the same kind of support.
In 2021, Opoku joined the National Organization for the Professional Advancement of Black Chemists and Chemical Engineers (NOBCChE) to gain mentorship, networking, and career development opportunities within a supportive community. He later led the Auburn University chapter as president, helping coordinate K-12 outreach to inspire the next generation of scientists, engineers, and innovators.
“Opoku’s mentorship goes above and beyond what would be typical at his career stage,” says Van Voorhis. “One reason is his ability to communicate science to people, and not just the concepts of science, but also the process of science."
Back home, Opoku founded the Nesvard Institute of Molecular Sciences to help African students develop not only skills for graduate school and professional careers, but also a sense of confidence and cultural identity. Through the nonprofit, he has mentored 29 students so far, passing along the opportunity for them to follow their curiosity and help others do the same.
“There are many areas of science and engineering to which Africans have made significant contributions, but these contributions are often not recognized, celebrated, or documented,” Opoku says.
He adds: “We have a duty to change the narrative.”
The Patent Office Is About To Make Bad Patents Untouchable
The U.S. Patent and Trademark Office (USPTO) has proposed new rules that would effectively end the public’s ability to challenge improperly granted patents at their source—the Patent Office itself. If these rules take effect, they will hand patent trolls exactly what they’ve been chasing for years: a way to keep bad patents alive and out of reach. People targeted with troll lawsuits will be left with almost no realistic or affordable way to defend themselves.
We need EFF supporters to file public comments opposing these rules right away. The deadline for public comments is December 2. The USPTO is moving quickly, and staying silent will only help those who profit from abusive patents.
Tell USPTO: The public has a right to challenge bad patents
We’re asking supporters who care about a fair patent system to file comments using the federal government’s public comment system. Your comments don’t need to be long, or use legal or technical vocabulary. The important thing is that everyday users and creators of technology have the chance to speak up, and be counted.
Below is a short, simple comment you can copy and paste. Your comment will carry more weight if you add a personal sentence or two of your own. Please note that comments should be submitted under your real name and will become part of the public record.
Sample comment:
I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.
Why This Rule Change Matters
Inter partes review (IPR) isn’t perfect. It hasn’t eliminated patent trolling, and it’s not available in every case. But it is one of the few practical ways for ordinary developers, small companies, nonprofits, and creators to challenge a bad patent without spending millions of dollars in federal court. That’s why patent trolls hate it—and why the USPTO’s new rules are so dangerous.
IPR isn’t easy or cheap, but compared to years of litigation, it’s a lifeline. When the system works, it removes bogus patents from the table for everyone, not just the target of a single lawsuit.
IPR petitions are decided by the Patent Trial and Appeal Board (PTAB), a panel of specialized administrative judges inside the USPTO. Congress designed IPR to provide a fresh, expert look at whether a patent should have been granted in the first place—especially when strong prior art surfaces. Unlike full federal trials, PTAB review is faster, more technical, and actually accessible to small companies, developers, and public-interest groups.
Here are three real examples of how IPR protected the public:
- The “Podcasting Patent” (Personal Audio)
Personal Audio claimed it had “invented” podcasting and demanded royalties from audio creators using its so-called podcasting patent. EFF crowdsourced prior art, filed an IPR, and ultimately knocked out the patent—benefiting the entire podcasting world.
Under the new rules, this kind of public-interest challenge could easily be blocked on procedural grounds like timing, before the PTAB even examines the patent.
- SportBrain’s “upload your fitness data” patent
SportBrain sued more than 80 companies over a patent that claimed to cover basic gathering of user data and sending it over a network. A panel of PTAB judges canceled every claim.
Under the new rules, this patent could have survived long enough to force dozens more companies to pay up.
- Shipping & Transit: a troll that sued hundreds of businesses
For more than a decade, Shipping & Transit sued companies over extremely broad “delivery notification” patents. After repeated losses at PTAB and in court (including fee awards), the company finally collapsed.
Under the new rules, a troll like this could keep its patents alive and continue carpet-bombing small businesses with lawsuits.
IPR hasn’t ended patent trolling. But when a troll waves a bogus patent at hundreds or thousands of people, IPR is one of the only tools that can actually fix the underlying problem: the patent itself. It dismantles abusive patent monopolies that never should have existed, saving entire industries from predatory litigation. That’s exactly why patent trolls and their allies have fought so hard to shut it down. They’ve failed to dismantle IPR in court or in Congress—and now they’re counting on the USPTO’s own leadership to do it for them.
What the USPTO Plans To Do
First, they want you to give up your defenses in court. Under this proposal, a defendant can’t file an IPR unless they promise to never challenge the patent’s validity in court.
For someone actually being sued or threatened with patent infringement, that’s simply not a realistic promise to make. The choice would be: use IPR and lose your defenses—or keep your defenses and lose IPR.
Second, the rules allow patents to become “unchallengeable” after one prior fight. That’s right: if a patent survives any earlier validity fight, anywhere, these rules would block everyone else from bringing an IPR, even years later and even if new prior art surfaces. One early decision—even one that was poorly argued, or didn’t have all the evidence—would shut the door on the entire public.
Third, the rules will block IPR entirely if a district court case is projected to move faster than PTAB.
So if a troll sues you with one of the outrageous patents we’ve seen over the years, like patents on watching an ad, showing picture menus, or clocking in to work, the USPTO won’t even look at it. It’ll be back to the bad old days, where you have exactly one way to beat the troll (who chose the court to sue in)—spend millions on experts and lawyers, then take your chances in front of a federal jury.
The USPTO claims this is fine because defendants can still challenge patents in district court. That’s misleading. A real district-court validity fight costs millions of dollars and takes years. For most people and small companies, that’s no opportunity at all.
IPR was created by Congress in the 2011 America Invents Act after extensive debate. It was meant to give the public a fast, affordable way to correct the Patent Office’s own mistakes. Only Congress—not agency rulemaking—can rewrite that system.
The USPTO shouldn’t be allowed to quietly undermine IPR with procedural traps that block legitimate challenges.
Bad patents still slip through every year. The Patent Office issues hundreds of thousands of new patents annually. IPR is one of the only tools the public has to push back.
These new rules rely on the absurd presumption that it’s the defendants—the people and companies threatened by questionable patents—who are abusing the system with multiple IPR petitions, and that they should be limited to one bite at the apple.
That’s utterly upside-down. It’s patent trolls like Shipping & Transit and Personal Audio that have sued, or threatened, entire communities of developers and small businesses.
When people have evidence that an overbroad patent was improperly granted, that evidence should be heard. That’s what Congress intended. These rules twist that intent beyond recognition.
In 2023, more than a thousand EFF supporters spoke out and stopped an earlier version of this proposal—your comments made the difference then, and they can again.
Our principle is simple: the public has a right to challenge bad patents. These rules would take that right away. That’s why it’s vital to speak up now.
Sample comment:
I oppose the USPTO’s proposed rule changes for inter partes review (IPR), Docket No. PTO-P-2025-0025. The IPR process must remain open and fair. Patent challenges should be decided on their merits, not shut out because of legal activity elsewhere. These rules would make it nearly impossible for the public to challenge bad patents, and that will harm innovation and everyday technology users.
