Feed aggregator

Von der Leyen vs. Weber: The EU’s climate fight reaches its endgame

ClimateWire News - Mon, 07/14/2025 - 6:08am
The two EU conservative heavyweights’ growing divisions are coming to a head over a crucial 2040 climate target.

Elon Musk faces a new threat in Canada

ClimateWire News - Mon, 07/14/2025 - 6:08am
Prime Minister Mark Carney is under pressure from Washington to make an EV U-turn.

Breaking down the force of water in the Texas floods

ClimateWire News - Mon, 07/14/2025 - 6:07am
A small amount of water — less than many might think — can sweep away people, cars and homes. Six inches is enough to knock people off their feet.

How hot can it get? Scientists struggle to find an answer.

ClimateWire News - Mon, 07/14/2025 - 6:07am
The answer has grave implications for humanity as climate change makes heat more intense and frequent.

Squid Dominated the Oceans in the Late Cretaceous

Schneier on Security - Fri, 07/11/2025 - 5:04pm

New research:

One reason the early years of squids has been such a mystery is because squids’ lack of hard shells made their fossils hard to come by. Undeterred, the team instead focused on finding ancient squid beaks—hard mouthparts with high fossilization potential that could help the team figure out how squids evolved.

With that in mind, the team developed an advanced fossil discovery technique that completely digitized rocks with all their embedded fossils in complete 3D form. Upon using that technique on Late Cretaceous rocks from Japan, the team identified 1,000 fossilized cephalopod beaks hidden inside the rocks, which included 263 squid specimens and 40 previously unknown squid species...

Simulation-based pipeline tailors training data for dexterous robots

MIT Latest News - Fri, 07/11/2025 - 3:20pm

When ChatGPT or Gemini gives what seems to be an expert response to your burning questions, you may not realize how much information it relies on to give that reply. Like other popular generative artificial intelligence (AI) models, these chatbots rely on backbone systems called foundation models that train on billions, or even trillions, of data points.

In a similar vein, engineers are hoping to build foundation models that train a range of robots on new skills like picking up, moving, and putting down objects in places like homes and factories. The problem is that it’s difficult to collect and transfer instructional data across robotic systems. You could teach your system by teleoperating the hardware step-by-step using technology like virtual reality (VR), but that can be time-consuming. Training on videos from the internet is less instructive, since the clips don’t provide a step-by-step, specialized task walk-through for particular robots.

A simulation-driven approach called “PhysicsGen” from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Robotics and AI Institute customizes robot training data to help robots find the most efficient movements for a task. The system can multiply a few dozen VR demonstrations into nearly 3,000 simulations per machine. These high-quality instructions are then mapped to the precise configurations of mechanical companions like robotic arms and hands. 

PhysicsGen creates data that generalize to specific robots and conditions via a three-step process. First, a VR headset tracks how humans manipulate objects like blocks using their hands. These interactions are mapped in a 3D physics simulator at the same time, visualizing the key points of our hands as small spheres that mirror our gestures. For example, if you flipped a toy over, you’d see 3D shapes representing different parts of your hands rotating a virtual version of that object.

The pipeline then remaps these points to a 3D model of the setup of a specific machine (like a robotic arm), moving them to the precise “joints” where a system twists and turns. Finally, PhysicsGen uses trajectory optimization — essentially simulating the most efficient motions to complete a task — so the robot knows the best ways to do things like repositioning a box.
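As a rough mental model of that three-stage flow, the sketch below tracks hand keypoints from VR, remaps them to a specific robot's joints, and then optimizes many trajectory variations around each demonstration. Every function name and interface here is a hypothetical illustration, not the actual PhysicsGen code.

```python
# Hypothetical sketch of the three-stage pipeline described above; the names
# and interfaces are illustrative only, not the real PhysicsGen API.
import numpy as np

def track_hand_keypoints(vr_recording):
    """Stage 1: pull per-frame 3D hand keypoints out of a VR demonstration.
    Each frame is assumed to carry a (num_points, 3) array of positions --
    the small spheres that mirror the demonstrator's gestures in simulation."""
    return [np.asarray(frame["keypoints"]) for frame in vr_recording]

def remap_to_robot_joints(keypoint_frames, inverse_kinematics):
    """Stage 2: map the human keypoints onto a specific robot's joints,
    e.g., by solving inverse kinematics for that arm or hand model."""
    return np.stack([inverse_kinematics(points) for points in keypoint_frames])

def optimize_trajectories(joint_trajectory, optimize, n_variations=100, noise=0.01):
    """Stage 3: run trajectory optimization around the remapped motion,
    producing many dynamically feasible variations per demonstration."""
    rng = np.random.default_rng(0)
    variations = []
    for _ in range(n_variations):
        seed = joint_trajectory + rng.normal(0.0, noise, joint_trajectory.shape)
        variations.append(optimize(seed))  # e.g., minimize effort subject to joint limits
    return variations

def physicsgen_sketch(vr_demos, inverse_kinematics, optimize):
    """Turn a few dozen VR demos into thousands of robot-specific trajectories."""
    dataset = []
    for demo in vr_demos:
        keypoints = track_hand_keypoints(demo)
        joints = remap_to_robot_joints(keypoints, inverse_kinematics)
        dataset.extend(optimize_trajectories(joints, optimize))
    return dataset
```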

Each simulation is a detailed training data point that walks a robot through potential ways to handle objects. When implemented into a policy (or the action plan that the robot follows), the machine has a variety of ways to approach a task, and can try out different motions if one doesn’t work.
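In the same spirit, a policy backed by such a library could simply retry with an alternative trajectory when one attempt fails. The snippet below is a minimal, purely illustrative sketch; the robot interface it assumes is hypothetical.

```python
# Minimal, hypothetical sketch of retrying with alternative trajectories
# from the training library when the first attempt does not succeed.
def execute_with_fallback(robot, task, trajectory_library, max_attempts=5):
    """Try trajectories for a task until one succeeds or attempts run out."""
    for trajectory in trajectory_library.get(task, [])[:max_attempts]:
        robot.follow(trajectory)        # assumed robot interface
        if robot.task_succeeded(task):  # e.g., object reached its target pose
            return True
    return False
```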

“We’re creating robot-specific data without needing humans to re-record specialized demonstrations for each machine,” says Lujie Yang, an MIT PhD student in electrical engineering and computer science and CSAIL affiliate who is the lead author of a new paper introducing the project. “We’re scaling up the data in an autonomous and efficient way, making task instructions useful to a wider range of machines.”

Generating so many instructional trajectories for robots could eventually help engineers build a massive dataset to guide machines like robotic arms and dexterous hands. For example, the pipeline might help two robotic arms collaborate on picking up warehouse items and placing them in the right boxes for deliveries. The system may also guide two robots to work together in a household on tasks like putting away cups.

PhysicsGen’s potential also extends to converting data designed for older robots or different environments into useful instructions for new machines. “Despite being collected for a specific type of robot, we can revive these prior datasets to make them more generally useful,” adds Yang.

Addition by multiplication

PhysicsGen turned just 24 human demonstrations into thousands of simulated ones, helping both digital and real-world robots reorient objects.

Yang and her colleagues first tested their pipeline in a virtual experiment where a floating robotic hand needed to rotate a block into a target position. The digital robot executed the task with 81 percent accuracy by training on PhysicsGen’s massive dataset, a 60 percent improvement over a baseline that only learned from human demonstrations.

The researchers also found that PhysicsGen could improve how virtual robotic arms collaborate to manipulate objects. Their system created extra training data that helped two pairs of robots successfully accomplish tasks as much as 30 percent more often than a purely human-taught baseline.

In an experiment with a pair of real-world robotic arms, the researchers observed similar improvements as the machines teamed up to flip a large box into its designated position. When the robots deviated from the intended trajectory or mishandled the object, they were able to recover mid-task by referencing alternative trajectories from their library of instructional data.

Senior author Russ Tedrake, who is the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, adds that this imitation-guided data generation technique combines the strengths of human demonstration with the power of robot motion planning algorithms.

“Even a single demonstration from a human can make the motion planning problem much easier,” says Tedrake, who is also a senior vice president of large behavior models at the Toyota Research Institute and CSAIL principal investigator. “In the future, perhaps the foundation models will be able to provide this information, and this type of data generation technique will provide a type of post-training recipe for that model.”

The future of PhysicsGen

Soon, PhysicsGen may be extended to a new frontier: diversifying the tasks a machine can execute.

“We’d like to use PhysicsGen to teach a robot to pour water when it’s only been trained to put away dishes, for example,” says Yang. “Our pipeline doesn’t just generate dynamically feasible motions for familiar tasks; it also has the potential of creating a diverse library of physical interactions that we believe can serve as building blocks for accomplishing entirely new tasks a human hasn’t demonstrated.”

Creating lots of widely applicable training data may eventually help build a foundation model for robots, though MIT researchers caution that this is a somewhat distant goal. The CSAIL-led team is investigating how PhysicsGen can harness vast, unstructured resources — like internet videos — as seeds for simulation. The goal: transform everyday visual content into rich, robot-ready data that could teach machines to perform tasks no one explicitly showed them.

Yang and her colleagues also aim to make PhysicsGen even more useful for robots with diverse shapes and configurations in the future. To make that happen, they plan to leverage datasets with demonstrations of real robots, capturing how robotic joints move instead of human ones.

The researchers also plan to incorporate reinforcement learning, where an AI system learns by trial and error, to make PhysicsGen expand its dataset beyond human-provided examples. They may augment their pipeline with advanced perception techniques to help a robot perceive and interpret its environment visually, allowing the machine to analyze and adapt to the complexities of the physical world.

For now, PhysicsGen shows how AI can help us teach different robots to manipulate objects within the same category, particularly rigid ones. The pipeline may soon help robots find the best ways to handle soft items (like fruits) and deformable ones (like clay), but those interactions aren’t easy to simulate yet.

Yang and Tedrake wrote the paper with two CSAIL colleagues: co-lead author and MIT PhD student Hyung Ju “Terry” Suh SM ’22 and MIT PhD student Bernhard Paus Græsdal. Robotics and AI Institute researchers Tong Zhao ’22, MEng ’23, Tarik Kelestemur, Jiuguang Wang, and Tao Pang PhD ’23 are also authors. Their work was supported by the Robotics and AI Institute and Amazon.

The researchers recently presented their work at the Robotics: Science and Systems conference.

New AI system uncovers hidden cell subtypes, boosts precision medicine

MIT Latest News - Fri, 07/11/2025 - 2:40pm

In order to produce effective targeted therapies for cancer, scientists need to isolate the genetic and phenotypic characteristics of cancer cells, both within and across different tumors, because those differences impact how tumors respond to treatment.

Part of this work requires a deep understanding of the RNA or protein molecules each cancer cell expresses, where it is located in the tumor, and what it looks like under a microscope.

Traditionally, scientists have looked at one or more of these aspects separately, but now a new deep learning AI tool, CellLENS (Cell Local Environment and Neighborhood Scan), fuses all three domains together, using a combination of convolutional neural networks and graph neural networks to build a comprehensive digital profile for every single cell. This allows the system to group cells with similar biology — effectively separating even those that appear very similar in isolation, but behave differently depending on their surroundings.
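To make the fusion idea concrete, a toy model along these lines might concatenate a CNN embedding of each cell's image patch with a graph-neural-network embedding built from its marker expression and spatial neighborhood. The sketch below, in PyTorch and PyTorch Geometric, is an assumption-laden illustration of that idea, not the published CellLENS architecture.

```python
# Illustrative toy model only; NOT the published CellLENS architecture.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class CellProfileNet(nn.Module):
    def __init__(self, n_markers, img_channels=1, hidden=64, n_subtypes=20):
        super().__init__()
        # CNN branch: encodes each cell's image patch (morphology).
        self.cnn = nn.Sequential(
            nn.Conv2d(img_channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, hidden),
        )
        # GNN branch: encodes marker expression in the context of the cell's
        # neighbors, where edges connect spatially adjacent cells in the tissue.
        self.gnn = GCNConv(n_markers, hidden)
        # Fusion head: combines both views into a per-cell subtype score.
        self.head = nn.Linear(2 * hidden, n_subtypes)

    def forward(self, cell_images, marker_expression, edge_index):
        morph = self.cnn(cell_images)                                # (n_cells, hidden)
        neigh = torch.relu(self.gnn(marker_expression, edge_index))  # (n_cells, hidden)
        return self.head(torch.cat([morph, neigh], dim=1))           # per-cell logits
```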

The study, published recently in Nature Immunology, details the results of a collaboration between researchers from MIT, Harvard Medical School, Yale University, Stanford University, and University of Pennsylvania — an effort led by Bokai Zhu, an MIT postdoc and member of the Broad Institute of MIT and Harvard and the Ragon Institute of MGH, MIT, and Harvard.

Zhu explains the impact of this new tool: “Initially we would say, oh, I found a cell. This is called a T cell. Using the same dataset, by applying CellLENS, now I can say this is a T cell, and it is currently attacking a specific tumor boundary in a patient.

“I can use existing information to better define what a cell is, what is the subpopulation of that cell, what that cell is doing, and what is the potential functional readout of that cell. This method may be used to identify a new biomarker, which provides specific and detailed information about diseased cells, allowing for more targeted therapy development.”

This is a critical advance because current methodologies often miss critical molecular or contextual information — for example, immunotherapies may target cells that only exist at the boundary of a tumor, limiting efficacy. By using deep learning, the researchers can detect many different layers of information with CellLENS, including morphology and where the cell is spatially in a tissue.

When applied to samples from healthy tissue and several types of cancer, including lymphoma and liver cancer, CellLENS uncovered rare immune cell subtypes and revealed how their activity and location relate to disease processes — such as tumor infiltration or immune suppression.

These discoveries could help scientists better understand how the immune system interacts with tumors and pave the way for more precise cancer diagnostics and immunotherapies.

“I’m extremely excited by the potential of new AI tools, like CellLENS, to help us more holistically understand aberrant cellular behaviors within tissues,” says co-author Alex K. Shalek, the director of the Institute for Medical Engineering and Science (IMES), the J. W. Kieckhefer Professor in IMES and Chemistry, and an extramural member of the Koch Institute for Integrative Cancer Research at MIT, as well as an Institute member of the Broad Institute and a member of the Ragon Institute. “We can now measure a tremendous amount of information about individual cells and their tissue contexts with cutting-edge, multi-omic assays. Effectively leveraging that data to nominate new therapeutic leads is a critical step in developing improved interventions. When coupled with the right input data and careful downstream validations, such tools promise to accelerate our ability to positively impact human health and wellness.”

Tradecraft in the Information Age

Schneier on Security - Fri, 07/11/2025 - 12:06pm

Long article on the difficulty (impossibility?) of human spying in the age of ubiquitous digital surveillance.

Study shows a link between obesity and what’s on local restaurant menus

MIT Latest News - Fri, 07/11/2025 - 11:35am

For many years, health experts have been concerned about “food deserts,” places where residents lack good nutritional options. Now, an MIT-led study of three major global cities uses a new, granular method to examine the issue, and concludes that having fewer and less nutritious eating options nearby correlates with obesity and other health outcomes.

Rather than just mapping geographic areas, the researchers examined the dietary value of millions of food items on roughly 30,000 restaurant menus and derived a more precise assessment of the connection between neighborhoods and nutrition.

“We show that what is sold in a restaurant has a direct correlation to people’s health,” says MIT researcher Fabio Duarte, co-author of a newly published paper outlining the study’s results. “The food landscape matters.”

The open-access paper, “Data-driven nutritional assessment of urban food landscapes: insights from Boston, London, Dubai,” was published this week in Nature: Scientific Reports.

The co-authors are Michael Tufano, a PhD student at Wageningen University, in the Netherlands; Duarte, associate director of MIT’s Senseable City Lab, which uses data to study cities as dynamic systems; Martina Mazzarello, a postdoc at the Senseable City Lab; Javad Eshtiyagh, a research fellow at the Senseable City Lab; Carlo Ratti, professor of the practice and director of the Senseable City Lab; and Guido Camps, a senior researcher at Wageningen University.

Scanning the menu

To conduct the study, the researchers examined menus from Boston, Dubai, and London in the summer of 2023, compiling a database of millions of items available through popular food-delivery platforms. The team then evaluated the food items using ratings from the USDA’s FoodData Central database, an information bank listing 375,000 kinds of food products. The study deployed two main metrics: the Meal Balance Index and the Nutrient-Rich Foods Index.

The researchers examined about 222,000 menu items from over 2,000 restaurants in Boston, about 1.6 million menu items from roughly 9,000 restaurants in Dubai, and about 3.1 million menu items from about 18,000 restaurants in London. In Boston, about 71 percent of the items were in the USDA database; in Dubai and London, that figure was 42 percent and 56 percent, respectively.

The team then rated the nutritional value of the items appearing on menus and correlated the food data with health-outcome data from Boston and London. In London, they found a clear correlation between neighborhood menu offerings and obesity (or the lack thereof), with a slightly weaker correlation in Boston. Areas with food options that include a lot of dietary fiber, sometimes along with fruits and vegetables, tend to have better health data.
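In spirit, the neighborhood-level comparison amounts to averaging an item-level nutrition score per area and correlating it with a health outcome. The toy sketch below assumes pandas and placeholder column names; it is not the study's actual code or metrics.

```python
# Toy sketch: average an item-level nutrition score per neighborhood, then
# correlate it with an obesity rate. Column names are placeholders.
import pandas as pd

def nutrition_obesity_correlation(menu_items: pd.DataFrame,
                                  health: pd.DataFrame) -> float:
    """menu_items: one row per menu item, with 'neighborhood' and a
    'nutrient_score' looked up from a food database (e.g., a Nutrient-Rich
    Foods-style index). health: one row per neighborhood, with 'neighborhood'
    and 'obesity_rate'. Returns the Pearson correlation between the mean
    menu score and the obesity rate across neighborhoods."""
    area_scores = (menu_items.groupby("neighborhood")["nutrient_score"]
                             .mean()
                             .rename("mean_menu_score")
                             .reset_index())
    merged = area_scores.merge(health, on="neighborhood")
    return merged["mean_menu_score"].corr(merged["obesity_rate"])
```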

In Dubai, the researchers did not have the same types of health data available but did observe a strong correlation between rental prices and the nutritional value of neighborhood-level food, suggesting that wealthier residents have better nourishment options.

“At the item level, when we have less nutritional food, we see more cases of obesity,” Tufano says. “It’s true that not only do we have more fast food in poor neighborhoods, but the nutritional value is not the same.”

Re-mapping the food landscape

By conducting the study in this fashion, the scholars added a layer of analysis to past studies of food deserts. While past work has broken ground by identifying neighborhoods and areas lacking good food access, this research makes a more comprehensive assessment of what people actually consume. It moves toward evaluating the complex mix of food available in any given area, a mix that exists even in areas with more limited options.

“We were not satisfied with this idea that if you only have fast food, it’s a food desert, but if you have a Whole Foods, it’s not,” Duarte says. “It’s not necessarily like that.”

For the Senseable City Lab researchers, the study introduces a new technique that further enables them to understand city dynamics and the effects of the urban environment on health. Past lab studies have often focused on issues such as urban mobility, while also extending to matters such as air pollution, among other topics.

Being able to study food and health at the neighborhood level, though, is still another example of the ways that data-rich spheres of life can be studied in close detail.

“When we started working on cities and data, the data resolution was so low,” Ratti says. “Today the amount of data is so immense we see this great opportunity to look at cities and see the influence of the urban environment as a big determinant of health. We see this as one of the new frontiers of our lab. It’s amazing how we can now look at this very precisely in cities.”

How an MIT professor introduced hundreds of thousands of students to neuroscience

MIT Latest News - Fri, 07/11/2025 - 9:00am

From the very beginning, MIT Professor Mark Bear’s philosophy for the textbook “Neuroscience: Exploring the Brain” was to provide an accessible and exciting introduction to the field while still giving undergraduates a rigorous scientific foundation. In the 30 years since its first printing in 1995, the treasured 975-page tome has gone on to become the leading introductory neuroscience textbook, reaching hundreds of thousands of students at hundreds of universities around the world.

“We strive to present the hard science without making the science hard,” says Bear, the Picower Professor in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. The fifth edition of the textbook is out today from the publisher Jones & Bartlett Learning.

Bear says the book is conceived, written, and illustrated to instill students with the state of knowledge in the field without assuming prior sophistication in science. When he first started writing it in the late 1980s — in an effort soon joined by his co-authors and former Brown University colleagues Barry Connors and Michael Paradiso — there simply were no undergraduate neuroscience textbooks. Up until then, first as a graduate teaching assistant and then as a young professor, Bear taught Brown’s pioneering introductory neuroscience class with a spiral-bound stack of photocopied studies and other scrounged readings.

Don’t overwhelm

Because universities were only beginning to launch neuroscience classes and majors at the time, Bear recalls that it was hard to find a publisher. The demand was just too uncertain. With an unsure market, Bear says, the original publisher, Williams & Wilkins, wanted to keep costs down by printing only in black and white. But Bear and his co-authors insisted on color. Consistent with their philosophy for the book, they wanted students, even before they began reading, to be able to learn from attractive, high-quality illustrations.

“Rather than those that speak a thousand words, we wanted to create illustrations that each make a single point,” Bear says. “We don’t want to overwhelm students with a bunch of detail. If people want to know what’s in our book, just look at the pictures.”

Indeed, if the book had struck students as impenetrable and dull, Bear says, he and his co-authors would have squandered the advantage they had in presenting their subject: the inherently fascinating and exciting brain.

“Most good scientists are extremely enthusiastic about the science. It’s exciting. It’s fun. It turns them on,” Bear says. “We try to communicate the joy. We’re so lucky because the object of our affection is the brain.”

To help bring that joy and excitement across, another signature of the book throughout its 30-year history has been the way it presents the process of discovery alongside the discoveries themselves, Bear says. While it’s instructive to provide students with the experimental evidence that supports the concepts they are learning, it would bog down the text to delineate the details of every experiment. Instead, Bear, Connors, and Paradiso have chosen to highlight the process of discovery via one-page guest essays by prominent neuroscientists who share their discovery stories personally. Each edition has featured about 25 such “Path of Discovery” essays, so more than 100 scientists have participated, including several Nobel Prize winners, such as the Picower Institute’s founding director, Susumu Tonegawa.

The new edition includes Path of Discovery essays by current Picower Institute Director Li-Huei Tsai and Picower Institute colleague Emery N. Brown. Tsai recounts her discovery that sensory stimulation of 40 Hz rhythms in the brain can trigger a health-promoting response among many different cell types. Brown writes about how various biological cycles and rhythms in the brain and body, such as circadian rhythms and brain waves, help organize our daily lives.

Immense impact

Jones & Bartlett reports that more than 470 colleges and universities in 48 U.S. states and the District of Columbia have used the fourth edition of the book. Various editions have also been translated into seven other languages, including Chinese, French, Portuguese, and Spanish. There are hundreds of reviews on Amazon.com with an average around 4.6 stars. One reviewer wrote about the fourth edition: “I never knew it was possible to love a textbook before!”

The reviews sometimes go beyond mere internet postings. Once, after Bear received an award in Brazil, he found himself swarmed at the podium by scores of students eager for him to sign their copies of the book. And earlier this year, when Bear needed surgery, the anesthesiologist was excited to meet him.

“The anesthesiologist was like, ‘Are you the Mark Bear who wrote the textbook?,’ and she was so excited, because she said, ‘This book changed my life,’” Bear recalls. “After I recovered, she showed up in the ICU for me to sign it. All of us authors have had this experience that there are people whose lives we’ve touched.”

While Bear is proud that so many students have benefited from the book, he also notes that teaching and textbook writing have benefited him as a scientist. They have helped him present his research more clearly, he says, and have given him a broad perspective on what’s truly important in the field.

“Experience teaching will influence the impact of your own science by making you more able to effectively communicate it,” Bear says. “And the teacher has a difficult job of surveying a field and saying, ‘I’ve got to capture the important advances and set aside the less-important stuff.’ It gives you a perspective that helps you to discriminate between more-important and less-important problems in your own research.”

Over the course of 30 years via their carefully crafted book, Bear, Connors, and Paradiso have lent that perspective to generations of students. And the next generation will start with today’s publication of the new edition.

Megalaw complicates Trump’s plans to quickly ax renewable credits

ClimateWire News - Fri, 07/11/2025 - 6:19am
Many planned solar and wind projects slated to go online by 2030 may still qualify for Biden-era credits under the new law.

Marjorie Taylor Greene pledges probe into geoengineering

ClimateWire News - Fri, 07/11/2025 - 6:18am
The Georgia Republican said she spoke with EPA Administrator Lee Zeldin and will hold a hearing on weather-changing technology.

Texas Legislature to consider flood measures in special session

ClimateWire News - Fri, 07/11/2025 - 6:17am
Lawmakers also will debate proposals related to abortion, hemp and redistricting when they convene July 21.

GOP attorneys general seek to intervene in climate case against Trump

ClimateWire News - Fri, 07/11/2025 - 6:16am
They say the youth-led lawsuit — which targets three of the president's energy-related executive orders — would cost money and jobs.

California scales back plan to cool prisons

ClimateWire News - Fri, 07/11/2025 - 6:16am
A budget deficit prompted state lawmakers to trim — but not eliminate — funding for a pilot program that will pay for air conditioning and insulation at three correctional facilities.

European Parliament rejects EU anti-deforestation black list

ClimateWire News - Fri, 07/11/2025 - 6:15am
It’s yet another blow for the European Commission in its effort to get the anti-deforestation law up and running.

Climate change makes South Asia’s monsoons more erratic and intense

ClimateWire News - Fri, 07/11/2025 - 6:14am
Monsoon season is now punctuated with intense flooding and dry spells, rather than sustained rain throughout.

US faces more extreme weather, but attitudes and actions aren’t keeping up

ClimateWire News - Fri, 07/11/2025 - 6:14am
People and governments are generally living in the past and haven’t embraced that extreme weather is now the norm, to say nothing of preparing for a nastier future.

BYD, other EV battery makers face more pressure to cut emissions

ClimateWire News - Fri, 07/11/2025 - 6:13am
Many leading suppliers have battery production hubs in China and Poland, where power systems remain heavily reliant on polluting fossil fuels, said a Greenpeace report.

Consequential differences in satellite-era sea surface temperature trends across datasets

Nature Climate Change - Fri, 07/11/2025 - 12:00am

Nature Climate Change, Published online: 11 July 2025; doi:10.1038/s41558-025-02362-6

Global datasets of surface temperature and sea surface temperature (SST) are routinely used in climate change studies. Here the authors show that while surface temperature datasets closely agree, four main SST datasets show substantial variation, with implications for their application.
