Feed aggregator
Canada Needs Nationalized, Public AI
Canada has a choice to make about its artificial intelligence future. The Carney administration is investing $2-billion over five years in its Sovereign AI Compute Strategy. Will any value generated by “sovereign AI” be captured in Canada, making a difference in the lives of Canadians, or is this just a passthrough to investment in American Big Tech?
Forcing the question is OpenAI, the company behind ChatGPT, which has been pushing an “OpenAI for Countries” initiative. It is not the only one eyeing its share of the $2-billion, but it appears to be the most aggressive. OpenAI’s top lobbyist in the region has met with Ottawa officials, including Artificial Intelligence Minister Evan Solomon...
Why the Iran war is bad for clean energy
Judge orders FEMA to step up funding to states
CEO of climate nonprofit who fought EPA for $7B departs
One obstacle for Trump’s AI power pledge: The neighbors
Senate Democrats accuse FEMA of obstructing Congress
Louisiana nears deal with ConocoPhillips over coastal erosion
Florida Legislature passes bill that could impact condo owners’ insurance
It might be hard to fathom in the East, but US saw second-warmest winter
Posting your sweaty subway slog on social media? You’re not alone, study says.
Alberta carbon market rally fades as April 1 deadline nears
EU investment bank to spend $87B on clean energy this decade
A better method for planning complex visual tasks
MIT researchers have developed a generative artificial intelligence-driven approach for planning long-term visual tasks, like robot navigation, that is about twice as effective as some existing techniques.
Their method uses a specialized vision-language model to perceive the scenario in an image and simulate actions needed to reach a goal. Then a second model translates those simulations into a standard programming language for planning problems, and refines the solution.
In the end, the system automatically generates a set of files that can be fed into classical planning software, which computes a plan to achieve the goal. This two-step system generated plans with an average success rate of about 70 percent, outperforming the best baseline methods that could only reach about 30 percent.
Importantly, the system can solve new problems it hasn’t encountered before, making it well-suited for real environments where conditions can change at a moment’s notice.
“Our framework combines the advantages of vision-language models, like their ability to understand images, with the strong planning capabilities of a formal solver,” says Yilun Hao, an aeronautics and astronautics (AeroAstro) graduate student at MIT and lead author of an open-access paper on this technique. “It can take a single image and move it through simulation and then to a reliable, long-horizon plan that could be useful in many real-life applications.”
She is joined on the paper by Yongchao Chen, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS); Chuchu Fan, an associate professor in AeroAstro and a principal investigator in LIDS; and Yang Zhang, a research scientist at the MIT-IBM Watson AI Lab. The paper will be presented at the International Conference on Learning Representations.
Tackling visual tasks
For the past few years, Fan and her colleagues have studied the use of generative AI models to perform complex reasoning and planning, often employing large language models (LLMs) to process text inputs.
Many real-world planning problems, like robotic assembly and autonomous driving, have visual inputs that an LLM can’t handle well on its own. The researchers sought to expand into the visual domain by utilizing vision-language models (VLMs), powerful AI systems that can process images and text.
But VLMs struggle to understand spatial relationships between objects in a scene and often fail to reason correctly over many steps. This makes it difficult to use VLMs for long-range planning.
On the other hand, scientists have developed robust, formal planners that can generate effective long-horizon plans for complex situations. However, these software systems can’t process visual inputs and require expert knowledge to encode a problem into language the solver can understand.
Fan and her team built an automatic planning system that takes the best of both methods. The system, called VLM-guided formal planning (VLMFP), utilizes two specialized VLMs that work together to turn visual planning problems into ready-to-use files for formal planning software.
The researchers first carefully trained a small model they call SimVLM to specialize in describing the scenario in an image using natural language and simulating a sequence of actions in that scenario. Then a much larger model, which they call GenVLM, uses the description from SimVLM to generate a set of initial files in a formal planning language known as the Planning Domain Definition Language (PDDL).
The files are ready to be fed into a classical PDDL solver, which computes a step-by-step plan to solve the task. GenVLM compares the results of the solver with those of the simulator and iteratively refines the PDDL files.
“The generator and simulator work together to be able to reach the exact same result, which is an action simulation that achieves the goal,” Hao says.
Because GenVLM is a large generative AI model, it has seen many examples of PDDL during training and has learned how this formal language can encode a wide range of problems. This existing knowledge enables the model to generate accurate PDDL files.
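To make the loop concrete, here is a minimal Python sketch of the generate-simulate-refine cycle described above. The control flow follows the article's description, but all of the component interfaces are hypothetical stand-ins, not the paper's actual API: `sim_vlm` plays the role of SimVLM, `gen_vlm` of GenVLM, and `solver` of an off-the-shelf classical PDDL planner.

```python
# Sketch of a VLMFP-style generate-simulate-refine loop.
# sim_vlm, gen_vlm, and solver are injected components; their method
# names (describe, simulate, generate_pddl, refine_pddl, solve) are
# hypothetical placeholders for the models described in the article.

def vlmfp_plan(image, goal, sim_vlm, gen_vlm, solver, max_iters=5):
    # SimVLM: describe the scenario in the image in natural language.
    description = sim_vlm.describe(image)

    # GenVLM: draft PDDL domain and problem files from that description.
    domain, problem = gen_vlm.generate_pddl(description, goal)

    for _ in range(max_iters):
        # Classical solver: compute a step-by-step plan from the PDDL files.
        plan = solver.solve(domain, problem)

        # SimVLM: replay the plan against the scenario and check the goal.
        result = sim_vlm.simulate(image, plan)
        if plan is not None and result.goal_reached:
            return plan  # solver and simulator agree: the goal is achieved

        # GenVLM: use the simulator's feedback to patch the PDDL files.
        domain, problem = gen_vlm.refine_pddl(domain, problem, plan,
                                              result.feedback)

    return None  # no verified plan within the iteration budget
```

The design point, per Hao's quote above, is that the loop only terminates when the formal solver's plan and the simulator's replay reach the same result.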
A flexible approach
VLMFP generates two separate PDDL files. The first is a domain file that defines the environment, valid actions, and domain rules. The second is a problem file that defines the initial state and the goal of the particular problem at hand.
“One advantage of PDDL is the domain file is the same for all instances in that environment. This makes our framework good at generalizing to unseen instances under the same domain,” Hao explains.
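To make the domain/problem split concrete, here is a toy pair of PDDL files, embedded as Python strings. The grid-world predicates and the move action are invented for illustration and are not taken from the paper; the point is only that the domain file is written once per environment, while each new task needs just a fresh problem file.

```python
# Toy illustration of the PDDL domain/problem split (invented example).

# Domain file: shared by every task instance in this environment.
DOMAIN_PDDL = """
(define (domain grid-world)
  (:requirements :strips :negative-preconditions)
  (:predicates (at ?r ?c) (adjacent ?a ?b) (blocked ?c))
  (:action move
    :parameters (?r ?from ?to)
    :precondition (and (at ?r ?from) (adjacent ?from ?to) (not (blocked ?to)))
    :effect (and (at ?r ?to) (not (at ?r ?from)))))
"""

# Problem file: changes per task (initial state plus goal).
PROBLEM_PDDL = """
(define (problem reach-c3)
  (:domain grid-world)
  (:objects robot c1 c2 c3)
  (:init (at robot c1) (adjacent c1 c2) (adjacent c2 c3))
  (:goal (at robot c3)))
"""
```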
To enable the system to generalize effectively, the researchers needed to carefully design just enough training data for SimVLM so the model learned to understand the problem and goal without memorizing patterns in the scenario. When tested, SimVLM successfully described the scenario, simulated actions, and detected if the goal was reached in about 85 percent of experiments.
Overall, the VLMFP framework achieved a success rate of about 60 percent on six 2D planning tasks and greater than 80 percent on two 3D tasks, including multirobot collaboration and robotic assembly. It also generated valid plans for more than 50 percent of scenarios it hadn’t seen before, far outpacing the baseline methods.
“Our framework can generalize when the rules change in different situations. This gives our system the flexibility to solve many types of visual-based planning problems,” Fan adds.
In the future, the researchers want to enable VLMFP to handle more complex scenarios and explore methods to identify and mitigate hallucinations by the VLMs.
“In the long term, generative AI models could act as agents and make use of the right tools to solve much more complicated problems. But what does it mean to have the right tools, and how do we incorporate those tools? There is still a long way to go, but by bringing visual-based planning into the picture, this work is an important piece of the puzzle,” Fan says.
This work was funded, in part, by the MIT-IBM Watson AI Lab.
Policy interactions reshape the outcomes of carbon pricing policies
Nature Climate Change, Published online: 11 March 2026; doi:10.1038/s41558-026-02578-0
The adoption and effectiveness of carbon pricing are strongly shaped by interactions with other climate mitigation policies. A global comparative assessment of policy synergies and conflicts can guide policymakers in designing policy portfolios that achieve higher mitigation cost-effectiveness.
Cross-national comparative assessment of synergies and conflicts in climate policy mixes
Nature Climate Change, Published online: 11 March 2026; doi:10.1038/s41558-026-02574-4
Interactions between climate policy instruments can have synergistic and conflicting effects, but these interactions are not systematically understood. This research provides global evidence on how policy characteristics and interactions in different contexts could lead to different outcomes.
Climate policy feasibility across Europe relies on the conditional middle
Nature Climate Change, Published online: 11 March 2026; doi:10.1038/s41558-026-02562-8
The feasibility of climate policies hinges on public support. A survey of 13 EU countries shows that ‘middle groups’—citizens whose support across mitigation measures varies, rather than being uniformly supportive or opposed—play a pivotal role in shaping overall public policy support and electoral outcomes.
Copyright Bullying vs. Religious Freedom
The government should not help a religious institution to punish or deter members from inquiring about their faith. Yet, once again, the Watch Tower Bible and Tract Society is trying to use flimsy copyright claims to exploit the special legal tools available to copyright owners in order to unmask anonymous online speakers. And, once again, EFF has stepped in to urge the courts not to give Watch Tower’s attempts the force of law, with the help of local counsel Jonathan Phillips of Phillips & Bathke, P.C.
EFF’s client, J. Doe, is a member of the Jehovah’s Witnesses who became interested in the history of the organization’s public statements, and how they’ve changed over time. They built research tools to analyze those documents and ultimately created a website, JWS Library, allowing others to use those tools and verify their findings through an archive that included documents suppressed by the church. Doe and others discovered prophecies that failed to come true, erasure of a leader’s disgrace, increased calls for obedience and donations, and other insights about the Jehovah’s Witnesses’ practices. Doe also used machine translation on a foreign-language document to help the community understand what the church was saying to different audiences and to track potential changes in the organization’s attitudes toward dissent.
Within the church, dissent or even asking questions has often been punished by labeling members as apostates and ostracizing, or “disfellowshipping,” them. As a result, Doe and others choose to speak anonymously to avoid retaliation that could cost them family, friendships, and professional relationships.
There is no law against questioning the Jehovah’s Witnesses. Instead, Watch Tower argues that Doe’s activities constitute copyright infringement and seeks to use the special process provided in the Digital Millennium Copyright Act (DMCA) to unmask them. It sent DMCA subpoenas to Google and Cloudflare, seeking information that would help it uncover Doe’s identity.
The problem for Watch Tower is that Doe’s research and commentary are clear fair uses allowed under copyright law. The First Amendment does not permit the unmasking of anonymous speakers based on such weak claims. Indeed, the First Amendment protects anonymous speakers precisely because some would be deterred from speaking if they faced retribution for doing so.
EFF stands with those who question the claims of those in power and who share the tools and knowledge needed to do so. We urge the judges in the Southern District of New York to quash these improper subpoenas and not allow copyright to be used to suppress important, legitimate speech.
2026 MIT Sloan Sports Analytics Conference shows why data make a difference
With time dwindling in the Olympic women’s ice hockey gold medal game on Feb. 19, players for Team USA and Team Canada lined up for a key faceoff in Canada’s end. Canada had a 1-0 lead. USA had 2:23 left, and an ace up their sleeve: analytics.
USA Coach John Wroblewski pulled the goaltender to gain an extra skater, and had forward Alex Carpenter take the faceoff. Statistics show that Carpenter is not only very good at winning faceoffs; she also wins a lot of them cleanly. That allows her team to quickly regain possession, without needing too many teammates nearby. Knowing that, Wroblewski directed the USA players to spread out, largely away from the faceoff circle, in position to circulate the puck as soon as they got it back.
Carpenter won the faceoff, and Team USA quickly started a passing move. Laila Edwards soon launched a shot that longtime star Hilary Knight deflected in for the crucial, game-tying goal with 2:04 left. Team USA then won in overtime. And data-driven decision-making had also won big; indeed, it helped change the Olympics.
“What it does for a coach, the other thing these analytics do, is … it allows you to move forward with this confidence level,” Wroblewski said on Saturday at the 20th annual MIT Sloan Sports Analytics Conference (SSAC), during a hockey analytics panel where he detailed his decision-making for that faceoff, and in the gold medal game generally.
Using the data, he added, lets coaches “limit the emotion” that might cloud their in-game decisions.
“By the time you get to that decision, you’re then allowed the freedom to step away from the decision, to allow the players to go earn their medal,” Wroblewski added.
You don’t usually find coaches divulging their tactical secrets just three weeks after a big game has been played. But then, this is the MIT Sloan conference, a trailblazing forum that has helped analytics ideas spread throughout sports. Coaches, players, and analysts know any data-driven discussion will find an interested audience.
“Analytics was massive for us going into the gold medal game,” Wroblewski said.
20 years on: From classrooms to convention halls
The 20th edition of SSAC was a strong one, with many substantive panel discussions and interviews; the annual research paper, hackathon, and case study contests; mentorship events and informal networking opportunities; and more. Over 2,500 people attended the two-day event, held at Boston’s Menino Conference and Exhibition Center (MCEC). The conference was founded in 2007 by Daryl Morey, now president of basketball operations for the NBA Philadelphia 76ers, and Jessica Gelman, now CEO of the Kraft Analytics Group.
The first three editions of the conference were held on the MIT campus. In 2010, it first moved to the MCEC (one of two regular convention-center sites it uses), and starting in 2011, the conference became a two-day event.
Today people attend for the panels, the career opportunities, and, in some cases, to make news. NBA Commissioner Adam Silver was on hand this year, engaging in an on-stage conversation with former WNBA great Sue Bird, publicly addressing some of the key issues facing his league, and drawing wide media coverage.
First, though, Silver reflected about attending the second edition of the conference on the MIT campus in 2008, when he was deputy commissioner.
“It was literally a classroom of 20 people we were talking to,” Silver recalled. “I think it was the beginning of the moment when people were taking sports as a discipline more seriously. … I give Jessica and Daryl a lot of credit [for that].”
Addressing tanking and gambling
A core part of Silver’s comments focused on two big issues in pro basketball: tanking and gambling. About eight NBA teams appear to be tanking this season, that is, losing games in order to increase their chances of getting a high draft pick.
“We are going to make substantial changes for next year,” Silver said, although he also added: “I am an incrementalist. I think we’ve got to be a little bit careful about how huge a change we make at once. I’m not ruling anything out. But I am paying attention to that.”
To be sure, tanking has long been a part of professional basketball, as Bird noted during the conversation.
“We did it in Seattle, to be honest,” Bird said. “Breanna Stewart was coming out of college. We were in a ‘rebuild.’”
Still, in this NBA season, tanking has become an epidemic, in “a little bit of a perfect storm,” as Silver put it on Friday. And almost every proposed solution seems to have drawbacks. Perhaps the simplest cure for tanking would be robust analytical studies showing that it is not a very effective team-building strategy. If that is what the numbers reveal, of course.
Meanwhile, multiple arrests of NBA players and coaches at the beginning of the season show further that sports gambling continues to present challenges to professional sports leagues.
“I personally think there should be more regulation now, not less,” Silver said on Friday, suggesting that federal rules would simplify things in the U.S., where 39 states allow sports gambling to some extent. He also said the NBA can continue to work on monitoring data to protect against gambling scandals.
“I think there are some large-platform companies that are looking at a business opportunity to come in and, in a much more sophisticated way, work as a detection service with the league,” Silver said.
Through it all, Silver said, the NBA will continue to be a data-driven operation. Have you watched a game with a long instant-replay review, and gotten a little impatient? Still, have you kept watching that game? So has almost everyone.
“For years people would tell us, ‘Don’t use instant replay, because you’ll turn fans off,’” Silver said. However, he added, “The data suggests, in terms of ratings and what servers tell us, you almost never lose a fan when you’re going to replay. Because they want to see the replay and they want to see what happened.”
The minnows got big
Sports analytics took root in baseball, with its discrete pitcher-hitter actions. Legendary MLB general manager Branch Rickey employed a statistician for the great Brooklyn Dodgers of the 1950s; the famous manager Earl Weaver thought analytically with the Baltimore Orioles in the 1970s. Baseball analyst Bill James made sports analytics a viable pursuit with his annual “Baseball Abstract” bestsellers in the 1980s, and Michael Lewis’ “Moneyball” popularized it.
But data can be applied to all sports — and sometimes is most valuable when only some teams are interested in it. Take soccer. In the English Premier League, about three clubs have been heavily oriented around analytics over the last decade: Liverpool FC, Brighton FC, and Brentford FC. That has helped Liverpool win multiple titles, while Brighton and Brentford, smaller clubs, have startled many with their success.
Saturday at SSAC, Brentford’s majority owner Matthew Benham made one of his most visible public appearances, in an onstage interview with podcaster Roger Bennett. Benham first made money wagering on soccer, then invested in Brentford, his childhood club.
“The information we used in the early days was really, really rudimentary,” Benham said. In his account, his success building an analytics-based club has only partly been about the numbers.
“A lot of the success has just been in running things efficiently,” Benham said. He prefers to have management discussions that are an “exchange of views, rather than debate,” since the latter implies an interaction with a clear winner and loser. Instead, compiling independent-minded views from his executives is more important.
Brentford also uses “a combination of old-style scouting and data” for its player acquisition decisions, Benham said. Not every decision works. Brentford could have signed current Arsenal FC star Eberechi Eze for a mere 4 million pounds in 2019, but passed; Crystal Palace FC acquired Eze, then realized a windfall when Arsenal purchased his services.
Still, pressed by Bennett to specify a little more about his analytical thinking, Benham implied that strikers are valuable not only for their finishing skills, but for consistently getting open for shots on goal. Fans tend to focus too much on a player’s misses, rather than how many chances are created by their off-ball work.
“Getting in position is way, way more informative than finishing,” Benham said.
A similar insight seems to have guided Liverpool’s thinking. As it happens, a Friday panel at SSAC featured Ian Graham, who ran Liverpool’s analytics operations from 2012 to 2023 and weighed in on a number of subjects. Among other things, Graham noted, teams are too cautious when tied late in a match; soccer grants three points for a win, one for a draw, and zero for a loss, so from a tied position the reward for winning is twice as great as the penalty for losing.
“Teams don’t go for it enough,” Graham said. “Teams think a draw is an okay result.”
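Graham’s point falls out of simple expected-value arithmetic. Here is a back-of-envelope check in Python; the win/draw/loss probabilities are illustrative assumptions, not his actual numbers.

```python
# Expected league points under soccer's 3/1/0 scoring.
def expected_points(p_win, p_draw, p_loss):
    assert abs(p_win + p_draw + p_loss - 1.0) < 1e-9
    return 3 * p_win + 1 * p_draw + 0 * p_loss

# From a tied position late in a match (illustrative probabilities):
safe = expected_points(p_win=0.10, p_draw=0.80, p_loss=0.10)  # 1.10 points
push = expected_points(p_win=0.30, p_draw=0.40, p_loss=0.30)  # 1.30 points

# Pushing triples the chance of losing yet still comes out ahead, because
# converting a draw into a win gains 2 points while a loss only costs 1.
print(safe, push)
```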
The limits of knowledge
Sports, of course, are ultimately played by imperfect, injury-prone, and sometimes exhausted athletes. One consistent lesson from the MIT Sloan conference involves the limits of data and plans.
“We think the data is giving us an answer, when actually it’s giving us some information, and we still have to make a choice,” said Ariana Andonian, vice president of player personnel for the Philadelphia 76ers, during a basketball panel on Saturday.
Asked about the promise of artificial intelligence for sports analytics, Sonia Raman, head coach of the WNBA’s Seattle Storm, noted that its insights might always be limited by circumstances.
“It’s not like you can just get an AI report in the middle of the game that says, ‘Get some shooting in,’” said Raman, who, prior to coaching in the WNBA and NBA, served for 12 years as head coach of the MIT women’s basketball team.
“You can have a great plan, but if it’s poorly executed, it’s way worse than a poor plan that’s well executed,” added Steven Adams, a center for the NBA’s Houston Rockets (who is currently not playing due to injury), during the same panel.
And yet, in some games and matches, the analytics do work, the plans do come to fruition, and the numbers do make a difference. When that happens, as John Wroblewski can now attest, the results are golden.
Think Twice Before Buying or Using Meta’s Ray-Bans
Over the last decade or so, the tech industry has tried, and mostly failed, to make “smart glasses”—tech-infused glasses with cameras, AI, maps, displays, and more—a thing. But over the past year, products like Meta’s Ray-Ban Display Glasses and Oakley’s Meta Glasses have gone from a curious niche to the mainstream.
Before you strap a dashcam to your face and sprint out into the world filming everything and everyone in your life, there are some civil liberties and privacy concerns to consider.
Meta is the biggest company making these sorts of glasses, and its partnerships with Ray-Ban and Oakley are the most popular options, so we’ll be mostly focusing on them here. Others, like models from Snapchat, are similar in form but far less ubiquitous. But Meta won’t hold this space for long. Google has already announced a partnership with Warby Parker for its “AI-powered smart glasses,” and there are rumors of a competing product from Apple.
With that, let’s dive into some of the considerations you should make before purchasing a pair.
If You’re Thinking About Buying Smart Glasses
You’re likely not the only one who can see (and hear) your footage
The photos and videos you record with most smartglasses will likely be stored online at some point in the process. On Meta’s offerings, unless you are livestreaming, media you capture when you press the camera button is kept on the glasses until you import it onto your phone, but media is imported automatically by default into the Meta AI mobile app, which is required to set up the glasses.
You can't use any AI features locally on the glasses. So anytime you use AI features, like when you say, “Hey Meta, start recording,” the footage is fed to Meta. You can use the glasses without the Meta AI app entirely, but considering you can’t easily download footage from the glasses to your phone without it, most people will likely use the app.
Some videos are fed to Meta for AI training, and we know that at least in some cases those videos go through human review. An investigation by Swedish newspapers found that workers were reviewing and annotating camera footage, including all sorts of sensitive videos: nudity, sex, and trips to the bathroom. Meta claimed to the BBC that this is all in the name of AI training and in accordance with its terms of use, which state:
In some cases, Meta will review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review may be automated or manual (human).
This all means that Meta and its third-party contractors will have access to at least some of what you record, and it’s very hard as a user to know where footage goes, who will have access to it, and what they will do with it. When you save footage to your phone’s camera roll, which is where the Meta AI app stores content, it might also be sent to Apple’s or Google’s servers, depending on your settings. Employees at those companies could then access that media, and it could be shared with law enforcement.
The recorded audio from your conversations with Meta AI is also saved by default, and if you don’t like that, tough luck, unless you go in and manually delete the recordings every time you say something.
Filming all the time is even more privacy invasive than you think
A common argument in favor of using the cameras in smartglasses is that phones and cameras can do this too, and it’s never been a problem.
But smartglasses are designed to resemble regular glasses, to the point where most reviews point out how friends didn’t notice that they had cameras embedded in them. They’re designed to be invisible to those being recorded, apart from a small indicator light when they’re recording video footage (which cheap hacks can disable), whereas it is usually obvious that a person is recording when they pull their phone out of their pocket and point it at someone else.
Moreover, constant recording of everything in public spaces can create all sorts of potential privacy problems, some more obvious than others. This is another way that cameras on glasses are different from cameras on phones: it is far easier to constantly record one’s whereabouts with the former than the latter. If you continuously record, maybe you just happen to catch someone entering their passcode or password on their phone or computer at a coffee shop, or broadcast someone’s bank details while you’re standing in line at an ATM. That doesn’t even begin to get into the cases where smartglasses are intentionally used for less socially responsible purposes. And some people may forget to turn off their smartglasses when they enter a private space like a bathroom.
And if you find yourself caught on someone’s camera, there’s little recourse. If you do notice a stranger recording you, it’s up to you to intervene and ask not to be included in the footage, which can easily turn awkward or confrontational.
Our expectations of privacy shift when we’re in public, but bystanders in many cases will still have privacy interests. Public spaces are a place where you will be seen, but that shouldn’t mean it’s suddenly okay to catalog and identify everyone.
Consider the company’s track record and public statements
Meta, Google, Apple—perhaps one benefit of all the major tech companies entering this market is that we already have a good idea of how much they tend to respect the privacy of their users or the openness of their platforms. Spoiler: it’s often not much.
Meta has a long history of privacy-invasive technologies and practices. We’ve heard rumblings that Meta hopes to add face recognition to its smartglasses, preferably “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” Yikes. This is a monumentally bad idea that should be abandoned by Meta and any of its competitors considering a similar feature. But regardless of whether they launch this feature, it’s a pretty clear indication of where Meta wants these sorts of devices to go.
If You Have Smartglasses Already
Opt out of sharing with Meta where you can
You can disable a couple of the features that send unnecessary data to Meta. In the Meta AI app, under the device settings, there’s a privacy page where you can disable sharing additional data and, more importantly, turn off “Cloud media,” which sends your photos and videos to Meta’s cloud for processing and temporary storage.
Decide your use case and stick to it
These glasses can be useful for filming a variety of activities. We’ve seen fascinating scenes of tattoo artists doing their work (with clients’ permission), and it doesn’t take a stretch of the imagination to see how people might use them to film extreme sports. Even on an everyday level, you might find them useful for capturing holidays, birthdays, and all sorts of other private occasions.
But if you buy these glasses for a specific, mostly private purpose, it is probably best to stick to that, instead of wearing them everywhere and recording everything you do.
Follow the rules of businesses and social expectations
You often have a right to record in public spaces, but that doesn’t mean other people will like it. Businesses, including restaurants and stores, may want nothing to do with continuous filming and may either post a sign asking you not to use smartglasses, or ask you to stop. This may reflect the preferences not just of the business owner, but of the people around you. And don’t use the glasses to record when you enter other people’s private spaces, like bathrooms or changing rooms.
It’s also a good idea to check in with friends and family before tapping that record button at a social gathering. Some people may not be as comfortable with these glasses as they are with other recording equipment.
Consider blurring strangers if you’re going to upload video
Blurring video footage isn’t an easy task, but if you’re considering uploading footage from something like a protest, it may be worth the effort (apps like Meta’s Edits simplify the process, as do some video sites, like YouTube). Some people don’t want the government to see their faces at protests, and might be afraid to attend if others are uploading their faces.
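For the technically inclined, a do-it-yourself pass is possible with off-the-shelf tools. Here is a minimal sketch using OpenCV’s bundled Haar-cascade face detector; the file names are placeholders, and Haar detection misses faces often enough (angles, occlusion, low light) that you should review the output before uploading rather than trusting it blindly.

```python
# Minimal DIY face-blur pass over a video file using OpenCV's bundled
# Haar cascade. A sketch, not a guarantee: always spot-check the result.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture("protest.mp4")          # placeholder input file
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
fps = cap.get(cv2.CAP_PROP_FPS)
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("protest_blurred.mp4", fourcc, fps, (w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, fw, fh) in cascade.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y+fh, x:x+fw]
        # Heavy Gaussian blur on each detected face region.
        frame[y:y+fh, x:x+fw] = cv2.GaussianBlur(roi, (51, 51), 30)
    out.write(frame)

cap.release()
out.release()
```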
It would be better if Meta leveraged its AI features to offer this sort of feature automatically, especially with livestreaming. It’s not that outlandish of a request, as it seems like the company tries to blur faces automatically in footage it captures for annotation, though it’s not always reliable. After all, Google began redacting faces in Street View years ago, following privacy concerns from groups like EFF.
Resist face recognition
Adding facial recognition technology to smartglasses would obliterate the privacy of everyone. We cannot let companies push face recognition into these glasses, and as a user, you should make it clear that this is not something you want.
Smartglasses don’t have to be used to decimate the privacy of anyone you encounter during the day. There are legitimate uses out there, but it’s up to those who use them to respect the social norms of the spaces they enter and the people they encounter.
3 Questions: Building predictive models to characterize tumor progression
Just as Darwin’s finches evolved in response to natural selection in order to endure, the cells that make up a cancerous tumor similarly counter selective pressures in order to survive, evolve, and spread. Tumors are, in fact, complex sets of cells with their own unique structure and ability to change.
Today, artificial intelligence and machine learning tools offer an unparalleled opportunity to illuminate the generalizable rules governing tumor progression on the genetic, epigenetic, metabolic, and microenvironmental levels.
Matthew G. Jones, an assistant professor in the MIT Department of Biology, the Koch Institute for Integrative Cancer Research, and the Institute for Medical Engineering and Science, hopes to use computational approaches to build predictive models — to play a game of chess with cancer, making sense of a tumor’s ability to evolve and resist treatment with the ultimate goal of improving patient outcomes. In this interview, he describes his current work.
Q: What aspect of tumor progression are you working to explore and characterize?
A: A very common story with cancer is that patients will respond to a therapy at first, and then eventually that treatment will stop working. The reason this largely happens is that tumors have an incredible, and very challenging, ability to evolve: the ability to change their genetic makeup, protein signaling composition, and cellular dynamics. The tumor as a system also evolves at a structural level. Oftentimes, the reason why a patient succumbs to a tumor is because either the tumor has evolved to a state we can no longer control, or it evolves in an unpredictable manner.
In many ways, cancers can be thought of as, on the one hand, incredibly dysregulated and disorganized, and on the other hand, as having their own internal logic, which is constantly changing. The central thesis of my lab is that tumors follow stereotypical patterns in space and time, and we’re hoping to use computation and experimental technology to decode the molecular processes underlying these transformations.
We’re focused on one specific way tumors evolve: through a form of DNA amplification called extrachromosomal DNA (ecDNA). Excised from the chromosome, these ecDNAs are circularized and exist as their own separate pool of DNA particles in the nucleus.
Initially discovered in the 1960s, ecDNAs were thought to be a rare event in cancer. However, as researchers began applying next-generation sequencing to large patient cohorts in the 2010s, it became clear not only that these ecDNA amplifications were helping tumors adapt to stresses, and therapies, faster, but that they were far more prevalent than initially thought.
We now know these ecDNA amplifications appear in about 25 percent of cancers, including some of the most aggressive: brain, lung, and ovarian cancers. We have found that, for a variety of reasons, ecDNA amplifications are able to change the rule book by which tumors evolve, allowing them to accelerate toward more aggressive disease in very surprising ways.
Q: How are you using machine learning and artificial intelligence to study ecDNA amplifications and tumor evolution?
A: There’s a mandate to translate what I’m doing in the lab to improve patients’ lives. I want to start with patient data to discover how various evolutionary pressures are driving disease and the mutations we observe.
One of the tools we use to study tumor evolution is single-cell lineage tracing technologies. Broadly, they allow us to study the lineages of individual cells. When we sample a particular cell, not only do we know what that cell looks like, but we can (ideally) pinpoint exactly when aggressive mutations appeared in the tumor’s history. That evolutionary history gives us a way of studying these dynamic processes that we otherwise wouldn’t be able to observe in real time, and helps us make sense of how we might be able to intercept that evolution.
I hope we’re going to get better at stratifying patients who will respond to certain drugs, to anticipate and overcome drug resistance, and to identify new therapeutic targets.
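As a loose illustration of the logic behind lineage tracing (a toy example, not the lab’s actual pipeline): if heritable edits accumulate irreversibly as cells divide, then an edit shared by more of the sampled cells generally arose earlier in the tumor’s history.

```python
from collections import Counter

# Toy data: each sampled cell carries the set of heritable edits it
# inherited. (Invented example; real lineage-tracing data are far noisier.)
cells = {
    "cell_1": {"m1"},
    "cell_2": {"m1", "m2"},
    "cell_3": {"m1", "m2"},
    "cell_4": {"m1", "m3"},
}

# Count how many cells carry each edit; under irreversible accumulation,
# broadly shared edits predate narrowly shared ones.
counts = Counter(m for edits in cells.values() for m in edits)
for m, n in counts.most_common():
    print(f"{m}: present in {n}/{len(cells)} sampled cells")
# m1 (4/4 cells) is inferred to predate m2 (2/4) and m3 (1/4).
```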
Q: What excited you about joining the MIT community?
A: One of the things that I was really attracted to was the integration of excellence in both engineering and biological sciences. At the Koch Institute, every floor is structured to promote this interface between engineers and basic scientists, and beyond campus, we can connect with all the biomedical research enterprises in the greater Boston area.
Another thing that drew me to MIT was the fact that it places such a strong emphasis on education, training, and investing in student success. I’m a personal believer that what distinguishes academic research from industry research is that academic research is fundamentally a service job, in that we are training the next generation of scientists.
It was always a mission of mine to bring excellence to both computational and experimental technology disciplines. The types of trainees I’m hoping to recruit are those who are eager to collaborate and solve big problems that require both disciplines. The KI [Koch Institute] is uniquely set up for this type of hybrid lab: my dry lab is right next to my wet lab, and it’s a source of collaboration and connection, and that reflects the KI’s general vision.
