Feed aggregator
Whitehouse sets his last climate hearing of the year
Hochul administration fails to set new state building efficiency targets
America’s clean energy rivals will take advantage of Trump 2.0, EU green chief says
NOAA: 2024 temperatures set to break last year’s global record
Study: Climate change ‘supercharged’ Philippines’ typhoon season
Acknowledging the historic presence of justice in climate research
Nature Climate Change, Published online: 16 December 2024; doi:10.1038/s41558-024-02218-5
Street smarts
Dozens of major research labs dot the streets of Kendall Square, a Cambridge, Massachusetts, neighborhood in which MIT partially sits. But for Andres Sevtsuk’s City Form Lab, the streets of Kendall Square themselves, and those in other cities, are subjects for research.
Sevtsuk is an associate professor of urban science and planning at MIT and a leading expert in urban form and spatial analysis. His work examines how the design of built environments affects social life within them. The way cities are structured influences whether street-level retail commerce can thrive, whether and how much people walk, and how much they encounter each other face to face.
“City environments that allow us to get more things done on foot tend to not only make people healthier, but they are more sustainable in terms of emissions and energy use, and they provide more social encounters between different members of society, which is fundamental to democracy,” Sevtsuk says.
However, many things Sevtsuk studies do not come with much pre-existing data. While some aspects of cities are studied extensively — vehicle traffic, for instance — fewer people have studied how urban planning affects walking and cycling, which most city governments seek to increase.
To help fill that gap, several years ago Sevtsuk and some research assistants began studying foot traffic in several cities, as well as Kendall Square — how much people walk, where they go, and why. Most urban walking trips are destination-driven: People go to offices, eateries, and transit stops. But a lot of pedestrian activity is also recreational and social, such as sitting in a square, people-watching, and window-shopping. From this work Sevtsuk developed an innovative model of pedestrian activity, built around these spatial networks of interaction and calibrated to observed pedestrian counts.
He and his colleagues then scaled up their model and took it to major cities around the world, starting with the whole downtown of Melbourne, Australia. The model now includes detailed street characteristics — sidewalk dimensions, the presence of ground floor businesses, landscaping, and more — and Sevtsuk has also helped apply it to Beirut and, most recently, New York City.
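The kind of model described in the article can be sketched, in heavily simplified form, as a trip-distribution exercise on a street graph: trips from each origin are split across destinations using a distance-decay weight, then loaded onto shortest walking paths. Everything below — the toy network, trip counts, and decay parameter — is an invented illustration, not Sevtsuk's actual model:

```python
import heapq
import math
from collections import defaultdict

# Toy street network: nodes are intersections, weights are walking
# distances in meters (invented for illustration).
EDGES = {("A", "B"): 100, ("B", "C"): 120, ("A", "D"): 150,
         ("D", "C"): 90, ("B", "D"): 60}

def build_graph(edges):
    g = defaultdict(dict)
    for (u, v), w in edges.items():
        g[u][v] = g[v][u] = w
    return g

def shortest_paths(g, src):
    """Dijkstra: walking distance from src to every node, with predecessors."""
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue
        for v, w in g[u].items():
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def pedestrian_flows(g, origins, destinations, beta=0.01):
    """Split each origin's trips across destinations by attraction weight
    times exponential distance decay, then load the trips onto the street
    segments of each shortest path."""
    flows = defaultdict(float)
    for o, trips in origins.items():
        dist, prev = shortest_paths(g, o)
        weights = {d: a * math.exp(-beta * dist[d])
                   for d, a in destinations.items() if d in dist}
        total = sum(weights.values()) or 1.0
        for d, w in weights.items():
            share, node = trips * w / total, d
            while node != o:              # walk the path back to the origin
                p = prev[node]
                flows[tuple(sorted((p, node)))] += share
                node = p
    return dict(flows)
```

Calibration, in this simplified picture, means fitting the decay parameter (and in a real model, richer street-level attributes such as sidewalk width or ground-floor retail) so that predicted segment flows match observed pedestrian counts.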
The project is typical of Sevtsuk’s research, which creates new ways to bring data to urban design. In 2023, Sevtsuk and his colleagues also released a novel open-source tool, called TILE2NET, to automatically map city sidewalks from aerial imagery. He has even studied interactions on the MIT campus, in a 2022 paper quantifying how spatial relatedness between departments and centers affects communications among them.
“Applying spatial analytics to city design is timely today because when it comes to cutting carbon emissions and energy consumption, or improving public health, or supporting local business on city streets, they relate to how cities are configured,” Sevtsuk says. “Urban designers have historically not been very focused on quantifying those effects. But studying these dynamics can help us understand how social interactions in cities work and how proposed interventions may impact a community.”
For his research and teaching, Sevtsuk received tenure at MIT earlier this year.
Growing and living in cities
Sevtsuk is originally from Tartu, Estonia, where his experiences helped attune him to the street life of cities.
“I do think where I come from enhanced my interest in urban design,” Sevtsuk says. “I grew up in public housing. That very much framed my appreciation for public amenities. Your home was where you slept, but everything else, where you played as a child or found cultural entertainment as a teenager, was in the public sphere of the city.”
Initially interested in studying architecture, Sevtsuk received a BArch degree from the Estonian Academy of Arts, then a BArch from the Ecole d’Architecture de la Ville et des Territoires, in Paris. Over time, he became increasingly interested in city design and planning, and enrolled as a master’s student at MIT, earning his SMArchS degree in 2006 while studying how technology could help us better understand urban social processes.
“MIT had a very strong research orientation even for master’s-level students,” Sevtsuk says. “It is famous for that. I came because I was drawn to the opportunity to get hands-on into research around city design.”
Sevtsuk stayed at MIT for his doctoral studies, earning his PhD in 2010, with the late William Mitchell as his principal advisor. “Bill was interested in the influence of technology on cities,” says Sevtsuk, who appreciated the wide-ranging intellectual milieu that sprang up around Mitchell. “A lot of fascinating and intellectually experimental people gravitated around Bill.”
With his PhD in hand, Sevtsuk then joined an MIT collaboration at the new Singapore University of Technology and Design, a couple of years after it first opened.
“That was a lot of fun, building a new university, and we were teaching the first cohort and first courses,” Sevtsuk says. “It was an exciting project.”
Living in Asia also helped open doors for some hands-on research in Singapore and Indonesia, where Sevtsuk worked with city governments and the World Bank on urban planning and design projects in several cities.
“There was not a lot of data, and yet we had to think about how spatial analyses could be deployed to support planning decisions,” Sevtsuk says. “It forced you to think how to apply methods without abundant data in the traditional sense. In retrospect some of the software around pedestrian modeling we developed was influenced by these constraints, from understanding the minimum data inputs needed to capture people’s mobility dynamics in a neighborhood.”
From Melbourne to the Infinite Corridor
Returning to the U.S., Sevtsuk took a faculty position at Harvard University’s Graduate School of Design in 2015. He then joined the MIT faculty in 2019.
Throughout his career, Sevtsuk’s projects have consistently added insight to existing data or created all-new repositories of data for wider use. His team’s work in Melbourne leveraged a rare case of a city with copious pedestrian data of its own. There, Sevtsuk found the model not only explained foot traffic patterns but could also be used to forecast how changes in the built environment, such as new development projects, could affect foot traffic in different parts of the city.
In Beirut, the modeling work on improving community streets is part of post-disaster recovery after the Beirut port explosion of 2020. In New York, Sevtsuk and his colleagues are studying the largest pedestrian network in the U.S., covering all five boroughs of the city. The TILE2NET project, meanwhile, provides information for planners and experts in an area — sidewalk mapping — for which most cities have no existing data.
When it came to studying the MIT campus, Sevtsuk brought a new approach to a subject with an Institute legacy: an earlier MIT professor, Thomas Allen of the MIT Sloan School of Management, did pioneering research about workspace design and collaboration. Sevtsuk and his team, however, looked at the larger campus as a network.
Linking spatial relations and email communication, they found that not only does the level of interaction between MIT departments and labs increase when those units are spatially closer to each other, but it also increases when their members are more likely to walk past each other’s offices on their daily routes to work or when they patronize the same eateries on campus.
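A minimal illustration of that kind of analysis: represent each unit's daily walking route as a set of corridor segments, score pairwise route overlap, and compare the ranking against communication volume. The units, routes, and email counts below are all invented, and the real study controls for many confounds this sketch ignores:

```python
from itertools import combinations

# Hypothetical daily walking routes (sets of corridor segments) and
# pairwise email volumes; all values invented for illustration.
ROUTES = {"LabA": {"s1", "s2", "s3"},
          "LabB": {"s2", "s3", "s4"},
          "LabC": {"s7", "s8"}}
EMAILS = {("LabA", "LabB"): 40, ("LabA", "LabC"): 5, ("LabB", "LabC"): 8}

def route_overlap(a, b):
    """Jaccard overlap between two units' walking routes."""
    return len(a & b) / len(a | b)

def rank_pairs_by_overlap(routes):
    """Unit pairs sorted from most to least route overlap."""
    pairs = ((pair, route_overlap(routes[pair[0]], routes[pair[1]]))
             for pair in combinations(sorted(routes), 2))
    return sorted(pairs, key=lambda kv: kv[1], reverse=True)
```

In this toy data, the pair whose routes overlap most (LabA, LabB) is also the pair that emails most — the qualitative pattern the paper reports at campus scale.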
Urban design for the people
Sevtsuk thinks about his own work as being not just data-driven but part of a larger refashioning of the field of urban design. In American cities, urban design may still be associated with the large-scale redevelopment of neighborhoods that took place in the first few postwar decades: massive freeways tearing through cities and dislocating older business districts, and large housing and office projects undertaken in the name of modernization and tax revenue increases but not in the interests of existing residents and workers. Many of these projects were disastrous for urban communities.
By the 1960s and 1970s, urban planning programs around the country had turned away from the failures of large-scale urban design and instead focused on the social and economic needs of communities first. The role of urban design was somewhat sidelined in this transition. But instead of giving up on urban design as a tool for community improvement, Sevtsuk thinks that planning and urban design research can help uncover the important ways in which design can support communities in their daily lives as much as community development initiatives and policies can.
“There was a turn in the field of planning away from urban design as a central area of focus, toward more sociologically grounded community-driven approaches,” Sevtsuk says. “And for good reasons. But during these decades, some of the most anti-urban, car-oriented, and resource-intensive built environments in the U.S. were created, which we now need to deal with.”
He adds: “In my work I try to quantify effects of urban design on people, from mobility outcomes, to generating social encounters, to supporting small local businesses on city streets. In my research group we try to connect urban design back to the qualities that people and communities care about. Faced with the profound climate challenges today, we must better understand the influence of urban design on society — on carbon emissions, on health, on social exchange, and even on democracy, because it’s such a critical dimension.”
A dedicated teacher, Sevtsuk works with students with broad backgrounds and interests from across the Institute. One of his main classes, 11.001 (Introduction to Urban Design and Development), draws students from many departments — including computer science, civil engineering, and management — who want to contribute to sustainable and equitable cities. He also teaches an applied class on modeling pedestrian activity, and his research group draws students and researchers from many countries.
“What resonates with students is that when we look closely at the complex organized systems of cities, we can make sense of how they work,” Sevtsuk says. “But we can also figure out how to change them, how to nudge them toward collective improvement. And many MIT students are eager to mobilize their amazing technical skills towards that quest.”
Upcoming Speaking Events
This is a current list of where and when I am scheduled to speak:
- I’m speaking at a joint meeting of the Boston Chapter of the IEEE Computer Society and GBC/ACM, in Boston, Massachusetts, USA, at 7:00 PM ET on Thursday, January 9, 2025. The event will take place at the Massachusetts Institute of Technology in Room 32-G449 (Kiva), as well as online via Zoom. Please register in advance if you plan to attend (whether online or in person).
The list is maintained on this page.
MIT affiliates named 2024 Schmidt Futures AI2050 Fellows
Five MIT faculty members and two additional alumni were recently named to the 2024 cohort of AI2050 Fellows. The honor is announced annually by Schmidt Futures, Eric and Wendy Schmidt’s philanthropic initiative that aims to accelerate scientific innovation.
Conceived and co-chaired by Eric Schmidt and James Manyika, AI2050 is a philanthropic initiative aimed at helping to solve hard problems in AI. Within their research, each fellow will contend with the central motivating question of AI2050: “It’s 2050. AI has turned out to be hugely beneficial to society. What happened? What are the most important problems we solved and the opportunities and possibilities we realized to ensure this outcome?”
This year’s MIT-affiliated AI2050 Fellows include:
David Autor, the Daniel (1972) and Gail Rubinfeld Professor in the MIT Department of Economics, and co-director of the MIT Shaping the Future of Work Initiative and the National Bureau of Economic Research’s Labor Studies Program, has been named a 2024 AI2050 senior fellow. His scholarship explores the labor-market impacts of technological change and globalization on job polarization, skill demands, earnings levels and inequality, and electoral outcomes. Autor’s AI2050 project will leverage real-time data on AI adoption to clarify how new tools interact with human capabilities in shaping employment and earnings. The work will provide an accessible framework for entrepreneurs, technologists, and policymakers seeking to understand, tangibly, how AI can complement human expertise. Autor has received numerous awards and honors, including a National Science Foundation CAREER Award, an Alfred P. Sloan Foundation Fellowship, an Andrew Carnegie Fellowship, and the Heinz 25th Special Recognition Award from the Heinz Family Foundation for his work “transforming our understanding of how globalization and technological change are impacting jobs and earning prospects for American workers.” In 2023, Autor was one of two researchers across all scientific fields selected as a NOMIS Distinguished Scientist.
Sara Beery, an assistant professor in the Department of Electrical Engineering and Computer Science (EECS) and a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL), has been named an early career fellow. Beery’s work focuses on building computer vision methods that enable global-scale environmental and biodiversity monitoring across data modalities and tackling real-world challenges, including strong spatiotemporal correlations, imperfect data quality, fine-grained categories, and long-tailed distributions. She collaborates with nongovernmental organizations and government agencies to deploy her methods worldwide and works toward increasing the diversity and accessibility of academic research in artificial intelligence through interdisciplinary capacity-building and education. Beery earned a BS in electrical engineering and mathematics from Seattle University and a PhD in computing and mathematical sciences from Caltech, where she was honored with the Amori Prize for her outstanding dissertation.
Gabriele Farina, an assistant professor in EECS and a principal investigator in the Laboratory for Information and Decision Systems (LIDS), has been named an early career fellow. Farina’s work lies at the intersection of artificial intelligence, computer science, operations research, and economics. Specifically, he focuses on learning and optimization methods for sequential decision-making and convex-concave saddle point problems, with applications to equilibrium finding in games. Farina also studies computational game theory and recently served as co-author on a Science study about combining language models with strategic reasoning. He is a recipient of a NeurIPS Best Paper Award and was a Facebook Fellow in economics and computer science. His dissertation was recognized with the 2023 ACM SIGecom Doctoral Dissertation Award and one of the two 2023 ACM Dissertation Award Honorable Mentions, among others.
Marzyeh Ghassemi PhD ’17, an associate professor in EECS and the Institute for Medical Engineering and Science, principal investigator at CSAIL and LIDS, and affiliate of the Abdul Latif Jameel Clinic for Machine Learning in Health and the Institute for Data, Systems, and Society, has been named an early career fellow. Ghassemi’s research in the Healthy ML Group creates a rigorous quantitative framework in which to design, develop, and place ML models in a way that is robust and fair, focusing on health settings. Her contributions range from socially aware model construction to improving subgroup- and shift-robust learning methods to identifying important insights in model deployment scenarios that have implications in policy, health practice, and equity. Among other awards, Ghassemi has been named one of MIT Technology Review’s 35 Innovators Under 35, and has been awarded the 2018 Seth J. Teller Award, the 2023 MIT Prize for Open Data, a 2024 NSF CAREER Award, and the Google Research Scholar Award. She founded the nonprofit Association for Health, Inference and Learning (AHLI) and her work has been featured in popular press such as Forbes, Fortune, MIT News, and The Huffington Post.
Yoon Kim, an assistant professor in EECS and a principal investigator in CSAIL, has been named an early career fellow. Kim’s work straddles the intersection between natural language processing and machine learning, and touches upon efficient training and deployment of large-scale models, learning from small data, neuro-symbolic approaches, grounded language learning, and connections between computational and human language processing. Affiliated with CSAIL, Kim earned his PhD in computer science at Harvard University; his MS in data science from New York University; his MA in statistics from Columbia University; and his BA in both math and economics from Cornell University.
Additional alumni Roger Grosse PhD ’14, a computer science associate professor at the University of Toronto, and David Rolnick ’12, PhD ’18, assistant professor at Mila-Quebec AI Institute, were also named senior and early career fellows, respectively.
Artifacts from a half-century of cancer research
Throughout 2024, MIT’s Koch Institute for Integrative Cancer Research has celebrated 50 years of MIT’s cancer research program and the individuals who have shaped its journey. In honor of this milestone anniversary year, on Nov. 19 the Koch Institute celebrated the opening of a new exhibition: Object Lessons: Celebrating 50 Years of Cancer Research at MIT in 10 Items.
Object Lessons invites the public to explore significant artifacts — from one of the earliest PCR machines, developed in the lab of Nobel laureate H. Robert Horvitz, to Greta, a groundbreaking zebrafish from the lab of Professor Nancy Hopkins — in the half-century of discoveries and advancements that have positioned MIT at the forefront of the fight against cancer.
50 years of innovation
The exhibition provides a glimpse into the many contributors and advancements that have defined MIT’s cancer research history since the founding of the Center for Cancer Research in 1974. When the National Cancer Act was passed in 1971, very little was understood about the biology of cancer; the act aimed to deepen that understanding and develop better strategies for the prevention, detection, and treatment of the disease. MIT embraced this call to action, establishing a center where many leading biologists tackled cancer’s fundamental questions. Building on this foundation, the Koch Institute opened its doors in 2011, housing engineers and life scientists from many fields under one roof to accelerate progress against cancer in novel and transformative ways.
In the 13 years since, the Koch Institute’s collaborative and interdisciplinary approach to cancer research has yielded significant advances in our understanding of the underlying biology of cancer and allowed for the translation of these discoveries into meaningful patient impacts. Over 120 spin-out companies — many headquartered nearby in the Kendall Square area — have their roots in Koch Institute research, with nearly half having advanced their technologies to clinical trials or commercial applications. The Koch Institute’s collaborative approach extends beyond its labs: principal investigators often form partnerships with colleagues at world-renowned medical centers, bridging the gap between discovery and clinical impact.
Current Koch Institute Director Matthew Vander Heiden, also a practicing oncologist at the Dana-Farber Cancer Institute, is driven by patient stories.
“It is never lost on us that the work we do in the lab is important to change the reality of cancer for patients,” he says. “We are constantly motivated by the urgent need to translate our research and improve outcomes for those impacted by cancer.”
Symbols of progress
The items on display as part of Object Lessons take viewers on a journey through five decades of MIT cancer research, from the pioneering days of Salvador Luria, founding director of the Center for Cancer Research, to some of the Koch Institute’s newest investigators, including Francisco Sánchez-Rivera, the Eisen and Chang Career Development Professor and an assistant professor of biology, and Jessica Stark, the Underwood-Prescott Career Development Professor and an assistant professor of biological engineering and chemical engineering.
Among the standout pieces is a humble yet iconic object: Salvador Luria’s ceramic mug, emblazoned with “Luria’s broth.” Lysogeny broth, often called — apocryphally — Luria Broth, is a medium for growing bacteria. Still in use today, the recipe was first published in 1951 by a research associate in Luria’s lab. The artifact, on loan from the MIT Museum, symbolizes the foundational years of the Center for Cancer Research and serves as a reminder of Luria’s influence as an early visionary. His work set the stage for a new era of biological inquiry that would shape cancer research at MIT for generations.
Visitors can explore firsthand how the Koch Institute continues to build on the legacy of its predecessors, translating decades of knowledge into new tools and therapies that have the potential to transform patient care and cancer research.
For instance, the PCR machine designed in the Horvitz Lab in the 1980s made genetic manipulation of cells easier, and gene sequencing faster and more cost-effective. At the time of its commercialization, this groundbreaking benchtop unit marked a major leap forward. In the decades since, technological advances have allowed for the visualization of DNA and biological processes at a much smaller scale, as demonstrated by the handheld BioBits imaging device developed by Stark and on display next door to the Horvitz panel.
“We created BioBits kits to address a need for increased equity in STEM education,” Stark says. “By making hands-on biology education approachable and affordable, BioBits kits are helping inspire and empower the next generation of scientists."
While the exhibition showcases scientific discoveries and marvels of engineering, it also aims to underscore the human element of cancer research through personally significant items, such as a messenger bag and Seq-Well device belonging to Alex Shalek, J. W. Kieckhefer Professor in the Institute for Medical Engineering and Science and the Department of Chemistry.
Shalek investigates the molecular differences between individual cells, developing mobile RNA-sequencing devices. He could often be seen toting the bag around the Boston area and worldwide as he perfected and shared his technology with collaborators near and far. Through his work, Shalek has helped to make single-cell sequencing accessible for labs in more than 30 countries across six continents.
“The KI seamlessly brings together students, staff, clinicians, and faculty across multiple different disciplines to collaboratively derive transformative insights into cancer,” Shalek says. “To me, these sorts of partnerships are the best part about being at MIT.”
Around the corner from Shalek’s display, visitors will find an object that serves as a stark reminder of the real people impacted by Koch Institute research: a 3D-printed model that Steven Keating SM ’12, PhD ’16 made of his own brain tumor. Keating, who passed away in 2019, became a fierce advocate for the rights of patients to their medical data, and came to know Vander Heiden through his pursuit to become an expert on his tumor type, IDH-mutant glioma. In the years since, Vander Heiden’s work has contributed to a new therapy to treat Keating’s tumor type. In 2024, the drug, called vorasidenib, gained FDA approval, providing the first therapeutic breakthrough for Keating’s cancer in more than 20 years.
As the Koch Institute looks to the future, Object Lessons stands as a celebration of the people, the science, and the culture that have defined MIT’s first half-century of breakthroughs and contributions to the field of cancer research.
“Working in the uniquely collaborative environment of the Koch Institute and MIT, I am confident that we will continue to unlock key insights in the fight against cancer,” says Vander Heiden. “Our community is poised to embark on our next 50 years with the same passion and innovation that has carried us this far.”
Object Lessons is on view in the Koch Institute Public Galleries Monday through Friday, 9 a.m. to 5 p.m., through spring semester 2025.
Speaking Freely: Prasanth Sugathan
Interviewer: David Greene
This interview has been edited for length and clarity.
Prasanth Sugathan is Legal Director at the Software Freedom Law Center, India (SFLC.in). Prasanth is a lawyer with years of practice in the fields of technology law, intellectual property law, administrative law, and constitutional law. He is an engineer turned lawyer and has worked closely with the Free Software community in India. He has appeared in many landmark cases before various Tribunals, High Courts, and the Supreme Court of India. He has also deposed before Parliamentary Committees on issues related to the Information Technology Act and Net Neutrality.
David Greene: Why don’t you go ahead and introduce yourself.
Sugathan: I am Prasanth Sugathan, I am the Legal Director at the Software Freedom Law Center, India. We are a nonprofit organization based out of New Delhi, started in the year 2010. So we’ve been working at this for 14 years now, working mostly in the area of protecting rights of citizens in the digital space in India. We do strategic litigation, policy work, trainings, and capacity building. Those are the areas that we work in.
Greene: What was your career path? How did you end up at SFLC?
That’s an interesting story. I am an engineer by training. I had a startup at one point, and I did a law degree along with it. I got interested in free software and got into it full time. Because of this involvement with the free software community, the first time I think I got involved in something related to policy was when the patent office came out with a patent manual and there was discussion about how it could affect the free software community and startups. So that was one discussion I followed, I wrote about it, and one thing led to another and I was called to speak at a seminar in New Delhi. That’s where I met Eben and Mishi from the Software Freedom Law Center. That was before SFLC India was started, but then once Mishi started the organization I joined as a Counsel. It’s been a long relationship.
Greene: Just in a personal sense, what does freedom of expression mean to you?
Apart from being a fundamental right, as evident in all the human rights agreements we have, and in the Indian Constitution, freedom of expression is the most basic aspect for a democratic nation. I mean without free speech you can not have a proper exchange of ideas, which is most important for a democracy. For any citizen to speak what they feel, to communicate their ideas, I think that is most important. As of now the internet is a medium which allows you to do that. So there definitely should be minimum restrictions from the government and other agencies in relation to the free exchange of ideas on this medium.
Greene: Have you had any personal experiences with censorship that have sort of informed or influenced how you feel about free expression?
When SFLC.in was started in 2010 our major idea was to support the free software community. But how we got involved in the debates on free speech and privacy on the internet was when, in 2011, the IT Rules were introduced by the government as a draft for discussion and finally notified. This was on regulation of intermediaries, these online platforms. This was secondary legislation based on the Information Technology Act (IT Act) in India, which is the parent law. So when these discussions happened we got involved in it and then one thing led to another. For example, there was a provision in the IT Act called Section 66-A which criminalized the sending of offensive messages through a computer or other communication devices. It was, ostensibly, introduced to protect women. And the irony was that two women were arrested under this law. That was the first arrest that happened, and it was a case of two women being arrested for the comments that they made about a leader who expired.
This got us working on trying to talk to parliamentarians, trying to talk to other people about how we could maybe change this law. So there were various instances of content being taken down and people being arrested, and it was always done under Section 66-A of the IT Act. We challenged the IT Rules before the Supreme Court. In a judgment in a 2015 case called Shreya Singhal v. Union of India the Supreme Court read down the rules relating to intermediary liability. Under the rules, platforms could be asked to take down content, and they didn’t have much of an option: if they didn’t do that, they would lose their safe harbour protection. The Court said it can only be actual knowledge, and what actual knowledge means is if someone gets a court order asking them to take down the content, or let’s say there’s direction from the government. These are the only two cases when content can be taken down.
Greene: You’ve lived in India your whole life. Has there ever been a point in your life when you felt your freedom of expression was restricted?
Currently we are going through such a phase, where you’re careful about what you’re speaking about. There is a lot of concern about what is happening in India currently. This is something we can see mostly impacting people who are associated with civil society. When they are voicing their opinions there is now a kind of fear about how the government sees it, whether they will take any action against you for what you say, and how this could affect your organization. Because when you’re affiliated with an organization it’s not just about yourself. You also need to be careful about how anything that you say could affect the organization and your colleagues. We’ve had many instances of nonprofit organizations and journalists being targeted. So there is a kind of chilling effect when you really don’t want to say something you would otherwise say strongly. There is always a toning down of what you want to say.
Greene: Are there any situations where you think it’s appropriate for governments to regulate online speech?
You don’t have an absolute right to free speech under India’s Constitution. There can be restrictions as stated under Article 19(2) of the Constitution. There can be reasonable restrictions by the government, for instance, for something that could lead to violence or something which could lead to a riot between communities. So mostly if you look at hate speech on the net which could lead to a violent situation or riots between communities, that could be a case where maybe the government could intervene. And I would even say those are cases where platforms should intervene. We have seen a lot of hate speech on the net during India’s current elections as there have been different phases of elections going on for close to two months. We have seen that happening with not just political leaders but with many supporters of political parties publishing content on various platforms which aren’t really in the nature of hate speech but which could potentially create situations where you have at least two communities fighting each other. It’s definitely not a desirable situation. Those are the cases where maybe platforms themselves could regulate or maybe the government needs to regulate. In this case, for example, when it is related to elections, the Election Commission also has its role, but in many cases we don’t see that happening.
Greene: Okay, let’s go back to hate speech for a minute because that’s always been a very difficult problem. Is that a difficult problem in India? Is hate speech well-defined? Do you think the current rules serve society well or are there problems with it?
I wouldn’t say it’s well-defined, but even in the current law there are provisions that address it. So anything which could lead to violence or which could lead to animosity between two communities will fall in the realm of hate speech. It’s not defined as such, but then that is where your free speech rights could be restricted. That definitely could fall under the definition of hate speech.
Greene: And do you think that definition works well?
I mean the definition is not the problem. It’s essentially a question of how it is implemented. It’s a question of how the government or its agency implements it. It’s a question of how platforms are taking care of it. These are two issues where there’s more that needs to be done.
Greene: You also talked about misinformation in terms of elections. How do we reconcile freedom of expression concerns with concerns for preventing misinformation?
I would definitely say it's a gray area. How do you really balance this? But I don't think it's a problem that cannot be addressed. There's definitely a lot for civil society to do and a lot for the private sector to do. For example, when hate speech is reported to the platforms, it should be dealt with quickly, but that is where we're seeing the starkest difference between how platforms act on such reporting in the Global North and what happens in the Global South. Platforms need to up their act when it comes to handling such situations and such content.
Greene: Okay, let’s talk about the platforms then. How do you feel about censorship or restrictions on freedom of expression by the platforms?
Things have changed a lot in how these platforms work. Now the platforms decide what kind of content gets to your feed, and the algorithms work to promote content that is more viral. In many cases we have seen how misinformation and hate speech go viral, while the content that debunks the misinformation and provides the real facts doesn't travel as far; it doesn't go viral or come up in your feed as fast. So the way platforms are dealing with it is definitely a problem. In many cases it might be economically beneficial for them to make sure that viral content which puts forth misinformation reaches more eyes.
Greene: Do you think that the platforms that are most commonly used in India—and I know there’s no TikTok in India— serve free speech interests or not?
When the Information Technology Rules were introduced and when the discussions happened, I would say civil society supported the platforms, essentially saying these platforms ensured we can enjoy our free speech rights, people can enjoy their free speech rights and express themselves freely. How the situation changed over a period of time is interesting. Definitely these platforms are still important for us to express these rights. But when it comes to, let’s say, content being regulated, some platforms do push back when the government asks them to take down the content, but we have not seen that much. So whether they’re really the messiahs for free speech, I doubt. Over the years, we have seen that it is most often the case that when the government tells them to do something, it is in their interest to do what the government says. There has not been much pushback except for maybe Twitter challenging it in the court. There have not been many instances where these platforms supported users.
Greene: So we’ve talked about hate speech and misinformation, are there other types of content or categories of online speech that are either problematic in India now or at least that regulators are looking at that you think the government might try to do something with?
One major concern the government is trying to regulate is deepfakes, with even the Prime Minister speaking about it, so suddenly that has become a priority for the government. It's definitely a problem, especially for public figures, and particularly for women in politics, who often have their images manipulated. In India we see that at election time. Even politicians who have been in the field for a long time have had their images misused, with morphed images being circulated. That's definitely something the platforms need to act on. You cannot have the luxury of, say, taking 48 hours to decide what to do when something like that is posted; this is something platforms have to deal with as early as possible. We do understand there's a lot of content and a lot of reporting happening, but at least reports related to non-consensual sexual imagery should be prioritized.
Greene: As an engineer, how do you feel about deepfake tech? Should the regulatory concerns be qualitatively different than for other kinds of false information?
When it comes to deepfakes, I would say the problem is that the technology has become mainstream. Earlier you needed specialized knowledge, especially for something like editing videos; now these tools are easily accessible, and anyone can use them. That accessibility is the major difference. There cannot be a case of fully regulating or fully controlling a technology. It's not essentially a problem with the technology, because there are a lot of ethical use cases. Just because something is used for a harmful purpose doesn't mean you completely block the technology. There is definitely a case for regulating AI and regulating deepfakes, but that doesn't mean you put a complete stop to it.
Greene: How do you feel about TikTok being banned in India?
I think that's less a question of technology or regulation and more of a geopolitical issue. I don't think it had anything to do with the technology, or even the transfer of data for that matter. It was a geopolitical issue related to India-China relations. The relations had soured with the border disputes and other things, and I think that was the trigger for the TikTok ban.
Greene: What is your most significant legal victory from a human rights perspective and why?
The victory we had in the fight against the 2011 Rules, specifically the portions related to intermediary liability, which were struck down by the Supreme Court. That was important because, when it came to platforms and to people expressing critical views online, all of that content could have been taken down very easily. It was definitely a case of free speech rights being affected without much recourse, so the ruling was a major victory.
Greene: Okay, now we ask everyone this question. Who is your free speech hero and why?
I can't think of one person, but I think, for example, of when the country went through a bleak period in the 1970s and the government declared a national state of emergency. During that time we had journalists and politicians who fought for free speech rights with respect to the news media. At that time even writing something in the publications was difficult. We had many cases of journalists who fought this, people who went to jail for writing something, for opposing the government or publicly criticizing it. So I don't think of just one person; we saw journalists and political leaders fighting back during that state of emergency. I would say those are the heroes who could fight the government and law enforcement. Then there was the case of Justice H.R. Khanna, a judge who stood up for citizens' rights and gave his dissenting opinion against the majority view, which cost him the position of Chief Justice. Maybe I would say he's a hero, a person who was clear about constitutional values and principles.
Ultralytics Supply-Chain Attack
Last week, we saw a supply-chain attack against the Ultralytics AI library on GitHub. A quick summary:
On December 4, a malicious version 8.3.41 of the popular AI library ultralytics, which has almost 60 million downloads, was published to the Python Package Index (PyPI) package repository. The package contained downloader code that fetched the XMRig coinminer. The project's build environment was compromised by exploiting a known and previously reported GitHub Actions script injection.
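For anyone who may have pulled the package during the incident window, one practical response is to audit installed environments for the bad release. The sketch below is illustrative and not part of any official tooling; the only concrete data point taken from the incident is that ultralytics 8.3.41 was the malicious version, and the `KNOWN_BAD` table is a hypothetical stub you would extend from published advisories.

```python
"""Minimal sketch: flag known-compromised package releases installed in
the current Python environment. KNOWN_BAD is an illustrative stub seeded
with the one release named in the incident above."""
from importlib import metadata

# Known-bad releases: package name -> set of compromised version strings.
KNOWN_BAD = {
    "ultralytics": {"8.3.41"},
}


def is_compromised(package: str, version: str) -> bool:
    """Return True if this exact release appears on the known-bad list."""
    return version in KNOWN_BAD.get(package.lower(), set())


def audit_environment() -> list:
    """Return (name, version) pairs for installed compromised releases."""
    hits = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if is_compromised(name, dist.version):
            hits.append((name, dist.version))
    return hits


if __name__ == "__main__":
    for name, version in audit_environment():
        print(f"WARNING: {name}=={version} is a known-compromised release")
```

An audit like this only catches an already-installed bad release after the fact; pinning exact versions with hashes (pip's `--require-hashes` mode) is the preventive counterpart, since a swapped artifact then fails to install at all.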
Lots more details at that link. Also ...