Michael Short came to MIT in the fall of 2001 as an 18-year-old first-year who grew up on Boston’s North Shore. He immediately felt at home, so much so that he’s never really left. It’s not that Short has no interest in exploring the world beyond the confines of the Institute, as he is an energetic and venturesome fellow. It’s just that almost everything he hopes to achieve in his scientific career can, in his opinion, be best pursued at this university.
Last year — after collecting four MIT degrees and joining the faculty of the Department of Nuclear Science and Engineering (NSE) in 2013 — he was promoted to the status of tenured associate professor.
Short’s enthusiasm for MIT began early in high school when he attended weekend programs that were mainly taught by undergraduates. “It was a program filled with my kind of people,” he recalls. “My high school was very good, but this was at a different level — at the level I was seeking and hoping to achieve. I felt more at home here than I did in my hometown, and the Saturdays at MIT were the highlight of my week.” He loved his four-year experience as an MIT undergraduate, including the research he carried out in the Uhlig Corrosion Laboratory, and he wasn’t ready for it to end.
After graduating in 2005 with two BS degrees (one in NSE and another in materials science and engineering), he took on some computer programming jobs and worked half time in the Uhlig lab under the supervision of Ronald Ballinger, a professor in both NSE and the Department of Materials Science and Engineering. Short soon realized that computer programming was not for him, and he started graduate studies with Ballinger as his advisor, earning a master’s and a PhD in nuclear science and engineering in 2010.
Even as an undergraduate, Short was convinced that nuclear power was essential to our nation’s (and the world’s) energy future, especially in light of the urgent need to move toward carbon-free sources of power. During his first year, he was told by Ballinger that the main challenge confronting nuclear power was to find materials, and metals in particular, that could last long enough in the face of radiation and the chemically destructive effects of corrosion.
Those words, persuasively stated, led him to his double major. “Materials and radiation damage have been at the core of my research ever since,” Short says. “Remarkably, the stuff I started studying in my first year of college is what I do today, though I’ve extended this work in many directions.”
Corrosion has proven to be an unexpectedly rich subject. “The traditional view is to expose metals to various things and see what happens — ‘cook and look,’ as it’s called,” he says. “A lot of folks view it that way, but it’s actually much more complex. In fact, some members of our own faculty don’t want to touch corrosion because it’s too complicated, too dirty. But that’s what I like about it.”
In a 2020 paper published in Nature Communications, Short, his student Weiyue Zhou, and other colleagues made a surprising discovery. “Most people think radiation is bad and makes everything worse, but that’s not always the case,” Short maintains. His team found a specific set of conditions under which a metal (a nickel-chromium alloy) performs better when it is irradiated while undergoing corrosion in a molten salt mixture. Their finding is relevant, he adds, “because these are the conditions under which people are hoping to run the next generation of nuclear reactors.” Leading candidates for alternatives to today’s water-cooled reactors are molten salt and liquid metal (specifically liquid lead and sodium) cooled reactors. To this end, Short and his colleagues are currently carrying out similar experiments involving the irradiation of metal alloys immersed in liquid lead.
Meanwhile, Short has pursued another multiyear project, trying to devise a new standard to serve as “a measurable unit of radiation damage.” In fact, these were the very words he wrote in his research statement when applying for his first faculty position at MIT, although he admits that he didn’t know then how to realize that goal. But the effort is finally paying off, as Short and his collaborators are about to submit their first big paper on the topic. He’s found that you can’t reduce radiation damage to a single number, which is what people have tried to do in the past, because that’s too simple. Instead, their new standard relates to the density of defects — the number of radiation-induced defects (or unintentional changes to the lattice structure) per unit volume for a given material.
“Our approach is based on a theory that everyone agrees on — that defects have energy,” Short explains. However, many people told him and his team that the amount of energy stored within those defects would be too small to measure. But that just spurred them to try harder, making measurements at the microjoule level, at the very limits of detection.
Short is convinced that their new standard will become “universally useful, but it will take years of testing on many, many materials followed by more years of convincing people using the classic method: Repeat, repeat, repeat, making sure that each time you get the same result. It’s the unglamorous side of science, but that’s the side that really matters.”
The approach has already led Short, in collaboration with NSE proliferation expert Scott Kemp, into the field of nuclear security. Equipped with new insights into the signatures left behind by radiation damage, students co-supervised by Kemp and Short have devised methods for determining how much fissionable material has passed through a uranium enrichment facility, for example, by scrutinizing the materials exposed to these radioactive substances. “I never thought my preliminary work on corrosion experiments as an undergraduate would lead to this,” Short says.
He has also turned his attention to “microreactors” — nuclear reactors with power ratings as small as a single megawatt, as compared to the 1,000-megawatt behemoths of today. Flexibility in the size of future power plants is essential to the economic viability of nuclear power, he insists, “because nobody wants to pay $10 billion for a reactor now, and I don’t blame them.”
But the proposed microreactors, he says, “pose new material challenges that I want to solve. It comes down to cramming more material into a smaller volume, and we don’t have a lot of knowledge about how materials perform at such high densities.” Short is currently conducting experiments with the Idaho National Laboratory, irradiating possible microreactor materials to see how they change using a laser technique, transient grating spectroscopy (TGS), which his MIT group has had a big hand in advancing.
It’s been an exhilarating 20 years at MIT for Short, and he has even more ambitious goals for the next 20 years. “I’d like to be one of those who came up with a way to verify the Iran nuclear deal and thereby helped clamp down on nuclear proliferation worldwide,” he says. “I’d like to choose the materials for our first power-generating nuclear fusion reactors. And I’d like to have influenced perhaps 50 to 100 former students who chose to stay in science because they truly enjoy it.
“I see my job as creating scientists, not science,” he says, “though science is, of course, a convenient byproduct.”
A new MIT study of how a mammalian brain remembers what it sees shows that while individual images are stored in the visual cortex, the ability to recognize a sequence of sights critically depends on guidance from the hippocampus, a deeper brain structure strongly associated with memory, though exactly how it contributes has remained mysterious.
By suggesting that the hippocampus isn’t needed for basic storage of images so much as identifying the chronological relationship they may have, the new research, published in Current Biology, can bring neuroscientists closer to understanding how the brain coordinates long-term visual memory across key regions.
“This offers the opportunity to actually understand, in a very concrete way, how the hippocampus contributes to memory storage in the cortex,” says senior author Mark Bear, the Picower Professor of Neuroscience in the Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences.
Essentially, the hippocampus acts to influence how images are stored in the cortex if they have a sequential relationship, says lead author Peter Finnie, a former postdoc in Bear’s lab.
“The exciting part of this is that the visual cortex seems to be involved in encoding both very simple visual stimuli and also temporal sequences of them, and yet the hippocampus is selectively involved in how that sequence is stored,” Finnie says.
To have hippocampus and have not
To make their findings, the researchers, including former postdoc Rob Komorowski, trained mice with two forms of visual recognition memory discovered in Bear’s lab. The first form of memory, called stimulus selective response plasticity (SRP), involves learning to recognize a nonrewarding, nonthreatening single visual stimulus after it has been presented over and over. As learning occurs, visual cortex neurons produce an increasingly strong electrical response and the mouse ceases paying attention to the once-novel, but now decidedly uninteresting, image. The second form of memory, visual sequence plasticity, involves learning to recognize and predict a sequence of images. Here, too, the once-novel but now-familiar and innocuous sequence comes to evoke an elevated electrical response, and it is much greater than what is observed if the same stimuli are presented in reverse order or at a different speed.
In prior studies Bear’s lab has shown that the images in each form of memory are stored in the visual cortex, and are even specific to which eye beheld them, if only one did.
But the researchers were curious about whether and how the hippocampus might contribute to these forms of memory and cortical plasticity. After all, like some other forms of memory that depend on the hippocampus, SRP only takes hold after a period of “consolidation,” for instance overnight during sleep. To test whether there is a role for the hippocampus, they chemically removed large portions of the structure in a group of mice and looked for differences between groups in the telltale electrical response each kind of recognition memory should evoke.
Mice with or without a hippocampus performed equally well in learning SRP (measured not only electrophysiologically but also behaviorally), suggesting that the hippocampus was not needed for that form of memory. It appears to arise, and even consolidate, entirely within the visual cortex.
Visual sequence plasticity, however, did not occur without an intact hippocampus, the researchers found. Mice without the structure showed no elevated electrical response to the sequences when tested, no ability to recognize them in reverse or when delayed, and no inclination to “fill in the blank” when one was missing. It was as if the visual sequence — and even each image in the sequence — was not familiar.
“Together these findings are consistent with a specific role for the hippocampus in predictive response generation during exposure to familiar temporal patterns of visual stimulation,” the authors wrote.
New finding from a classic approach
The experiments follow in a long tradition of attempting to understand the hippocampus by assessing what happens when it’s damaged. For decades, neuroscientists at MIT and elsewhere were able to learn from a man known as “H.M.,” who had undergone hippocampal removal to relieve epileptic seizures. His memory of his past before the surgery remained intact, but he exhibited an inability to form “declarative” memories of new experiences, such as meeting someone or performing an activity. Over time, however, scientists realized that he could be trained to learn motor tasks better, even though he wouldn’t remember the training itself. The experiments helped to reveal that for many different forms of memory there is a “division of labor” among regions of the brain that may or may not include the hippocampus.
The new study, Bear and Finnie say, produces a clear distinction through the division of labor in visual memory between simple recognition of images and the more complex task of recognizing sequence structure.
“It’s a nice dividing line,” Bear says. “It’s the same region of the brain, the same method of an animal looking at images on a screen. All we are changing is the temporal structure of the stimulus.”
Previous research in the lab showed that SRP and visual sequence plasticity arise via different molecular mechanisms. SRP can be disrupted by blocking receptors for the neurotransmitter glutamate on involved neurons while sequence plasticity depends on receptors for acetylcholine.
The next question Bear wants to address, therefore, is whether an acetylcholine-producing circuit links the hippocampus to the visual cortex to accomplish sequence learning. Neurons that release acetylcholine in the cortex happen to be among the earliest disrupted in Alzheimer’s disease.
If the circuit for sequence learning indeed runs through those neurons, Bear speculates, then assessing people for differences in SRP and sequence learning could become a way to diagnose early onset of dementia progression.
The National Eye Institute of the National Institutes of Health and the JPB Foundation funded the research.
Nature Climate Change, Published online: 26 July 2021; doi:10.1038/s41558-021-01092-9

Changes in extreme heat are often calculated as anomalies above a reference climatology. A different definition — week-long heatwaves surpassing the current record by large margins — shows that their occurrence probabilities depend on warming rate, not level, and are higher than during recent decades.
Eesha Khare has always seen a world of matter. The daughter of a hardware engineer and a biologist, she has an insatiable interest in what substances — both synthetic and biological — have in common. Not surprisingly, that perspective led her to the study of materials.
“I recognized early on that everything around me is a material,” she says. “How our phones respond to touches, how trees in nature give us both structural wood and foldable paper, or how we are able to make tall skyscrapers with steel and glass, it all comes down to the fundamentals: This is materials science and engineering.”
As a rising fourth-year PhD student in the MIT Department of Materials Science and Engineering (DMSE), Khare now studies the metal-coordination bonds that allow mussels to bind to rocks along turbulent coastlines. But Khare’s scientific enthusiasm has also led to expansive interests from science policy to climate advocacy and entrepreneurship.
A material world
A Silicon Valley native, Khare recalls vividly how excited she was about science as a young girl, both at school and at myriad science fairs and high school laboratory internships. One such internship at the University of California at Santa Cruz introduced her to the study of nanomaterials, or materials that are smaller than a single human cell. The project piqued her interest in how research could lead to energy-storage applications, and she began to ponder the connections between materials, science policy, and the environment.
As an undergraduate at Harvard University, Khare pursued a degree in engineering sciences and chemistry while also working at the Harvard Kennedy School Institute of Politics. There, she grew fascinated by environmental advocacy in the policy space, working for then-professor Gina McCarthy, who is currently serving in the Biden administration as the first-ever White House climate advisor.
Following her academic explorations in college, Khare wanted to consider science in a new light before pursuing her doctorate in materials science and engineering. She deferred her program acceptance at MIT in order to attend Cambridge University in the U.K., where she earned a master’s degree in the history and philosophy of science. “Especially in a PhD program, it can often feel like your head is deep in the science as you push new research frontiers, but I wanted to take a step back and be inspired by how scientists in the past made their discoveries,” she says.
Her experience at Cambridge was both challenging and informative, but Khare quickly found that her mechanistic curiosity remained persistent — a realization that came in the form of a biological material.
“My very first master’s research project was about environmental pollution indicators in the U.K., and I was looking specifically at lichen to understand the social and political reasons why they were adopted by the public as pollution indicators,” Khare explains. “But I found myself wondering more about how lichen can act as pollution indicators. And I found that to be quite similar for most of my research projects: I was more interested in how the technology or discovery actually worked.”
Enthusiasm for innovation
Fittingly, these bioindicators confirmed for her that studying materials at MIT was the right course. Now Khare works on a different organism altogether, conducting research on the metal-coordination chemical interactions of a biopolymer secreted by mussels.
“Mussels secrete this thread and can adhere to ocean walls. So, when ocean waves come, mussels don’t get dislodged that easily,” Khare says. “This is partly because of how metal ions in this material bind to different amino acids in the protein. There’s no input from the mussel itself to control anything there; all the magic is in this biological material that is not only very sticky, but also doesn’t break very readily, and if you cut it, it can re-heal that interface as well! If we could better understand and replicate this biological material in our own world, we could have materials self-heal and never break and thus eliminate so much waste.”
To study this natural material, Khare combines computational and experimental techniques, experimentally synthesizing her own biopolymers and studying their properties with in silico molecular dynamics. Her co-advisors — Markus Buehler, the Jerry McAfee Professor of Engineering in Civil and Environmental Engineering, and Niels Holten-Andersen, professor of materials science and engineering — have embraced this dual approach to her project, as well as her abundant enthusiasm for innovation.
Khare likes to take one exploratory course per semester, and a recent offering in the MIT Sloan School of Management inspired her to pursue entrepreneurship. These days she is spending much of her free time on a startup called Taxie, formed with fellow MIT students after taking the course 15.390 (New Enterprises). Taxie attempts to electrify the rideshare business by making electric rental cars available to rideshare drivers. Khare hopes this project will initiate some small first steps in making the ridesharing industry environmentally cleaner — and in democratizing access to electric vehicles for rideshare drivers, who often hail from lower-income or immigrant backgrounds.
“There are a lot of goals thrown around for reducing emissions or helping our environment. But we are slowly getting physical things on the road, physical things to real people, and I like to think that we are helping to accelerate the electric transition,” Khare says. “These small steps are helpful for learning, at the very least, how we can make a transition to electric or to a cleaner industry.”
Alongside her startup work, Khare has pursued a number of other extracurricular activities at MIT, including co-organizing her department’s Student Application Assistance Program and serving on DMSE’s Diversity, Equity, and Inclusion Council. Her varied interests also have led to a diverse group of friends, which suits her well, because she is a self-described “people-person.”
In a year where maintaining connections has been more challenging than usual, Khare has focused on the positive, spending her spring semester with family in California and practicing Bharatanatyam, a form of Indian classical dance, over Zoom. As she looks to the future, Khare hopes to bring even more of her interests together, like materials science and climate.
“I want to understand the energy and environmental sector at large to identify the most pressing technology gaps and how I can use my knowledge to contribute. My goal is to figure out where I can personally make a difference and where it can have a bigger impact to help our climate,” she says. “I like being outside of my comfort zone.”
EFF, ACLU Urge Appeals Court to Revive Challenge to Los Angeles’ Collection of Scooter Location Data
San Francisco—The Electronic Frontier Foundation and the ACLU of Northern and Southern California today asked a federal appeals court to reinstate a lawsuit they filed on behalf of electric scooter riders challenging the constitutionality of Los Angeles’ highly privacy-invasive collection of detailed trip data and real-time locations and routes of scooters used by thousands of residents each day.
The Los Angeles Department of Transportation (LADOT) collects information about every single scooter trip taken within city limits from operators of dockless vehicles like Lyft, Bird, and Lime. It uses software it developed to gather location data through Global Positioning System (GPS) trackers on scooters. The system doesn’t capture the identity of riders directly, but collects riders’ locations, routes, and destinations to within a few feet, precision that can easily be used to reveal who the riders are.
A lower court erred in dismissing the case, EFF and the ACLU said in a brief filed today in the U.S. Court of Appeals for the Ninth Circuit. The court incorrectly determined that the practice, unprecedented in both its invasiveness and scope, didn’t violate the Fourth Amendment. The court also abused its discretion, failing in its duty to credit the plaintiff’s allegations as true, by dismissing the case without allowing the riders to amend the lawsuit to fix defects in the original complaint, as federal rules require.
“Location data can reveal detailed, sensitive, and private information about riders, such as where they live, who they work for, who their friends are, and when they visit a doctor or attend political demonstrations,” said EFF Surveillance Litigation Director Jennifer Lynch. “The lower court turned a blind eye to Fourth Amendment principles. And it ignored Supreme Court rulings establishing that, even when location data like scooter riders’ GPS coordinates are automatically transmitted to operators, riders are still entitled to privacy over the information because of the sensitivity of location data.”
The city has never presented a justification for this dragnet collection of location data, including in this case, and has said it’s an “experiment” to develop policies for motorized scooter use. Yet the lower court decided on its own that the city needs the data and disregarded plaintiff Justin Sanchez’s statements that none of Los Angeles’ potential uses for the data necessitates collection of all riders’ granular and precise location information en masse.
“LADOT’s approach to regulating scooters is to collect as much location data as possible, and to ask questions later,” said Mohammad Tajsar, senior staff attorney at the ACLU of Southern California. “Instead of risking the civil rights of riders with this data grab, LADOT should get back to the basics: smart city planning, expanding poor and working people’s access to affordable transit, and tough regulation on the private sector.”
The lower court also incorrectly dismissed Sanchez’s claims that the data collection violates the California Electronic Communications Privacy Act (CalECPA), which prohibits the government from accessing electronic communications information without a warrant or other legal process. The court’s mangled and erroneous interpretation of CalECPA—that only courts that have issued or are in the process of issuing a warrant can decide whether the law is being violated—would, if allowed to stand, severely limit the ability of people subjected to warrantless collection of their data to ever sue the government.
“The Ninth Circuit should overturn dismissal of this case because the lower court made numerous errors in its handling of the lawsuit,” said Lynch. “The plaintiffs should be allowed to file an amended complaint and have a jury decide whether the city is violating riders’ privacy rights.”
Why should you care about data brokers? Reporting this week about a Substack publication outing a priest with location data from Grindr shows once again how easy it is for anyone to take advantage of data brokers’ stores to cause real harm.
This is not the first time Grindr has been in the spotlight for sharing user information with third-party data brokers. The Norwegian Consumer Council singled it out in its 2020 "Out of Control" report, before the Norwegian Data Protection Authority fined Grindr earlier this year. The report specifically warned that the app’s data-mining practices could put users at serious risk in places where homosexuality is illegal.
But Grindr is just one of countless apps engaging in this exact kind of data sharing. The real problem is the many data brokers and ad tech companies that amass and sell this sensitive data without anything resembling real users’ consent.
Apps and data brokers claim they are only sharing so-called “anonymized” data. But that’s simply not possible. Data brokers sell rich profiles with more than enough information to link sensitive data to real people, even if the brokers don’t include a legal name. In particular, there’s no such thing as “anonymous” location data. Data points like one’s home or workplace are identifiers themselves, and a malicious observer can connect movements to these and other destinations. In this case, that includes gay bars and private residences.
Another piece of the puzzle is the ad ID, another so-called “anonymous" label that identifies a device. Apps share ad IDs with third parties, and an entire industry of “identity resolution” companies can readily link ad IDs to real people at scale.
All of this underlines just how harmful a collection of mundane-seeming data points can become in the wrong hands. We’ve said it before and we’ll say it again: metadata matters.
That’s why the U.S. needs comprehensive data privacy regulation more than ever. This kind of abuse is not inevitable, and it must not become the norm.
Thailand has become an economic leader in Southeast Asia in recent decades, but while the country has rapidly industrialized, many Thai citizens have been left behind. As a child growing up in Bangkok, Pavarin Bhandtivej would watch the news and wonder why families in the nearby countryside had next to nothing. He aspired to become a policy researcher and create beneficial change.
But Bhandtivej knew his goal wouldn’t be easy. He was born with a visual impairment, making it challenging for him to see, read, and navigate. This meant he had to work twice as hard in school to succeed. It took achieving the highest grades for Bhandtivej to break through stigmas and have his talents recognized. Still, he persevered, with a determination to uplift others. “I would return to that initial motivation I had as a kid. For me, to make even the smallest contribution to improving my country would be my dream,” he says.
“When I would face these obstacles, I would tell myself that struggling people are waiting for someone to design policies for them to have better lives. And that person could be me. I cannot fall here in front of these obstacles. I must stay motivated and move on.”
Bhandtivej completed his undergraduate degree in economics at Thailand’s top college, Chulalongkorn University. His classes introduced him to many debates about development policy, such as universal basic income. During one debate, after both sides made compelling arguments about how to alleviate poverty, Bhandtivej realized there was no clear winner. “A question came to my mind: Who's right?” he says. “In terms of theory, both sides were correct. But how could we know what approach would work in the real world?”
A new approach to higher education
The search for those answers would lead Bhandtivej to become interested in data analysis. He began investigating online courses, eventually finding the MIT MicroMasters Program in Data, Economics, and Development Policy (DEDP), which was created by MIT’s Department of Economics and the Abdul Latif Jameel Poverty Action Lab (J-PAL). The program requires learners to complete five online courses that teach quantitative methods for evaluating social programs, leading to a MicroMasters credential. Students who pass the courses’ proctored exams are then also eligible to apply for a full-time, accelerated, on-campus master’s program at MIT, led by professors Esther Duflo, Abhijit Banerjee, and Benjamin Olken.
The program’s mission to make higher education more accessible worked well for Bhandtivej. He studied tirelessly, listening and relistening to online lectures and pausing to scrutinize equations. By the end, his efforts paid off — Bhandtivej was the MicroMasters program’s top scorer. He was soon admitted into the second cohort of the highly selective DEDP master’s program.
“You can imagine how time-consuming it was to use text-to-speech to get through a 30-page reading with numerous equations, tables, and graphs,” he explains. “Luckily, Disability and Access Services provided accommodations for timed exams, and I was able to push through.”
In the gap year before the master’s program began, Bhandtivej returned to Chulalongkorn University as a research assistant with Professor Thanyaporn Chankrajang. He began applying his newfound quantitative skills to study the impacts of climate change in Thailand. His contributions helped uncover how rising temperatures and irregular rainfall are leading to reduced rice crop yields. “Thailand is the world’s second largest exporter of rice, and the vast majority of Thais rely heavily on rice for its nutritional and commercial value. We need more data to encourage leaders to act now,” says Bhandtivej. “As a Buddhist, it was meaningful to be part of generating this evidence, as I am always concerned about my impact on other humans and sentient beings.”
Staying true to his mission
Now pursuing his master’s on campus, Bhandtivej is taking courses like 14.320 (Econometric Data Science) and studying how to design, conduct, and analyze empirical studies. “The professors I’ve had have opened a whole new world for me,” says Bhandtivej. “They’ve inspired me to see how we can take rigorous scientific practices and apply them to make informed policy decisions. We can do more than rely on theories.”
The final portion of the program requires a summer capstone experience, which Bhandtivej is using to work at Innovations for Poverty Action. He has recently begun to analyze how remote learning interventions in Bangladesh have performed since Covid-19. Many teachers are concerned, since disruptions in childhood education can lead to intergenerational poverty. “We have tried interventions that connect students with teachers, provide discounted data packages, and send information on where to access adaptive learning technologies and other remote learning resources,” he says. “It will be interesting to see the results. This is a truly urgent topic, as I don’t believe Covid-19 will be the last pandemic of our lifetime.”
Enhancing education has always been one of Bhandtivej’s priority interests. He sees education as the gateway that brings a person’s innate talent to light. “There is a misconception in many developing countries that disabled people cannot learn, which is untrue,” says Bhandtivej. “Education provides a critical signal to future employers and overall society that we can work and perform just as well, as long as we have appropriate accommodations.”
In the future, Bhandtivej plans on returning to Thailand to continue his journey as a policy researcher. While he has many issues he would like to tackle, his true purpose still lies in doing work that makes a positive impact on people’s lives. “My hope is that my story encourages people to think of not only what they are capable of achieving themselves, but also what they can do for others.”
“You may think you are just a small creature on a large planet. That you have just a tiny role to play. But I think — even if we are just a small part — whatever we can do to make life better for our communities, for our country, for our planet ... it’s worth it.”
Council of Europe’s Actions Belie Its Pledges to Involve Civil Society in Development of Cross-Border Police Powers Treaty
As the Council of Europe’s flawed cross-border surveillance treaty moves through its final phases of approval, time is running out to ensure cross-border investigations occur with robust privacy and human rights safeguards in place. The innocuously named “Second Additional Protocol” to the Council of Europe’s (CoE) Cybercrime Convention seeks to set a new standard for law enforcement investigations—including those seeking access to user data—that cross international boundaries, and would grant a range of new international police powers.
But the treaty’s drafting process has been deeply flawed, with civil society groups, defense attorneys, and even data protection regulators largely sidelined. We hope that the CoE’s Parliamentary Assembly (PACE), which is next in line to review the draft Protocol, will give us the opportunity to present our privacy and human rights concerns, and will take them seriously, as it formulates its opinion and recommendations before the CoE’s final body of approval, the Committee of Ministers, decides the Protocol’s fate. According to the Terms of Reference for the preparation of the draft Protocol, the Committee of Ministers may consider inviting parties “other than member States of the Council of Europe to participate in this examination.”
The CoE relies on committees to generate the core draft of treaty texts. In this instance, the CoE’s Cybercrime Committee (T-CY) Plenary negotiated and drafted the Protocol’s text with the assistance of a drafting group consisting of representatives of State Parties. The process, however, has been fraught with problems. To begin with, the T-CY’s Terms of Reference for the drafting process drove a lengthy, non-inclusive procedure that relied on closed sessions (Article 4.3 of the T-CY Rules of Procedure). While the Terms of Reference allow the T-CY to invite individual subject matter experts on an ad hoc basis, key voices such as data protection authorities, civil society experts, and criminal defense lawyers were mostly sidelined. Instead, the process has been largely commandeered by law enforcement, prosecutors, and public safety officials.
Earlier in the process, in April 2018, EFF, CIPPIC, EDRI and 90 civil society organizations from across the globe requested the COE Secretariat General provide more transparency and meaningful civil society participation as the treaty was being negotiated and drafted—and not just during the CoE’s annual and somewhat exclusive Octopus Conferences. However, since T-CY began its consultation process in July 2018, input from external stakeholders has been limited to Octopus Conference participation and some written comments. Civil society organizations were not included in the plenary groups and subgroups where text development actually occurs, nor was our input meaningfully incorporated.
Compounding matters, the T-CY’s final online consultation, where the near-final draft text of the Protocol was first presented to external stakeholders, provided only a 2.5-week window for input. The draft text included many new and complex provisions, including the Protocol’s core privacy safeguards, but excluded key elements such as the explanatory text that would normally accompany those safeguards. As was flagged by civil society, privacy regulators, and even the CoE’s own data protection committee, two and a half weeks is not enough time to provide meaningful feedback on such a complex international treaty. More than anything, this short consultation window gave the impression that the T-CY’s external consultations were merely performative.
Despite these myriad shortcomings, the Committee of Ministers (the CoE’s final statutory decision-making body, comprising member states’ foreign affairs ministers) responded to our process concerns by arguing that external stakeholders had been consulted during the Protocol’s drafting process. More oddly still, the Committee of Ministers justified the demonstrably curtailed final consultation period by invoking its desire to complete the Protocol by the 20th anniversary of the CoE’s Budapest Cybercrime Convention (that is, by November 2021).
With great respect, we disagree with the Ministers’ response. If the T-CY wished to meet its November 2021 deadline, it had many options open to it. For instance, it could have included external stakeholders from civil society and privacy regulators in its drafting process, as it had been urged to do on multiple occasions.
More importantly, this is a complex treaty with wide ranging implications for privacy and human rights in countries across the world. It is important to get it right, and ensure that concerns from civil society and privacy regulators are taken seriously and directly incorporated into the text. Unfortunately, as the text stands, it raises many substantive problems, including the lack of systematic judicial oversight in cross-border investigations and the adoption of intrusive identification powers that pose a direct threat to online anonymity. The Protocol also undermines key data protection safeguards relating to data transfers housed in central instruments like the European Union’s Law Enforcement Directive and the General Data Protection Regulation.
The Protocol now stands with the CoE’s PACE, which will issue an opinion on it and may recommend additional changes to its substantive elements. It will then fall to the CoE’s Committee of Ministers to decide whether to accept any of PACE’s recommendations and adopt the Protocol, a step we still anticipate will occur in November. Together with CIPPIC, EDRI, Derechos Digitales, and NGOs around the world, we hope that PACE takes our concerns seriously, and that the final treaty puts privacy and human rights first.
MIT has granted tenure to five faculty members in the MIT School of Science in the departments of Brain and Cognitive Sciences, Chemistry, and Physics.
Physicist Joseph Checkelsky investigates exotic electronic states of matter through the synthesis, measurement, and control of solid-state materials. His research aims to uncover new physical phenomena that expand the boundaries of understanding of quantum mechanical condensed matter systems and open doorways to new technologies by realizing emergent electronic and magnetic functionalities. Checkelsky joined the Department of Physics in 2014 after a postdoc appointment at Japan’s Institute for Physical and Chemical Research and a lectureship at the University of Tokyo. He earned a bachelor’s degree in physics from Harvey Mudd College in 2004 and a doctoral degree in physics from Princeton University in 2010.
A molecular neurobiologist and geneticist, Myriam Heiman studies the selective vulnerability and pathophysiology seen in neurodegenerative diseases of the brain’s basal ganglia, including Huntington’s disease and Parkinson’s disease. Using a revolutionary transcriptomic technique called translating ribosome affinity purification, she aims to understand the early molecular changes that eventually lead to cell death in these diseases. Heiman joined the Department of Brain and Cognitive Sciences, the Picower Institute for Learning and Memory, and the Broad Institute of Harvard and MIT in 2011 after completing her postdoctoral training at The Rockefeller University. She holds a PhD from Johns Hopkins University and a BA from Princeton University.
Particle physicist Kerstin Perez is interested in using cosmic particles to look beyond Standard Model physics, in particular for evidence of dark matter interactions. Her work focuses on opening sensitivity to unexplored cosmic signatures at the intersection of particle physics, astrophysics, and advanced instrumental techniques. Perez joined the Department of Physics in 2016, after a National Science Foundation astronomy and astrophysics postdoctoral fellowship at Columbia University and a faculty appointment at Haverford College. She earned her BA in physics from Columbia University in 2005 and her PhD from Caltech in 2011.
Alexander Radosevich works at the interface of inorganic and organic chemistry to design new chemical reactions. In particular, his interests concern the invention of compositionally new classes of molecular catalysts based on inexpensive and Earth-abundant elements of the p-block. This research explores the connection between molecular structure and reactivity in an effort to discover efficient and sustainable new approaches to chemical synthesis. Radosevich returned in 2016 to the MIT Department of Chemistry, where he had earlier held a postdoctoral appointment, after serving on the faculty at The Pennsylvania State University. He received a BS from the University of Notre Dame in 2002 and a PhD from the University of California at Berkeley in 2007.
Alex K. Shalek creates and implements new experimental and computational approaches to identify the cellular and molecular features that inform tissue-level function and dysfunction across the spectrum of human health and disease. This encompasses both the development of broadly enabling technologies, such as Seq-Well, and their application to characterize, model, and rationally control complex multicellular systems. In addition to sharing this toolbox to empower mechanistic scientific inquiry across the global research community, Shalek is applying it to uncover principles that inform a wide range of problems in immunology, infectious diseases, and cancer. Shalek joined the Department of Chemistry and the Institute for Medical Engineering and Science in 2014 after postdoctoral training at Harvard University and the Broad Institute. He received his BA in chemical physics at Columbia University in 2004, followed by a PhD from Harvard University in 2011.
Weather is a tricky science — even more so at very high altitudes, with a mix of plasma and neutral particles.
In a sudden stratospheric warming (SSW) — a large meteorological disturbance in which the temperature of the polar stratosphere rises rapidly — the polar vortex, the band of winds circling the pole, is weakened. SSWs also have profound atmospheric effects at great distances, causing changes in the hemisphere opposite the original SSW — changes that extend all the way up to the thermosphere and ionosphere.
A study published on July 16 in Geophysical Research Letters by MIT Haystack Observatory’s Larisa Goncharenko and colleagues examines the effects of a recent major Antarctic SSW on the Northern Hemisphere by studying changes observed in the upper atmosphere over North America and Europe.
In an SSW-caused anomaly, changes over one pole drive changes in the opposite hemisphere. This important interhemispheric linkage shows up as drastic shifts at altitudes greater than 100 km — for example, in total electron content (TEC) measurements as well as in the thermospheric O/N2 ratio.
SSWs are more frequent over the Arctic, where they cause TEC and other related anomalies in the Southern Hemisphere, so most observations of the linkage have been made in that direction. Because Antarctic SSWs are less common, there are fewer opportunities to study their effects on the Northern Hemisphere. However, the greater density of TEC observing sites in the Northern Hemisphere allows these upper-atmospheric anomalies to be measured precisely when they do occur.
In September 2019, an extreme, record-breaking SSW event occurred over Antarctica. Goncharenko and colleagues found significant resulting changes in the upper atmosphere at mid-latitudes over the Northern Hemisphere following this event; more observations are available for this region than for the Southern Hemisphere. The changes were notable not only in severity, but also because they were limited to a narrow (20–40 degree) longitude range, differed between North America and Europe, and persisted for a long time.
In the figure above, colored areas show where TEC levels shifted over North America and Europe in the afternoon; red indicates an increase of up to 80 percent over regular baseline levels, and blue a decrease of up to 40 percent. This TEC shift persisted throughout September 2019 over the western United States, but was short-lived over Europe, indicating different mechanisms at play.
The authors suggest that a change in the thermospheric zonal (east–west) winds is one reason for the variance between regions. Another factor is differences in magnetic declination angle: where declination is greater, the zonal winds can more efficiently transport plasma to higher or lower altitudes, leading to the build-up or depletion of plasma density.
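The geometry behind this declination effect can be sketched numerically. In a standard textbook simplification (not taken from the study itself), a zonal neutral wind u pushes plasma along the inclined magnetic field line, producing a vertical drift proportional to sin D · sin I · cos I, where D is the magnetic declination and I the inclination (dip) angle. The wind speed and angles below are purely illustrative:

```python
import math

# Vertical plasma drift driven by a zonal (eastward) neutral wind, in the
# common thin-shell approximation: w = u * sin(D) * sin(I) * cos(I).
# All values below are illustrative, not measurements from the study.
def vertical_drift(u_zonal_m_s, declination_deg, inclination_deg):
    D = math.radians(declination_deg)
    I = math.radians(inclination_deg)
    return u_zonal_m_s * math.sin(D) * math.sin(I) * math.cos(I)

# Same 50 m/s wind and dip angle; only the declination differs.
w_small_D = vertical_drift(50.0, 2.0, 60.0)   # small declination (e.g., parts of Europe)
w_large_D = vertical_drift(50.0, 15.0, 60.0)  # larger declination (e.g., western U.S.)
```

With the larger declination, the same wind lifts plasma several times faster, which is the sense in which zonal winds "more efficiently transport plasma" in high-declination regions.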
More study is needed to determine the precise extent to which these factors affect the linkage between polar stratospheric events and near-Earth space in the opposite hemisphere. These studies remain a challenge, given the relative rarity of Antarctic SSWs and sparse availability of ionospheric data in the Southern Hemisphere.
As part of a larger redesign, the payment app Venmo has discontinued its public “global” feed. That means the Venmo app will no longer show you strangers’ transactions—or show strangers your transactions—all in one place. This is a big step in the right direction. But, as the redesigned app rolls out to users over the next few weeks, it’s unclear what Venmo’s defaults will be going forward. If Venmo and parent company PayPal are taking privacy seriously, the app should make privacy the default, not just an option still buried in the settings.
Currently, all transactions and friends lists on Venmo are public by default, painting a detailed picture of who you live with, where you like to hang out, who you date, and where you do business. It doesn’t take much imagination to come up with all the ways this could cause harm to real users, and the gallery of Venmo privacy horrors is well-documented at this point.
However, Venmo apparently has no plans to make transactions private by default at this point. That would squander the opportunity it has right now to finally be responsive to the concerns of Venmo users, journalists, and advocates like EFF and Mozilla. We hope Venmo reconsiders.
Even a seemingly positive move from “public” to “friends-only” defaults would maintain much of Venmo’s privacy-invasive status quo. That’s in large part because of Venmo’s track record of aggressively hoovering up users’ phone contacts and Facebook friends to populate their Venmo friends lists. Venmo’s installation process nudges users towards connecting their phone contacts and Facebook friends to Venmo. From there, the auto-syncing can continue silently and persistently, stuffing your Venmo friends list with people you did not affirmatively choose to connect with on the app. In some cases, there is no option to turn this auto-syncing off. There’s nothing “social” about sharing your credit card statement with a random subset of your phone contacts and Facebook friends, and Venmo should not make that kind of disclosure the default.
It’s also unclear if Venmo will continue to offer a “public” setting now that the global feed is gone. Public settings would still expose users’ activities on their individual profile pages and on Venmo’s public API, leaving them vulnerable to the kind of targeted snooping that Venmo has become infamous for.
We were pleased to see Venmo recently take the positive step of giving users settings to hide their friends lists. Throwing out the creepy global feed is another positive step. Venmo still has time to make transactions and friends lists private by default, and we hope it makes the right choice.
If you haven’t already, change your transaction and friends list settings to private by following the steps in this post.
As the Covid-19 pandemic has shown, we live in a richly connected world, facilitating not only the efficient spread of a virus but also of information and influence. What can we learn by analyzing these connections? This is a core question of network science, a field of research that models interactions across physical, biological, social, and information systems to solve problems.
The 2021 Graph Exploitation Symposium (GraphEx), hosted by MIT Lincoln Laboratory, brought together top network science researchers to share the latest advances and applications in the field.
"We explore and identify how exploitation of graph data can offer key technology enablers to solve the most pressing problems our nation faces today," says Edward Kao, a symposium organizer and technical staff in Lincoln Laboratory's AI Software Architectures and Algorithms Group.
The themes of the virtual event revolved around some of the year's most relevant issues, such as analyzing disinformation on social media, modeling the pandemic's spread, and using graph-based machine learning models to speed drug design.
"The special sessions on influence operations and Covid-19 at GraphEx reflect the relevance of network and graph-based analysis for understanding the phenomenology of these complicated and impactful aspects of modern-day life, and also may suggest paths forward as we learn more and more about graph manipulation," says William Streilein, who co-chaired the event with Rajmonda Caceres, both of Lincoln Laboratory.
Several presentations at the symposium focused on the role of network science in analyzing influence operations (IO), or organized attempts by state and/or non-state actors to spread disinformation narratives.
Lincoln Laboratory researchers have been developing tools to classify and quantify the influence of social media accounts that are likely IO accounts, such as those willfully spreading false Covid-19 treatments to vulnerable populations.
"A cluster of IO accounts acts as an echo chamber to amplify the narrative. The vulnerable population is then engaging in these narratives," says Erika Mackin, a researcher developing the tool, called RIO or Reconnaissance of Influence Operations.
To classify IO accounts, Mackin and her team trained an algorithm to detect probable IO accounts in Twitter networks built around a specific hashtag or narrative. One example they studied was #MacronLeaks, a disinformation campaign targeting Emmanuel Macron during the 2017 French presidential election. The algorithm labels accounts within this network as IO on the basis of several features, such as the number of interactions with foreign news accounts, the number of links tweeted, or the number of languages used. Their model then uses a statistical approach to score an account's level of influence in spreading the narrative within that network.
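As an illustration of this kind of feature-based scoring — not the actual RIO model, whose features, weights, and training procedure are not described in that detail here — a minimal sketch might combine hypothetical per-account counts through a logistic function:

```python
import math

# Toy IO-likelihood score. The feature names echo those mentioned in the
# article, but the weights and bias are invented for illustration only.
def io_score(foreign_news_interactions, links_tweeted, languages_used,
             weights=(0.8, 0.05, 0.6), bias=-4.0):
    """Return a 0-1 score: higher means more IO-like behavior."""
    z = (weights[0] * foreign_news_interactions
         + weights[1] * links_tweeted
         + weights[2] * languages_used
         + bias)
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to (0, 1)

# A heavily linking, multilingual account scores far higher than a quiet one.
high = io_score(foreign_news_interactions=6, links_tweeted=40, languages_used=3)
low = io_score(foreign_news_interactions=0, links_tweeted=2, languages_used=1)
```

In a real system the weights would be learned from labeled accounts rather than hand-set, and the influence score would be computed separately from the classification step.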
The team has found that their classifier outperforms existing detectors of IO accounts, because it can identify both bot accounts and human-operated ones. They've also discovered that IO accounts that pushed the 2017 French election disinformation narrative largely overlap with accounts influentially spreading Covid-19 pandemic disinformation today. "This suggests that these accounts will continue to transition to disinformation narratives," Mackin says.
Throughout the Covid-19 pandemic, leaders have been looking to epidemiological models, which predict how disease will spread, to make sound decisions. Alessandro Vespignani, director of the Network Science Institute at Northeastern University, has been leading Covid-19 modeling efforts in the United States, and shared a keynote on this work at the symposium.
Besides taking into account the biological facts of the disease, such as its incubation period, Vespignani's model is especially powerful in its inclusion of community behavior. To run realistic simulations of disease spread, he develops "synthetic populations" that are built by using publicly available, highly detailed datasets about U.S. households. "We create a population that is not real, but is statistically real, and generate a map of the interactions of those individuals," he says. This information feeds back into the model to predict the spread of the disease.
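A toy version of this idea — simulating spread over a contact network rather than Vespignani's actual synthetic-population model — can be sketched as follows. The random contact network and the infection and recovery parameters are invented for illustration:

```python
import random

random.seed(0)  # reproducible toy run

# A "synthetic population": individuals are nodes, contacts are edges.
N = 200
contacts = {i: random.sample([j for j in range(N) if j != i], 8)
            for i in range(N)}

state = {i: "S" for i in range(N)}  # S=susceptible, I=infected, R=recovered
state[0] = "I"                      # seed one infection
beta, gamma = 0.1, 0.2              # per-contact infection prob., daily recovery prob.

for _ in range(60):                 # simulate 60 days
    infected = [i for i, s in state.items() if s == "I"]
    for i in infected:
        for j in contacts[i]:       # each infected person meets their contacts
            if state[j] == "S" and random.random() < beta:
                state[j] = "I"
        if random.random() < gamma:
            state[i] = "R"

ever_infected = sum(1 for s in state.values() if s != "S")
```

A realistic model layers in household, school, and workplace structure from census-like data, age-dependent behavior, and disease parameters such as the incubation period; the skeleton of the simulation, however, is the same.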
Today, Vespignani is considering how to integrate genomic analysis of the virus into this kind of population modeling in order to understand how variants are spreading. "It's still a work in progress that is extremely interesting," he says, adding that this approach has been useful in modeling the dispersal of the Delta variant of SARS-CoV-2.
As researchers model the virus' spread, Lucas Laird at Lincoln Laboratory is considering how network science can be used to design effective control strategies. He and his team are developing a model for customizing strategies for different geographic regions. The effort was spurred by the differences in Covid-19 spread across U.S. communities, and what the researchers found to be a gap in intervention modeling to address those differences.
As examples, they applied their planning algorithm to three counties in Florida, Massachusetts, and California. Taking into account the characteristics of a specific geographic center, such as the number of susceptible individuals and number of infections there, their planner institutes different strategies in those communities throughout the outbreak duration.
"Our approach eradicates disease in 100 days, but it also is able to do it with much more targeted interventions than any of the global interventions. In other words, you don't have to shut down a full country." Laird adds that their planner offers a "sandbox environment" for exploring intervention strategies in the future.
Machine learning with graphs
Graph-based machine learning is receiving increasing attention for its potential to "learn" the complex relationships within graph-structured data, and thus extract new insights or predictions about those relationships. This interest has given rise to a new class of algorithms called graph neural networks. Today, graph neural networks are being applied in areas such as drug discovery and material design, with promising results.
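At their core, many graph neural networks repeat a simple "message passing" step: each node aggregates its neighbors' feature vectors and transforms the result with learned weights. A minimal sketch of one such step, with a toy four-node graph and random weights standing in for learned ones:

```python
import numpy as np

np.random.seed(0)

# Adjacency matrix of a small undirected graph (1 = edge between nodes).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)   # 3 input features per node
W = np.random.rand(3, 5)   # weight matrix (learned in practice, random here)

A_hat = A + np.eye(4)                       # add self-loops so a node keeps its own features
deg = A_hat.sum(axis=1, keepdims=True)      # node degrees for mean aggregation
H = np.maximum((A_hat / deg) @ X @ W, 0.0)  # average neighbors, transform, ReLU
```

Stacking several such layers lets information propagate across multi-hop neighborhoods, which is how these models capture, say, how a drug molecule's substructures jointly determine its behavior.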
"We can now apply deep learning much more broadly, not only to medical images and biological sequences. This creates new opportunities in data-rich biology and medicine," says Marinka Zitnik, an assistant professor at Harvard University who presented her research at GraphEx.
Zitnik's research focuses on the rich networks of interactions between proteins, drugs, disease, and patients, at the scale of billions of interactions. One application of this research is discovering drugs to treat diseases with no or few approved drug treatments, such as for Covid-19. In April, Zitnik's team published a paper on their research that used graph neural networks to rank 6,340 drugs for their expected efficacy against SARS-CoV-2, identifying four that could be repurposed to treat Covid-19.
At Lincoln Laboratory, researchers are similarly applying graph neural networks to the challenge of designing advanced materials, such as those that can withstand extreme radiation or capture carbon dioxide. Like the process of designing drugs, the trial-and-error approach to materials design is time-consuming and costly. The laboratory's team is developing graph neural networks that can learn relationships between a material’s crystalline structure and its properties. This network can then be used to predict a variety of properties from any new crystal structure, greatly speeding up the process of screening materials with desired properties for specific applications.
"Graph representation learning has emerged as a rich and thriving research area for incorporating inductive bias and structured priors during the machine learning process, with broad applications such as drug design, accelerated scientific discovery, and personalized recommendation systems," Caceres says.
A vibrant community
Lincoln Laboratory has hosted the GraphEx Symposium annually since 2010, with the exception of last year's cancellation due to Covid-19. "One key takeaway is that despite the postponement from last year and the need to be virtual, the GraphEx community is as vibrant and active as it's ever been," Streilein says. "Network-based analysis continues to expand its reach and is applied to ever-more important areas of science, society, and defense with increasing impact."
In addition to those from Lincoln Laboratory, technical committee members and co-chairs of the GraphEx Symposium included researchers from Harvard University, Arizona State University, Stanford University, Smith College, Duke University, the U.S. Department of Defense, and Sandia National Laboratories.
MIT physicists have observed signs of a rare type of superconductivity in a material called magic-angle twisted trilayer graphene. In a study appearing today in Nature, the researchers report that the material exhibits superconductivity at surprisingly high magnetic fields of up to 10 Tesla, which is three times higher than what the material is predicted to endure if it were a conventional superconductor.
The results strongly imply that magic-angle trilayer graphene, which was initially discovered by the same group, is a very rare type of superconductor, known as a “spin-triplet,” that is impervious to high magnetic fields. Such exotic superconductors could vastly improve technologies such as magnetic resonance imaging, which uses superconducting wires under a magnetic field to resonate with and image biological tissue. MRI machines are currently limited to magnetic fields of 1 to 3 Tesla. If they could be built with spin-triplet superconductors, MRI could operate under higher magnetic fields to produce sharper, deeper images of the human body.
The new evidence of spin-triplet superconductivity in trilayer graphene could also help scientists design stronger superconductors for practical quantum computing.
“The value of this experiment is what it teaches us about fundamental superconductivity, about how materials can behave, so that with those lessons learned, we can try to design principles for other materials which would be easier to manufacture, that could perhaps give you better superconductivity,” says Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT.
His co-authors on the paper include postdoc Yuan Cao and graduate student Jeong Min Park at MIT, and Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
Superconducting materials are defined by their super-efficient ability to conduct electricity without losing energy. When exposed to an electric current, electrons in a superconductor couple up in “Cooper pairs” that then travel through the material without resistance, like passengers on an express train.
In the vast majority of superconductors, these passenger pairs have opposite spins, with one electron spinning up and the other down — a configuration known as a “spin-singlet.” These pairs happily speed through a superconductor, except under high magnetic fields, which can shift the energy of each electron in opposite directions, pulling the pair apart. In this way, and through other mechanisms, high magnetic fields can derail superconductivity in conventional spin-singlet superconductors.
“That’s the ultimate reason why in a large-enough magnetic field, superconductivity disappears,” Park says.
But a handful of exotic superconductors are impervious to magnetic fields up to very large strengths. These materials superconduct through pairs of electrons with the same spin — a property known as “spin-triplet.” When exposed to high magnetic fields, the energies of both electrons in a Cooper pair shift in the same direction, so the pair is not pulled apart and continues superconducting unperturbed, regardless of the field strength.
Jarillo-Herrero’s group was curious whether magic-angle trilayer graphene might harbor signs of this more unusual spin-triplet superconductivity. The team has produced pioneering work in the study of graphene moiré structures — layers of atom-thin carbon lattices that, when stacked at specific angles, can give rise to surprising electronic behaviors.
The researchers initially reported such curious properties in two angled sheets of graphene, which they dubbed magic-angle bilayer graphene. They soon followed up with tests of trilayer graphene, a sandwich configuration of three graphene sheets that turned out to be even stronger than its bilayer counterpart, retaining superconductivity at higher temperatures. When the researchers applied a modest magnetic field, they noticed that trilayer graphene was able to superconduct at field strengths that would destroy superconductivity in bilayer graphene.
“We thought, this is something very strange,” Jarillo-Herrero says.
A super comeback
In their new study, the physicists tested trilayer graphene’s superconductivity under increasingly high magnetic fields. They fabricated the material by peeling away atom-thin layers of carbon from a block of graphite, stacking three layers together, and rotating the middle one by 1.56 degrees with respect to the outer layers. They attached an electrode to either end of the material to run a current through it and measure any energy lost in the process. Then they turned on a large magnet in the lab, orienting its field parallel to the material.
As they increased the magnetic field around trilayer graphene, they observed that superconductivity held strong up to a point before disappearing, but then curiously reappeared at higher field strengths — a comeback that is highly unusual and not known to occur in conventional spin-singlet superconductors.
“In spin-singlet superconductors, if you kill superconductivity, it never comes back — it’s gone for good,” Cao says. “Here, it reappeared again. So this definitely says this material is not spin-singlet.”
They also observed that after “re-entry,” superconductivity persisted up to 10 Tesla, the maximum field strength that the lab’s magnet could produce. This is about three times higher than what the superconductor should withstand if it were a conventional spin-singlet, according to the Pauli limit, a theory that predicts the maximum magnetic field at which a material can retain superconductivity.
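In the standard weak-coupling BCS picture, the Pauli (or Chandrasekhar-Clogston) limit works out to roughly 1.86 tesla per kelvin of critical temperature, since the Zeeman energy of the electron spins must overcome the pairing gap. The critical temperature used below is an assumed illustrative value, not a measured figure from the paper:

```python
# Pauli paramagnetic limit for a conventional spin-singlet superconductor:
# B_P [tesla] ~ 1.86 * T_c [kelvin] (weak-coupling BCS estimate).
def pauli_limit_tesla(tc_kelvin):
    return 1.86 * tc_kelvin

tc = 1.8                           # assumed critical temperature in kelvin (illustrative)
b_pauli = pauli_limit_tesla(tc)    # estimated pair-breaking field in tesla
```

With this assumed transition temperature of a few kelvin, the estimated limit is a few tesla, so surviving to 10 Tesla would indeed be roughly a factor of three beyond it, consistent with the comparison quoted above.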
Trilayer graphene’s reappearance of superconductivity, paired with its persistence at higher magnetic fields than predicted, rules out the possibility that the material is a run-of-the-mill superconductor. Instead, it is likely a very rare type, possibly a spin-triplet, hosting Cooper pairs that speed through the material, impervious to high magnetic fields. The team plans to drill down on the material to confirm its exact spin state, which could help to inform the design of more powerful MRI machines, and also more robust quantum computers.
“Regular quantum computing is super fragile,” Jarillo-Herrero says. “You look at it and, poof, it disappears. About 20 years ago, theorists proposed a type of topological superconductivity that, if realized in any material, could [enable] a quantum computer where states responsible for computation are very robust. That would give infinite more power to do computing. The key ingredient to realize that would be spin-triplet superconductors, of a certain type. We have no idea if our type is of that type. But even if it’s not, this could make it easier to put trilayer graphene with other materials to engineer that kind of superconductivity. That could be a major breakthrough. But it’s still super early.”
This research was supported by the U.S. Department of Energy, the National Science Foundation, the Gordon and Betty Moore Foundation, the Fundacion Ramon Areces, and the CIFAR Quantum Materials Program.
A critical challenge in meeting the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius is to vastly reduce carbon dioxide (CO2) and other greenhouse gas emissions generated by the most energy-intensive industries. According to a recent report by the International Energy Agency, these industries — cement, iron and steel, chemicals — account for about 20 percent of global CO2 emissions. Emissions from these industries are notoriously difficult to abate because, in addition to emissions associated with energy use, a significant portion of industrial emissions come from the process itself.
For example, in the cement industry, about half the emissions come from the decomposition of limestone into lime and CO2. While a shift to zero-carbon energy sources such as solar or wind-powered electricity could lower CO2 emissions in the power sector, there are no easy substitutes for emissions-intensive industrial processes.
Enter industrial carbon capture and storage (CCS). This technology, which extracts point-source carbon emissions and sequesters them underground, has the potential to remove up to 90-99 percent of CO2 emissions from an industrial facility, including both energy-related and process emissions. That raises the question: Might CCS alone enable hard-to-abate industries to continue to grow while eliminating nearly all of the CO2 emissions they generate from the atmosphere?
The answer is an unequivocal yes in a new study in the journal Applied Energy co-authored by researchers at the MIT Joint Program on the Science and Policy of Global Change, MIT Energy Initiative, and ExxonMobil.
Using an enhanced version of the MIT Economic Projection and Policy Analysis (EPPA) model that represents different industrial CCS technology choices — and assuming that CCS is the only greenhouse gas emissions mitigation option available to hard-to-abate industries — the study assesses the long-term economic and environmental impacts of CCS deployment under a climate policy aimed at capping the rise in average global surface temperature at 2 C above preindustrial levels.
The researchers find that absent industrial CCS deployment, the global costs of implementing the 2 C policy are higher by 12 percent in 2075 and 71 percent in 2100, relative to policy costs with CCS. They conclude that industrial CCS enables continued growth in the production and consumption of energy-intensive goods from hard-to-abate industries, along with dramatic reductions in the CO2 emissions they generate. Their projections show that as industrial CCS gains traction mid-century, this growth occurs globally as well as within geographical regions (primarily in China, Europe, and the United States) and the cement, iron and steel, and chemical sectors.
“Because it can enable deep reductions in industrial emissions, industrial CCS is an essential mitigation option in the successful implementation of policies aligned with the Paris Agreement’s long-term climate targets,” says Sergey Paltsev, the study’s lead author and a deputy director of the MIT Joint Program and senior research scientist at the MIT Energy Initiative. “As the technology advances, our modeling approach offers decision-makers a pathway for projecting the deployment of industrial CCS across industries and regions.”
But such advances will not take place without substantial, ongoing funding.
“Sustained government policy support across decades will be needed if CCS is to realize its potential to promote the growth of energy-intensive industries and a stable climate,” says Howard Herzog, a co-author of the study and senior research engineer at the MIT Energy Initiative.
The researchers also find that advanced CCS options such as cryogenic carbon capture (CCC), in which extracted CO2 is cooled to solid form using far less power than conventional coal- and gas-fired CCS technologies, could help expand the use of CCS in industrial settings through further production cost and emissions reductions.
The study was supported by sponsors of the MIT Joint Program and by ExxonMobil through its membership in the MIT Energy Initiative.
Professor Emeritus Justin “Jake” Kerwin, an expert in propeller design and ship hydrodynamics, dies at 90
Justin “Jake” Kerwin ’53, SM ’54, PhD ’61, professor emeritus of naval architecture, passed away at the age of 90 on May 23. Kerwin, who served on MIT’s ocean engineering faculty for four decades, was an internationally recognized expert in propeller design, ship hydrodynamics, and predicting racing yacht performance.
Kerwin had an international upbringing, growing up in the Netherlands, London, and eventually New York. He first arrived at MIT as an undergraduate in 1949. In addition to studying naval architecture, Kerwin was an avid sailor and member of the MIT Sailing Team. His passion for sailing would carry throughout his career.
After receiving his bachelor’s degree from MIT in 1953 and his master’s degree in 1954, he was named a Fulbright Scholar. For his scholarship, he returned to the Netherlands, where he studied marine propeller hydrodynamics at the Delft University of Technology. Upon completing his Fulbright, Kerwin joined the U.S. Air Force as a first lieutenant. During his time in the Air Force, he worked on rescue boats.
Kerwin returned to MIT in 1957 to pursue his doctoral degree in marine propeller hydrodynamics while serving as a full-time lecturer. He was invited to join the then Department of Naval Architecture and Marine Engineering (now part of the Department of Mechanical Engineering) as assistant professor in 1960, one year prior to receiving his PhD.
For 40 years, Kerwin led the marine propeller research program at MIT. He was a pioneer in the use of computational techniques for marine propeller design and helped develop an open-source code used in propeller and turbine design. He also served as director of the Marine Hydrodynamics Water Tunnel, a water tank originally used for testing ship propellers.
In addition to propeller research, Kerwin conducted research on his lifelong passion of sailing. Alongside fellow faculty member Professor J.N. “Nick” Newman, he co-organized the H. Irving Pratt Ocean Racing Handicapping Project. The project greatly improved predictions of the speed of sailing yachts and resulted in the International Measurement System of handicapping yachts during races. He also pursued his passion in his personal life, often sailing and racing his sailboat “Chantey” with his family.
Throughout his long career, Kerwin was celebrated with a number of prestigious awards. He was a member of the Society of Naval Architects and Marine Engineers (SNAME) and received SNAME’s Joseph H. Linnard Prize for exceptional publications four times. Kerwin was awarded the David W. Taylor Medal for outstanding achievements in naval architecture in 1992. Several years later, he was honored with the Gibbs Brothers Medal from the National Academy of Sciences for outstanding contributions in the field of naval architecture and marine engineering. In 2000, he was elected to the National Academy of Engineering.
After retiring as professor emeritus in 2001, Kerwin and his wife Marilyn played jazz alongside fellow retired MIT ocean engineering faculty in a band known as the “Ancient Mariners.” He served as pianist and she played bass. The band was extremely active, playing gigs across New England and throughout the US.
Kerwin’s beloved wife Marilyn passed away just one month after him, on June 21. They are survived by their daughter Melinda and son John. A private celebration of life event has been organized by the Kerwin family.
On June 17th, the best legal minds in the Bay Area gathered together for a night filled with tech law trivia—but there was a twist! With in-person events still on the horizon, EFF's 13th Annual Cyberlaw Trivia Night moved to a new browser-based virtual space, custom built in Gather. This 2D environment allowed guests to interact with other participants using video, audio, and text chat, based on proximity in the room.
EFF's staff joined forces to craft the questions, pulling details from the rich canon of privacy, free speech, and intellectual property law to create four rounds of trivia for this year's seven competing teams.
As the evening began, contestants explored the virtual space and caught up with each other, but the time for trivia would soon be at hand! After welcoming everyone to the event, our intrepid Quiz Master Kurt Opsahl introduced our judges Cindy Cohn, Sophia Cope, and Mukund Rathi. Attendees were then asked to meet at their team's private table, allowing them to freely discuss answers without other teams being able to overhear, and so the trivia began!
Everyone got off to a great start for the General Round 1 questions, featuring answers that ranged from winged horses to Snapchat filters.
For the Intellectual Property Round 2, the questions proved more challenging, but the teams quickly rallied for the Privacy & Free Speech Round 3. With no clear winners so far, teams entered the final 4th round hoping to break away from the pack and secure 1st place.
But a clean win was not to be!
Durie Tangri's team "The Wrath of (Lina) Khan" and Fenwick's team "The NFTs: Notorious Fenwick Trivia" were still tied for first! Always prepared for such an occurrence, the teams headed into a bonus Tie-Breaker round to settle the score. Or so we thought...
After extensive deliberation, the judges arrived at their decision and announced "The Wrath of (Lina) Khan" had the closest to correct answer and were the 1st place winners, with the "The NFTs: Notorious Fenwick Trivia" coming in 2nd, and Ridder, Costa & Johnstone's team "We Invented Email" coming in 3rd. Easy, right? No!
Fenwick appealed to the judges, arguing that under Official "Price is Right" Rules, the answer closest to correct without going over should receive the tie-breaker point: cue more extensive deliberation (lawyers). Turns out...they had a pretty good point. Motion for Reconsideration: Granted!
But what to do when the winners had already been announced?
Two first place winners, of course! Which also meant that Ridder, Costa & Johnstone's team "We Invented Email" moved into the 2nd place spot, and Facebook's team "Whatsapp" were the new 3rd place winners! Whew! Big congratulations to both winners, enjoy your bragging rights!
EFF's legal interns also joined in the fun, and their team name "EFF the Bluebook" followed the proud tradition of having an amazing team name, despite The Rules stating they were unable to formally compete.
EFF hosts the Cyberlaw Trivia Night to gather those in the legal community who help protect online freedom for their users. Among the many firms that continue to dedicate their time, talent, and resources to the cause, we would especially like to thank Durie Tangri LLP; Fenwick; Ridder, Costa & Johnstone LLP; and Wilson Sonsini Goodrich & Rosati LLP for sponsoring this year’s Bay Area event.
If you are an attorney working to defend civil liberties in the digital world, consider joining EFF's Cooperating Attorneys list. This network helps EFF connect people to legal assistance when we are unable to assist. Interested lawyers reading this post can go here to join the Cooperating Attorneys list.
Are you interested in attending or sponsoring an upcoming Trivia Night? Please email email@example.com for more information.
The Indian government’s new Intermediary Guidelines and Digital Media Ethics Code (“2021 Rules”) pose huge problems for free expression and Internet users’ privacy. They include dangerous requirements for platforms to identify the origins of messages and pre-screen content, which fundamentally breaks strong encryption for messaging tools. Though WhatsApp and others are challenging the rules in court, the 2021 Rules have already gone into effect.
Three UN Special Rapporteurs—the Rapporteurs for Freedom of Expression, Privacy, and Association—heard and in large part affirmed civil society’s criticism of the 2021 Rules, acknowledging that they did “not conform with international human rights norms.” Indeed, the Rapporteurs raised serious concerns that Rule 4 of the guidelines may compromise the right to privacy of every internet user, and called on the Indian government to carry out a detailed review of the Rules and to consult with all relevant stakeholders, including NGOs specializing in privacy and freedom of expression.
The 2021 Rules contain two provisions that are particularly pernicious: the Rule 4(4) Content Filtering Mandate and the Rule 4(2) Traceability Mandate.

Content Filtering Mandate
Rule 4(4) compels content filtering, requiring that providers are able to review the content of communications, which not only fundamentally breaks end-to-end encryption, but creates a system for censorship. Significant social media intermediaries (i.e. Facebook, WhatsApp, Twitter, etc.) must “endeavor to deploy technology-based measures,” including automated tools or other mechanisms, to “proactively identify information” that has been forbidden under the Rules. This cannot be done without breaking the higher-level promises of secure end-to-end encrypted messaging.
Client-side scanning has been proposed as a way to enforce content blocking without technically breaking end-to-end encryption: the user’s own device could use its knowledge of the unencrypted content to enforce restrictions, refusing to transmit, or perhaps to display, certain prohibited information, without revealing to the service provider who was attempting to communicate or view that information. That’s wrong. Client-side scanning requires a robot spy in the room. A spy in a place where people are talking privately makes it not a private conversation, and a robot spy is still a spy, just as much as a human one.
As we explained last year, client-side scanning inherently breaks the higher-level promises of secure end-to-end encrypted communications. If the provider controls what's in the set of banned materials, it can test for any individual message: a check against a ban list of size one is, in practice, the same as being able to decrypt that message. And with client-side scanning, there's no way for users, researchers, or civil society to audit the contents of the banned materials list.
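To make the size-one argument concrete, here is a minimal, hypothetical sketch of hash-based client-side scanning; it is not any real scanner's design, and the function name and ban-list format are assumptions for illustration:

```python
import hashlib

def scan_outgoing(plaintext: bytes, banned_hashes: set) -> bool:
    # Hypothetical client-side scanner: before "end-to-end encrypted"
    # transmission, the app hashes the plaintext and reports whether it
    # matches a provider-supplied ban list.
    return hashlib.sha256(plaintext).hexdigest() in banned_hashes

# A "ban list" of size one is just a guess at one specific message.
guess = b"meet at the protest at 5pm"
ban_list = {hashlib.sha256(guess).hexdigest()}

# A reported match confirms the exact content of the user's message,
# which is equivalent, in practice, to being able to read it.
print(scan_outgoing(b"meet at the protest at 5pm", ban_list))  # True
print(scan_outgoing(b"running late, see you soon", ban_list))  # False
```

Nothing in this design stops a provider, or a government compelling one, from rotating single-entry lists to probe for whatever content it wants to confirm, and users have no way to inspect what the list contains.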
The Indian government frames the mandate as directed toward terrorism, obscenity, and the scourge of child sexual abuse material, but the mandate is actually much broader. It also imposes proactive and automatic enforcement of the 2021 Rules’ Section 3(1)(d) content takedown provisions, requiring the proactive blocking of material previously held to be “information which is prohibited under any law,” including specifically laws for the protection of “the sovereignty and integrity of India; security of the State; friendly relations with foreign States; public order; decency or morality; in relation to contempt of court; defamation,” and incitement to any such act. This includes the widely criticized Unlawful Activities Prevention Act, which has reportedly been used to arrest academics, writers, and poets for leading rallies and posting political messages on social media.
This broad mandate is all that is necessary to automatically suppress dissent, protest, and political activity that a government does not like, before it can even be transmitted. The Indian government's response to the Rapporteurs dismisses this concern, writing “India's democratic credentials are well recognized. The right to freedom of speech and expression is guaranteed under the Indian Constitution.”
The response misses the point. Even if a democratic state applies this incredible power to preemptively suppress expression only rarely and within the bounds of internationally recognized rights to freedom of expression, Rule 4(4) puts in place the toolkit for an authoritarian crackdown, automatically enforced not only in public discourse, but even in private messages between two people.
Part of a commitment to human rights in a democracy requires civic hygiene, refusing to create the tools of undemocratic power.
Moreover, rules like these give comfort and credence to authoritarian efforts to enlist intermediaries to assist in their crackdowns. If this Rule were available to China, word for word, it could be used to require social media companies to block images of Winnie the Pooh, as has happened in China, from being transmitted, even in direct “encrypted” messages.
Automated filters also violate due process, reversing the burden of censorship. As the three UN Special Rapporteurs made clear, a
general monitoring obligation that will lead to monitoring and filtering of user-generated content at the point of upload ... would enable the blocking of content without any form of due process even before it is published, reversing the well-established presumption that States, not individuals, bear the burden of justifying restrictions on freedom of expression.

Traceability Mandate
The traceability provision, in Rule 4(2), requires any large social media intermediary that provides messaging services to “enable the identification of the first originator of the information on its computer resource” in response to a court order or a decryption request issued under the 2009 Decryption Rules. The Decryption Rules allow authorities to request the interception or monitoring of any decrypted information generated, transmitted, received, or stored in any computer resource.
The Indian government responded to the Rapporteur report, claiming to honor the right to privacy:
“The Government of India fully recognises and respects the right of privacy, as pronounced by the Supreme Court of India in K.S. Puttaswamy case. Privacy is the core element of an individual's existence and, in light of this, the new IT Rules seeks information only on a message that is already in circulation that resulted in an offence.”
This narrow view of Rule 4(2) is fundamentally mistaken. Implementing the Rule requires the messaging service to collect information about all messages, even before the content is deemed a problem, allowing the government to conduct surveillance with a time machine. This changes the security model and prevents implementing the strong encryption that is a fundamental backstop for protecting human rights in the digital age.

The Danger to Encryption
Both the traceability and filtering mandates endanger encryption, calling for companies to know detailed information about each message that their encryption and security designs would otherwise allow users to keep private. Strong end-to-end encryption means that only the sender and the intended recipient know the content of communications between them. Even if the provider only compares two encrypted messages to see if they match, without directly examining the content, this reduces security by allowing more opportunities to guess at the content.
It is no accident that the 2021 Rules are attacking encryption. Riana Pfefferkorn, Research Scholar at the Stanford Internet Observatory, wrote that the rules were intentionally aimed at end-to-end encryption since the government would insist on software changes to defeat encryption protections:
Speaking anonymously to The Economic Times, one government official said the new rules will force large online platforms to “control” what the government deems to be unlawful content: Under the new rules, “platforms like WhatsApp can’t give end-to-end encryption as an excuse for not removing such content,” the official said.
The 2021 Rules’ unstated requirement to break encryption goes beyond the mandate of the Information Technology (IT) Act, which authorized the 2021 Rules. India’s Centre for Internet & Society’s detailed legal and constitutional analysis of the Rules explains: “There is nothing in Section 79 of the IT Act to suggest that the legislature intended to empower the Government to mandate changes to the technical architecture of services, or undermine user privacy.” Both are required to comply with the Rules.
There are better solutions. For example, WhatsApp found a way to discourage massive chain forwarding of messages without itself knowing their content. The app notes the number of times a message has been forwarded inside the message itself, and can then change its behavior based on that count. Since the forwarding count is inside the encrypted message, the WhatsApp server and company never see it. So your app might not let you forward a chain letter, because the letter’s content shows it was massively forwarded, but the company can’t look at the encrypted message and know its content.
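The idea above can be sketched in a few lines. This is a toy illustration, not WhatsApp's actual implementation: the "encryption" is a placeholder transform, and the field names and the limit of five forwards are assumptions:

```python
import json

FORWARD_LIMIT = 5  # assumed cap, enforced entirely by the client app

def encrypt(payload: bytes) -> bytes:
    # Placeholder for real end-to-end encryption: the server only ever
    # relays this opaque blob and cannot read the fields inside it.
    return payload[::-1]

def decrypt(blob: bytes) -> bytes:
    return blob[::-1]

def make_message(text: str, forward_count: int = 0) -> bytes:
    # The forwarding count travels *inside* the encrypted payload.
    return encrypt(json.dumps({"text": text, "fwd": forward_count}).encode())

def forward(blob: bytes):
    # The recipient's app decrypts locally, checks the count, and either
    # re-encrypts with an incremented count or refuses to forward.
    msg = json.loads(decrypt(blob))
    if msg["fwd"] >= FORWARD_LIMIT:
        return None  # refused client-side; the server never learns why
    return make_message(msg["text"], msg["fwd"] + 1)

blob = make_message("chain letter")
for _ in range(FORWARD_LIMIT):
    blob = forward(blob)          # five forwards succeed
print(forward(blob) is None)      # True: the sixth is refused
```

The enforcement decision happens entirely on the device, so the limit works even though the server sees only ciphertext and never learns the count.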
Likewise, empowering users to report content can mitigate many of the harms that inspired the Indian 2021 Rules. The key principle of end-to-end encryption is that a message gets securely to its destination, without interception by eavesdroppers. This does not prevent the recipient from reporting abusive or unlawful messages, including now-decrypted content and the sender’s information. An intermediary may be able to facilitate user reporting and still provide the strong encryption necessary for a free society. Furthermore, there are cryptographic techniques that let a user report abuse in a way that identifies the abusive or unlawful content, without the possibility of forging a complaint and while preserving the privacy of people not directly involved.
The 2021 Rules endanger encryption, weakening the privacy and security of ordinary people throughout India, while creating tools which could all too easily be misused against fundamental human rights, and which can give inspiration for authoritarian regimes throughout the world. The Rules should be withdrawn, reviewed and reconsidered, bringing the voices of civil society and advocates for international human rights, to ensure the Rules help protect and preserve fundamental rights in the digital age.
“You get the high field, you get the performance.”
Senior Research Scientist Brian LaBombard is summarizing what might be considered a guiding philosophy behind designing and engineering fusion devices at MIT’s Plasma Science and Fusion Center (PSFC). Beginning in 1972 with the Alcator A tokamak, through Alcator C (1978) and Alcator C-Mod (1991), the PSFC has used magnets with high fields to confine the hot plasma in compact, high-performance tokamaks. Joining what was then the Plasma Fusion Center as a graduate student in 1978, just as Alcator A was finishing its run, LaBombard is one of the few who has worked with each iteration of the high-field concept. Now he has turned his attention to the PSFC’s latest fusion venture, a fusion energy project called SPARC.
Designed in collaboration with MIT spinoff Commonwealth Fusion Systems (CFS), SPARC employs novel high temperature superconducting (HTS) magnets at high field to achieve fusion that will produce net energy gain. Some of these magnets will wrap toroidally around the tokamak’s doughnut-shaped vacuum chamber, confining fusion reactions and preventing damage to the walls of the device.
The PSFC has spent three years researching, developing, and manufacturing a scaled version of these toroidal field (TF) coils — the toroidal field model coil, or TFMC. Before the TF coils can be built for SPARC, LaBombard and his team need to test the model coil under the conditions that it will experience in this tokamak.
HTS magnets need to be cooled in order to remain superconducting, and to be protected from the heat generated by current. For testing, the TFMC will be enclosed in a cryostat, cooled to the low temperatures needed for eventual tokamak operation, and charged with current to produce magnetic field. How the magnet responds as the current is provided to the coil will determine if the technology is in hand to construct the 18 TF coils for SPARC.
A history of achievement
That LaBombard is part of the PSFC’s next fusion project is not unusual; that he is involved in designing, engineering, and testing the magnets is. Until 2018, when he led the R&D research team for one of the magnet designs being considered for SPARC, LaBombard’s 30-plus years of celebrated research had focused on other areas of the fusion question.
As a graduate student, he gained early acclaim for the research he reported in his PhD thesis. Working on Alcator C, he made groundbreaking discoveries about the plasma physics in the “boundary” region of the tokamak, between the edge of the fusing core and the wall of the machine. With typical modesty, LaBombard credits some of his success to the fact that the topic was not well-studied, and that Alcator C provided measurements not possible on other machines.
“People knew about the boundary, but nobody was really studying it in detail. On Alcator C, there were interesting phenomena, such as marfes [multifaceted asymmetric radiation from the edge], being detected for the first time. This pushed me to make boundary layer measurements in great detail that no one had ever seen before. It was all new territory, so I made a big splash.”
That splash established him as a leading researcher in the field of boundary plasmas. After a two-year turn at the University of California at Los Angeles working on a plasma-wall test facility called PISCES, LaBombard, who grew up in New England, was happy to return to MIT to join the PSFC’s new Alcator C-Mod project.
Over the next 28 years of C-Mod’s construction phase and operation, LaBombard continued to make groundbreaking contributions to understanding tokamak edge and divertor plasmas, and to design internal components that can survive the harsh conditions and provide plasma control — including C-Mod’s vertical target plate divertor and a unique divertor cryopump system. That experience led him to conceive of the "X-point target divertor" for handling extreme fusion power exhaust and to propose a national Advanced Divertor tokamak eXperiment (ADX) to test such ideas.
All along, LaBombard’s true passion was in creating revolutionary diagnostics to unfold boundary layer physics and in guiding graduate students to do the same: an Omegatron, to measure impurity concentrations directly in the boundary plasma, resolved by charge-to-mass ratio; fast-scanning Langmuir-Mach probes to measure plasma flows; a Shoelace Antenna to provide insight into plasma fluctuations at the edge; the invention of a Mirror Langmuir Probe for the real-time measurements of plasma turbulence at high bandwidth.
His expertise established, he could have continued this focus on the edge of the plasma through collaborations with other laboratories and at the PSFC. Instead, he finds himself on the other side of the vacuum chamber, immersed in magnet design and technology. Challenged with finding an effective HTS magnet design for SPARC, he and his team were able to propose a winning strategy, one that seemed most likely to achieve the compact high field and high performance that PSFC tokamaks have been known for.
LaBombard is stimulated by his new direction and excited about the upcoming test of the TFMC. His new role takes advantage of his physics background in electricity and magnetism. It also supports his passion for designing and building things, which he honed as high school apprentice to his machinist father and explored professionally building systems for Alcator C-Mod.
“I view my principal role is to make sure the TF coil works electrically, the way it's supposed to,” he says. “So it produces the magnetic field without damaging the coil.”
A successful test would validate the team’s understanding of how the new magnet technology works and prepare them to build the magnets for SPARC.
Among those overseeing the hours of TFMC testing will be graduate students, current and former, reminding LaBombard of his own student days working on Alcator C, and of his years supervising students on Alcator C-Mod.
“Those students were directly involved with Alcator C-Mod. They would jump in, make things happen — and as a team. This team spirit really enabled everyone to excel.
“And looking to when SPARC was taking shape, you could see that across the board, from the new folks to the younger folks, they really got engaged by the spirit of Alcator — by recognition of the plasma performance that can be made possible by high magnetic fields.”
He laughs as he looks to the past and to the future.
“And they are taking it to SPARC.”
Years ago, we noted that despite being one of the world’s largest economies, the state of California had no broadband plan for universal, affordable, high-speed access. It is clear that access that meets our needs requires fiber optic infrastructure, yet most Californians were stuck with slow broadband monopolies due to laws supported by the cable monopolies providing us with terrible service. For example, under a state law that large private ISPs supported in 2017, the state was literally installing obsolete copper DSL internet connections instead of building out fiber optics to rural communities. But all of that is finally coming to an end, thanks to your efforts.
Today, Governor Newsom signed into law one of the largest state investments in public fiber in the history of the United States. No longer will the state of California simply defer to the whims of AT&T and cable for broadband access; now every community is being given its shot to choose its broadband destiny.

How Did We Get a New Law?
California’s new broadband infrastructure program was made possible through a combination of persistent statewide activism from all corners, political leadership by people such as Senator Lena Gonzalez, and investment funding from the American Rescue Plan passed by Congress. All of these things were part of what led up to the moment when Governor Newsom introduced his multi-billion-dollar broadband budget that is being signed into law today. Make no mistake: every single time you picked up the phone or emailed to tell your legislator to vote for affordable, high-speed access for all people, it made a difference, because it set the stage for today.
Arguably, what pushed us to this moment was the image of kids doing homework in fast-food parking lots during the pandemic. It made it undeniable that internet access was neither universal nor adequate in speed and capacity. That moment, captured and highlighted by Monterey County Supervisor Luis Alejo, a former member of the Sacramento Assembly, forced a reckoning with the failures of the current broadband ecosystem. Coupled with the COVID-19 pandemic also forcing schools to burn countless millions of public dollars renting out inferior mobile hotspots, Sacramento finally had enough and voted unanimously to change course.

What is California’s New Broadband Infrastructure Program and Why is it a Revolution?
California’s new broadband program approaches the problem on multiple fronts. It empowers local public entities, local private actors, and the state government itself to be the source of the solution. The state government will build open-access fiber capacity to all corners of the state. This will ensure that every community has multi-gigabit capacity available to suit their current and future broadband needs. Low-interest financing under the state’s new $750 million “Loan Loss Reserve” program will enable municipalities and county governments to issue broadband bonds to finance their own fiber. An additional $2 billion is available in grants for unserved pockets of the state for private and public applicants.
The combination of these three programs provides solutions that were off the table before the governor signed this law. For example, a rural community can finance a portion of their own fiber network with low-interest loans and bonds, seek grants for the most expensive unserved pockets, and connect with the state’s own fiber network at affordable prices. In a major city, a small private ISP or local school district can apply for a grant to provide broadband to an unserved low-income neighborhood. Even in high-tech cities such as San Francisco, an estimated 100,000 residents lack broadband access in low-income areas, proving that access is a widespread, systemic problem, not just a rural one, that requires an all hands on deck approach.
The revolution here is that the law does not rely on AT&T, Frontier Communications, Comcast, and Charter to solve the digital divide. Quite simply, the program makes very little of the total $6 billion budget available to these large private ISPs, which have already received so much money and still failed to deliver a solution. This is an essential first step toward near-universal fiber access, because that was never going to happen through the large private ISPs, whose pursuit of fast profits and short-term investor expectations prevents them from making the investment. What the state needed was to empower local partners in the communities themselves who will take on the long-term infrastructure challenge.
If you live in California, now is the time to talk to your mayor and city council about your future broadband needs. Now is the time to talk to your local small businesses about the future the state has enabled if they need to improve their broadband connectivity. Now is the time to talk to your school district about what they can do to improve community infrastructure for local students. Maybe you yourself have the will and desire to build your own local broadband network through this law.
All of these things are now possible because for the first time in state history there is a law in place that lets you decide the broadband future.
Pegasus Project Shows the Need for Real Device Security, Accountability and Redress for those Facing State-Sponsored Malware
People all around the world deserve the right to have a private conversation. Communication privacy is a human right, a civil liberty and one of the centerpieces of a free society. And while we all deserve basic communications privacy, the journalists, NGO workers and human rights and democracy activists among us are especially at risk, since they are often at odds with powerful governments.
So it is no surprise that people around the world are angry to learn that surveillance software sold by NSO Group to governments has been found on cellphones worldwide. Thousands of NGO workers, human rights and democracy activists, government employees, and many others have been targeted and spied upon. We share that anger, and we are thankful for the work done by Amnesty International, the countless journalists at Forbidden Stories, and Citizen Lab to bring this awful situation to light.
"A commitment to giving their own citizens strong security is the true test of a country’s commitment to cybersecurity."
Like many others, EFF has warned for years about the danger of misusing powerful state-sponsored malware. Yet the stories keep coming about malware being used to surveil and track journalists and human rights defenders who are then murdered, including Jamal Khashoggi and Cecilio Pineda-Birto. And yet we have failed to ensure real accountability for the governments and companies responsible.
What can be done to prevent this? How do we create accountability and ensure redress? It’s heartening that both South Africa and Germany have recently banned dragnet communications surveillance, in part because there was no way to protect the essential private communications of journalists and privileged communications of lawyers. All of us deserve privacy, but lawyers, journalists and human rights defenders are at special risk because of their often adversarial relationship with powerful governments. Of course, the dual-use nature of targeted surveillance like the malware that NSO sells is trickier, since it is allowable under human rights law when it is deployed under proper “necessary and proportionate” limits. But that doesn’t mean we are helpless. In fact, we have suggestions on both prevention and accountability.
First, and beyond question, we need real device security. While all software can be buggy and malware often takes advantage of those bugs, we can do much better. To do better, we need the full support of our governments. It’s just shameful that in 2021 the U.S. government as well as many foreign governments in the Five Eyes and elsewhere are more interested in their own easy, surreptitious access to our devices than they are in the actual security of our devices. A commitment to giving their own citizens strong security is the true test of a country’s commitment to cybersecurity. By this measure, the countries of the world, especially those who view themselves as leaders in cybersecurity, are currently failing.
It now seems painfully obvious that we need international cooperation in support of strong encryption and device security. Countries should hold themselves and each other to account when they pressure device manufacturers to dumb down or backdoor our devices, and when they hoard zero-days and other attacks rather than ensuring that those security holes are promptly fixed. We also need governments to hold each other to the "necessary and proportionate" requirement of international human rights law for evaluating surveillance, and these limits must apply whether that surveillance is done for law enforcement or national security purposes. And the US, EU, and others must put diplomatic pressure on the countries where these immoral spyware companies are headquartered to stop them from selling hacking tools to governments that use them to commit human rights abuses. At this point, many of these companies, including Cellebrite, NSO Group, and Candiru/Saitu, are headquartered in Israel, and it's time that both governments and civil society focus attention there.
Second, we can create real accountability by bringing laws and remedies around the world up to date to ensure that those impacted by state-sponsored malware can bring suit or otherwise obtain a remedy. Those who have been spied upon must be able to get redress from both the governments that do the illegal spying and the companies that knowingly provide them with the specific tools to do so. The companies whose good names are tarnished by this malware deserve to be able to stop it too. EFF has supported all of these efforts, but more is needed. Specifically:
We supported WhatsApp’s litigation against NSO Group to stop it from spoofing WhatsApp as a strategy for infecting unsuspecting victims. The Ninth Circuit is currently considering NSO’s appeal.
We sought direct accountability for foreign governments who spy on Americans in the U.S. in Kidane v. Ethiopia. We argued that foreign countries that install malware on Americans' devices should be held to account, just as the U.S. government would be if it violated the Wiretap Act or any of the many other applicable laws. We were stymied by a cramped reading of the law in the D.C. Circuit: the court wrongly decided that because the malware was sent from Ethiopia rather than from inside the U.S., sovereign immunity applied. That dangerous ruling should be corrected by other courts, or Congress should clarify that foreign governments don't have a free pass to spy on people in America. NSO Group says that U.S. telephone numbers (those starting with +1) cannot be tracked by its service, but Americans can and do have foreign-based telephones, and regardless, everyone in the world deserves human rights and redress. Countries around the world should step up to make sure their laws cover state-sponsored malware attacks that occur in their jurisdictions.
We also have supported those who are seeking accountability from companies directly, including the Chinese religious minority who have been targeted using a specially-built part of the Great Firewall of China created by American tech giant Cisco.
"The truth is, too many democratic or democratic-leaning countries are facilitating the spread of this malware because they want to be able to use it against their own enemies."
Third, we must increase the pressure on these companies to make sure they are not selling to repressive regimes, and continue naming and shaming those that do. EFF's Know Your Customer framework is a good place to start, as was the State Department's draft guidance (which apparently was never finalized). And these promises must have real teeth. Apparently we were right in 2019 that NSO Group's unenforceable announcement that it was holding itself to the "highest standards of ethical business" was largely a toothless public relations move. Yet while NSO is rightfully on the hot seat now, it is not the only player in this immoral market. Companies that sell dangerous equipment of all kinds must take steps to understand and limit misuse, and these surveillance malware tools used by governments are no different.
Fourth, we support former United Nations Special Rapporteur for Freedom of Expression David Kaye in calling for a moratorium on the governmental use of these malware technologies. While this is a longshot, we agree that the long history of misuse, and the growing list of resulting extrajudicial killings of journalists and human rights defenders, along with other human rights abuses, justifies a full moratorium.
These are just the start of possible remedies and accountability strategies. Other approaches may be reasonable too, but each must recognize that, at least right now, the intelligence and law enforcement communities of many countries are not defining “cybersecurity” to include actually protecting us, much less the journalists and NGOs and activists that do the risky work to keep us informed and protect our rights. We also have to understand that unless done carefully, regulatory responses like further triggering U.S. export restrictions could result in less security for the rest of us while not really addressing the problem. The NSO Group was reportedly able to sell to the Saudi regime with the permission and encouragement of the Israeli government under that country’s export regime. The truth is, too many democratic or democratic-leaning countries are facilitating the spread of this malware because they want to be able to use it against their own enemies.
Until governments around the world get out of the way and actually support security for all of us, including accountability and redress for victims, these outrages will continue. Governments must recognize that intelligence agency and law enforcement hostility to device security is dangerous for their own citizens because a device cannot tell if the malware infecting it is from the good guys or the bad guys. This fact is just not going to go away.
We must have strong security at the start, and strong accountability after the fact if we want to get to a world where all of us can enjoy communications security. Only then will our journalists, human rights defenders and NGOs be able to do their work without fear of being tracked, watched and potentially murdered simply because they use a mobile device.