Feed aggregator

Making the clean energy transition work for everyone

MIT Latest News - Fri, 03/15/2024 - 5:00pm

The clean energy transition is already underway, but how do we make sure it happens in a manner that is affordable, sustainable, and fair for everyone?

That was the overarching question at this year’s MIT Energy Conference, which took place March 11 and 12 in Boston and was titled “Short and Long: A Balanced Approach to the Energy Transition.”

Each year, the student-run conference brings together leaders in the energy sector to discuss the progress and challenges they see in their work toward a greener future. Participants come from research, industry, government, academia, and the investment community to network and exchange ideas over two whirlwind days of keynote talks, fireside chats, and panel discussions.

Several participants noted that clean energy technologies are already cost-competitive with fossil fuels, but changing the way the world works requires more than just technology.

“None of this is easy, but I think developing innovative new technologies is really easy compared to the things we’re talking about here, which is how to blend social justice, soft engineering, and systems thinking that puts people first,” Daniel Kammen, a distinguished professor of energy at the University of California at Berkeley, said in a keynote talk. “While clean energy has a long way to go, it is more than ready to transition us from fossil fuels.”

The event also featured a keynote discussion between MIT President Sally Kornbluth and MIT’s Kyocera Professor of Ceramics Yet-Ming Chiang, in which Kornbluth discussed her first year at MIT as well as a recently announced, campus-wide effort to solve critical climate problems known as the Climate Project at MIT.

“The reason I wanted to come to MIT was I saw that MIT has the potential to solve the world’s biggest problems, and first among those for me was the climate crisis,” Kornbluth said. “I’m excited about where we are, I’m excited about the enthusiasm of the community, and I think we’ll be able to make really impactful discoveries through this project.”

Fostering new technologies

Several panels convened experts in new or emerging technology fields to discuss what it will take for their solutions to contribute to deep decarbonization.

“The fun thing and challenging thing about first-of-a-kind technologies is they’re all kind of different,” said Jonah Wagner, principal assistant director for industrial innovation and clean energy in the U.S. Office of Science and Technology Policy. “You can map their growth against specific challenges you expect to see, but every single technology is going to face their own challenges, and every single one will have to defy an engineering barrier to get off the ground.”

Among the emerging technologies discussed was next-generation geothermal energy, which uses new techniques to extract heat from the Earth’s crust in new places.

A promising aspect of the technology is that it can leverage existing infrastructure and expertise from the oil and gas industry. Many newly developed techniques for geothermal production, for instance, use the same drills and rigs as those used for hydraulic fracturing.

“The fact that we have a robust ecosystem of oil and gas labor and technology in the U.S. makes innovation in geothermal much more accessible compared to some of the challenges we’re seeing in nuclear or direct-air capture, where some of the supply chains are disaggregated around the world,” said Gabriel Malek, chief of staff at the geothermal company Fervo Energy.

Another technology generating excitement — if not net energy quite yet — is fusion: combining, or fusing, light atoms to form heavier ones for a net energy gain, the same process that powers the sun. MIT spinout Commonwealth Fusion Systems (CFS) has already validated many aspects of its approach for achieving fusion power, and the company’s unique partnership with MIT was discussed in a panel on the industry’s progress.

“We’re standing on the shoulders of decades of research from the scientific community, and we want to maintain those ties even as we continue developing our technology,” CFS Chief Science Officer Brandon Sorbom PhD ’17 said, noting that CFS is one of the largest company sponsors of research at MIT and collaborates with institutions around the world. “Engaging with the community is a really valuable lever to get new ideas and to sanity check our own ideas.”

Sorbom said that as CFS advances fusion energy, the company is thinking about how it can replicate its processes to lower costs and maximize the technology’s impact around the planet.

“For fusion to work, it has to work for everyone,” Sorbom said. “I think the affordability piece is really important. We can’t just build this technological jewel that only one class of nations can afford. It has to be a technology that can be deployed throughout the entire world.”

The event also gave students — many from MIT — a chance to learn more about careers in energy and featured a startup showcase, in which dozens of companies displayed their energy and sustainability solutions.

“More than 700 people are here from every corner of the energy industry, so there are so many folks to connect with and help me push my vision into reality,” says GreenLIB CEO Fred Rostami, whose company recycles lithium-ion batteries. “The good thing about the energy transition is that a lot of these technologies and industries overlap, so I think we can enable this transition by working together at events like this.”

A focused climate strategy

Kornbluth noted that when she came to MIT, a large percentage of students and faculty were already working on climate-related technologies. With the Climate Project at MIT, she wanted to help ensure the whole of those efforts is greater than the sum of its parts.

The project is organized around six distinct missions, including decarbonizing energy and industry, empowering frontline communities, and building healthy, resilient cities. Kornbluth says the mission areas will help MIT community members collaborate around multidisciplinary challenges. Her team, which includes a committee of faculty advisors, has begun to search for the leads of each mission area, and Kornbluth said she is planning to appoint a vice president for climate at the Institute.

“I want someone who has the purview of the whole Institute and will report directly to me to help make sure this project stays on track,” Kornbluth explained.

In his conversation about the initiative with Kornbluth, Yet-Ming Chiang said projects will be funded based on their potential to reduce emissions and make the planet more sustainable at scale.

“Projects should be very high risk, with very high impact,” Chiang explained. “They should have a chance to prove themselves, and those efforts should not be limited by resources, only by time.”

In discussing her vision of the climate project, Kornbluth alluded to the “short and long” theme of the conference.

“It’s about balancing research and commercialization,” Kornbluth said. “The climate project has a very variable timeframe, and I think universities are the sector that can think about the things that might be 30 years out. We have to think about the incentives across the entire innovation pipeline and how we can keep an eye on the long term while making sure the short-term things get out rapidly.”

3 Questions: What you need to know about audio deepfakes

MIT Latest News - Fri, 03/15/2024 - 4:50pm

Audio deepfakes have had a recent bout of bad press after an artificial intelligence-generated robocall purporting to be the voice of Joe Biden hit up New Hampshire residents, urging them not to cast ballots. Meanwhile, spear-phishers — attackers running phishing campaigns that target a specific person or group, especially using information known to be of interest to the target — go fishing for money, and actors aim to preserve their audio likeness.

What receives less press, however, are some of the uses of audio deepfakes that could actually benefit society. In this Q&A prepared for MIT News, postdoc Nauman Dawalatabad addresses concerns as well as potential upsides of the emerging tech. A fuller version of this interview can be seen in the video below.

Q: What ethical considerations justify the concealment of the source speaker's identity in audio deepfakes, especially when this technology is used for creating innovative content?

A: The question of why research into obscuring the source speaker’s identity matters, despite the primarily entertainment-driven use of generative models for audio creation, does raise ethical considerations. Speech contains information not only about “who you are” (identity) or “what you are saying” (content); it encapsulates a myriad of sensitive information including age, gender, accent, current health, and even cues about future health conditions. For instance, our recent research paper on “Detecting Dementia from Long Neuropsychological Interviews” demonstrates the feasibility of detecting dementia from speech with considerably high accuracy. Moreover, multiple models can detect gender, accent, age, and other information from speech with very high accuracy. There is a need for advancements in technology that safeguard against the inadvertent disclosure of such private data. The endeavor to anonymize the source speaker’s identity is not merely a technical challenge but a moral obligation to preserve individual privacy in the digital age.

Q: How can we effectively maneuver through the challenges posed by audio deepfakes in spear-phishing attacks, taking into account the associated risks, the development of countermeasures, and the advancement of detection techniques?

A: The deployment of audio deepfakes in spear-phishing attacks introduces multiple risks, including the propagation of misinformation and fake news, identity theft, privacy infringements, and the malicious alteration of content. The recent circulation of deceptive robocalls in Massachusetts exemplifies the detrimental impact of such technology. We also recently spoke with The Boston Globe about this technology, and how easy and inexpensive it is to generate such deepfake audio.

Anyone without a significant technical background can easily generate such audio using any of the many tools available online. Such fake news from deepfake generators can disturb financial markets and even electoral outcomes. The theft of one’s voice to access voice-operated bank accounts and the unauthorized use of one’s vocal identity for financial gain are reminders of the urgent need for robust countermeasures. Other risks include privacy violations, where an attacker uses a victim’s audio without their permission or consent, and malicious alteration of the content of the original audio, which can have a serious impact.

Two primary directions have emerged in designing systems to detect fake audio: artifact detection and liveness detection. When audio is generated by a generative model, the model introduces some artifacts in the generated signal, and researchers design algorithms/models to detect these artifacts. However, this approach faces challenges due to the increasing sophistication of audio deepfake generators; in the future, we may see models that leave very small or almost no artifacts. Liveness detection, on the other hand, leverages the inherent qualities of natural speech, such as breathing patterns, intonations, or rhythms, which are challenging for AI models to replicate accurately. Some companies, like Pindrop, are developing such solutions for detecting audio fakes.
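To make the artifact-detection idea concrete, here is a minimal, hypothetical sketch (not any production detector, and the function names are our own): one classic statistic, spectral flatness, distinguishes a broadband, noisy signal from an artificially “clean,” tonal one, and real detectors build on many such features plus learned models.

```python
import cmath
import math
import random

def power_spectrum(samples):
    """Naive DFT power spectrum; fine for short frames (use an FFT in practice)."""
    n = len(samples)
    return [
        abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))) ** 2
        for k in range(n // 2)
    ]

def spectral_flatness(samples, eps=1e-12):
    """Geometric mean / arithmetic mean of the power spectrum, in [0, 1].
    Broadband (noise-like) frames score high; purely tonal frames score near 0."""
    p = power_spectrum(samples)
    geo = math.exp(sum(math.log(x + eps) for x in p) / len(p))
    arith = sum(p) / len(p)
    return geo / (arith + eps)

# Toy frames: a pure tone (spectrally concentrated) vs. broadband noise.
n = 64
tone = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
random.seed(0)
noise = [random.uniform(-1.0, 1.0) for _ in range(n)]
```

A detector would compare such statistics (and many others) across frames against values typical of natural recordings; a generator that oversmooths the spectrum shifts them in measurable ways.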

Additionally, strategies like audio watermarking serve as proactive defenses, embedding encrypted identifiers within the original audio to trace its origin and deter tampering. Despite other potential vulnerabilities, such as the risk of replay attacks, ongoing research and development in this arena offer promising solutions to mitigate the threats posed by audio deepfakes.
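As a toy illustration of the watermarking idea above (a hypothetical sketch of our own, not a scheme used by any real system), an identifier can be hidden in the least significant bits of 16-bit PCM samples and recovered later; production watermarks instead use spread-spectrum or learned embeddings that survive compression and resampling, which this naive scheme does not.

```python
def embed_watermark(samples, payload):
    """Embed payload bytes into the LSBs of integer PCM samples (toy scheme:
    inaudible, but destroyed by any lossy compression or resampling)."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(samples):
        raise ValueError("payload too large for signal")
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite the least significant bit
    return out

def extract_watermark(samples, n_bytes):
    """Read the payload back out of the sample LSBs."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (samples[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

# Example: hide a 3-byte tag in 64 samples; each sample changes by at most 1.
samples = [i % 32768 for i in range(64)]
marked = embed_watermark(samples, b"EFF")
```

Encrypting the payload before embedding, as the text notes, lets the originator prove provenance without revealing the identifier to third parties.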

Q: Despite their potential for misuse, what are some positive aspects and benefits of audio deepfake technology? How do you imagine the future relationship between AI and our experiences of audio perception will evolve?

A: Contrary to the predominant focus on the nefarious applications of audio deepfakes, the technology harbors immense potential for positive impact across various sectors. Beyond the realm of creativity, where voice conversion technologies enable unprecedented flexibility in entertainment and media, audio deepfakes hold transformative promise in health care and education sectors. My current ongoing work in the anonymization of patient and doctor voices in cognitive health-care interviews, for instance, facilitates the sharing of crucial medical data for research globally while ensuring privacy. Sharing this data among researchers fosters development in the areas of cognitive health care. The application of this technology in voice restoration represents a hope for individuals with speech impairments, for example, for ALS or dysarthric speech, enhancing communication abilities and quality of life.

I am very positive about the future impact of audio generative AI models. The future interplay between AI and audio perception is poised for groundbreaking advancements, particularly through the lens of psychoacoustics — the study of how humans perceive sounds. Innovations in augmented and virtual reality, exemplified by devices like the Apple Vision Pro and others, are pushing the boundaries of audio experiences towards unparalleled realism. Recently we have seen an exponential increase in the number of sophisticated models coming up almost every month. This rapid pace of research and development in this field promises not only to refine these technologies but also to expand their applications in ways that profoundly benefit society. Despite the inherent risks, the potential for audio generative AI models to revolutionize health care, entertainment, education, and beyond is a testament to the positive trajectory of this research field.

The SAFE Act to Reauthorize Section 702 is Two Steps Forward, One Step Back

EFF: Updates - Fri, 03/15/2024 - 4:48pm

Section 702 of the Foreign Intelligence Surveillance Act (FISA) is one of the most insidious and secretive mass surveillance authorities still in operation today. The Security and Freedom Enhancement (SAFE) Act would make some much-needed and long fought-for reforms, but it also does not go nearly far enough to rein in a surveillance law that the federal government has abused time and time again.

You can read the full text of the bill here.

While Section 702 was first sold as a tool necessary to stop foreign terrorists, it has since become clear that the government uses the communications it collects under this law as a domestic intelligence source. The program was intended to collect communications of people outside of the United States, but because we live in an increasingly globalized world, the government retains a massive trove of communications between people overseas and U.S. persons. Now, it’s this U.S. side of digital conversations that is being routinely sifted through by domestic law enforcement agencies—all without a warrant.

The SAFE Act, like other reform bills introduced this Congress, attempts to roll back some of this warrantless surveillance. Despite its glaring flaws and omissions, in a Congress as dysfunctional as this one it might be the best bill that privacy-conscious people and organizations can hope for. For instance, it does not do as much as the Government Surveillance Reform Act, which EFF supported in November 2023. But imposing meaningful checks on the Intelligence Community (IC) is an urgent priority, especially because the Intelligence Community has been trying to sneak a "clean" reauthorization of Section 702 into government funding bills, and has even sought to have the renewal happen in secret in the hopes of keeping its favorite mass surveillance law intact. The administration is also reportedly planning to seek another year-long extension of the law without any congressional action. All the while, those advocating for renewing Section 702 have toyed with as many talking points as they can—from cybercrime or human trafficking to drug smuggling, terrorism, or even solidarity activism in the United States—to see what issue would scare people enough to allow for a clean reauthorization of mass surveillance.

So let’s break down the SAFE Act: what’s good, what’s bad, and what aspects of it might actually cause more harm in the future. 

What’s Good about the SAFE Act

The SAFE Act would do at least two things that reform advocates have pressured Congress to include in any proposed bill to reauthorize Section 702. This speaks to the growing consensus that some reforms are absolutely necessary if this power is to remain operational.

The first and most important reform the bill would make is to require the government to obtain a warrant before accessing the content of communications for people in the United States. Currently, relying on Section 702, the government vacuums up communications from all over the world, and a huge number of those intercepted communications are to or from US persons. Those communications sit in a massive database. Both intelligence agencies and law enforcement have conducted millions of queries of this database for US-based communications—all without a warrant—in order to investigate both national security concerns and run-of-the-mill criminal investigations. The SAFE Act would prohibit “warrantless access to the communications and other information of United States persons and persons located in the United States.” While this is the bare minimum a reform bill should do, it’s an important step. It is crucial to note, however, that this does not stop the IC or law enforcement from querying to see if the government has collected communications from specific individuals under Section 702—it merely stops them from reading those communications without a warrant.

The second major reform the SAFE Act provides is to close the “data broker loophole,” which EFF has been calling attention to for years. As one example, mobile apps often collect user data to sell it to advertisers on the open market. The problem is law enforcement and intelligence agencies increasingly buy this private user data, rather than obtain a warrant for it. This bill would largely prohibit the government from purchasing personal data they would otherwise need a warrant to collect. This provision does include a potentially significant exception for situations where the government cannot exclude Americans’ data from larger “compilations” that include foreigners’ data. This speaks not only to the unfair bifurcation of rights between Americans and everyone else under much of our surveillance law, but also to the risks of allowing any large scale acquisition from data brokers at all. The SAFE Act would require the government to minimize collection, search, and use of any Americans’ data in these compilations, but it remains to be seen how effective these prohibitions will be.

What’s Missing from the SAFE Act

The SAFE Act is missing a number of important reforms that we’ve called for—and which the Government Surveillance Reform Act would have addressed. These reforms include ensuring that individuals harmed by warrantless surveillance are able to challenge it in court, both in civil lawsuits like those brought by EFF in the past, and in criminal cases where the government may seek to shield its use of Section 702 from defendants. After nearly 14 years of Section 702 and countless court rulings slamming the courthouse door on such legal challenges, it’s well past time to ensure that those harmed by Section 702 surveillance can have the opportunity to challenge it.

New Problems Potentially Created by the SAFE Act

While there may often be good reason to protect the secrecy of FISA proceedings, unofficial disclosures about these proceedings have from the very beginning played an indispensable role in reforming uncontested abuses of surveillance authorities. From the Bush administration’s warrantless wiretapping program through the Snowden disclosures up to the present, when reporting about FISA applications appears on the front page of the New York Times, oversight of the intelligence community would be extremely difficult, if not impossible, without these disclosures.

Unfortunately, the SAFE Act contains at least one truly nasty addition to current law: an entirely new crime that makes it a felony to disclose “the existence of an application” for foreign intelligence surveillance or any of the application’s contents. In addition to explicitly adding to the existing penalties in the Espionage Act—itself highly controversial— this new provision seems aimed at discouraging leaks by increasing the potential sentence to eight years in prison. There is no requirement that prosecutors show that the disclosure harmed national security, nor any consideration of the public interest. Under the present climate, there’s simply no reason to give prosecutors even more tools like this one to punish whistleblowers who are seen as going through improper channels.

EFF always aims to tell it like it is. This bill has some real improvements, but it’s nowhere near the surveillance reform we all deserve. On the other hand, the IC and its allies in Congress continue to have significant leverage to push fake reform bills, so the SAFE Act may well be the best we’re going to get. Either way, we’re not giving up the fight.  

Thousands of Young People Told Us Why the Kids Online Safety Act Will Be Harmful to Minors

EFF: Updates - Fri, 03/15/2024 - 3:37pm

With KOSA passed, the information i can access as a minor will be limited and censored, under the guise of "protecting me", which is the responsibility of my parents, NOT the government. I have learned so much about the world and about myself through social media, and without the diverse world i have seen, i would be a completely different, and much worse, person. For a country that prides itself in the free speech and freedom of its peoples, this bill goes against everything we stand for! - Alan, 15  

___________________

If information is put through a filter, that’s bad. Any and all points of view should be accessible, even if harmful so everyone can get an understanding of all situations. Not to mention, as a young neurodivergent and queer person, I’m sure the information I’d be able to acquire and use to help myself would be severely impacted. I want to be free like anyone else. - Sunny, 15 

 ___________________

How young people feel about the Kids Online Safety Act (KOSA) matters. It will primarily affect them, and many, many teenagers oppose the bill. Some have been calling and emailing legislators to tell them how they feel. Others have been posting their concerns about the bill on social media. These teenagers have been baring their souls to explain how important social media access is to them, but lawmakers and civil liberties advocates, including us, have mostly been the ones talking about the bill and about what’s best for kids, and often we’re not hearing from minors in these debates at all. We should be — these young voices should be essential when talking about KOSA.

So, a few weeks ago, we asked some of the young advocates fighting to stop the Kids Online Safety Act a few questions:  

- How has access to social media improved your life? What do you gain from it? 

- What would you lose if KOSA passed? How would your life be different if it was already law? 

Within a week we received over 3,000 responses. As of today, we have received over 5,000.

These answers are critical for legislators to hear. Below, you can read some of these comments, sorted by theme (though the themes often overlap).

These comments show that thoughtful young people are deeply concerned about the proposed law's fallout, and that many who would be affected think it will harm them, not help them. Over 700 of those who responded reported that they were currently sixteen or under—the age under which KOSA’s liability is applicable. The average age of those who answered the survey was 20 (of those who gave their age—the question was optional, and about 60% of people responded).  In addition to these two questions, we also asked those taking the survey if they were comfortable sharing their email address for any journalist who might want to speak with them; unfortunately much coverage usually only mentions one or two of the young people who would be most affected. So, journalists: We have contact info for over 300 young people who would be happy to speak to you about why social media matters to them, and why they oppose KOSA.

Individually, these answers show that social media, despite its current problems, offers an overall positive experience for many, many young people. It helps people living in remote areas find connection; it helps those in abusive situations find solace and escape; it offers education in history, art, health, and world events for those who wouldn’t otherwise have it; it helps people learn about themselves and the world around them. (Research also suggests that social media is more helpful than harmful for young people.)

And as a whole, these answers tell a story that is 180° different from that which is regularly told by politicians and the media. In those stories, it is accepted as fact that the majority of young people’s experiences on social media platforms are harmful. But from these responses, it is clear that many, many young people also experience help, education, friendship, and a sense of belonging there—precisely because social media allows them to explore, something KOSA is likely to hinder. These kids are deeply engaged in the world around them through these platforms, and genuinely concerned that a law like KOSA could take that away from them and from other young people.  

Here are just a few of the thousands of reasons they’re worried.  

Note: We are sharing individuals’ opinions, without editing. We do not necessarily endorse them or their interpretation of KOSA.

KOSA Will Harm Rights That Young People Know They Ought to Have 

One of the most important things that would be lost is the freedom of speech - a given right that is crucial to a healthy, functioning environment. Not every speech is morally okay, but regulating what speech is deemed "acceptable" constricts people's rights; a clear violation of the First Amendment. Those who need or want to access certain information are not allowed to - not because the information will harm them or others, but for the reason that a certain portion of people disagree with the information. If the country only ran on what select people believed, we would be a bland, monotonous place. This country thrives on diversity, whether it be race, gender, sex, or any other personal belief. If KOSA was passed, I would lose my safe spaces, places where I can go to for mental health, places that make me feel more like a human than just some girl. No more would I be able to fight for ideas and beliefs I hold, nor enjoy my time on the internet either. - Anonymous, 16 

 ___________________

I, and many of my friends, grew up in an Internet where remaining anonymous was common sense, and where revealing your identity was foolish and dangerous, something only to be done sparingly, with a trusted ally at your side, meeting at a common, crowded public space like a convention or a college cafeteria. This bill spits in the face of these very practical instincts, forces you to dox yourself, and if you don’t want to be outed, you must be forced to withdraw from your communities. From your friends and allies. From the space you have made for yourself, somewhere you can truly be yourself with little judgment, where you can find out who you really are, alongside people who might be wildly different from you in some ways, and exactly like you in others. I am fortunate to have parents who are kind and accepting of who I am. I know many people are nowhere near as lucky as me. - Maeve, 25 

 ___________________ 

I couldn't do activism through social media and I couldn't connect with other queer individuals due to censorship and that would lead to loneliness, depression other mental health issues, and even suicide for some individuals such as myself. For some of us the internet is the only way to the world outside of our hateful environments, our only hope. Representation matters, and by KOSA passing queer children would see less of age appropriate representation and they would feel more alone. Not to mention that KOSA passing would lead to people being uninformed about things and it would start an era of censorship on the internet and by looking at the past censorship is never good, its a gateway to genocide and a way for the government to control. – Sage, 15 

  ___________________

Privacy, censorship, and freedom of speech are not just theoretical concepts to young people. Their rights are often already restricted, and they see the internet as a place where they can begin to learn about, understand, and exercise those freedoms. They know why censorship is dangerous; they understand why forcing people to identify themselves online is dangerous; they know the value of free speech and privacy, and they know what they’ve gained from an internet that doesn’t have guardrails put up by various government censors.  

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Could Impact Young People’s Artistic Education and Opportunities 

I found so many friends and new interests from social media. Inspirations for my art I find online, like others who have an art style I admire, or models who do poses I want to draw. I can connect with my friends, send them funny videos and pictures. I use social media to keep up with my favorite YouTubers, content creators, shows, books. When my dad gets drunk and hard to be around or my parents are arguing, I can go on YouTube or Instagram and watch something funny to laugh instead. It gives me a lot of comfort, being able to distract myself from my sometimes upsetting home life. I get to see what life is like for the billions of other people on this planet, in different cities, states, countries. I get to share my life with my friends too, freely speaking my thoughts, sharing pictures, videos, etc.  
I have found my favorite YouTubers from other social media platforms like tiktok, this happened maybe about a year ago, and since then I think this is the happiest I have been in a while. Since joining social media I have become a much more open minded person, it made me interested in what others lives are like. It also brought awareness and educated me about others who are suffering in the world like hunger, poor quality of life, etc. Posting on social media also made me more confident in my art, in the past year my drawing skills have immensely improved and I’m shocked at myself. Because I wanted to make better fan art, inspire others, and make them happy with my art. I have been introduce to many styles of clothing that have helped develop my own fun clothing style. It powers my dreams and makes me want to try hard when I see videos shared by people who have worked hard and made it. - Anonymous, 15 

  ___________________

As a kid I was able to interact in queer and disabled and fandom spaces, so even as a disabled introverted child who wasn’t popular with my peers I still didn’t feel lonely. The internet is arguably a safer way to interact with other fans of media than going to cons with strangers, as long as internet safety is really taught to kids. I also get inspiration for my art and writing from things I’ve only discovered online, and as an artist I can’t make money without the internet and even minors do commissions. The issue isn’t that the internet is unsafe, it’s that internet safety isn’t taught anymore. - Rachel, 19 

  ___________________

i am an artist, and sharing my things online makes me feel happy and good about myself. i love seeing other people online and knowing that they like what i make. when i make art, im always nervous to show other people. but when i post it online i feel like im a part of something, and that im in a community where i feel that i belong. – Anonymous, 15 

 ___________________ 

Social media has saved my life, just like it has for many young people. I have found safe spaces and motivation because of social media, and I have never encountered anything negative or harmful to me. With social media I have been able to share my creativity (writing, art, and music) and thoughts safely without feeling like I'm being held back or oppressed. My creations have been able to inspire and reach so many people, just like how other people's work have reached me. Recently, I have also been able to help the library I volunteer at through the help of social media. 
What I do in life and all my future plans (career, school, volunteer projects, etc.) surrounds social media, and without it I wouldn't be able to share what I do and learn more to improve my works and life. I wouldn't be able to connect with wonderful artists, musicians, and writers like I do now. I would be lost and feel like I don't have a reason to do what I do. If KOSA is passed, I wouldn't be able to get the help I need in order to survive. I've made so many friends who have been saved because of social media, and if this bill gets passed they will also be affected. Guess what? They wouldn't be able to get the help they need either. 
If KOSA was already a law when I was just a bit younger, I wouldn't even be alive. I wouldn't have been able to reach help when I needed it. I wouldn't have been able to share my mind with the world. Social media was the reason I was able to receive help when I was undergoing abuse and almost died. If KOSA was already a law, I would've taken my life, or my abuser would have done it before I could. If KOSA becomes a law now, I'm certain that the likeliness of that happening to kids of any age will increase. – Anonymous, 15 

  ___________________

A huge number of young artists told us that they use social media to improve their skills, and that in many cases it was the avenue by which they discovered their interest in a type of art or music. Young people are rightfully worried that the magic moment where you first stumble upon an artist or a style that changes your entire life will become less and less common for future generations if KOSA passes. We agree: KOSA would likely lead platforms to limit that opportunity for young people to experience unexpected things, forcing their online experiences into a much smaller box under the guise of protecting them.  

Also, many young people told us they wanted to start, or were already developing, an online business—often an art business. Under KOSA, young people could have fewer opportunities in the online communities where artists share their work and build a customer base, and a harder time navigating the various spaces where they can share their art.  

KOSA Will Hurt Young People’s Ability to Find Community Online 

Social media has allowed me to connect with some of my closest friends ever, probably deeper than some people in real life. i get to talk about anything i want unimpeded and people accept me for who i am. in my deepest and darkest moments, knowing that i had somewhere to go was truly more relieving than anything else. i've never had the courage to commit suicide, but still, if it weren't for social media, i probably wouldn't be here, mentally & emotionally at least. 
i'd lose the space that accepts me. i'd lose the only place where i can be me. in life, i put up a mask to appease my parents and in some cases, my friends. with how extreme the u.s. is becoming these days, i could even lose my life. i would live my days in fear. i'm terrified of how fast this country is changing and if this bill passes, saying i would fall into despair would be an understatement. people say to "be yourself", but they don't understand that if i were to be my true self tomorrow, i could be killed. – march, 14 

 ___________________ 

Without the internet, and especially the rhythm gaming community which I found through Discord, I would've most likely killed myself at 13. My time on here has not been perfect, as has anyone's but without the internet I wouldn't have been the person I am today. I wouldn't have gotten help recognizing that what my biological parents were doing to me was abuse, the support I've received for my identity (as queer youth) and the way I view things, with ways to help people all around the world and be a more mindful ally, activist, and thinker, and I wouldn't have met my mom. 
I love my chosen mom. We met at a Dance Dance Revolution tournament in April of last year and have been friends ever since. When I told her that she was the first person I saw as a mother figure in my life back in November, I was bawling my eyes out. I'm her mije, and she's my mom. I love her so much that saying that doesn't even begin to express exactly how much I love her.  
I love all my chosen family from the rhythm gaming community, my older sisters and siblings, I love them all. I have a few, some I talk with more regularly than others. Even if they and I may not talk as much as we used to, I still love them. They mean so much to me. – X86, 15 

  ___________________

i spent my time in public school from ages 9-13 getting physically and emotionally abused by special ed aides, i remember a few months after i left public school for good, i saw a post online that made me realize that what i went through wasn’t normal. if it wasn’t for the internet, i wouldn’t have come to terms with my autism, i would have still hated myself due to not knowing that i was genderqueer, my mental health would be significantly worse, and i would probably still be self harming, which is something i stopped doing at 13. besides the trauma and mental health side of things, something important to know is that spaces for teenagers to hang out have been eradicated years ago, minors can’t go to malls unless they’re with their parents, anti loitering laws are everywhere, and schools aren’t exactly the best place for teenagers to hang out, especially considering queer teens who were murdered by bullies (such as brianna ghey or nex benedict), the internet has become the third space that teenagers have flocked to as a result. – Anonymous, 17 

  ___________________

KOSA is anti-community. People online don’t only connect over shared interests in art and music—they also connect over the difficult parts of their lives. Over and over again, young people told us that one of the most valuable parts of social media was learning that they were not alone in their troubles. Finding others in similar circumstances gave them a community, as well as ideas to improve their situations, and even opportunities to escape dangerous situations.  

KOSA will make this harder. As platforms limit the types of recommendations and public content they feel safe sharing with young people, those who would otherwise find communities or potential friends will be less likely to do so. A number of young people explained that they simply would never have been able to overcome some of the worst parts of their lives alone, and they are concerned that KOSA’s passage would stop others from ever finding the help they did. 

KOSA Could Seriously Hinder People’s Self-Discovery  

I am a transgender person, and when I was a preteen, looking down the barrel of the gun of puberty, I was miserable. I didn't know what was wrong I just knew I'd rather do anything else but go through puberty. The internet taught me what that was. They told me it was okay. There were things like haircuts and binders that I could use now and medical treatment I could use when I grew up to fix things. The internet was there for me too when I was questioning my sexuality and again when my mental health was crashing and even again when I was realizing I'm not neurotypical. The internet is a crucial source of information for preteens and beyond and you cannot take it away. You cannot take away their only realistically reachable source of information for what the close-minded or undereducated adults around them don't know. - Jay, 17 

   ___________________

Social media has improved my life so much and led to how I met my best friend, I’ve known them for 6+ years now and they mean so much to me. Access to social media really helps me connect with people similar to me and that make me feel like less of an outcast among my peers, being able to communicate with other neurodivergent queer kids who like similar interests to me. Social media makes me feel like I’m actually a part of a community that won’t judge me for who I am. I feel like I can actually be myself and find others like me without being harassed or bullied, I can share my art with others and find people like me in a way I can’t in other spaces. The internet & social media raised me when my parents were busy and unavailable and genuinely shaped the way I am today and the person I’ve become. – Anonymous, 14 

   ___________________

The censorship likely to come from this bill would mean I would not see others who have similar struggles to me. The vagueness of KOSA allows for state attorney generals to decide what is and is not appropriate for children to see, a power that should never be placed in the hands of one person. If issues like LGBT rights and mental health were censored by KOSA, I would have never realized that I AM NOT ALONE. There are problems with children and the internet but KOSA is not the solution. I urge the senate to rethink this bill, and come up with solutions that actually protect children, not put them in more danger, and make them feel ever more alone. - Rae, 16 

  ___________________ 

KOSA would effectively censor anything the government deems "harmful," which could be anything from queerness and fandom spaces to anything else that deviates from "the norm." People would lose support systems, education, and in some cases, any way to find out about who they are. I'll stop beating around the bush, if it wasn't for places online, I would never have discovered my own queerness. My parents and the small circle of adults I know would be my only connection to "grown-up" opinions, exposing me to a narrow range of beliefs I would likely be forced to adopt. Any kids in positions like mine would have no place to speak out or ask questions, and anything they bring up would put them at risk. Schools and families can only teach so much, and in this age of information, why can't kids be trusted to learn things on their own? - Anonymous, 15 

   ___________________

Social media helped me escape a very traumatic childhood and helped me connect with others. quite frankly, it saved me from being brainwashed. – Milo, 16 

   ___________________

Social media introduced me to lifelong friends and communities of like-minded people; in an abusive home, online social media in the 2010s provided a haven of privacy, safety, and information. I honed my creativity, nurtured my interests and developed my identity through relating and talking to people to whom I would otherwise have been totally isolated from. Also, unrestricted internet access actually taught me how to spot shady websites and inappropriate content FAR more effectively than if censorship had been at play like it is today. 
A couple of the friends I made online, as young as thirteen, were adults; and being friends with adults who knew I was a child, who practiced safe boundaries with me yet treated me with respect, helped me recognise unhealthy patterns in predatory adults. I have befriended mothers and fathers online through games and forums, and they were instrumental in preventing me being groomed by actual pedophiles. Had it not been for them, I would have wound up terribly abused by an "in real life" adult "friend". Instead, I recognised the differences in how he was treating me (infantilising yet praising) vs how my adult friends had treated me (like a human being), and slowly tapered off the friendship and safely cut contact. 
As I grew older, I found a wealth of resources on safe sex and sexual health education online. Again, if not for these discoveries, I would most certainly have wound up abused and/or pregnant as a teenager. I was never taught about consent, safe sex, menstruation, cervical health, breast health, my own anatomy, puberty, etc. as a child or teenager. What I found online-- typically on Tumblr and written with an alarming degree of normalcy-- helped me understand my body and my boundaries far more effectively than "the talk" or in-school sex ed ever did. I learned that the things that made me panic were actually normal; the ins and outs of puberty and development, and, crucially, that my comfort mattered most. I was comfortable and unashamed of being a virgin my entire teen years because I knew it was okay that I wasn't ready. When I was ready, at twenty-one, I knew how to communicate with my partner and establish safe boundaries, and knew to check in and talk afterwards to make sure we both felt safe and happy. I knew there was no judgement for crying after sex and that it didn't necessarily mean I wasn't okay. I also knew about physical post-sex care; e.g. going to the bathroom and cleaning oneself safely. 
AGAIN, I would NOT have known any of this if not for social media. AT ALL. And seeing these topics did NOT turn me into a dreaded teenage whore; if anything, they prevented it by teaching me safety and self-care. 
I also found help with depression, anxiety, and eating disorders-- learning to define them enabled me to seek help. I would not have had this without online spaces and social media. As aforementioned too, learning, sometimes through trial of fire, to safely navigate the web and differentiate between safe and unsafe sites was far more effective without censored content. Censorship only hurts children; it has never, ever helped them. How else was I to know what I was experiencing at home was wrong? To call it "abuse"? I never would have found that out. I also would never have discovered how to establish safe sexual AND social boundaries, or how to stand up for myself, or how to handle harassment, or how to discover my own interests and identity through media. The list goes on and on and on. – June, 21 

   ___________________

One of the claims that KOSA’s proponents make is that it won’t stop young people from finding the things they already want to search for. But we read dozens and dozens of comments from people who didn’t know something about themselves until they heard others discussing it—a mental health diagnosis, their sexuality, that they were being abused, that they had an eating disorder, and much, much more.  

Censorship that stops you from looking through a library is still dangerous even if it doesn’t stop you from checking out the books you already know about. It’s still a problem to stop young people in particular from finding new things that they didn’t know they were looking for.   

TAKE ACTION

TELL CONGRESS: OPPOSE THE KIDS ONLINE SAFETY ACT

KOSA Could Stop Young People from Getting Accurate News and Valuable Information 

Social media taught me to be curious. It taught me caution and trust and faith and that simply being me is enough. It brought me up where my parents failed, it allowed me to look into stories that assured me I am not alone where I am now. I would be fucking dead right now if it weren't for the stories of my fellow transgender folk out there, assuring me that it gets better.  
I'm young and I'm not smart but I know without social media, myself and plenty of the people I hold dear in person and online would not be alive. We wouldn't have news of the atrocities happening overseas that the news doesn't report on, we wouldn't have mentors to help teach us where our parents failed. - Anonymous, 16 

  ___________________ 

Through social media, I've learned about news and current events that weren't taught at school or home, things like politics or controversial topics that taught me nuance and solidified my concept of ethics. I learned about my identity and found numerous communities filled with people I could socialize with and relate to. I could talk about my interests with people who loved them just as much as I did. I found out about numerous different perspectives and cultures and experienced art and film like I never had before. My empathy and media literacy greatly improved with experience. I was also able to gain skills in gathering information and proper defences against misinformation. More technically, I learned how to organize my computer and work with files, programs, applications, etc; I could find guides on how to pursue my hobbies and improve my skills (I'm a self-taught artist, and I learned almost everything I know from things like YouTube or Tumblr for free). - Anonymous, 15 

  ___________________ 

A huge portion of my political identity has been shaped by news and information I could only find on social media because the mainstream news outlets wouldn’t cover it. (Climate Change, International Crisis, Corrupt Systems, etc.) KOSA seems to be intentionally working to stunt all of this. It’s horrifying. So much of modern life takes place on the internet, and to strip that away from kids is just another way to prevent them from formulating their own thoughts and ideas that the people in power are afraid of. Deeply sinister. I probably would have never learned about KOSA if it were in place! That’s terrifying! - Sarge, 17 

  ___________________

I’ve met many of my friends from [social media] and it has improved my mental health by giving me resources. I used to have an eating disorder and didn’t even realize it until I saw others on social media talking about it in a nuanced way and from personal experience. - Anonymous, 15 

   ___________________

Many young people told us that they’re worried KOSA will result in more biased news online, and a less diverse information ecosystem. This seems inevitable—we’ve written before that almost any content could fit into the categories that politicians believe will cause minors anxiety or depression, and so carrying that content could be legally dangerous for a platform. That could include truthful news about what’s going on in the world, including wars, gun violence, and climate change. 

“Preventing and mitigating” depression and anxiety isn’t a goal we demand of any other media outlet, and it shouldn’t be required of social media platforms. People have a right to access information—both news and opinion—in an open and democratic society, and sometimes that information is depressing or anxiety-inducing. To truly “prevent and mitigate” self-destructive behaviors, we must look beyond the media to systems that allow all humans to have self-respect, a healthy environment, and healthy relationships—not hide truthful information simply because it is upsetting.  

Young People’s Voices Matter 

While KOSA’s sponsors intend to help these young people, those who responded to the survey don’t see it that way. You may have noticed that it’s impossible to sort these complex and detailed responses into single categories—many childhood abuse victims found help as well as arts education on social media; many children connected to communities that they otherwise couldn’t reach, and learned something essential about themselves in doing so. Many understand that KOSA would endanger their privacy, and also know it could harm marginalized kids the most.  

In reading thousands of these comments, it becomes clear that social media itself was not the solution to the issues these young people experienced. What helped them was other people. Social media was where they were able to find and stay connected with those friends, communities, artists, activists, and educators. Seen this way, young people’s fear of KOSA makes sense: social media has become an essential element of their lives, and they are scared to death that if the law passes, that part of their lives will disappear. Older teens and twenty-somethings, meanwhile, worry that if the law had passed a decade ago, they never would have become the people they are today. All of these fears are reasonable.  

There were thousands more comments like those above. We hope this helps balance the conversation, because if young people’s voices are suppressed now—and if KOSA becomes law—it will be much more difficult for them to elevate their voices in the future.  

Analyzing KOSA’s Constitutional Problems In Depth 

EFF: Updates - Fri, 03/15/2024 - 3:35pm
Why EFF Does Not Think Recent Changes Ameliorate KOSA’s Censorship 

The latest version of the Kids Online Safety Act (KOSA) did not change our critical view of the legislation. The changes have led some organizations to drop their opposition to the bill, but we still believe it is a dangerous and unconstitutional censorship bill that would empower state officials to target services and online content they do not like. We respect that different groups can come to their own conclusions about how KOSA will affect everyone’s ability to access lawful speech online. EFF, however, remains steadfast in our long-held view that imposing a vague duty of care on a broad swath of online services to mitigate specific harms based on the content of online speech will result in those services imposing age verification and content restrictions. At least one group has characterized EFF’s concerns as spreading “disinformation.” We are not. But to ensure that everyone understands why EFF continues to oppose KOSA, we wanted to break down our interpretation of the bill in more detail and compare our views to those of others—both advocates and critics.  

Below, we walk through some of the most common criticisms we’ve received, along with those leveled at the bill itself, to help explain our view of its likely impacts.  

KOSA’s Effectiveness  

First, and most importantly: We have serious disagreements with KOSA’s advocates about whether it will prevent future harm to children online. We are deeply saddened by the stories so many supporters and parents have shared about how their children were harmed online. And we want to keep talking with those parents, supporters, and lawmakers about ways EFF can work with them to prevent harm to children online, just as we will continue to talk with people who advocate for the benefits of social media. We believe comprehensive privacy protections are a better way to begin addressing the harms done to young people (and adults) who have been targeted by platforms’ predatory business practices, and we have long advocated for them.  

EFF does not think KOSA is the right approach to protecting children online, however. As we’ve said before, we think that in practice, KOSA is likely to exacerbate the risks of children being harmed online because it will place barriers on their ability to access lawful speech about addiction, eating disorders, bullying, and other important topics. We also think those restrictions will stifle minors who are trying to find their own communities online. We do not think the language added to KOSA to address that censorship concern solves the problem, and we do not think that focusing KOSA’s regulation on the design elements of online services addresses the bill’s First Amendment problems either. 

Our view of KOSA’s harmful consequences is grounded in EFF’s 34-year history of both making policy for the internet and seeing how legislation plays out once it’s passed. This is also not the first time we have seen a vast difference between how a piece of legislation is promoted and what it does in practice. We saw this same dynamic recently with FOSTA/SESTA, which was promoted by politicians and the parents of child sex trafficking victims as the way to prevent future harms. Sadly, even the politicians who initially championed it now agree that the law was not only ineffective at reducing sex trafficking online but also created additional dangers for those same victims, as well as others.   

KOSA’s Duty of Care  

KOSA’s core component requires an online platform or service that is likely to be accessed by young people to “exercise reasonable care in the creation and implementation of any design feature to prevent and mitigate” various harms to minors. These enumerated harms include: 

  • mental health disorders (anxiety, depression, eating disorders, substance use disorders, and suicidal behaviors) 
  • patterns of use that indicate or encourage addiction-like behaviors  
  • physical violence, online bullying, and harassment 

Based on our understanding of the First Amendment and how all online platforms and services regulated by KOSA will navigate their legal risk, we believe that KOSA will lead to broad online censorship of lawful speech, including content designed to help children navigate and overcome the very same harms KOSA identifies.  

A line of U.S. Supreme Court cases involving efforts to prevent book sellers from disseminating certain speech, which resulted in broad, unconstitutional censorship, shows why KOSA is unconstitutional. 

In Smith v. California, the Supreme Court struck down an ordinance that made it a crime for a bookseller to possess obscene material. The court ruled that even though obscene material is not protected by the First Amendment, the ordinance’s imposition of liability based on the mere presence of that material had a broader censorious effect because a bookseller “will tend to restrict the books he sells to those he has inspected; and thus the State will have imposed a restriction upon the distribution of constitutionally protected, as well as obscene literature.” The court recognized that the “ordinance tends to impose a severe limitation on the public’s access to constitutionally protected material” because a distributor of others’ speech will react by limiting access to any borderline content that could get it into legal trouble.  

In Bantam Books, Inc. v. Sullivan, the Supreme Court struck down a government effort to limit the distribution of material that a state commission had deemed objectionable to minors. The commission would send notices to book distributors that identified various books and magazines they believed were objectionable and sent copies of their lists to local and state law enforcement. Book distributors reacted to these notices by stopping the circulation of the materials identified by the commission. The Supreme Court held that the commission’s efforts violated the First Amendment and once more recognized that by targeting a distributor of others’ speech, the commission’s “capacity for suppression of constitutionally protected publications” was vast.  

KOSA’s duty of care creates a more far-reaching censorship threat than those that the Supreme Court struck down in Smith and Bantam Books. KOSA makes online services that host our digital speech liable should they fail to exercise reasonable care in removing or restricting minors’ access to lawful content on the topics KOSA identifies. KOSA is worse than the ordinance in Smith because the First Amendment generally protects speech about addiction, suicide, eating disorders, and the other topics KOSA singles out.  

We think that online services will react to KOSA’s new liability in much the same way as the bookstore in Smith and the book distributor in Bantam Books: They will limit minors’ access to, or simply remove, any speech that might touch on the topics KOSA identifies, even when much of that speech is protected by the First Amendment. Worse, online services have even less ability to read through the millions (or sometimes billions) of pieces of content on their services than a bookseller or distributor who had to review hundreds or thousands of books. To comply, we expect that platforms will deploy blunt tools, either by gating off entire portions of their sites to prevent minors from accessing them (more on this below) or by deploying automated filters that will over-censor speech, including speech that may be beneficial to minors seeking help with addictions or other problems KOSA identifies. (Regardless of their claims, it is not possible for a service to accurately pinpoint the content KOSA describes with automated tools.) 

But as the Supreme Court ruled in Smith and Bantam Books, the First Amendment prohibits Congress from enacting a law that results in such broad censorship precisely because it limits the distribution of, and access to, lawful speech.  

Moreover, the fact that KOSA singles out certain legal content—for example, speech concerning bullying—means that the bill creates content-based restrictions that are presumptively unconstitutional. The government bears the burden of showing that KOSA’s content restrictions advance a compelling government interest, are narrowly tailored to that interest, and are the least speech-restrictive means of advancing that interest. KOSA cannot satisfy this exacting standard.  

EFF agrees that the government has a compelling interest in protecting children from being harmed online. But KOSA’s broad requirement that platforms and services face liability for showing speech concerning particular topics to minors is not narrowly tailored to that interest. As explained above, the resulting censorship will effectively limit access to a wide range of lawful speech on topics such as addiction, bullying, and eating disorders. And the fact that KOSA will sweep up so much speech shows that it is far from the least speech-restrictive alternative.  

Why the Rule of Construction Doesn’t Solve the Censorship Concern 

In response to censorship concerns about the duty of care, KOSA’s authors added a rule of construction stating that nothing in the duty of care “shall be construed to require a covered platform to prevent or preclude:”  

  • minors from deliberately or independently searching for content, or 
  • the platforms or services from providing resources that prevent or mitigate the harms KOSA identifies, “including evidence-based information and clinical resources.” 

We understand that some read this language as a safeguard for online services: it limits their liability if a minor happens across information on the topics KOSA identifies, and consequently, platforms hosting content aimed at mitigating addiction, bullying, or other identified harms can take comfort that they will not be sued under KOSA. 

But EFF does not believe the rule of construction will limit KOSA’s censorship, in either a practical or a constitutional sense. As a practical matter, it’s not clear how an online service could rely on the rule of construction’s safeguards given the diverse range of content it likely hosts.  

Take, for example, an online forum in which users discuss drug and alcohol abuse. It is likely to contain a range of content and views from its users, some of which might describe addiction, drug use, and treatment, including both negative and positive views on those points. KOSA’s rule of construction might protect the forum from liability for a minor’s initial search that leads them to the forum. But once that minor starts interacting with the forum, they are likely to encounter the types of content KOSA identifies, and the service may face liability if there is a later claim that the minor was harmed. In short, KOSA does not clarify that the initial search precludes liability should the minor interact with the forum and experience harm later. It is also not clear how a service would prove that the minor found the forum via a search. 

Further, the rule of construction’s protections for the forum, which apply only if it provides solely resources regarding preventing or mitigating drug and alcohol abuse based on evidence-based information and clinical resources, are unlikely to be helpful. That provision assumes that the forum has the resources to review all existing content on the forum and effectively screen all future content to only permit user-generated content concerning mitigation or prevention of substance abuse. The rule of construction also requires the forum to have the subject-matter expertise necessary to judge what content is or isn’t clinically correct and evidence-based. And even that assumes that there is broad scientific consensus about all aspects of substance abuse, including its causes (which there is not). 

Given that practical uncertainty and the potential hazard of getting anything wrong when it comes to minors’ access to that content, we think that the substance abuse forum will react much like the bookseller and distributor in the Supreme Court cases did: It will simply take steps to limit the ability for minors to access the content, a far easier and safer alternative than making case-by-case expert decisions regarding every piece of content on the forum. 

EFF also does not believe that the Supreme Court’s decisions in Smith and Bantam Books would have been different if there had been similar KOSA-like safeguards incorporated into the regulations at issue. For example, even if the obscenity ordinance at issue in Smith had made an exception letting bookstores sell scientific books with detailed pictures of human anatomy, the bookstore still would have to exhaustively review every book it sold and separate the obscene books from the scientific. The Supreme Court rejected such burdens as offensive to the First Amendment: “It would be altogether unreasonable to demand so near an approach to omniscience.” 

The near-impossible standard required to review such a large volume of content, coupled with liability for letting any harmful content through, is precisely the scenario that the Supreme Court feared. “The bookseller's self-censorship, compelled by the State, would be a censorship affecting the whole public, hardly less virulent for being privately administered,” the court wrote in Smith. “Through it, the distribution of all books, both obscene and not obscene, would be impeded.” 

Those same First Amendment concerns are exponentially greater for online services hosting everyone’s speech. That is why we do not believe that KOSA’s rule of construction will prevent the broader censorship that results from the bill’s duty of care. 

Finally, we do not believe the rule of construction helps the government overcome its burden on strict scrutiny to show that KOSA is narrowly tailored or restricts less speech than necessary. Instead, the rule of construction actually heightens KOSA’s violation of the First Amendment by preferencing certain viewpoints over others. The rule of construction here creates a legal preference for viewpoints that seek to mitigate the various identified harms, and punishes viewpoints that are neutral or even mildly positive of those harms. While EFF agrees that such speech may be awful, the First Amendment does not permit the government to make these viewpoint-based distinctions without satisfying strict scrutiny. It cannot meet that heavy burden with KOSA.  

KOSA's Focus on Design Features Doesn’t Change Our First Amendment Concerns 

KOSA supporters argue that because the duty of care and other provisions of KOSA concern an online service or platform’s design features, the bill raises no First Amendment issues. We disagree.  

It’s true enough that KOSA creates liability for services that fail to “exercise reasonable care in the creation and implementation of any design feature” to prevent the bill’s enumerated harms. But the features themselves are not what KOSA's duty of care deems harmful. Rather, the provision specifically links the design features to minors’ access to the enumerated content that KOSA deems harmful. In that way, the design features serve as little more than a distraction. The duty of care provision is not concerned per se with any design choice generally, but only with those design choices that fail to mitigate minors’ access to information about depression, eating disorders, and the other identified content. 

Once again, the Supreme Court’s decision in Smith shows why it’s incorrect to argue that KOSA’s regulation of design features avoids the First Amendment concerns. If the ordinance at issue in Smith had regulated the way in which bookstores were designed, and had imposed liability based on where booksellers placed certain offending books in their stores—for example, in the front window—we suspect that the Supreme Court would have recognized, rightly, that the design restriction was little more than an indirect effort to unconstitutionally regulate the content. The same holds true for KOSA.  


KOSA Doesn’t “Mandate” Age-Gating, But It Heavily Pushes Platforms to Do So and Provides Few Other Avenues to Comply 

KOSA was amended in May 2023 to include language that was meant to ease concerns about age verification; in particular, it included explicit language that age verification is not required under the “Privacy Protections” section of the bill. The bill now states that a covered platform is not required to implement an age gating or age verification functionality to comply with KOSA.  

EFF acknowledges the text of the bill and has been clear in our messaging that nothing in the proposal explicitly requires services to implement age verification. Yet it's hard to see this change as anything other than a technical dodge that will be contradicted in practice.  

KOSA creates liability for any regulated platform or service that presents certain content to minors that the bill deems harmful to them. To comply with that new liability, those platforms and services’ options are limited. As we see them, the options are either to filter content for known minors or to gate content so only adults can access it. In either scenario, the linchpin is the platform knowing every user’s age so it can identify its minor users and either filter the content they see or exclude them from any content that could be deemed harmful under the law.  

There’s really no way to do that without implementing age verification. Regardless of what this section of the bill says, there’s no way for platforms to block either categories of content or design features for minors without knowing the minors are minors.  

We also don’t think KOSA lets platforms claim ignorance if they take steps to never learn the ages of their users. If a 16-year-old user misidentifies herself as an adult and the platform does not use age verification, it could still be held liable because it should have “reasonably known” her age. The platform’s ignorance thus could work against it later, perversely incentivizing the services to implement age verification at the outset. 

EFF Remains Concerned About State Attorneys General Enforcing KOSA 

Another change that KOSA’s sponsors made this year was to remove the ability of state attorneys general to enforce KOSA’s duty of care standard. We respect that some groups believe this addresses concerns that some states would misuse KOSA to target minors’ access to any information that state officials dislike, including LGBTQIA+ or sex education information. We disagree that this modest change prevents this harm. KOSA still lets state attorneys general enforce other provisions, including a section requiring certain “safeguards for minors.” Among the safeguards is a requirement that platforms “limit design features” that lead to minors spending more time on a service, including the ability to scroll through content, notifications of other content or messages, and autoplaying content.  

But letting an attorney general enforce KOSA’s requirement of design safeguards could be used as a proxy for targeting services that host content certain officials dislike. The attorney general would simply target the same content or service it disfavored, but instead of claiming that it violated KOSA’s duty of care, the official would argue that the service failed to prevent harmful design features that minors in their state used, such as notifications or endless scrolling. We think the outcome will be the same: states are likely to use KOSA to target speech about sexual health, abortion, LGBTQIA+ topics, and a variety of other information. 

KOSA Applies to Broad Swaths of the Internet, Not Just the Big Social Media Platforms 

Many sites, platforms, apps, and games would have to follow KOSA’s requirements. It applies to “an online platform, online video game, messaging application, or video streaming service that connects to the internet and that is used, or is reasonably likely to be used, by a minor.”  

There are some important exceptions—it doesn’t apply to services that provide only direct or group messages, such as Signal, or, generally, to schools, libraries, nonprofits, or ISPs like Comcast. This is good—some critics of KOSA have been concerned that it would apply to websites like Archive of Our Own (AO3), a fanfiction site that allows users to read and share their work, but AO3 is a nonprofit, so it would not be covered.  

But a wide variety of niche, for-profit online services would still be regulated by KOSA. Ravelry, for example, is an online platform focused on knitters, but it is a business.   

And it is an open question whether the comment and community portions of major mainstream news and sports websites are subject to KOSA. The bill exempts news and sports websites, with the huge caveat that they are exempt only so long as they are “not otherwise an online platform.” KOSA defines “online platform” as “any public-facing website, online service, online application, or mobile application that predominantly provides a community forum for user generated content.” It’s easily arguable that the New York Times’ or ESPN’s comment and forum sections are predominantly designed as places for user-generated content. Would KOSA apply only to those interactive spaces or does the exception to the exception mean the entire sites are subject to the law? The language of the bill is unclear. 

Not All of KOSA’s Critics Are Right, Either 

Just as we don’t agree on KOSA’s likely outcomes with many of its supporters, we also don’t agree with every critic regarding KOSA’s consequences. This isn’t surprising—the law is broad, and a major complaint is that it remains unclear how its vague language would be interpreted. So let’s address some of the more common misconceptions about the bill. 

Large Social Media May Not Entirely Block Young People, But Smaller Services Might 

Some people have concerns that KOSA will result in minors not being able to use social media at all. We believe a more likely scenario is that the major platforms would offer different experiences to different age groups.  

They already do this in some ways—Meta currently places teens into the most restrictive content control setting on Instagram and Facebook. The company specifically updated these settings for many of the categories included in KOSA, including suicide, self-harm, and eating disorder content. Its update describes precisely what we worry KOSA would require by law: “While we allow people to share content discussing their own struggles with suicide, self-harm and eating disorders, our policy is not to recommend this content and we have been focused on ways to make it harder to find.” TikTok also has blocked some videos for users under 18. To be clear, this content filtering as a result of KOSA would be harmful and would violate the First Amendment.  

Though large platforms will likely react this way, many smaller platforms will not be capable of this kind of content filtering. They very well may decide blocking young people entirely is the easiest way to protect themselves from liability. We cannot know how every platform will react if KOSA is enacted, but smaller platforms that do not already use complex automated content moderation tools will likely find it financially burdensome to implement both age verification tools and content moderation tools.  

KOSA Won’t Necessarily Make Your Real Name Public by Default 

One recurring fear that critics of KOSA have shared is that they will no longer be able to use platforms anonymously. We believe this fear is warranted, but there is some nuance to it. No one should have to hand over their driver's license—or, worse, provide biometric information—just to access lawful speech on websites. But there's nothing in KOSA that would require online platforms to publicly tie your real name to your username.  

Still, once someone shares information to verify their age, there’s no way for them to be certain that the data they’re handing over is not going to be retained and used by the website, or further shared or even sold. As we’ve said, KOSA doesn't technically require age verification, but we think it’s the most likely outcome. Users still will be forced to trust that the website they visit, or its third-party verification service, won’t misuse their private data, including their name, age, or biometric information. Given the numerous data privacy blunders we’ve seen from companies like Meta in the past, and the general concern with data privacy that Congress seems to share with the general public (and with EFF), we believe this outcome to be extremely dangerous. Simply put: Sharing your private info with a company doesn’t necessarily make it public, but it makes it far more likely to become public than if you hadn’t shared it in the first place.   

We Agree With Supporters: Government Should Study Social Media’s Effects on Minors 

We know tensions are high; this is an incredibly important topic, and an emotional one. EFF does not have all the right answers regarding how to address the ways in which young people can be harmed online. That is why we agree with KOSA’s supporters that the government should conduct much greater research on these issues. We believe that comprehensive fact-finding is the first step to identifying both the problems and the legislative solutions. A provision of KOSA does require the National Academy of Sciences to research these issues and issue reports to the public. But KOSA gets this process backwards. It creates solutions to general concerns about young people being harmed without first doing the work necessary to show that the bill’s provisions address those problems. As we have said repeatedly, we do not think KOSA will address harms to young people online. We think it will exacerbate them.  

Even if your stance on KOSA is different from ours, we hope we are all working toward the same goal: an internet that supports freedom, justice, and innovation for all people of the world. We don’t believe KOSA will get us there, but neither will ad hominem attacks. To that end, we look forward to more detailed analyses of the bill from its supporters, and to continuing thoughtful engagement from anyone interested in working on this critical issue. 


San Diego City Council Breaks TRUST

EFF: Updates - Fri, 03/15/2024 - 2:54pm

In a stunning reversal, the San Diego City Council voted earlier this year to cut many of the provisions of the popular Transparent & Responsible Use of Surveillance Technology (TRUST) ordinance that sought to ensure public transparency for law enforcement surveillance technologies. 

Similar to other Community Control Of Police Surveillance (CCOPS) ordinances, the TRUST ordinance was intended to ensure that each police surveillance technology would be subject to basic democratic oversight in the form of public disclosures and city council votes. The TRUST ordinance was fought for by a coalition of community organizations, including several members of the Electronic Frontier Alliance, responding to surprise smart streetlight surveillance that was not put under public or city council review.  

The TRUST ordinance was passed one and a half years ago, but law enforcement advocates immediately set up roadblocks to implementation. Police unions, for example, insisted that some of the provisions around accountability for misuse of surveillance needed to be halted after passage to ensure they didn’t run into conflict with union contracts. The city kept the ordinance unapplied and untested, and then in the late summer of 2023, a little over a year after passage, the mayor proposed a package of changes that would gut the ordinance. This included exemption of a long list of technologies, including ARJIS databases and record management system data storage. These changes were later approved this past January.  

But use of these databases should require, for example, auditing to protect data security for city residents. There also should be limits on how police share data with federal agencies and other law enforcement agencies, which might use that data to criminalize San Diego residents for immigration status, gender-affirming health care, or exercise of reproductive rights that are not criminalized in the city or state. The overall TRUST ordinance stands, though partly defanged, with many carve-outs for technologies that the San Diego police will not need to bring before democratically elected lawmakers and the public. 

Now, opponents of the TRUST ordinance, emboldened by their recent victory, are vowing to introduce even more amendments to further erode the ordinance, so that San Diegans won’t have a chance to know how their local law enforcement surveils them and no democratic body will be required to consent to the technologies, new or old. The members of the TRUST Coalition are not standing down, however, and will continue to fight to defend the standing portions of the TRUST ordinance and to regain the wins for public oversight that were lost. 

As Lilly Irani, of Electronic Frontier Alliance member and TRUST Coalition member Tech Workers Coalition San Diego, has said:

“City Council members and the mayor still have time to make this right. And we, the people, should hold our elected representatives accountable to make sure they maintain the oversight powers we currently enjoy — powers the mayor’s current proposal erodes.” 

If you live or work in San Diego, it’s important to make it clear to city officials that San Diegans don’t want to give police a blank check to harass and surveil them. Such dangerous technology needs basic transparency and democratic oversight to preserve our privacy, our speech, and our personal safety. 

5 Big Unanswered Questions About the TikTok Bill

EFF: Updates - Fri, 03/15/2024 - 2:30pm

With strong bipartisan support, the U.S. House voted 352 to 65 to pass HR 7521 this week, a bill that would ban TikTok nationwide if its Chinese owner doesn’t sell the popular video app. The TikTok bill’s future in the U.S. Senate isn’t yet clear, but President Joe Biden has said he would sign it into law if it reaches his desk. 

The speed at which lawmakers have moved to advance a bill with such a significant impact on speech is alarming. It has given many of us — including, seemingly, lawmakers themselves — little time to consider the actual justifications for such a law. In isolation, parts of the argument might sound somewhat reasonable, but lawmakers still need to clear up their confused case for banning TikTok. Before throwing their support behind the TikTok bill, Americans should be able to understand it fully, something that they can start doing by considering these five questions. 

1. Is the TikTok bill about privacy or content?

Something that has made HR 7521 hard to talk about is the inconsistent way its supporters have described the bill’s goals. Is this bill supposed to address data privacy and security concerns? Or is it about the content TikTok serves to its American users? 

From what lawmakers have said, however, it seems clear that this bill is strongly motivated by content on TikTok that they don’t like. When describing the "clear threat" posed by foreign-owned apps, the House report on the bill cites the ability of adversary countries to "collect vast amounts of data on Americans, conduct espionage campaigns, and push misinformation, disinformation, and propaganda on the American public."

This week, the bill’s Republican sponsor Rep. Mike Gallagher told PBS Newshour that the “broader” of the two concerns TikTok raises is “the potential for this platform to be used for the propaganda purposes of the Chinese Communist Party." On that same program, Representative Raja Krishnamoorthi, a Democratic co-sponsor of the bill, similarly voiced content concerns, claiming that TikTok promotes “drug paraphernalia, oversexualization of teenagers” and “constant content about suicidal ideation.”

2. If the TikTok bill is about privacy, why aren’t lawmakers passing comprehensive privacy laws? 

It is indeed alarming how much information TikTok and other social media platforms suck up from their users, information that is then collected not just by governments but also by private companies and data brokers. This is why the EFF strongly supports comprehensive data privacy legislation, a solution that directly addresses privacy concerns. This is also why it is hard to take lawmakers at their word about their privacy concerns with TikTok, given that Congress has consistently failed to enact comprehensive data privacy legislation and this bill would do little to stop the many other ways adversaries (foreign and domestic) collect, buy, and sell our data. Indeed, the TikTok bill has no specific privacy provisions in it at all.

It has been suggested that what makes TikTok different from other social media companies is how its data can be accessed by a foreign government. Here, too, TikTok is not special. China is not unique in requiring companies in the country to provide information to the government upon request. In the United States, Section 702 of the FISA Amendments Act, which is up for renewal, authorizes the mass collection of communication data. In 2021 alone, the FBI conducted up to 3.4 million warrantless searches through Section 702. The U.S. government can also demand user information from online providers through National Security Letters, which can both require providers to turn over user information and gag them from speaking about it. While the U.S. cannot control what other countries do, if this is a problem lawmakers are sincerely concerned about, they could start by fighting it at home.

3. If the TikTok bill is about content, how will it avoid violating the First Amendment? 

Whether TikTok is banned or sold to new owners, millions of people in the U.S. will no longer be able to get information and communicate with each other as they presently do. Indeed, one of the given reasons to force the sale is so TikTok will serve different content to users, specifically when it comes to Chinese propaganda and misinformation.

The First Amendment to the U.S. Constitution rightly makes it very difficult for the government to force such a change legally. To restrict content, U.S. laws must be the least speech-restrictive way of addressing serious harms. The TikTok bill’s supporters have vaguely suggested that the platform poses national security risks. So far, however, there has been little public justification that the extreme measure of banning TikTok (rather than addressing specific harms) is properly tailored to prevent these risks. And it has been well-established law for almost 60 years that U.S. people have a First Amendment right to receive foreign propaganda. People in the U.S. deserve an explicit explanation of the immediate risks posed by TikTok — something the government will have to do in court if this bill becomes law and is challenged.

4. Is the TikTok bill a ban or something else? 

Some have argued that the TikTok bill is not a ban because it would only ban TikTok if owner ByteDance does not sell the company. However, as we noted in the coalition letter we signed with the American Civil Liberties Union, the government generally cannot “accomplish indirectly what it is barred from doing directly, and a forced sale is the kind of speech punishment that receives exacting scrutiny from the courts.” 

Furthermore, a forced sale based on objections to content acts as a backdoor attempt to control speech. Indeed, one of the very reasons Congress wants a new owner is because it doesn’t like China’s editorial control. And any new ownership will likely bring changes to TikTok. In the case of Twitter, it has been very clear how a change of ownership can affect the editorial policies of a social media company. Private businesses are free to decide what information users see and how they communicate on their platforms, but when the U.S. government wants to do so, it must contend with the First Amendment. 

5. Does the U.S. support the free flow of information as a fundamental democratic principle? 

Until now, the United States has championed the free flow of information around the world as a fundamental democratic principle and called out other nations when they have shut down internet access or banned social media apps and other online communications tools. In doing so, the U.S. has deemed restrictions on the free flow of information to be undemocratic.

In 2021, the U.S. State Department formally condemned a ban on Twitter by the government of Nigeria. “Unduly restricting the ability of Nigerians to report, gather, and disseminate opinions and information has no place in a democracy,” a department spokesperson wrote. “Freedom of expression and access to information both online and offline are foundational to prosperous and secure democratic societies.”

Whether it’s in Nigeria, China, or the United States, we couldn’t agree more. Unfortunately, if the TikTok bill becomes law, the U.S. will lose much of its moral authority on this vital principle.

TAKE ACTION

TELL CONGRESS: DON'T BAN TIKTOK

Location Data Tracks Abortion Clinic Visits. Here’s What to Know

EFF: Updates - Fri, 03/15/2024 - 1:59pm

Our concerns about the selling and misuse of location data for those seeking reproductive and gender healthcare are escalating amid a recent wave of cases and incidents demonstrating that the digital trail we leave is being used by anti-abortion activists.

The good news is some states and tech companies are taking steps to better protect location data privacy, including information that endangers people needing or seeking information about reproductive and gender-affirming healthcare. But we know more must be done—by pharmacies, our email providers, and lawmakers—to plug gaping holes in location data protection.

Location data is highly sensitive, as it paints a picture of our daily lives—where we go, who we visit, when we seek medical care, or what clinics we visit. That’s what makes it so attractive to data brokers and law enforcement in states outlawing abortion and gender-affirming healthcare and those seeking to exploit such data for ideological or commercial purposes.

What we’re seeing is deeply troubling. Sen. Ron Wyden recently disclosed that vendor Near Intelligence allegedly gathered location data of people’s visits to nearly 600 Planned Parenthood locations across 48 states, without consent. It sold that data to an anti-abortion group, which used it in a massive anti-abortion ad campaign. The Wisconsin-based group used the geofenced data to send mobile ads to people who visited the clinics.

It’s hardly a leap to imagine that law enforcement and bounty hunters in anti-abortion states would gladly buy the same data to find out who is visiting Planned Parenthood clinics and try to charge and imprison women, their families, doctors, and caregivers. That’s the real danger of an unregulated data broker industry; anyone can buy what’s gathered from warrantless surveillance, for whatever nefarious purpose they choose.

For example, police in Idaho, where abortion is illegal, used cell phone data in an investigation against an Idaho woman and her son charged with kidnapping. The data showed that they had taken the son’s minor girlfriend to Oregon, where abortion is legal, to obtain an abortion.

The exploitation of location data is not the only problem. Information about prescription medicines we take is not protected against law enforcement requests. The nation’s eight largest pharmacy chains, including CVS, Walgreens, and Rite Aid, have routinely turned over prescription records of thousands of Americans to law enforcement agencies or other government entities secretly without a warrant, according to a congressional inquiry.

Many people may not know that their prescription records can be obtained by law enforcement without too much trouble. There’s not much standing between someone’s self-managed abortion medication and a law enforcement records demand. In April the U.S. Health and Human Services Department proposed a rule that would prevent healthcare providers and insurers from giving information to state officials trying to prosecute some seeking or providing a legal abortion. A final rule has not yet been published.

Exploitation of location and healthcare data to target communities could easily expand to other groups working to protect bodily autonomy, especially those most likely to suffer targeted harassment and bigotry. With states passing and proposing bills restricting gender-affirming care and state law enforcement officials pursuing medical records of transgender youth across state lines, it’s not hard to imagine them buying or using location data to find people to prosecute.

To better protect people against police access to sensitive health information, lawmakers in a few states have taken action. In 2022, California enacted two laws protecting abortion data privacy and preventing California companies from sharing abortion data with out-of-state entities.

Then, last September the state enacted a shield law prohibiting California-based companies, including social media and tech companies, from disclosing patients’ private communications regarding healthcare that is legally protected in the state.

Massachusetts lawmakers have proposed the Location Shield Act, which would prohibit the sale of cellphone location information to data brokers. The act would make it harder to trace the path of those traveling to Massachusetts for abortion services.

Of course, tech companies have a huge role to play in location data privacy. EFF was glad when Google said in 2022 it would delete users’ location history for visits to medical facilities, including abortion clinics and counseling and fertility centers. Google pledged that when the location history setting on a device was turned on, it would delete entries for particularly personal places like reproductive health clinics soon after such a visit.

But a study by Accountable Tech testing Google’s pledge found that the company wasn’t living up to its promises and continued to collect and retain location data from individuals visiting abortion clinics. Accountable Tech reran the study in late 2023 and the results were again troubling—Google still retained location search query data for some visits to Planned Parenthood clinics. It appears users will have to manually delete their location search history to remove information about the routes they take to sensitive locations; it doesn’t happen automatically.

Late last year, Google announced plans to move saved Timeline entries in Google Maps to users’ devices. Users who want to keep the entries could choose to back up the data to the cloud, where it would be automatically encrypted and out of reach even to Google.

These changes would appear to make it much more difficult, if not impossible, for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years. But it is uncertain when these features will arrive, though Google said in December they’re “coming soon.”

Google should implement the changes sooner rather than later. In the meantime, those seeking information about reproductive or gender-affirming healthcare can find tips on how to protect themselves in our Surveillance Self-Defense guide.

How to Figure Out What Your Car Knows About You (and Opt Out of Sharing When You Can)

EFF: Updates - Fri, 03/15/2024 - 12:56pm

Cars collect a lot of our personal data, and car companies disclose a lot of that data to third parties. It’s often unclear what’s being collected and what’s being shared, and with whom. A recent New York Times article highlighted how data is shared by G.M. with insurance companies, sometimes without clear knowledge from the driver. If you’re curious about what your car knows about you, you might be able to find out. In some cases, you may even be able to opt out of some of that sharing of data.

Why Your Car Collects and Shares Data

A car (and its app, if you installed one on your phone) can collect all sorts of data in the background, with and without you realizing it. This in turn may be shared for a wide variety of purposes, including advertising and risk assessment for insurance companies. The list of data collected is long and depends on the car’s make, model, and trim. But if you look through any car maker’s privacy policy, you’ll see some trends:

  • Diagnostics data, sometimes referred to as “vehicle health data,” may be used internally for quality assurance, research, recall tracking, service issues, and similar unsurprising car-related purposes. This type of data may also be shared with dealers or repair companies for service.
  • Location information may be collected for emergency services, mapping, and to catalog other environmental information about where a car is operated. Some cars may give you access to the vehicle’s location in the app.
  • Some usage data may be shared or used internally for advertising. Your daily driving or car maintenance habits, alongside location data, is a valuable asset to the targeted advertising ecosystem. 
  • All of this data could be shared with law enforcement.
  • Information about your driving habits, sometimes referred to as “driving data” or “driver behavior information,” may be shared with insurance companies and used to alter your premiums. This can range from odometer readings to braking and acceleration statistics, and even data about what time of day you drive.

Surprise insurance sharing is the thrust of The New York Times article, and certainly not the only problem with car data. We've written previously about how insurance companies offer discounts for customers who opt into a usage-based insurance program. Every state except California currently allows the use of telematics data for insurance rating, but privacy protections for this data vary widely across states.

When you sign up directly through an insurer, these opt-in insurance programs have a pretty clear tradeoff and sign-up process, and the insurer will likely send you a physical device that plugs into your car’s OBD port and collects and transmits data back to the insurer.

But some cars have their own internal systems for sharing information with insurance companies, piggybacking off an app you may have installed or the car’s own internet connection. Many of these programs operate behind dense legalese. You may have accidentally “agreed” to such sharing without realizing it while buying a new car, likely in a state of exhaustion and excitement after finally completing a gauntlet of finance and legal forms.

This gets more confusing: car makers use different terms for their insurance sharing programs. Some, like Toyota’s “Insure Connect,” are pretty obviously named. But others, like Honda, tuck information about sharing with a data broker (that then shares with insurance companies) inside a privacy policy after you enable its “Driver Feedback” feature. Others might include the insurance sharing opt-in alongside broader services you might associate more with safety or theft, like G.M.’s OnStar, Subaru’s Starlink, and Volkswagen’s Car-Net.

The amount of data shared differs by company, too. Some car makers might share only small amounts of data, like an odometer reading, while others might share specific details about driving habits.

That's just the insurance data sharing. There's little doubt that many cars sell other data for behavioral advertising, and like the rest of that industry, it's nearly impossible to track exactly where your data goes and how it's used.

See What Data Your Car Has (and Stop the Sharing)

This is a general guide to seeing what your car collects and who it shares it with. It does not include information about specific scenarios, like intimate partner violence, that may raise distinctive driver privacy issues.

See How Your Car Handles (Data)
Start by seeing what your car is equipped to collect using Privacy4Cars’ Vehicle Privacy Report. Once you enter your car’s VIN, the site provides a rough idea of what sorts of data your car collects. It's also worth reading about your car manufacturer’s more general practices on Mozilla's Privacy Not Included site.

Check the Privacy Options In Your Car’s Apps and Infotainment System
If you use an app for your car, head into the app’s settings and look for any sort of data sharing options. Look for settings like “Data Privacy” or “Data Usage.” When possible, opt out of sharing any data with third parties, or for behavioral advertising. As annoying as it may be, it’s important to read carefully here so you don’t accidentally disable something you want, like a car’s SOS feature. Be mindful that, at least according to Mozilla’s report on Tesla, opting out of certain data sharing might someday make the car undriveable. Now’s also a good time to disable ad tracking on your phone.

When it comes to sharing with insurance companies, you’re looking for an option that may be something obvious, like Toyota’s “Insure Connect,” or less obvious, like Kia’s “Driving Score.” If your car’s app has any sort of driver scoring or feedback option—some other names include GM’s “Smart Driver,” Honda’s “Driver Feedback,” or Mitsubishi’s “Driving Score”—there’s a chance it’s sharing that data with an insurance company. Check for these options in both the app and the car’s infotainment system.

If you did accidentally sign up for sharing data with insurance companies, you may want to call your insurance company to see how doing so may affect your premiums. Depending on your driving habits, your premiums might go up or down, and in either case you don’t want a surprise bill.

File a Privacy Request with the Car Maker
Next, file a privacy request with the car manufacturer so you can see exactly what data the company has collected about you. Some car makers will provide this to anyone who asks. Others might only respond to requests from residents of states with a consumer data privacy law that requires their response. The International Association of Privacy Professionals has published this list of states with such laws.

In these states, you have a “right to know” or “right to access” your data, which requires the company to send you a copy of what personal information it collected about you. Some of these states also guarantee “data portability,” meaning the right to access your data in a machine-readable format. File one of these requests, and you should receive a copy of your data. In some states, you can also file a request for the car maker to not sell or share your information, or to delete it. While the car maker might not be legally required to respond to your request if you're not from a state with these privacy rights, it doesn’t hurt to ask anyway.

Every company tends to word these requests a little differently, but you’re looking for options to get a copy of your data, and ask them to stop sharing it. This typically requires filling out a separate request form for each type of request.

Here are the privacy request pages for the major car brands:

Sometimes, you will need to confirm the request in an email, so be sure to keep an eye on your inbox.

Check for Data On Popular Data Brokers Known to Share with Insurers
Finally, request your data from data brokers known to hand car data to insurers. For example, do so with the two companies mentioned in The New York Times’ article: 

Now, you wait. In most states, within 45 to 90 days you should receive an email from the car maker, and another from the data brokers, which will often include a link to your data. You will typically get a CSV file, though it may also be a PDF, XLS, or even a folder with a whole webpage and an HTML file. If you don't have any sort of spreadsheet software on your computer, you might struggle to open it up, but most of the files you get can be opened in free programs, like Google Sheets or LibreOffice.

Without a national law that puts privacy first, there is little that most people can do to stop this sort of data sharing. Moreover, the steps above clearly require far too much effort for most people to take. That’s why we need much more than these consumer rights to know, to delete, and to opt out of disclosure: we also need laws that automatically require corporations to minimize the data they process about us, and to get our opt-in consent before processing our data. As for car insurers, we’ve outlined exactly what sort of guardrails we’d like to see here.

As The New York Times' reporting revealed, many people were surprised to learn how their data is collected, disclosed, and used, even if there was an opt-in consent screen. This is a clear indication that car makers need to do better. 

Improving C++

Schneier on Security - Fri, 03/15/2024 - 7:05am

C++ guru Herb Sutter writes about how we can improve the programming language for better security.

The immediate problem “is” that it’s Too Easy By Default™ to write security and safety vulnerabilities in C++ that would have been caught by stricter enforcement of known rules for type, bounds, initialization, and lifetime language safety.

His conclusion:

We need to improve software security and software safety across the industry, especially by improving programming language safety in C and C++, and in C++ a 98% improvement in the four most common problem areas is achievable in the medium term. But if we focus on programming language safety alone, we may find ourselves fighting yesterday’s war and missing larger past and future security dangers that affect software written in any language...

Trump allies plan to root out climate science at ‘woke’ Pentagon

ClimateWire News - Fri, 03/15/2024 - 6:45am
A blueprint for a second Trump term would put the White House in charge of the National Defense Strategy, rather than the Defense Department.

Senator slams Ex-Im Bank over $500M loan to Bahrain oil project

ClimateWire News - Fri, 03/15/2024 - 6:44am
The bank approved the financing Thursday despite efforts by the White House to rein in fossil fuel loans.

Chamber joins legal salvo against SEC over climate disclosure rule

ClimateWire News - Fri, 03/15/2024 - 6:43am
A lawsuit filed Thursday by the U.S. Chamber of Commerce is the sixth challenge to the new regulation.

Senators accuse US Chamber of ‘lying’ about climate action

ClimateWire News - Fri, 03/15/2024 - 6:42am
The Democrats say a Chamber of Commerce climate task force may have been a mirage. The group denies the claims.

Google pledges $35M for carbon removal ‘challenge’

ClimateWire News - Fri, 03/15/2024 - 6:42am
The search giant is the first company to participate in a federal program that aims to boost a nascent industry that sucks CO2 from the air.

French oil major faces climate lawsuit

ClimateWire News - Fri, 03/15/2024 - 6:41am
A farmer in Belgium says TotalEnergies shares part of the blame for rising temperatures that are ruining his crops.

Arizona county confirms 645 heat-associated deaths in 2023

ClimateWire News - Fri, 03/15/2024 - 6:40am
The report said two-thirds of Maricopa County's heat-related deaths in 2023 were people 50 years or older, and 71 percent were on days with heat warnings.

Mass. beach town spends $600K on ‘sacrificial dunes’ that washed away

ClimateWire News - Fri, 03/15/2024 - 6:36am
A group in Salisbury brought in 14,000 tons of sand for a project completed just three days before Sunday's storm clobbered southern New England.

VW wants EU to soften emissions targets the carmaker can’t hit

ClimateWire News - Fri, 03/15/2024 - 6:35am
“It doesn’t make sense that the industry has to pay penalties when the framework conditions for the EV ramp up aren’t in place,” the company CEO said.

Farmers tie up Belgium port traffic to protest environmental rules

ClimateWire News - Fri, 03/15/2024 - 6:34am
The tractor protests in the EU have had an impact on political decisionmaking. Several concessions have already been made.
