EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Brazil’s Internet Intermediary Liability Rules Under Trial: What Are the Risks?

Wed, 12/11/2024 - 9:00am

The Brazilian Supreme Court is on the verge of deciding whether digital platforms can be held liable for third-party content even without a judicial order requiring its removal. A panel of eleven justices is examining two cases jointly, one of which directly challenges whether Brazil’s internet intermediary liability regime for user-generated content complies with the country’s Federal Constitution. The outcome of these cases could seriously undermine important free expression and privacy safeguards if it leads to general content-monitoring obligations or broadly expands notice-and-takedown mandates.

The court’s examination revolves around Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet”, Law n. 12.965/2014). The provision establishes that an internet application provider can only be held liable for third-party content if it fails to comply with a judicial order to remove the content. A notice-and-takedown exception to the provision applies in cases of copyright infringement, unauthorized disclosure of private images containing nudity or sexual activity, and content involving child sexual abuse. The first two exceptions are in Marco Civil, while the third one comes from a prior rule included in the Brazilian child protection law.

The decision the court takes will set a precedent for lower courts regarding two main topics: whether Marco Civil’s internet intermediary liability regime is aligned with Brazil's Constitution and whether internet application providers have the obligation to monitor online content they host and remove it when deemed offensive, without judicial intervention. Moreover, it can have a regional and cross-regional impact as lawmakers and courts look across borders at platform regulation trends amid global coordination initiatives.

After a public hearing held last year, the Court's sessions about the cases started in late November and, so far, only Justice Dias Toffoli, who is in charge of Marco Civil’s constitutionality case, has concluded the presentation of his vote. The justice declared Article 19 unconstitutional and established the notice-and-takedown regime set in Article 21 of Marco Civil, which relates to unauthorized disclosure of private images, as the general rule for intermediary liability. According to his vote, the determination of liability must consider the activities the internet application provider has actually carried out and the degree of interference of these activities.

However, platforms could be held liable for certain content regardless of notification, leading to a monitoring duty. Examples include content considered criminal offenses, such as crimes against the democratic state, human trafficking, terrorism, racism, and violence against children and women. It also includes the publication of notoriously false or severely miscontextualized facts that lead to violence or have the potential to disrupt the electoral process. If there’s reasonable doubt, the notice-and-takedown rule under Marco Civil’s Article 21 would be the applicable regime.

The court session resumes today, but it’s still uncertain whether all eleven justices will reach a judgment by year’s end.

Some Background About Marco Civil’s Intermediary Liability Regime

The legislative intent back in 2014 to establish Article 19 as the general rule for internet application providers' liability for user-generated content reflected civil society’s concerns over platform censorship. Faced with the risk of being held liable for user content, internet platforms generally prioritize their economic interests and security over preserving users’ protected expression, over-removing content to avoid legal battles and regulatory scrutiny. The enforcement overreach of copyright rules online was already a problem when the legislative discussion of Marco Civil took place. Lawmakers chose to rely on courts to balance the different rights at stake in removing or keeping user content online. The approval of Marco Civil had wide societal support and was considered a win for advancing users’ rights online.

The provision was in line with recommendations from the Special Rapporteurs for Freedom of Expression of the United Nations and the Inter-American Commission on Human Rights (IACHR). In that regard, the then IACHR Special Rapporteur had clearly remarked that a strict liability regime creates strong incentives for private censorship and would run against the State’s duty to favor an institutional framework that protects and guarantees free expression under the American Convention on Human Rights. Notice-and-takedown regimes as the general rule also raised concerns of over-removal and the weaponization of notification mechanisms to censor protected speech.

A lot has happened since 2014. Big Tech platforms have consolidated their dominance, the internet ecosystem has become more centralized, and algorithmic mediation of content distribution online has intensified, increasingly relying on a corporate surveillance structure. Nonetheless, the concerns Marco Civil reflects remain relevant, and the balance its intermediary liability rule struck remains a proper way of addressing them. As for current challenges, the changes to the liability regime suggested in Dias Toffoli's vote would likely reinforce rather than reduce corporate surveillance, Big Tech’s predominance, and digital platforms’ power over online speech.

The Cases Under Trial and The Reach of the Supreme Court’s Decision

The two individual cases under analysis by the Supreme Court are more than a decade old. Both relate to the right to honor. In the first one, the plaintiff, a high school teacher, sued Google Brasil Internet Ltda to remove an online community that students had created on the now-defunct Orkut platform to offend her. She asked for the deletion of the community and compensation for moral damages, as the platform didn't remove the community after an extrajudicial notification. Google deleted the community following the lower court’s decision, but the judicial dispute over the compensation continued.

In the second case, the plaintiff sued Facebook after the company didn’t remove an offensive fake account impersonating her. The lawsuit sought to shut down the fake account, identify the account’s IP address, and obtain compensation for moral damages. As Marco Civil had already passed, the judge denied the moral compensation request. Yet, the appeals court found that Facebook could be liable for not removing the fake account after an extrajudicial notification, holding Marco Civil’s intermediary liability regime unconstitutional vis-à-vis Brazil’s constitutional protection of consumers.

Both cases went all the way up to the Supreme Court in two separate extraordinary appeals, now examined jointly. For the Supreme Court to analyze extraordinary appeals, it must identify and approve a “general repercussion” issue that unfolds from the individual case. As such, what the Brazilian Supreme Court analyzes in these appeals is not only the individual cases, but also its understanding of the general repercussion issues involved. What the court stipulates in this regard will orient lower courts’ decisions in similar cases.

The two general repercussion issues under scrutiny are, then, the constitutionality of Marco Civil’s internet intermediary liability regime and whether internet application providers have the obligation to monitor published content and take it down when considered offensive, without judicial intervention. 

There’s a lot at stake for users’ rights online in the outcomes of these cases. 

The Many Perils and Pitfalls on the Way

Brazil’s platform regulation debate has heated up in the last few years. Concerns over the gigantic power of Big Tech platforms, the negative effects of their attention-driven business model, and revelations of plans and actions by the previous presidential administration to arbitrarily remain in power have inflamed discussions of regulating Big Tech. As the debate's main legislative vehicle, draft bill 2630 (PL 2630), didn’t move forward in the Brazilian Congress, the Supreme Court’s pending cases gained traction as the available alternative for introducing changes.

We’ve written about intermediary liability trends around the globe, how to move forward, and the risk that changes to safe harbor regimes end up reshaping intermediaries’ behavior in ways that ultimately harm freedom of expression and other rights for internet users.

One of these risks is relying on strict liability regimes to moderate user expression online. Holding internet application providers liable for user-generated content regardless of a notification means requiring them to put in place systems of content monitoring and filtering with automated takedowns of potentially infringing content.

While platforms like Facebook, Instagram, X (formerly Twitter), TikTok, and YouTube already use AI tools to moderate and curate the sheer volume of content they receive every minute, the resources they have for doing so are not available to other, smaller internet application providers that host users’ expression. Making automated content monitoring a general obligation will likely intensify the concentration of the online ecosystem in just a handful of large platforms. Strict liability regimes also inhibit or even endanger the existence of less-centralized content moderation models, contributing yet again to entrenching Big Tech’s dominance and business model.

But the fact that Big Tech platforms already use AI tools to moderate and restrict content doesn’t mean they do it well. Automated content monitoring is hard at scale, and platforms constantly fail at purging content that violates their rules without sweeping up protected content. In addition to historical issues with AI-based detection of copyright infringement that have deeply undermined fair use rules, automated systems often flag and censor crucial information that should stay online.

Just to give a few examples: during the wave of protests in Chile, internet platforms wrongfully restricted content reporting the police's harsh repression of demonstrations, having deemed it violent content. In Brazil, we saw similar concerns when Instagram censored images of the 2021 massacre in the Jacarezinho community, the most lethal police operation in Rio de Janeiro’s history. In other geographies, the quest to restrict extremist content has removed videos documenting human rights violations in conflicts in countries like Syria and Ukraine.

These are all examples of content similar to what could fit into Justice Toffoli’s list of speech subject to a strict liability regime. And while this regime shouldn’t apply in cases of reasonable doubt, platform companies likely won’t risk keeping such content up out of concern that a judge later decides it wasn’t a reasonable-doubt situation and orders them to pay damages. Digital platforms thus have a strong incentive to calibrate their AI systems to err on the side of censorship. And depending on how these systems operate, that means a strong incentive for prior censorship potentially affecting protected expression, which defies Article 13 of the American Convention.

Setting the notice-and-takedown regime as the general rule for an intermediary’s liability also poses risks. While the company has the chance to analyze and decide whether to keep content online, again the incentive is to err on the side of taking it down to avoid legal costs.

Brazil's own experience in courts shows how tricky the issue can be. InternetLab's research based on rulings involving free expression online indicated that Brazilian courts of appeals denied content removal requests in more than 60% of cases. The Brazilian Association of Investigative Journalism (ABRAJI) has also highlighted data showing that, at some point in judicial proceedings, judges agreed with content removal requests in around half of the cases, and some of those decisions were reversed later on. This is especially concerning in honor-related cases. The more influential or powerful the person involved, the higher the chances of arbitrary content removal, flipping the public-interest logic of preserving access to information. Nor should we forget the companies that thrived by offering reputation management services built upon the use of takedown mechanisms to make critical content disappear online.

It's important to underline that this ruling comes in the absence of digital procedural justice guarantees. While Justice Toffoli’s vote asserts platforms’ duty to provide specific notification channels, preferably electronic, to receive complaints about infringing content, there are no further specifications to prevent the misuse of notification systems. Article 21 of Marco Civil establishes that notices must allow the specific identification of the contested content (generally understood as the URL) and include elements verifying that the complainant is the person offended. Beyond that, there is no further guidance on which details and justifications the notice should contain, or whether the content’s author would have the opportunity, and a proper mechanism, to respond to or appeal the takedown request.

As we said before, we should not mix platform accountability with reinforcing digital platforms as points of control over people's online expression and actions. This is a dangerous path considering the power big platforms already have and the increasing intermediation of digital technologies in everything we do. Unfortunately, the Supreme Court seems to be taking a direction that will emphasize such a role and dominant position, while also creating additional hurdles for smaller platforms and decentralized models to compete with the current digital giants.

Introducing EFF’s New Video Series: Gate Crashing

Tue, 12/10/2024 - 2:56pm

The promise of the internet—at least in the early days—was that it would lower the barriers to entry for any number of careers. Traditionally, the spheres of novel writing, culture criticism, and journalism were populated by well-off straight white men, with anyone not meeting one of those criteria being an outlier. Add in giant corporations acting as gatekeepers to those spheres and it was a very homogenous culture. The internet has changed that. 

There is a lot about the internet that needs fixing, but the one thing we should preserve and nurture is the nontraditional paths to success it creates. In this series of interviews, called “Gate Crashing,” we look to highlight those people and learn from their examples. In an ideal world, lawmakers will be guided by lived experiences like these when thinking about new internet legislation or policy. 

In our first video, we look at creators who honed their media criticism skills in fandom spaces. Please join Gavia Baker-Whitelaw and Elizabeth Minkel, co-creators of the Rec Center newsletter, in a wide-ranging discussion about how they got started, where it has led them, and what they’ve learned about internet culture and policy along the way. 

Watch the video, embedded from YouTube: https://www.youtube.com/embed/aeplIxvskx8

Speaking Freely: Tomiwa Ilori

Tue, 12/10/2024 - 1:40pm

Interviewer: David Greene

*This interview has been edited for length and clarity.

Tomiwa Ilori is an expert researcher and a policy analyst with a focus on digital technologies and human rights. Currently, he is an advisor for the B-Tech Africa Project at UN Human Rights and a Senior ICFP Fellow at HURIDOCS. His postgraduate qualifications include master's and doctorate degrees from the Centre for Human Rights, Faculty of Law, University of Pretoria. All views and opinions expressed in this interview are personal.

Greene: Why don’t you start by introducing yourself?

Tomiwa Ilori: My name is Tomiwa Ilori. I’m a legal consultant with expertise in digital rights and policy. I work with a lot of organizations on digital rights and policy including information rights, business and human rights, platform governance, surveillance studies, data protection and other aspects. 

Greene: Can you tell us more about the B-Tech project? 

The B-Tech project is a project by the UN human rights office and the idea behind it is to mainstream the UN Guiding Principles on Business and Human Rights (UNGPs) into the tech sector. The project looks at, for example, how  social media platforms can apply human rights due diligence frameworks or processes to their products and services more effectively. We also work on topical issues such as Generative AI and its impacts on human rights. For example, how do the UNGPs apply to Generative AI? What guidance can the UNGPs provide for the regulation of Generative AI and what can actors and policymakers look for when regulating Generative AI and other new and emerging technologies? 

Greene: Great. This series is about freedom of expression. So my first question for you is what does freedom of expression mean to you personally? 

I think freedom of expression is like oxygen, more or less like the air we breathe. There is nothing about being human that doesn’t involve expression, just like drawing breath. Even beyond just being a right, it’s an intrinsic part of being human. It’s embedded in us from the start. You have this natural urge to want to express yourself right from being an infant. So beyond being a human right, it is something you can almost not do without in every facet of life. Just to put it as simply as possible, that’s what it means to me. 

Greene: Is there a single experience or several experiences that shaped your views about freedom of expression? 

Yes. For context, I’m Nigerian and I also grew up in the Southwestern part of the country where most of the Yorùbá people live. As a Yoruba person and as someone who grew up listening and speaking the Yoruba language, language has a huge influence on me, my philosophy and my ideas. I have a mother who loves to speak in proverbs and mostly in Yorùbá. Most of these proverbs which are usually profound show that free speech is the cornerstone of being human, being part of a community, and exercising your right to life and existence. Sharing expression and growing up in that kind of community shaped my worldview about my right to be. Closely attached to my right to be is my right to express myself. More importantly, it also shaped my view about how my right to be does not necessarily interrupt someone else’s right to be. So, yes, my background and how I grew up really shaped me. Then, I was fortunate that I also grew up and furthered my studies. My graduate studies including my doctorate focused on freedom of expression. So I got both the legal and traditional background grounded in free speech studies and practices in unique and diverse ways. 

Greene: Can you talk more about whether there is something about  Yorùbá language or culture that is uniquely supportive of freedom of expression? 

There’s a proverb that goes, “A kìí pa ohùn mọ agogo lẹ́nu” and what that means in a loose English translation is that you cannot shut the clapperless bell up, it is the bell’s right to speak, to make a sound. So you have no right to stop a bell from doing what it’s meant to do, it suggests that it is everyone’s right to express themselves. It suffices to say that according to that proverb, you have no right to stop people from expressing themselves. There’s another proverb that is a bit similar which is,“Ọmọdé gbọ́n, àgbà gbọ́n, lafí dá ótù Ifẹ̀” which when loosely translated refers to how both the old and the young collaborate to make the most of a society by expressing their wisdom. 

Greene: Have you ever had a personal experience with censorship? 

Yes and I will talk about two experiences. First, and this might not fit the technical definition of censorship, but there was a time when I lived in Kampala and I had to pay tax to access the internet which I think is prohibitive for those who are unable to pay it. If people have to make a choice between buying bread to eat and paying a tax to access the internet, especially when one item is an opportunity cost for the other, it makes sense that someone would choose bread over paying that tax. So you could say it’s a way of censoring internet users. When you make access prohibitive through taxation, it is also a way of censoring people. Even though I was able to pay the tax, I could not stop thinking about those who were unable to afford it and for me that is problematic and qualifies as a kind of censorship. 

Another one was actually very recent. Even though the internet service provider insisted that they did not shut down or throttle the internet, I remember that during the recent protests in Nairobi, Kenya in June of 2024, I experienced an internet shutdown for the first time. According to the internet service provider, the shutdown was the result of an undersea cable cut. Suddenly my emails just stopped working and my Twitter (now X) feed wouldn't load. The connection would work for a few seconds, then all of a sudden it would stop, then work for some time, then nothing. I felt incapacitated and helpless. That’s the way I would describe it. I felt like, “Wow, I have written, thought, spoken about this so many times and this is it.” For the first time I understood what it means to actually experience an internet shutdown, and it’s not just the experience, it’s the helplessness that comes with it too.

Greene: Do you think there is ever a time when the government can justify an internet shutdown? 

The simple answer is no. In my view, those who carry out internet shutdowns, especially state actors, believe that since freedom of expression and some other associated rights are not absolute, they have every right to restrict them without measure. I think what many actors that are involved in internet shutdowns use as justification is a mask for their limited capacity to do the right thing. Actors involved in shutting down the internet say that they usually do not have a choice. For example, they say that hate speech, misinformation, and online violence are being spread online in such a way that it could spill over into offline violence. Some have even gone as far as saying that they’re shutting down the internet because they want to curtail examination fraud. When these are the kind of excuses used by actors, it demonstrates the limited understanding of actors on what international human rights standards prescribe and what can actually be done to address the online harms that are used to justify internet shutdowns. 

Let me use an example: international human rights standards provide clear processes for instances where state actors must address online harms or where private actors must address harms to forestall offline violence. The perception is that these standards do not even give room for addressing harms, which is not the case. The process requires that whatever action you take must be legal, i.e., provided clearly in a law, must not be vague, and must be unequivocal and show in detail the nature of the right that is limited. Another requirement says that whatever action is taken to limit a right must be proportional. If you are trying to fight hate speech online, don’t you think it is disproportionate to shut down the entire network just to fight one section of people spreading such speech? Another requirement is that its necessity must be justified, i.e., it must protect a clearly defined public interest or order, which must be specific and not the blanket term ‘national security.’ Additionally, international human rights law is clear that these requirements are cumulative, i.e., you cannot fulfill the requirement of legality and not fulfill that of proportionality or necessity.

This shows that when trying to regulate online harms, the regulation needs to be very specific. So, for example, state actors can claim that a particular piece of content or speech is causing harm, which they must prove according to the requirements above. You can make a request such that just that content alone is restricted. These requests must also be put in context. Take hate speech as an example: there’s the Rabat Plan of Action on hate speech, which was developed by the UN, and it’s very clear on the conditions that must be met before speech can be categorized as hate speech. So are these conditions met by state actors before, for example, they ask platforms to remove particular hate content? There are steps and processes involved in the regulation of problematic content, but state actors never simply go for targeted removals that comply with international human rights standards; they usually go for the entire network.

I’d also like to add that I find it problematic and ironic that most state actors who are supposedly champions of digital transformation are also the ones quick to shut down the internet during political events. There is no digital transformation that does not include a free, accessible and interoperable internet. These are some of the challenges and problematic issues that I think we need to address in more detail so we can hear each other better, especially when it comes to regulating online speech and fighting internet shutdowns. 

Greene: So shutdowns are then inherently disproportionate and not authorized by law. You talked about the types of speech that might be limited. Can you give us a sense of what types of online speech you think might be appropriately regulated by governments? 

For categories of speech that can be regulated, of course, that includes hate speech. It is addressed under international law: Article 20 of the International Covenant on Civil and Political Rights (ICCPR) prohibits propaganda for war and incitement to hatred, and the International Convention on the Elimination of All Forms of Racial Discrimination (ICERD) also provides for this. However, these provisions are not carte blanche for state actors. The major conditions that must be met before speech qualifies as hate speech must be fulfilled before it can be regarded as such. This is done in order to address instances where powerful actors define what constitutes hate speech and violate human rights under the guise of combating it. There are still laws that criminalize disaffection against the state which are used to prosecute dissent.

Greene: In Nigeria or in Kenya or just on the continent in general? 

Yes, there are countries that still have lèse-majesté laws in their criminal laws and penal codes. We’ve had countries like Nigeria trying to come up with a version of such laws for the online space, but these have been fought down mostly by civil society actors.

So hate speech does qualify as speech that could be limited, but with caveats. There are several conditions that must be met before speech qualifies as hate speech. There must be context around the speech. For example, what kind of power does the person who makes the speech wield? What is the likelihood of that speech leading to violence? What audience has the speech been made to? These are some of the criteria that must be fulfilled before you say, “okay, this qualifies as hate speech.”

There’s also other clearly problematic content, child sexual abuse material for example, that is prima facie illegal and must be censored, removed, or disallowed. That goes without saying; it’s customary international human rights law, especially as it applies to platform governance. Another category of speech could be the non-consensual sharing of intimate images, which could qualify as online gender-based violence. So these are some of the categories that could come under regulation by states.

I also must sound a note that there are contexts to applying speech laws. That is also the reason why speech laws are among the most difficult regulations to come up with: they are usually context-dependent, especially when they have to be balanced against international human rights standards. Of course, some of the biggest fears in platform regulation that touch on freedom of expression are how state actors could weaponize those laws to track or attack dissent and how businesses platform speech mainly for profit.

Greene: Is misinformation something the government should have a role in regulating or is that something that needs to be regulated by the companies or by the speakers? If it’s something we need to worry about, who has a role in regulating it? 

State actors have a role. But in my opinion I don’t think it’s regulation. The fact that you have a hammer does not mean that everything must look like a nail. The fact that a state actor has the power to make laws does not mean that it must always make laws on all social problems. I believe non-legal and multi-stakeholder solutions are required for combatting online harms. State actors have tried to do what they do best by coming up with laws that regulate misinformation. But where has that led us? The arrest and harassment of journalists, human rights defenders and activists. So it has really not solved any problems. 

When your approach is not solving any problems, I think it’s only right to re-evaluate. That’s the reason I said state actors have a role. In my view, state actors need to step back in a sense that you don’t necessarily need to leave the scene, but step back and allow for a more holistic dialogue among stakeholders involved in the information ecosystem. You could achieve a whole lot more through digital literacy and skills than you will with criminalizing misinformation. You can do way more by supporting journalists with fact-checking skills than you will ever achieve by passing overbroad laws that limit access to information. You can do more by working with stakeholders in the information ecosystem like platforms to label problematic content than you will ever by shutting down the internet. These are some of the non-legal methods that could be used to combat misinformation and actually get results. So, state actors have a role, but it is mainly facilitatory in the sense that it should bring stakeholders together to brainstorm on what the contexts are and the kinds of useful solutions that could be applied effectively. 

Greene: What do you feel the role of the companies should be? 

Companies also have an important role, one of which is to respect human rights in the course of providing services. What I always say for technology companies is that, if a certain jurisdiction or context is good enough to make money from, it is good enough to pay attention to and respect human rights there.

One of the perennial issues that platforms face in addressing online harms is aligning their community standards with international human rights standards. But oftentimes what happens is that corporate-speak is louder than the human rights language in many of these standards. 

That said, some of the practical things platforms could do start with stepping out of the corporate talk of, “Oh, we’re companies, there’s not much we can do.” There’s a lot they can do. Companies need to get more involved, step into the arena, and work with key stakeholders, including state actors and civil society, to educate and develop capacity on how their platforms actually work. For example, what are the processes involved in taking down a piece of content? What are the processes involved in getting appeals? What are the processes involved in actually getting redress when a piece of content has been wrongly taken down? What are the ways platforms can accurately—and I say accurately emphatically because I’m not speaking about using automated tools—label content? Platforms also have a responsibility to be totally invested in the contexts they do business in. What are the triggers for misinformation in a particular country? Elections, conflict, protests? These are like early warning systems that platforms need to start paying attention to in order to understand their contexts and address the harms on their platforms better.

Greene: What’s the most pressing free speech issue in the region in which you work? 

Well, for me, I think of a few key issues. Number one, which has been going on for the longest time, is the government’s use of laws to stifle free speech. Most of the laws that are used are cybercrime laws, electronic communication laws, and old press codes and criminal codes. They were never justified and they’re still not justified. 

A second issue is the privatization of speech by companies regarding the kind of speech that gets promoted or demoted. What are the guidelines on, for example, political advertisements? What are the guidelines on targeted advertising? How are people’s data curated? What is it like inside the algorithmic black box? Platforms’ role in who says what, how, when, and where is also a burning free speech issue. And we are moving toward a future where speech is being commodified and privatized. Public media, for example, are now being relegated to the background. Everyone wants to be on social media, and I’m not saying that’s a terrible thing, but it gives us a lot to think about, a lot to chew on.

Greene: And finally, who is your free speech hero? 

His name is Felá Aníkúlápó Kútì. Fela was a political musician and the originator of Afrobeat (not afrobeats with an “s,” but the original Afrobeat that genre came from). Fela never started out as a political musician, but his music became highly political and highly popular among the people for obvious reasons. His music also became timely because, as a political musician in Nigeria who lived during the brutal military era, his work resonated with a lot of people. He was a huge thorn in the flesh of despotic Nigerian and African leaders. So, for me, Fela is my free speech hero. He said quite a lot with his music that many people in his generation would never dare to say because of the political climate at that time. Taking such risks even in the face of brazen violence and even death was remarkable.

Fela was not just a political musician who understood the power of expression. He was also someone who understood the power of visual expression. He was unique in his own way and expressed himself through music, through his lyrics. He’s someone who has inspired a lot of people, including musicians, politicians, and a lot of new-generation activists.

A Fundamental-Rights Centered EU Digital Policy: EFF’s Recommendations 2024-2029

Tue, 12/10/2024 - 12:22pm

The European Union (EU) is a hotbed for tech regulation that often has ramifications for users globally.  The focus of our work in Europe is to ensure that EU tech policy is made responsibly and lives up to its potential to protect users everywhere. 

As the new mandate of the European institutions begins – a period where newly elected policymakers set legislative priorities for the coming years – EFF today published recommendations for a European tech policy agenda that centers on fundamental rights, empowers users, and fosters fair competition. These principles will guide our work in the EU over the next five years. Building on our previous work and successes in the EU, we will continue to advocate for users and work to ensure that technology supports freedom, justice, and innovation for all people of the world.

Our policy recommendations cover social media platform intermediary liability, competition and interoperability, consumer protection, privacy and surveillance, and AI regulation. Here’s a sneak peek:  

  • The EU must ensure that the enforcement of platform regulation laws like the Digital Services Act and the European Media Freedom Act is centered on the fundamental rights of users in the EU and beyond.
  • The EU must create the conditions for fair digital markets that foster choice, innovation, and fundamental rights. Achieving this requires enforcing the user-rights-centered provisions of the Digital Markets Act, promoting app store freedom, user choice, and interoperability, and countering AI monopolies.
  • The EU must adopt a privacy-first approach to fighting online harms like targeted ads and deceptive design and protect children online without reverting to harmful age verification methods that undermine the fundamental rights of all users. 
  • The EU must protect users’ rights to secure, encrypted, and private communication, protect against surveillance everywhere, stay clear of new data retention mandates, and prioritize the rights-respecting enforcement of the AI Act. 

Read on for our full set of recommendations.
