
Black Basta Ransomware Struck More Than 500 Organizations Worldwide – Source: www.techrepublic.com


Source: www.techrepublic.com – Author: Cedric Pernet A joint cybersecurity advisory from the Federal Bureau of Investigation, Cybersecurity and Infrastructure Security Agency, Department of Health and Human Services, and Multi-State Information Sharing and Analysis Center was recently released to provide more information about the Black Basta ransomware. Black Basta affiliates have targeted organizations in the U.S., […]

The entry Black Basta Ransomware Struck More Than 500 Organizations Worldwide – Source: www.techrepublic.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

Everyone’s Getting Involved review – tepid all-star Talking Heads tribute

17 May 2024 at 02:10

(A24 Music)
Cult film company A24’s tie-in merch to its rerelease of seminal documentary Stop Making Sense sounds either like karaoke or disconnected from the source material

Responsible for indie hits like Midsommar, Moonlight and Everything Everywhere All at Once, American film company A24 has created a vast lifestyle brand around its cultish reputation, flogging everything from branded shorts ($48) to a Hereditary gingerbread kit ($62). Now, following its rerelease of Jonathan Demme’s seminal Talking Heads documentary Stop Making Sense, its tie-in merch includes a tribute album, featuring all 16 tracks from the film’s soundtrack covered by appropriately vogueish musicians.

The tracks largely use one of two distinct approaches. The acts choosing a karaoke-esque run-through include Toro y Moi (Genius of Love), the National (Heaven) and Paramore, whose faithful version of Burning Down the House includes a barnstorming vocal from Hayley Williams, but isn’t particularly compelling.

Continue reading...

© Photograph: Lauren Tepfer


The Fall of the National Vulnerability Database – Source: www.darkreading.com


Source: www.darkreading.com – Author: Brian Fox Brian Fox, CTO & Co-Founder, Sonatype May 16, 2024 5 Min Read Source: Stu Gray via Alamy Stock Photo COMMENTARY In the realm of cybersecurity, understanding your biggest vulnerabilities is essential. The National Institute of Standards and Technology (NIST) initially established the National Vulnerability Database (NVD) to provide a […]

The entry The Fall of the National Vulnerability Database – Source: www.darkreading.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

How Can Businesses Defend Themselves Against Common Cyberthreats? – Source: www.techrepublic.com


Source: www.techrepublic.com – Author: Fiona Jackson TechRepublic consolidated expert advice on how businesses can defend themselves against the most common cyberthreats, including zero-days, ransomware and deepfakes. Today, all businesses are at risk of cyberattack, and that risk is constantly growing. Digital transformations are resulting in more sensitive and valuable data being moved onto online systems […]

The entry How Can Businesses Defend Themselves Against Common Cyberthreats? – Source: www.techrepublic.com was first published on CISO2CISO.COM & CYBER SECURITY GROUP.

On World Press Freedom Day (and Every Day), We Fight for an Open Internet

3 May 2024 at 11:47

Today marks World Press Freedom Day, an annual celebration instituted by the United Nations in 1993 to raise awareness of press freedom and remind governments of their duties under Article 19 of the Universal Declaration of Human Rights. This year, the day is dedicated to the importance of journalism and freedom of expression in the context of the current global environmental crisis.

Journalists everywhere face challenges in reporting on climate change and other environmental issues. These challenges are myriad: lawsuits, intimidation, arrests, and disinformation campaigns. For instance, journalists and human rights campaigners attending the COP28 Summit held in Dubai last autumn faced surveillance and intimidation. The Committee to Protect Journalists (CPJ) has documented arrests of environmental journalists in Iran and Venezuela, among other countries. And in 2022, a Guardian journalist was murdered while on the job in the Brazilian Amazon.

The threats faced by journalists are the same as those faced by ordinary internet users around the world. According to CPJ, there are 320 journalists jailed worldwide for doing their job. And ranked among the top jailers of journalists last year were China, Myanmar, Belarus, Russia, Vietnam, Israel, and Iran; countries in which internet users also face censorship, intimidation, and in some cases, arrest. 

On this World Press Freedom Day, we honor the journalists, human rights defenders, and internet users fighting for a better world. EFF will continue to fight for the right to freedom of expression and a free and open internet for every internet user, everywhere.



Hunters Ransomware Claims Two: Rocky Mountain Sales, SSS Australia Targeted


The notorious Hunters group has allegedly added two new victims to their dark web portal: Rocky Mountain Sales in the United States and SSS Australia. While the extent of the cyberattack, data compromise, and motive behind the attack remain undisclosed by the ransomware group, the implications of such an attack on these prominent organizations could be far-reaching.

Rocky Mountain Sales, Inc., with a revenue of US$5 million, is an outsourced sales and service organization committed to providing leading customer service, sales, and support to its strategic partners. SSS Australia, with a revenue of US$17 million, has been synonymous with the highest standards of quality and value in medical supplies for over 45 years.

If the claims of cyberattacks on Rocky Mountain Sales and SSS Australia prove true, the consequences could be severe. Beyond disrupting operations, an attack could cause substantial financial losses, tarnish the companies' reputations, and undermine customer trust. The potential compromise of sensitive data, such as customer information, financial records, and proprietary business data, could have long-lasting repercussions for both organizations.

As of now, however, the official websites of both organizations remain fully functional and show no signs of disruption. To verify the claim further, The Cyber Express team reached out to officials, but no official response had been received as of writing this news report, leaving the claim unverified.

Hunters International Ransomware Group's Previous Claims

This recent incident follows a string of cyberattacks by the Hunters International group. In April, SpaceX, the aerospace manufacturer and space transport services company founded by Elon Musk, allegedly suffered a cybersecurity incident involving a data breach by the Hunters group, which reportedly posted samples of the breached data. Prior to that, Central Power Systems & Services, a major distributor of industrial and power generation products in Kansas, Western Missouri, and Northern Oklahoma, fell victim to the ransomware group.

Before these incidents, the group targeted various organizations across different sectors and countries. In 2024 alone, Hunters International claimed responsibility for cyberattacks on the Dalmahoy Hotel & Country Club in the UK, Double Eagle Energy Holdings IV, LLC in the US, and Gallup-McKinley County Schools in New Mexico, among others.

These attacks highlight the need for organizations to prioritize cybersecurity measures and invest in strong defense mechanisms to safeguard their digital assets. Moreover, international cooperation and information sharing among cybersecurity agencies are crucial in combating such threats effectively.

Unverified Hunters Group Claims

While the Hunters International group has claimed responsibility for the cyberattacks on Rocky Mountain Sales and SSS Australia, the lack of verified information about the extent of the attacks emphasizes the challenges in responding to such incidents. Without official confirmation or detailed information from the targeted organizations, the full impact of the cyberattacks remains uncertain.

As cybersecurity threats continue to evolve and ransomware attacks become increasingly sophisticated, organizations must remain vigilant and proactive in protecting their networks and data. The recent incidents involving Hunters International serve as a reminder of the potential consequences of inadequate cybersecurity measures.

Media Disclaimer: This report is based on internal and external research obtained through various means. The information provided is for reference purposes only, and users bear full responsibility for their reliance on it. The Cyber Express assumes no liability for the accuracy or consequences of using this information.

In Historic Victory for Human Rights in Colombia, Inter-American Court Finds State Agencies Violated Human Rights of Lawyers Defending Activists

3 April 2024 at 15:22

In a landmark ruling for fundamental freedoms in Colombia, the Inter-American Court of Human Rights found that for over two decades the state government harassed, surveilled, and persecuted members of a lawyer’s group that defends human rights defenders, activists, and indigenous people, putting the attorneys’ lives at risk. 

The ruling is a major victory for civil rights in Colombia, which has a long history of abuse and violence against human rights defenders, including murders and death threats. The case involved the unlawful and arbitrary surveillance of members of the Jose Alvear Restrepo Lawyers Collective (CAJAR), a Colombian human rights organization defending victims of political persecution and community activists for over 40 years.

The court found that since at least 1999, Colombian authorities carried out a constant campaign of pervasive secret surveillance of CAJAR members and their families. The state violated their rights to life, personal integrity, private life, freedom of expression and association, and more, the Court said. It noted the particular impact experienced by women defenders and those who had to leave the country amid threats, attacks, and harassment for representing victims.

The decision is the first by the Inter-American Court to find a State responsible for violating the right to defend human rights. The court is a human rights tribunal that interprets and applies the American Convention on Human Rights, an international treaty ratified by over 20 states in Latin America and the Caribbean. 

In 2022, EFF, Article 19, Fundación Karisma, and Privacy International, represented by Berkeley Law’s International Human Rights Law Clinic, filed an amicus brief in the case. EFF and partners urged the court to rule that Colombia’s legal framework regulating intelligence activity and the surveillance of CAJAR and their families violated a constellation of human rights and forced them to limit their activities, change homes, and go into exile to avoid violence, threats, and harassment. 

Colombia's intelligence network was behind abusive surveillance practices in violation of the American Convention and did not prevent authorities from unlawfully surveilling, harassing, and attacking CAJAR members, EFF told the court. Even after Colombia enacted a new intelligence law, authorities continued to carry out unlawful communications surveillance against CAJAR members, using an expansive and invasive spying system to target and disrupt the work of not just CAJAR but other human rights defenders and journalists.

In examining Colombia’s intelligence law and surveillance actions, the court elaborated on key Inter-American and other international human rights standards, and advanced significant conclusions for the protection of privacy, freedom of expression, and the right to defend human rights. 

The court delved into criteria for intelligence gathering powers, limitations, and controls. It highlighted the need for independent oversight of intelligence activities and effective remedies against arbitrary actions. It also elaborated on standards for the collection, management, and access to personal data held by intelligence agencies, and recognized the protection of informational self-determination by the American Convention. We highlight some of the most important conclusions below.

Prior Judicial Order for Communications Surveillance and Access to Data

The court noted that actions such as covert surveillance, interception of communications, or collection of personal data constitute undeniable interference with the exercise of human rights, requiring precise regulations and effective controls to prevent abuse from state authorities. Its ruling recalled European Court of Human Rights case law establishing that “the mere existence of legislation allowing for a system of secret monitoring […] constitutes a threat to ‘freedom of communication among users of telecommunications services and thus amounts in itself to an interference with the exercise of rights’.”

Building on its ruling in the case Escher et al. vs Brazil, the Inter-American Court stated that

“[t]he effective protection of the rights to privacy and freedom of thought and expression, combined with the extreme risk of arbitrariness posed by the use of surveillance techniques […] of communications, especially in light of existing new technologies, leads this Court to conclude that any measure in this regard (including interception, surveillance, and monitoring of all types of communication […]) requires a judicial authority to decide on its merits, while also defining its limits, including the manner, duration, and scope of the authorized measure.” (emphasis added) 

According to the court, judicial authorization is needed when intelligence agencies intend to request personal information from private companies that, for various legitimate reasons, administer or manage this data. Similarly, prior judicial order is required for “surveillance and tracking techniques concerning specific individuals that entail access to non-public databases and information systems that store and process personal data, the tracking of users on the computer network, or the location of electronic devices.”  

The court said that “techniques or methods involving access to sensitive telematic metadata and data, such as email and metadata of OTT applications, location data, IP address, cell tower station, cloud data, GPS and Wi-Fi, also require prior judicial authorization.” Unfortunately, the court missed the opportunity to clearly differentiate between targeted and mass surveillance to explicitly condemn the latter.

The court had already recognized in Escher that the American Convention protects not only the content of communications but also any related information like the origin, duration, and time of the communication. But legislation across the region provides less protection for metadata compared to content. We hope the court's new ruling helps to repeal measures allowing state authorities to access metadata without a previous judicial order.

Indeed, the court emphasized that the need for a prior judicial authorization "is consistent with the role of guarantors of human rights that corresponds to judges in a democratic system, whose necessary independence enables the exercise of objective control, in accordance with the law, over the actions of other organs of public power.” 

To this end, the judicial authority is responsible for evaluating the circumstances around the case and conducting a proportionality assessment. The judicial decision must be well-founded and weigh all constitutional, legal, and conventional requirements to justify granting or denying a surveillance measure. 

Informational Self-Determination Recognized as an Autonomous Human Right 

In a landmark outcome, the court asserted that individuals are entitled to decide when and to what extent aspects of their private life can be revealed, which involves defining what type of information, including their personal data, others may get to know. This relates to the right of informational self-determination, which the court recognized as an autonomous right protected by the American Convention. 

“In the view of the Inter-American Court, the foregoing elements give shape to an autonomous human right: the right to informational self-determination, recognized in various legal systems of the region, and which finds protection in the protective content of the American Convention, particularly stemming from the rights set forth in Articles 11 and 13, and, in the dimension of its judicial protection, in the right ensured by Article 25.”  

The protections that Article 11 grants to human dignity and private life safeguard a person's autonomy and the free development of their personality. Building on this provision, the court affirmed individuals’ self-determination regarding their personal information. In combination with the right to access information enshrined in Article 13, the court determined that people have the right to access and control their personal data held in databases.

The court has explained that the scope of this right includes several components. First, people have the right to know what data about them are contained in state records, where the data came from, how it got there, the purpose for keeping it, how long it’s been kept, whether and why it’s being shared with outside parties, and how it’s being processed. Next is the right to rectify, modify, or update their data if it is inaccurate, incomplete, or outdated. Third is the right to delete, cancel, and suppress their data in justified circumstances. Fourth is the right to oppose the processing of their data also in justified circumstances, and fifth is the right to data portability as regulated by law. 

According to the court, any exceptions to the right of informational self-determination must be legally established, necessary, and proportionate for intelligence agencies to carry out their mandate. In elaborating on the circumstances for full or partial withholding of records held by intelligence authorities, the court said any restrictions must be compatible with the American Convention. Holding back requested information is always exceptional, limited in time, and justified according to specific and strict cases set by law. The protection of national security cannot serve as a blanket justification for denying access to personal information. “It is not compatible with Inter-American standards to establish that a document is classified simply because it belongs to an intelligence agency and not on the basis of its content,” the court said.  

The court concluded that Colombia violated CAJAR members’ right to informational self-determination by arbitrarily restricting their ability to access and control their personal data within public bodies’ intelligence files.

The Vital Protection of the Right to Defend Human Rights

The court emphasized the autonomous nature of the right to defend human rights, finding that States must ensure people can freely, without limitations or risks of any kind, engage in activities aimed at the promotion, monitoring, dissemination, teaching, defense, advocacy, or protection of universally recognized human rights and fundamental freedoms. The ruling recognized that Colombia violated the CAJAR members' right to defend human rights.

For over a decade, human rights bodies and organizations have raised alarms and documented the deep challenges and perils that human rights defenders constantly face in the Americas. In this ruling, the court importantly reiterated their fundamental role in strengthening democracy. It emphasized that this role justifies a special duty of protection by States, which must establish adequate guarantees and facilitate the necessary means for defenders to freely exercise their activities. 

Therefore, proper respect for human rights requires States’ special attention to actions that limit or obstruct the work of defenders. The court has emphasized that threats and attacks against human rights defenders, as well as the impunity of perpetrators, have not only an individual but also a collective effect, insofar as society is prevented from knowing the truth about human rights violations under the authority of a specific State. 

Colombia’s Intelligence Legal Framework Enabled Arbitrary Surveillance Practices 

In our amicus brief, we argued that Colombian intelligence agents carried out unlawful communications surveillance of CAJAR members under a legal framework that failed to meet international human rights standards. As EFF and allies elaborated a decade ago on the Necessary and Proportionate principles, international human rights law provides an essential framework for ensuring robust safeguards in the context of State communications surveillance, including intelligence activities. 

In the brief, we bolstered criticism made by CAJAR, Centro por la Justicia y el Derecho Internacional (CEJIL), and the Inter-American Commission on Human Rights, challenging Colombia’s claim that the Intelligence Law enacted in 2013 (Law n. 1621) is clear and precise, fulfills the principles of legality, proportionality, and necessity, and provides sufficient safeguards. EFF and partners highlighted that even after its passage, intelligence agencies have systematically surveilled, harassed, and attacked CAJAR members in violation of their rights. 

As we argued, this did not happen despite Colombia’s intelligence legal framework; rather, it was enabled by its flaws. We emphasized that the Intelligence Law gives authorities wide latitude to surveil human rights defenders, lacking provisions for prior, well-founded, judicial authorization of specific surveillance measures, and robust independent oversight. We also pointed out that Colombian legislation failed to provide the necessary means for defenders to correct and erase their data unlawfully held in intelligence records.

The court ruled that, as reparation, Colombia must adjust its intelligence legal framework to reflect Inter-American human rights standards. This means that intelligence norms must be changed to clearly establish the legitimate purposes of intelligence actions, the types of individuals and activities subject to intelligence measures, the level of suspicion needed to trigger surveillance by intelligence agencies, and the duration of surveillance measures. 

The reparations also call for Colombia to keep files and records of all steps of intelligence activities, “including the history of access logs to electronic systems, if applicable,” and deliver periodic reports to oversight entities. The legislation must also subject communications surveillance measures to prior judicial authorization, except in emergency situations. Moreover, Colombia needs to pass regulations for mechanisms ensuring the right to informational self-determination in relation to intelligence files. 

These are just some of the fixes the ruling calls for, and they represent a major win. Still, the court missed the opportunity to vehemently condemn state mass surveillance (which can occur under an ill-defined measure in Colombia’s Intelligence Law enabling spectrum monitoring), although Colombian courts will now have the chance to rule it out.

In all, the court ordered the state to take 16 reparation measures, including implementing a system for collecting data on violence against human rights defenders and investigating acts of violence against victims. The government must also publicly acknowledge responsibility for the violations. 

The Inter-American Court's ruling in the CAJAR case sends an important message to Colombia, and the region, that intelligence powers are only lawful and legitimate when there are solid and effective controls and safeguards in place. Intelligence authorities cannot act as if international human rights law doesn't apply to their practices.  

When they do, violations must be fiercely investigated and punished. The ruling elaborates on crucial standards that States must fulfill to make this happen. Only time will tell how closely Colombia and other States will apply the court's findings to their intelligence activities. What’s certain is the dire need to fix a system that helped Colombia become the deadliest country in the Americas for human rights defenders last year, with 70 murders, more than half of all such murders in Latin America. 

Ola Bini Faces Ecuadorian Prosecutors Seeking to Overturn Acquittal of Cybercrime Charge

1 April 2024 at 12:21

Ola Bini, the software developer acquitted last year of cybercrime charges in a unanimous verdict in Ecuador, was back in court last week in Quito as prosecutors, using the same evidence that helped clear him, asked an appeals court to overturn the decision with bogus allegations of unauthorized access of a telecommunications system.

Armed with a grainy image of a telnet session—which the lower court already ruled was not proof of criminal activity—and testimony of an expert witness to the lower court—who never had access to the devices and systems involved in the alleged intrusion—prosecutors presented the theory that, by connecting to a router, Bini gained partial unauthorized access in an attempt to break into a system provided by Ecuador’s national telecommunications company (CNT) to the presidency’s contingency center.

If this all sounds familiar, that’s because it is. In an unfounded criminal case plagued by irregularities, delays, and due process violations, Ecuadorian prosecutors have for the last five years sought to prove Bini violated the law by allegedly accessing an information system without authorization.

Bini, who resides in Ecuador, was arrested at the Quito airport in 2019 without being told why. He first learned about the charges from a TV news report depicting him as a criminal trying to destabilize the country. He spent 70 days in jail and cannot leave Ecuador or use his bank accounts.

Bini prevailed in a trial last year before a three-judge panel. The core evidence the Prosecutor’s Office and CNT’s lawyer presented to support the accusation of unauthorized access to a computer, telematic, or telecommunications system was a printed image of a telnet session allegedly taken from Bini’s mobile phone.

The image shows the user requesting a telnet connection to an open server using their computer’s command line. The open server warns that unauthorized access is prohibited and asks for a username. No username is entered. The connection then times out and closes. Rather than demonstrating that Bini intruded into the Ecuadorean telephone network system, it shows the trail of someone who paid a visit to a publicly accessible server—and then politely obeyed the server's warnings about usage and access.
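The interaction described above can be reproduced in miniature. The sketch below is purely illustrative and has nothing to do with the actual evidence in the case: it spins up a hypothetical local server that, like the one in the telnet image, greets visitors with a warning banner and asks for a username. The "visitor" connects, reads the banner, never enters credentials, and disconnects, leaving the server with nothing but an empty session, which is exactly the kind of trail the paragraph describes.

```python
import socket
import threading

# Hypothetical banner, standing in for the real server's warning.
BANNER = b"WARNING: unauthorized access is prohibited.\r\nUsername: "

def run_server(ready, result):
    # A stand-in for the open, telnet-style server in the leaked image.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))        # OS picks a free port
    srv.listen(1)
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()              # tell the client where to connect
    conn, _ = srv.accept()
    conn.sendall(BANNER)              # warn the visitor and ask for a username
    conn.settimeout(1.0)
    try:
        data = conn.recv(1024)        # wait briefly for credentials
    except socket.timeout:
        data = b""
    result["received"] = data         # empty: the visitor sent nothing
    conn.close()
    srv.close()

ready = {"event": threading.Event()}
result = {}
t = threading.Thread(target=run_server, args=(ready, result))
t.start()
ready["event"].wait()

# The "visitor": connect, read the warning banner, enter no username, leave.
client = socket.create_connection(("127.0.0.1", ready["port"]))
banner = client.recv(1024)
client.close()                        # disconnect without ever authenticating
t.join()

print(banner.decode().splitlines()[0])  # the warning the visitor saw
print(result["received"])               # b'' -- no credentials were sent
```

On the server side, the only record such a visit leaves is a connection that received the warning and sent back nothing, which is why the court found the image could not verify that any command was actually executed against a protected system.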

Bini’s acquittal was a major victory for him and the work of security researchers. By assessing the evidence presented, the court concluded that both the Prosecutor’s Office and CNT failed to demonstrate a crime had occurred. There was no evidence that unauthorized access had ever happened, nor anything to sustain the malicious intent that article 234 of Ecuador’s Penal Code requires to characterize the offense of unauthorized access.

The court emphasized the necessity of proper evidence to prove that an alleged computer crime occurred and found that the image of a telnet session presented in Bini’s case is not fit for this purpose. The court explained that graphical representations, which can be altered, do not constitute evidence of cybercrime since an image cannot verify whether the commands illustrated in it were actually executed. Building on technical experts' testimonies, the court said that what does not emerge, or what can't be verified from digital forensics, is not proper digital evidence.

Prosecutors appealed the verdict and are back in court using the same image that didn’t prove any crime was committed. At the March 26 hearing, prosecutors said their expert witness’s analysis of the telnet image shows there was connectivity to the router. The witness compared it to entering the yard of someone’s property to see if the gate to the property is open or closed. Entering the yard is analogous to connecting to the router, the witness said.

Actually, no. Our interpretation of the image, which was leaked to the media before Bini’s trial, is that it’s the internet equivalent of seeing an open gate, walking up to it, seeing a “NO TRESPASSING” sign, and walking away. If this image could prove anything, it is that no unauthorized access happened.

Yet no expert analysis was conducted on the systems allegedly affected. The expert witness’s testimony was based on his analysis of a CNT report—he didn’t have access to the CNT router to verify its configuration. He didn’t digitally validate whether what was shown in the report actually happened, and he was never asked to verify the existence of an IP address owned or managed by CNT.

That’s not the only problem with the appeal proceedings. Deciding the appeal is a panel of three judges, two of whom ruled to keep Bini in detention after his arrest in 2019 because there were allegedly sufficient elements to establish a suspicion against him. The detention was later considered illegal and arbitrary because of a lack of such elements. Bini filed a lawsuit against the Ecuadorian state, including the two judges, for violating his rights. Bini’s defense team has sought to remove these two judges from the appeals case, but his requests were denied.

The appeals court panel is expected to issue a final ruling in the coming days.  

Meta Oversight Board’s Latest Policy Opinion a Step in the Right Direction

26 March 2024 at 15:11

EFF welcomes the latest and long-awaited policy advisory opinion from Meta’s Oversight Board, which calls on the company to end its blanket ban on the use of the Arabic-language term “shaheed” when referring to individuals listed under Meta’s policy on dangerous organizations and individuals. We call on Meta to fully implement the Board’s recommendations.

Since the Meta Oversight Board was created in 2020 as an appellate body designed to review select contested content moderation decisions made by Meta, we’ve watched with interest as the Board has considered a diverse set of cases and issued expert opinions aimed at reshaping Meta’s policies. While our views on the Board's efficacy in creating long-term policy change have been mixed, we have been happy to see the Board issue policy recommendations that seek to maximize free expression on Meta properties.

The policy advisory opinion, issued Tuesday, addresses posts referring to individuals as “shaheed,” an Arabic term that closely (though not exactly) translates to “martyr,” when those individuals have previously been designated by Meta as “dangerous” under its dangerous organizations and individuals policy. The Board found that Meta’s approach to moderating content that contains the term to refer to individuals designated under the company’s policy on dangerous organizations and individuals—a policy that covers both government-proscribed organizations and others selected by the company—substantially and disproportionately restricts free expression.

The Oversight Board first issued a call for comment in early 2023, and in April of last year, EFF partnered with the European Center for Not-for-Profit Law (ECNL) to submit comment for the Board’s consideration. In our joint comment, we wrote:

The automated removal of words such as ‘shaheed’ fail to meet the criteria for restricting users’ right to freedom of expression. They not only lack necessity and proportionality and operate on shaky legal grounds (if at all), but they also fail to ensure access to remedy and violate Arabic-speaking users’ right to non-discrimination.

In addition to finding that Meta’s current approach to moderating such content restricts free expression, the Board noted that any restrictions on freedom of expression that seek to prevent violence must be necessary and proportionate, “given that undue removal of content may be ineffective and even counterproductive.”

We couldn’t agree more. We have long been concerned about the impact that corporate policies and government regulations designed to limit violent extremist content have on human rights, evidentiary content, journalism, and art. We have worked directly with companies and with multi-stakeholder initiatives such as the Global Internet Forum to Counter Terrorism, Tech Against Terrorism, and the Christchurch Call to ensure that freedom of expression remains a core part of policymaking.

In its policy recommendation, the Board acknowledges the importance of Meta’s ability to take action to ensure its platforms are not used to incite violence or recruit people to engage in violence, and that the term “shaheed” is sometimes used by extremists “to praise or glorify people who have died while committing violent terrorist acts.” However, the Board also emphasizes that Meta’s response to such threats must be guided by respect for all human rights, including freedom of expression. Notably, the Board’s opinion echoes our previous demands for policy changes, as well as those of the Stop Silencing Palestine campaign initiated by nineteen digital and human rights organizations, including EFF.

We call on Meta to implement the Board’s recommendations and ensure that future policies and practices respect freedom of expression.

Disinformation and Elections: EFF and ARTICLE 19 Submit Key Recommendations to EU Commission

21 March 2024 at 14:35

Global Elections and Platform Responsibility

This year is a major one for elections around the world, with pivotal races in the U.S., the UK, the European Union, Russia, and India, to name just a few. Social media platforms play a crucial role in democratic engagement by enabling users to participate in public discourse and by providing access to information, especially as public figures increasingly engage with voters directly. Unfortunately, elections also attract a sometimes dangerous amount of disinformation, filling users' news feeds with ads touting conspiracy theories about candidates, false news stories about stolen elections, and so on.

Online election disinformation and misinformation can have real world consequences in the U.S. and all over the world. The EU Commission and other regulators are therefore formulating measures platforms could take to address disinformation related to elections. 

Given their dominance over the online information space, providers of Very Large Online Platforms (VLOPs), as sites with over 45 million users in the EU are called, have unique power to influence outcomes. Platforms are driven by economic incentives that may not align with democratic values, and that disconnect may be embedded in the design of their systems. For example, features like engagement-driven recommender systems may prioritize and amplify disinformation, divisive content, and incitement to violence. That effect, combined with a significant lack of transparency around targeting techniques, can too easily undermine free, fair, and well-informed electoral processes.

Digital Services Act and EU Commission Guidelines

The EU Digital Services Act (DSA) contains a set of sweeping regulations about online-content governance and responsibility for digital services that make X, Facebook, and other platforms subject in many ways to the European Commission and national authorities. It focuses on content moderation processes on platforms, limits targeted ads, and enhances transparency for users. However, the DSA also grants considerable power to authorities to flag content and investigate anonymous users, powers that they may be tempted to misuse with elections looming. The DSA also obliges VLOPs to assess and mitigate systemic risks, but it is unclear what those obligations mean in practice. Much will depend on how social media platforms interpret their obligations under the DSA, and how European Union authorities enforce the regulation.

We therefore support the initiative by the EU Commission to gather views about what measures the Commission should call on platforms to take to mitigate specific risks linked to disinformation and electoral processes.

Together with ARTICLE 19, we have submitted comments to the EU Commission on future guidelines for platforms. In our response, we recommend that the guidelines prioritize best practices, instead of policing speech. Furthermore, DSA risk assessment and mitigation compliance evaluations should focus primarily on ensuring respect for fundamental rights. 

We further argue against using watermarking of AI content to curb disinformation, and caution against the draft guidelines’ broadly phrased recommendation that platforms should exchange information with national authorities. Any such exchanges should take care to respect human rights, beginning with a transparent process.  We also recommend that the guidelines pay particular attention to attacks against minority groups or online harassment and abuse of female candidates, lest such attacks further silence those parts of the population who are already often denied a voice.

EFF and ARTICLE 19 Submission: https://www.eff.org/document/joint-submission-euelections

Access to Internet Infrastructure is Essential, in Wartime and Peacetime

12 March 2024 at 10:49

We’ve been saying it for 20 years, and it remains true now more than ever: the internet is an essential service. It enables people to build and create communities, shed light on injustices, and acquire vital knowledge that might not otherwise be available. And access to it becomes even more imperative in circumstances where being able to communicate and share real-time information directly with the people you trust is instrumental to personal safety and survival. More specifically, during wartime and conflict, internet and phone services enable people in challenging situations to communicate with one another, and allow on-the-ground journalists and ordinary people to report the news.

Unfortunately, governments across the world are very aware of their power to cut off this crucial lifeline, and frequently undertake targeted initiatives to do so. These internet shutdowns have become a blunt instrument that aid state violence and inhibit free speech, and are routinely deployed in direct contravention of human rights and civil liberties.

And this is not a one-dimensional situation. Nearly twenty years after the world’s first total internet shutdowns, this draconian measure is no longer the sole domain of authoritarian states but has become a favorite of a diverse set of governments across three continents. For example:

In Iran, the government has been suppressing internet access for many years. In the past two years in particular, people of Iran have suffered repeated internet and social media blackouts following an activist movement that blossomed after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention and in response, the Iranian government rushed to control both the public narrative and organizing efforts by banning social media, and sometimes cutting off internet access altogether. 

In Sudan, authorities have enacted a total telecommunications blackout during a massive conflict and displacement crisis. Shutting down the internet is a deliberate strategy blocking the flow of information that brings visibility to the crisis and prevents humanitarian aid from supporting populations endangered by the conflict. The communications blackout has extended for weeks, and in response a global campaign #KeepItOn has formed to put pressure on the Sudanese government to restore its people's access to these vital services. More than 300 global humanitarian organizations have signed on to support #KeepItOn.

And in Palestine, where the Israeli government exercises near-total control over both wired internet and mobile phone infrastructure, Palestinians in Gaza have experienced repeated internet blackouts inflicted by the Israeli authorities. The latest blackout in January 2024 occurred amid a widespread crackdown by the Israeli government on digital rights—including censorship, surveillance, and arrests—and amid accusations of bias and unwarranted censorship by social media platforms. On that occasion, the internet was restored after calls from civil society and nations, including the U.S. As we’ve noted, internet shutdowns impede residents' ability to access and share resources and information, as well as the ability of residents and journalists to document and call attention to the situation on the ground—more necessary than ever given that a total of 83 journalists have been killed in the conflict so far. 

Given that all of the internet cables connecting Gaza to the outside world go through Israel, the Israeli Ministry of Communications has the ability to cut off Palestinians’ access with ease. The Ministry also allocates spectrum to cell phone companies; in 2015 we wrote about an agreement that delivered 3G to Palestinians years later than the rest of the world. In 2022, President Biden offered to upgrade the West Bank and Gaza to 4G, but the initiative stalled. While some Palestinians are able to circumvent the blackout by utilizing Israeli SIM cards (which are difficult to obtain) or Egyptian eSIMs, these workarounds are not solutions to the larger problem of blackouts, which, as the National Security Council has said, “[deprive] people from accessing lifesaving information, while also undermining first responders and other humanitarian actors’ ability to operate and to do so safely.”

Access to internet infrastructure is essential, in wartime as in peacetime. In light of these numerous blackouts, we remain concerned about the control that authorities are able to exercise over the ability of millions of people to communicate. It is imperative that people’s access to the internet remains protected, regardless of how user platforms and internet companies transform over time. We continue to shout this, again and again, because it needs to be restated, and unfortunately today there are ever more examples of it happening before our eyes.

European Court of Human Rights Confirms: Weakening Encryption Violates Fundamental Rights

5 March 2024 at 09:09

In a milestone judgment—Podchasov v. Russia—the European Court of Human Rights (ECtHR) has ruled that weakening encryption can lead to general and indiscriminate surveillance of the communications of all users and violates the human right to privacy.

In 2017, the landscape of digital communication in Russia faced a pivotal moment when the government required Telegram Messenger LLP and other “internet communication” providers to store all communication data—and content—for specified durations. These providers were also required to supply law enforcement authorities with users’ data, the content of their communications, as well as any information necessary to decrypt user messages. The FSB (the Russian Federal Security Service) subsequently ordered Telegram to assist in decrypting the communications of specific users suspected of engaging in terrorism-related activities.

Telegram opposed this order on the grounds that it would create a backdoor that would undermine encryption for all of its users. As a result, Russian courts fined Telegram and ordered the blocking of its app within the country. The controversy extended beyond Telegram, drawing in numerous users who contested the disclosure orders in Russian courts. A Russian citizen, Mr Podchasov, escalated the issue to the European Court of Human Rights (ECtHR), arguing that forced decryption of user communication would infringe on the right to private life under Article 8 of the European Convention of Human Rights (ECHR), which reads as follows:  

Everyone has the right to respect for his private and family life, his home and his correspondence (Article 8 ECHR, right to respect for private and family life, home and correspondence) 

EFF has always stood against government intrusion into the private lives of users and advocated for strong privacy guarantees, including the right to confidential communication. Encryption not only safeguards users’ privacy but also protects their right to freedom of expression protected under international human rights law. 

In a great victory for privacy advocates, the ECtHR agreed. The Court found that the requirement of continuous, blanket storage of private user data interferes with the right to privacy under the Convention, emphasizing that the possibility for national authorities to access these data is a crucial factor for determining a human rights violation [53]. The Court identified the inherent risks of arbitrary government action in secret surveillance in the present case and found again—following its stance in Roman Zakharov v. Russia—that the relevant legislation failed to live up to quality-of-law standards and lacked adequate and effective safeguards against misuse [75]. Turning to a potential justification for such interference, the ECtHR emphasized the need for a careful balancing test that considers the use of modern data storage and processing technologies and weighs the potential benefits against important private-life interests [62-64].

In addressing the State mandate for service providers to submit decryption keys to security services, the court's deliberations culminated in the following key findings [76-80]:

  1. Encryption is important for protecting the right to private life and other fundamental rights, such as freedom of expression: The ECtHR emphasized the importance of encryption technologies for safeguarding the privacy of online communications. Encryption safeguards and protects the right to private life generally while also supporting the exercise of other fundamental rights, such as freedom of expression.
  2. Encryption as a shield against abuses: The Court emphasized the role of encryption to provide a robust defense against unlawful access and generally “appears to help citizens and businesses to defend themselves against abuses of information technologies, such as hacking, identity and personal data theft, fraud and the improper disclosure of confidential information.” The Court held that this must be given due consideration when assessing measures which could weaken encryption.
  3. Orders to decrypt communications weaken encryption for all users: The ECtHR established that decrypting Telegram's "secret chats" would require weakening encryption for all users. Taking note again of the dangers of restricting encryption described by many experts in the field, the Court held that backdoors could be exploited by criminal networks and would seriously compromise the security of all users’ electronic communications. 
  4. Alternatives to decryption: The ECtHR took note of a range of alternative solutions to compelled decryption that would not weaken the protective mechanisms, such as forensics on seized devices and better-resourced policing.  

In light of these findings, the Court held that the mandate to decrypt end-to-end encrypted communications risks weakening the encryption mechanism for all users, which was disproportionate to the legitimate aims pursued. 

In summary [80], the Court concluded that the retention and unrestricted state access to internet communication data, coupled with decryption requirements, cannot be regarded as necessary in a democratic society, and are thus unlawful. It emphasized that a direct access of authorities to user data on a generalized basis and without sufficient safeguards impairs the very essence of the right to private life under the Convention. The Court also highlighted briefs filed by the European Information Society Institute (EISI) and Privacy International, which provided insight into the workings of end-to-end encryption and explained why mandated backdoors represent an illegal and disproportionate measure. 

Impact of the ECtHR ruling on current policy developments 

The ruling is a landmark judgment, which will likely draw new normative lines about human rights standards for private and confidential communication. We are currently supporting Telegram in its parallel complaint to the ECtHR, contending that blocking its app infringes upon fundamental rights. As part of a collaborative effort of international human rights and media freedom organisations, we have submitted a third-party intervention to the ECtHR, arguing that blocking an entire app is a serious and disproportionate restriction on freedom of expression. That case is still pending. 

The Podchasov ruling also directly challenges ongoing efforts in Europe to weaken encryption to allow access and scanning of our private messages and pictures.

For example, the controversial UK's Online Safety Act creates the risk that online platforms will use software to search all users’ photos, files, and messages, scanning for illegal content. We recently submitted comments to the relevant UK regulator (Ofcom) to avoid any weakening of encryption when this law becomes operational. 

In the EU, we are concerned that the European Commission’s message-scanning proposal (CSAR) would be a disaster for online privacy. It would allow EU authorities to compel online services to scan users’ private messages and compare users’ photos against law enforcement databases or use error-prone AI algorithms to detect criminal behavior. Such detection measures will inevitably lead to dangerous and unreliable client-side scanning practices, undermining the essence of end-to-end encryption. As the ECtHR deems general user scanning disproportionate, specifically criticizing measures that weaken existing privacy standards, forcing platforms like WhatsApp or Signal to weaken security by inserting a vulnerability into all users’ devices to enable message scanning must be considered unlawful. 

The EU regulation proposal is likely to be followed by other proposals to grant law enforcement access to encrypted data and communications. An EU high level expert group on ‘access to data for effective law enforcement’ is expected to make policy recommendations to the next EU Commission in mid-2024. 

We call on lawmakers to take the European Court of Human Rights’ ruling seriously: blanket and indiscriminate scanning of user communications and the general weakening of encryption for users is unacceptable and unlawful. 

Protect Good Faith Security Research Globally in Proposed UN Cybercrime Treaty

7 February 2024 at 10:57

Statement submitted to the UN Ad Hoc Committee Secretariat by the Electronic Frontier Foundation, accredited under operative paragraph No. 9 of UN General Assembly Resolution 75/282, on behalf of 124 signatories.

We, the undersigned, representing a broad spectrum of the global security research community, write to express our serious concerns about the UN Cybercrime Treaty drafts released during the sixth session and the most recent one. These drafts pose substantial risks to global cybersecurity and significantly impact the rights and activities of good faith cybersecurity researchers.

Our community, which includes good faith security researchers in academia and cybersecurity companies, as well as those working independently, plays a critical role in safeguarding information technology systems. We identify vulnerabilities that, if left unchecked, can spread malware, cause data breaches, and give criminals access to sensitive information of millions of people. We rely on the freedom to openly discuss, analyze, and test these systems, free of legal threats.

The nature of our work is to research, discover, and report vulnerabilities in networks, operating systems, devices, firmware, and software. However, several provisions in the draft treaty risk hindering our work by categorizing much of it as criminal activity. If adopted in its current form, the proposed treaty would increase the risk that good faith security researchers could face prosecution, even when our goal is to enhance technological safety and educate the public on cybersecurity matters. It is critical that legal frameworks support our efforts to find and disclose technological weaknesses to make everyone more secure, rather than penalize us, and chill the very research and disclosure needed to keep us safe. This support is essential to improving the security and safety of technology for everyone across the world.

Equally important is our ability to differentiate our legitimate security research activities from malicious exploitation of security flaws. Current laws focusing on “unauthorized access” can be misapplied to good faith security researchers, leading to unnecessary legal challenges. In addressing this, we must consider two potential obstacles to our vital work. Broad, undefined rules for prior authorization risk deterring good faith security researchers, as they may not understand when or under what circumstances they need permission. This lack of clarity could ultimately weaken everyone's online safety and security. Moreover, our work often involves uncovering unknown vulnerabilities. These are security weaknesses that no one, including the system's owners, knows about until we discover them. We cannot be certain what vulnerabilities we might find. Therefore, requiring us to obtain prior authorization for each potential discovery is impractical and overlooks the essence of our work.

The unique strength of the security research community lies in its global focus, which prioritizes safeguarding infrastructure and protecting users worldwide, often putting aside geopolitical interests. Our work, particularly the open publication of research, minimizes and prevents harm that could impact people globally, transcending particular jurisdictions. The proposed treaty’s failure to exempt good faith security research from the expansive scope of its cybercrime prohibitions and to make the safeguards and limitations in Articles 6-10 mandatory leaves the door wide open for states to suppress or control the flow of security-related information. This would undermine the universal benefit of openly shared cybersecurity knowledge, and ultimately the safety and security of the digital environment.

We urge states to recognize the vital role the security research community plays in defending our digital ecosystem against cybercriminals, and call on delegations to ensure that the treaty supports, rather than hinders, our efforts to enhance global cybersecurity and prevent cybercrime. Specifically:

Article 6 (Illegal Access): This article risks criminalizing essential activities in security research, particularly where researchers access systems without prior authorization to identify vulnerabilities. A clearer distinction is needed between malicious unauthorized access “without right” and “good faith” security research activities; safeguards for legitimate activities should be mandatory. A malicious intent requirement—including an intent to cause damage, defraud, or harm—is needed to avoid criminal liability for accidental or unintended access to a computer system, as well as for good faith security testing.

Article 6 should not use the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Apart from potentially criminalizing security research, similar provisions have also been misconstrued to attach criminal liability to minor violations committed deliberately or accidentally by authorized users. For example, violation of private terms of service (TOS)—a minor infraction ordinarily considered a civil issue—could be elevated into a criminal offense category via this treaty on a global scale.

Additionally, the treaty currently gives states the option to define unauthorized access in national law as the bypassing of security measures. This should not be optional, but rather a mandatory safeguard, to avoid criminalizing routine behavior such as changing one’s IP address, inspecting website code, and accessing unpublished URLs. Furthermore, it is crucial to specify that the bypassed security measures must be actually "effective." This distinction is important because it ensures that criminalization is precise and scoped to activities that cause harm. For instance, bypassing basic measures like geoblocking—which can be done innocently simply by changing location—should not be treated the same as overcoming robust security barriers with the intention to cause harm.

By adopting this safeguard and ensuring that security measures are indeed effective, the proposed treaty would shield researchers from arbitrary criminal sanctions for good faith security research.

These changes would clarify unauthorized access, more clearly differentiating malicious hacking from legitimate cybersecurity practices like security research and vulnerability testing. Adopting these amendments would enhance protection for cybersecurity efforts and more effectively address concerns about harmful or fraudulent unauthorized intrusions.

Article 7 (Illegal Interception): Analysis of network traffic is also a common practice in cybersecurity; this article currently risks criminalizing such analysis and should similarly be narrowed to require criminal intent (mens rea) to harm or defraud.

Article 8 (Interference with Data) and Article 9 (Interference with Computer Systems): These articles may inadvertently criminalize acts of security research, which often involve testing the robustness of systems by simulating attacks through interferences. As with prior articles, criminal intent to cause harm or defraud is not mandated, and a requirement that the activity cause serious harm is absent from Article 9 and optional in Article 8. These safeguards should be mandatory.

Article 10 (Misuse of Devices): The broad scope of this article could criminalize the legitimate use of tools employed in cybersecurity research, thereby affecting the development and use of these tools. Under the current draft, Article 10(2) specifically addresses the misuse of cybersecurity tools. It criminalizes obtaining, producing, or distributing these tools only if they are intended for committing cybercrimes as defined in Articles 6 to 9 (which cover illegal access, interception, data interference, and system interference). However, this also raises a concern. If Articles 6 to 9 do not explicitly protect activities like security testing, Article 10(2) may inadvertently criminalize security researchers. These researchers often use similar tools for legitimate purposes, like testing and enhancing systems security. Without narrow scope and clear safeguards in Articles 6-9, these well-intentioned activities could fall under legal scrutiny, despite not being aligned with the criminal malicious intent (mens rea) targeted by Article 10(2).

Article 22 (Jurisdiction): In combination with other provisions about measures that may be inappropriately used to punish or deter good-faith security researchers, the overly broad jurisdictional scope outlined in Article 22 also raises significant concerns. Under the article's provisions, security researchers discovering or disclosing vulnerabilities to keep the digital ecosystem secure could be subject to criminal prosecution simultaneously across multiple jurisdictions. This would have a chilling effect on essential security research globally and hinder researchers' ability to contribute to global cybersecurity. To mitigate this, we suggest revising Article 22(5) to prioritize “determining the most appropriate jurisdiction for prosecution” rather than “coordinating actions.” This shift could prevent the redundant prosecution of security researchers. Additionally, deleting Article 17 and limiting the scope of procedural and international cooperation measures to crimes defined in Articles 6 to 16 would further clarify and protect against overreach.

Article 28(4): This article is gravely concerning from a cybersecurity perspective. It empowers authorities to compel “any individual” with knowledge of computer systems to provide any “necessary information” for conducting searches and seizures of computer systems. This provision can be abused to force security experts, software engineers and/or tech employees to expose sensitive or proprietary information. It could also encourage authorities to bypass normal channels within companies and coerce individual employees, under the threat of criminal prosecution, to provide assistance in subverting technical access controls such as credentials, encryption, and just-in-time approvals without their employers’ knowledge. This dangerous paragraph must be removed in favor of the general duty for custodians of information to comply with lawful orders to the extent of their ability.

Security researchers—whether within organizations or independent—discover, report, and assist in fixing the tens of thousands of critical Common Vulnerabilities and Exposures (CVE) reported over the lifetime of the National Vulnerability Database. Our work is a crucial part of the security landscape, yet often faces serious legal risk from overbroad cybercrime legislation.

While the proposed UN Cybercrime Treaty's core cybercrime provisions closely mirror the Council of Europe’s Budapest Convention, the impact of cybercrime regimes on security research has evolved considerably in the two decades since that treaty was adopted in 2001. In that time, good faith cybersecurity researchers have faced significant repercussions for responsibly identifying security flaws. Concurrently, a number of countries have enacted legislative or other measures to protect the critical line of defense this type of research provides. The UN Treaty should learn from these past experiences by explicitly exempting good faith cybersecurity research from the scope of the treaty. It should also make existing safeguards and limitations mandatory. This change is essential to protect the crucial work of good faith security researchers and ensure the treaty remains effective against current and future cybersecurity challenges.

Since these negotiations began, we had hoped that governments would adopt a treaty that strengthens global computer security and enhances our ability to combat cybercrime. Unfortunately, the draft text, as written, would have the opposite effect. The current text would weaken cybersecurity and make it easier for malicious actors to create or exploit weaknesses in the digital ecosystem by subjecting us to criminal prosecution for good faith work that keeps us all safer. Such an outcome would undermine the very purpose of the treaty: to protect individuals and our institutions from cybercrime.


Individual Signatories
Jobert Abma, Co-Founder, HackerOne (United States)
Martin Albrecht, Chair of Cryptography, King's College London (Global)
Nicholas Allegra (United States)
Ross Anderson, Universities of Edinburgh and Cambridge (United Kingdom)
Diego F. Aranha, Associate Professor, Aarhus University (Denmark)
Kevin Beaumont, Security researcher (Global)
Steven Becker (Global)
Janik Besendorf, Security Researcher (Global)
Wietse Boonstra (Global)
Juan Brodersen, Cybersecurity Reporter, Clarin (Argentina)
Sven Bugiel, Faculty, CISPA Helmholtz Center for Information Security (Germany)
Jon Callas, Founder and Distinguished Engineer, Zatik Security (Global)
Lorenzo Cavallaro, Professor of Computer Science, University College London (Global)
Joel Cardella, Cybersecurity Researcher (Global)
Inti De Ceukelaire (Belgium)
Enrique Chaparro, Information Security Researcher (Global)
David Choffnes, Associate Professor and Executive Director of the Cybersecurity and Privacy Institute at Northeastern University (United States/Global)
Gabriella Coleman, Full Professor Harvard University (United States/Europe)
Cas Cremers, Professor and Faculty, CISPA Helmholtz Center for Information Security (Global)
Daniel Cuthbert (Europe, Middle East, Africa)
Ron Deibert, Professor and Director, the Citizen Lab at the University of Toronto's Munk School (Canada)
Domingo, Security Incident Handler, Access Now (Global)
Stephane Duguin, CEO, CyberPeace Institute (Global)
Zakir Durumeric, Assistant Professor of Computer Science, Stanford University; Chief Scientist, Censys (United States)
James Eaton-Lee, CISO, NetHope (Global)
Serge Egelman, University of California, Berkeley; Co-Founder and Chief Scientist, AppCensus (United States/Global)
Jen Ellis, Founder, NextJenSecurity (United Kingdom/Global)
Chris Evans, Chief Hacking Officer @ HackerOne; Founder @ Google Project Zero (United States)
Dra. Johanna Caterina Faliero, PhD; Professor, Faculty of Law, University of Buenos Aires; Professor, University of National Defence (Argentina/Global)
Dr. Ali Farooq, University of Strathclyde, United Kingdom (Global)
Victor Gevers, co-founder of the Dutch Institute for Vulnerability Disclosure (Netherlands)
Abir Ghattas (Global)
Ian Goldberg, Professor and Canada Research Chair in Privacy Enhancing Technologies, University of Waterloo (Canada)
Matthew D. Green, Associate Professor, Johns Hopkins University (United States)
Harry Grobbelaar, Chief Customer Officer, Intigriti (Global)
Juan Andrés Guerrero-Saade, Associate Vice President of Research, SentinelOne (United States/Global)
Mudit Gupta, Chief Information Security Officer, Polygon (Global)
Hamed Haddadi, Professor of Human-Centred Systems at Imperial College London; Chief Scientist at Brave Software (Global)
J. Alex Halderman, Professor of Computer Science & Engineering and Director of the Center for Computer Security & Society, University of Michigan (United States)
Joseph Lorenzo Hall, PhD, Distinguished Technologist, The Internet Society
Dr. Ryan Henry, Assistant Professor and Director of Masters of Information Security and Privacy Program, University of Calgary (Canada)
Thorsten Holz, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Joran Honig, Security Researcher (Global)
Wouter Honselaar, MSc student security; hosting engineer & volunteer, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Prof. Dr. Jaap-Henk Hoepman (Europe)
Christian “fukami” Horchert (Germany / Global)
Andrew 'bunnie' Huang, Researcher (Global)
Dr. Rodrigo Iglesias, Information Security, Lawyer (Argentina)
Hudson Jameson, Co-Founder - Security Alliance (SEAL)(Global)
Stijn Jans, CEO of Intigriti (Global)
Gerard Janssen, Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
JoyCfTw, Hacktivist (United States/Argentina/Global)
Doña Keating, President and CEO, Professional Options LLC (Global)

Federico Kirschbaum, Co-Founder & CEO of Faraday Security, Co-Founder of Ekoparty Security Conference (Argentina/Global)
Xavier Knol, Cybersecurity Analyst and Researcher (Global)
Olaf Kolkman, Principal, Internet Society (Global)
Micah Lee, Director of Information Security, The Intercept (United States)
Jan Los (Europe/Global)
Matthias Marx, Hacker (Global)
Keane Matthews, CISSP (United States)
René Mayrhofer, Full Professor and Head of Institute of Networks and Security, Johannes Kepler University Linz, Austria (Austria/Global)
Ron Mélotte (Netherlands)
Hans Meuris (Global)
Marten Mickos, CEO, HackerOne (United States)
Adam Molnar, Assistant Professor, Sociology and Legal Studies, University of Waterloo (Canada/Global)
Jeff Moss, Founder of the information security conferences DEF CON and Black Hat (United States)
Katie Moussouris, Founder and CEO of Luta Security; coauthor of ISO standards on vulnerability disclosure and handling processes (Global)
Alec Muffett, Security Researcher (United Kingdom)
Kurt Opsahl, Associate General Counsel for Cybersecurity and Civil Liberties Policy, Filecoin Foundation; President, Security Researcher Legal Defense Fund (Global)
Ivan "HacKan" Barrera Oro (Argentina)
Chris Palmer, Security Engineer (Global)
Yanna Papadodimitraki, University of Cambridge (United Kingdom/European Union/Global)
Sunoo Park, New York University (United States)
Mathias Payer, Associate Professor, École Polytechnique Fédérale de Lausanne (EPFL)(Global)
Giancarlo Pellegrino, Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Fabio Pierazzi, King’s College London (Global)
Bart Preneel, full professor, University of Leuven, Belgium (Global)
Michiel Prins, Founder @ HackerOne (United States)
Joel Reardon, Professor of Computer Science, University of Calgary, Canada; Co-Founder of AppCensus (Global)
Alex Rice, Co-Founder & CTO, HackerOne (United States)
René Rehme, rehme.infosec (Germany)
Tyler Robinson, Offensive Security Researcher (United States)
Michael Roland, Security Researcher and Lecturer, Institute of Networks and Security, Johannes Kepler University Linz; Member, SIGFLAG - Verein zur (Austria/Europe/Global)
Christian Rossow, Professor and Faculty, CISPA Helmholtz Center for Information Security, Germany (Global)
Pilar Sáenz, Coordinator Digital Security and Privacy Lab, Fundación Karisma (Colombia)
Runa Sandvik, Founder, Granitt (United States/Global)
Koen Schagen (Netherlands)
Sebastian Schinzel, Professor at University of Applied Sciences Münster and Fraunhofer SIT (Germany)
Bruce Schneier, Fellow and Lecturer, Harvard Kennedy School (United States)
HFJ Schokkenbroek (hp197), IFCAT board member (Netherlands)
Javier Smaldone, Security Researcher (Argentina)
Guillermo Suarez-Tangil, Assistant Professor, IMDEA Networks Institute (Global)
Juan Tapiador, Universidad Carlos III de Madrid, Spain (Global)
Dr Daniel R. Thomas, University of Strathclyde, StrathCyber, Computer & Information Sciences (United Kingdom)
Cris Thomas (Space Rogue), IBM X-Force (United States/Global)
Carmela Troncoso, Assistant Professor, École Polytechnique Fédérale de Lausanne (EPFL) (Global)
Narseo Vallina-Rodriguez, Research Professor at IMDEA Networks/Co-founder AppCensus Inc (Global)
Jeroen van der Broek, IT Security Engineer (Netherlands)
Jeroen van der Ham-de Vos, Associate Professor, University of Twente, The Netherlands (Global)
Charl van der Walt, Head of Security Research, Orange Cyberdefense (a division of Orange Networks) (South Africa/France/Global)
Chris van 't Hof, Managing Director DIVD, Dutch Institute for Vulnerability Disclosure (Global)
Dimitri Verhoeven (Global)
Tarah Wheeler, CEO Red Queen Dynamics & Senior Fellow Global Cyber Policy, Council on Foreign Relations (United States)
Dominic White, Ethical Hacking Director, Orange Cyberdefense (a division of Orange Networks)(South Africa/Europe)
Eddy Willems, Security Evangelist (Global)
Christo Wilson, Associate Professor, Northeastern University (United States)
Robin Wilton, IT Consultant (Global)
Tom Wolters (Netherlands)
Mehdi Zerouali, Co-founder & Director, Sigma Prime (Australia/Global)

Organizational Signatories
Dutch Institute for Vulnerability Disclosure (DIVD)(Netherlands)
Fundación Vía Libre (Argentina)
Good Faith Cybersecurity Researchers Coalition (European Union)
Access Now (Global)
Chaos Computer Club (CCC)(Europe)
HackerOne (Global)
Hacking Policy Council (United States)
HINAC (Hacking is not a Crime)(United States/Argentina/Global)
Intigriti (Global)
Jolo Secure (Latin America)
K+LAB, Digital security and privacy Lab, Fundación Karisma (Colombia)
Luta Security (Global)
OpenZeppelin (United States)
Professional Options LLC (Global)
Stichting International Festivals for Creative Application of Technology Foundation

Draft UN Cybercrime Treaty Could Make Security Research a Crime, Leading 124 Experts to Call on UN Delegates to Fix Flawed Provisions that Weaken Everyone’s Security

7 February 2024 at 10:56

Security researchers’ work discovering and reporting vulnerabilities in software, firmware, networks, and devices protects people, businesses, and governments around the world from malware, theft of critical data, and other cyberattacks. The internet and the digital ecosystem are safer because of their work.

The UN Cybercrime Treaty, which is in the final stages of drafting in New York this week, risks criminalizing this vitally important work. This is appalling and wrong, and must be fixed.

One hundred and twenty-four prominent security researchers and cybersecurity organizations from around the world voiced their concern today about the draft and called on UN delegates to modify flawed language in the text that would hinder researchers’ efforts to enhance global security and prevent the actual criminal activity the treaty is meant to rein in.

Time is running out—the final negotiations over the treaty end Feb. 9. The talks are the culmination of two years of negotiations; EFF and its international partners have
raised concerns over the treaty’s flaws since the beginning. If approved as is, the treaty will substantially impact criminal laws around the world and grant new expansive police powers for both domestic and international criminal investigations.

Experts who work globally to find and fix vulnerabilities before real criminals can exploit them said in a statement today that vague language and overbroad provisions in the draft increase the risk that researchers could face prosecution. The draft fails to protect the good faith work of security researchers who may bypass security measures and gain access to computer systems in identifying vulnerabilities, the letter says.

The draft threatens security researchers because it doesn’t specify that access to computer systems with no malicious intent to cause harm, steal, or infect with malware should not be subject to prosecution. If left unchanged, the treaty would be a major blow to cybersecurity around the world.

Specifically, security researchers seek changes to Article 6,
which risks criminalizing essential activities, including accessing systems without prior authorization to identify vulnerabilities. The current text also includes the ambiguous term “without right” as a basis for establishing criminal liability for unauthorized access. Clarification of this vague language, as well as a requirement that unauthorized access be done with malicious intent, is needed to protect security research.

The signers also called out Article 28(4), which empowers States to force “any individual” with knowledge of computer systems to turn over any information necessary to conduct searches and seizures of computer systems.
This dangerous paragraph must be removed and replaced with language specifying that custodians must only comply with lawful orders to the extent of their ability.

There are many other problems with the draft treaty—it lacks human rights safeguards, gives States powers to reach across borders to surveil and collect personal information of people in other States, and forces tech companies to collude with law enforcement in alleged cybercrime investigations.

EFF and its international partners have been and are pressing hard for human rights safeguards and other fixes to ensure that the fight against cybercrime does not require sacrificing fundamental rights. We stand with security researchers in demanding amendments to ensure the treaty is not used as a tool to threaten, intimidate, or prosecute them, software engineers, security teams, and developers.

 For the statement:
https://www.eff.org/deeplinks/2024/02/protect-good-faith-security-research-globally-proposed-un-cybercrime-treaty

For more on the treaty:
https://ahc.derechosdigitales.org/en/

In Final Talks on Proposed UN Cybercrime Treaty, EFF Calls on Delegates to Incorporate Protections Against Spying and Restrict Overcriminalization or Reject Convention

29 January 2024 at 12:42

Update: Delegates at the concluding negotiating session failed to reach consensus on human rights protections, government surveillance, and other key issues. The session was suspended Feb. 8 without a final draft text. Delegates will resume talks at a later date with a view to concluding their work and providing a draft convention to the UN General Assembly at its 78th session later this year.

UN Member States are meeting in New York this week to conclude negotiations over the final text of the UN Cybercrime Treaty, which—despite warnings from hundreds of civil society organizations across the globe, security researchers, media rights defenders, and the world’s largest tech companies—will, in its present form, endanger human rights and make the cyber ecosystem less secure for everyone.

EFF and its international partners are going into this last session with a
unified message: without meaningful changes to limit surveillance powers for electronic evidence gathering across borders and to add robust minimum human rights safeguards that apply across borders, the convention should be rejected by state delegations and not advance to the UN General Assembly in February for adoption.

EFF and its partners have for months warned that enforcement of such a treaty would have dire consequences for human rights. On a practical level, it will impede free expression and endanger activists, journalists, dissenters, and everyday people.

Under the draft treaty's current provisions on accessing personal data for criminal investigations across borders, each country is allowed to define what constitutes a "serious crime." Such definitions can be excessively broad and violate international human rights standards. States where it’s a crime to criticize political leaders (
Thailand), upload videos of yourself dancing (Iran), or wave a rainbow flag in support of LGBTQ+ rights (Egypt), can, under this UN-sanctioned treaty, require one country to conduct surveillance to aid another, in accordance with the data disclosure standards of the requesting country. This includes surveilling individuals under investigation for these offenses, with the expectation that technology companies will assist. Such assistance involves turning over personal information, location data, and private communications secretly, without any guardrails, in jurisdictions lacking robust legal protections.

The final 10-day negotiating session in New York will conclude a
series of talks that started in 2022 to create a treaty to prevent and combat core computer-enabled crimes, like distribution of malware, data interception and theft, and money laundering. From the beginning, Member States failed to reach consensus on the treaty’s scope, the inclusion of human rights safeguards, and even the definition of “cybercrime.” The scope of the entire treaty was too broad from the outset; Member States eventually dropped some of these offenses, limiting the scope of the criminalization section, but left intact the evidence-gathering provisions that hand States dangerous surveillance powers. What was supposed to be an international accord to combat core cybercrime morphed into a global surveillance agreement covering any and all crimes conceived by Member States.

The latest draft,
released last November, blatantly disregards our calls to narrow the scope, strengthen human rights safeguards, and tighten loopholes enabling countries to assist each other in spying on people. It also retains a controversial provision allowing states to compel engineers or tech employees to undermine security measures, posing a threat to encryption. Absent from the draft are protections for good-faith cybersecurity researchers and others acting in the public interest.

This is unacceptable. In a Jan. 23 joint
statement to delegates participating in this final session, EFF and 110 organizations outlined non-negotiable redlines for the draft that will emerge from this session, which ends Feb. 8. These include:

  • Narrowing the scope of the entire Convention to cyber-dependent crimes specifically defined within its text.
  • Including provisions to ensure that security researchers, whistleblowers, journalists, and human rights defenders are not prosecuted for their legitimate activities and that other public interest activities are protected. 
  • Guaranteeing explicit data protection and human rights standards like legitimate purpose, nondiscrimination, prior judicial authorization, necessity and proportionality apply to the entire Convention.
  • Mainstreaming gender across the Convention as a whole and throughout each article in efforts to prevent and combat cybercrime.

It’s been a long fight pushing for a treaty that combats cybercrime without undermining basic human rights. Without these improvements, the risks of this treaty far outweigh its potential benefits. States must stand firm and reject the treaty if our redlines can’t be met. We cannot and will not support or recommend a draft that will make everyone less, instead of more, secure.

UAE Confirms Trial Against 84 Detainees; Ahmed Mansoor Suspected Among Them

10 January 2024 at 05:51

The UAE confirmed this week that it has placed 84 detainees on trial, on charges of “establishing another secret organization for the purpose of committing acts of violence and terrorism on state territory.” Suspected to be among those facing trial is award-winning human rights defender Ahmed Mansoor, also known as “the million dollar dissident,” as he was once the target of exploits that exposed major security flaws in Apple’s iOS operating system—the kind of “zero-day” vulnerabilities that fetch seven figures on the exploit market. Mansoor drew the ire of UAE authorities for criticizing the country’s internet censorship and surveillance apparatus and for calling for a free press and democratic freedoms in the country.

Having previously been arrested in 2011 and sentenced to three years' imprisonment for “insulting officials,” Ahmed Mansoor was released after eight months due to a presidential pardon influenced by international pressure. Later, Mansoor faced new speech-related charges for using social media to “publish false information that harms national unity.” During this period, authorities held him in an unknown location for over a year, deprived of legal representation, before convicting him again in May 2018 and sentencing him to ten years in prison under the UAE’s draconian cybercrime law. We have long advocated for his release, and are joined in doing so by hundreds of digital and human rights organizations around the world.

At the recent COP28 climate talks, Human Rights Watch, Amnesty International, and other activists conducted a protest inside the UN-protected “blue zone” to raise awareness of Mansoor’s plight, as well as the cases of UAE detainee Mohamed El-Siddiq and Egyptian-British activist Alaa Abd El Fattah. At the same time, a dissident group reported that the UAE was proceeding with the trial against 84 of its detainees.

We reiterate our call for Ahmed Mansoor’s freedom, and take this opportunity to raise further awareness of the oppressive nature of the legislation used to imprison him. The UAE’s use of its criminal law to silence those who speak truth to power is another example of how counter-terrorism laws restrict free expression and justify disproportionate state surveillance. This concern is not hypothetical; a 2023 study by the Special Rapporteur on counter-terrorism found widespread and systematic abuse of civil society and civic space through the use of similar laws supposedly designed to counter terrorism. Moreover, and problematically, references “related to terrorism” in the treaty preamble are still included in the latest version of a proposed United Nations Cybercrime Treaty, currently being negotiated with more than 190 Member States, even though there is no agreed-upon definition of terrorism in international law. If approved as currently written, the UN Cybercrime Treaty has the potential to substantively reshape international criminal law and bolster cross-border police surveillance powers to access and share users’ data, implicating the human rights of billions of people worldwide. It could also enable States to justify repressive measures that overly restrict free expression and peaceful dissent.

Fighting European Threats to Encryption: 2023 Year in Review 

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. Yet throughout 2023, politicians across Europe attempted to undermine encryption, seeking to access and scan our private messages and pictures. 

But we pushed back in the EU, and so far, we’ve succeeded. EFF spent this year fighting hard against an EU proposal (text) that, if it became law, would have been a disaster for online privacy in the EU and throughout the world. In the name of fighting online child abuse, the European Commission, the EU’s executive body, put forward a draft bill that would allow EU authorities to compel online services to scan user data and check it against law enforcement databases. The proposal would have pressured online services to abandon end-to-end encryption. The Commission even suggested using AI to rifle through people’s text messages, leading some opponents to call the proposal “chat control.”

EFF has been opposed to this proposal since it was unveiled last year. We joined together with EU allies and urged people to sign the “Don’t Scan Me” petition. We lobbied EU lawmakers and urged them to protect their constituents’ human right to have a private conversation—backed up by strong encryption. 

Our message broke through. In November, a key EU committee adopted a position that bars mass scanning of messages and protects end-to-end encryption. It also bars mandatory age verification, which would have amounted to a mandate to show ID before you get online; age verification can erode a free and anonymous internet for both kids and adults. 

We’ll continue to monitor the EU proposal as attention shifts to the Council of the EU, the second decision-making body of the EU. Despite several Member States still supporting widespread surveillance of citizens, there are promising signs that such a measure won’t get majority support in the Council. 

Make no mistake—the hard-fought compromise in the European Parliament is a big victory for EFF and our supporters. The governments of the world should understand clearly: mass scanning of people’s messages is wrong, and at odds with human rights. 

A Wrong Turn in the U.K.

EFF also opposed the U.K.’s Online Safety Bill (OSB), which passed and became the Online Safety Act (OSA) this October, after more than four years on the British legislative agenda. The stated goal of the OSB was to make the U.K. the world’s “safest place” to use the internet, but the bill’s more than 260 pages actually outline a variety of ways to undermine our privacy and speech. 

The OSA requires platforms to take action to prevent individuals from encountering certain illegal content, which will likely mandate the use of intrusive scanning systems. Even worse, it empowers the British government, in certain situations, to demand that online platforms use government-approved software to scan for illegal content. The U.K. government said that content will only be scanned to check for specific categories of content. In one of the final OSB debates, a representative of the government noted that orders to scan user files “can be issued only where technically feasible,” as determined by the U.K. communications regulator, Ofcom. 

But as we’ve said many times, there is no middle ground on content scanning and no “safe backdoor” if the internet is to remain free and private. Either all content is scanned and all actors—including authoritarian governments and rogue criminals—have access, or no one does. 

Despite opposition from EFF and the civil society groups in the U.K. with which we worked closely, the bill passed in September, with its anti-encryption measures intact. But the story doesn't end here. The OSA remains vague about what exactly it requires of platforms and users alike. Ofcom must now take the OSA and, over the coming year, draft regulations to operationalize the legislation. 

The public understands better than ever that government efforts to “scan it all” will always undermine encryption, and prevent us from having a safe and secure internet. EFF will monitor Ofcom’s drafting of the regulation, and we will continue to hold the UK government accountable to the international and European human rights protections that they are signatories to. 

This blog is part of our Year in Review series. Read other articles about the fight for digital rights in 2023.
