The Government Accountability Office (GAO) has issued a report finding that federal agents are using face recognition software without training, policies, or oversight. The GAO reviewed seven agencies within the Department of Homeland Security and Department of Justice, and found that none of the seven agencies fully complied with their own policies on handling personally identifiable information (PII), like facial images.
The GAO also found that thousands of face recognition searches have been conducted by federal agents without training or policies. In the period the GAO studied, at least 63,000 searches took place, though this number is a known undercount: a complete tally of face recognition use is not possible, because some systems used by the Federal Bureau of Investigation (FBI) and Customs and Border Protection (CBP) don't track these numbers.
The GAO report is a reminder of the dangers of face recognition technology, particularly when used by law enforcement and government. Face recognition technology can be used to facilitate covert mass surveillance, make judgments about how we feel and behave, and track people automatically as they go about their day.
The GAO recommends guardrails around who can use face recognition technology and for what purposes; privacy advocates go further, arguing that the federal government should cease its use of this technology altogether.
The Electronic Frontier Foundation (EFF) filed an amicus brief urging the Michigan Supreme Court to find that warrantless drone surveillance of a home violates the Fourth Amendment. The EFF argues that drones are fundamentally different from helicopters or airplanes, and that their silent and unobtrusive capabilities make them a formidable threat to privacy. The EFF also points out that the government is increasingly using drones for surveillance, and that communities of color are more likely to be targeted. The EFF calls on the court to recognize the danger that governmental drone use poses to our Fourth Amendment rights.
A recent privacy study from Cornell University reveals that Amazon Alexa, the virtual assistant found in smart speakers, collects user data for targeted advertising both on and off its platform, raising concerns about privacy violations. The study also finds that the data practices of Amazon and of third-party skill developers are often not disclosed transparently in their privacy policies.
Amazon Alexa is designed to respond to voice commands and is present in various Amazon devices, offering a wide range of functionalities, including controlling smart devices, providing information, and playing music.
While Amazon claims that Alexa only records when activated by its wake word ("Alexa"), research has shown that it can sometimes activate accidentally, leading to unintended recordings. Amazon employees listen to and transcribe these recordings, raising concerns about privacy.
Amazon links interactions with Alexa to user accounts and uses this data for targeted advertising. Advertisers pay a premium for this information, making it highly valuable. Although Amazon allows users to delete their recordings, whether those deletion requests are fully honored has been questioned.
Additionally, third-party "skills" on Alexa can access user data, and many developers abuse Amazon's privacy policies by collecting voice data and sharing it with third parties without proper oversight.
The recent FTC fine against Amazon highlights its failure to delete certain data, including voice recordings, after users requested their removal, violating the Children's Online Privacy Protection Act (COPPA).
While Amazon Alexa offers convenience, it comes at the cost of privacy. Users looking for more privacy-friendly alternatives can consider Apple's Siri, which offers stronger privacy protection. For those interested in open-source options, Mycroft provides a natural language voice assistant with an emphasis on privacy, but note that the company may be shutting down soon.
The FBI has requested a significant budget increase for 2024, specifically for its DNA database known as CODIS. This request, totaling $53 million, is in response to a 2020 rule that requires the Department of Homeland Security to collect DNA from individuals in immigration detention. CODIS currently holds genetic information from over 21 million people, with 92,000 new DNA samples added monthly. This increase in funding demonstrates the government's commitment to collecting over 750,000 new samples annually from immigrant detainees, raising concerns about civil liberties, government surveillance, and the weaponization of biometrics.
Since the Supreme Court's Maryland v. King decision in 2013, states have expanded DNA collection to cover more offenses, even those unrelated to DNA evidence. The federal government's push to collect DNA from all immigrant detainees represents a drastic effort to accumulate genetic information, despite evidence disproving a link between crime and immigration status.
Studies suggest that increasing DNA database profiles does not significantly improve crime-solving rates, with the number of crime-scene samples being more relevant. Additionally, inclusion in a DNA database increases the risk of innocent individuals being implicated in crimes.
This expanded DNA collection worsens racial disparities in the criminal justice system, as it disproportionately affects communities of color. Black and Latino men are already overrepresented in DNA databases, and adding nearly a million new profiles of immigrant detainees, mostly people of color, will further skew the existing 21 million profiles in CODIS.
The government's increased capacity for collecting and storing invasive data poses a risk to all individuals. With the potential for greater sample volume and broader collection methods, society is moving closer to a future of mass biometric surveillance where everyone's privacy is at risk.
The UK Parliament has passed the Online Safety Bill (OSB). Its backers claim the law will enhance online safety, but in practice it paves the way for increased censorship and surveillance. The bill grants the government the authority to compel tech companies to scan all user data, including encrypted messages, to detect child abuse content, effectively creating an encryption backdoor that jeopardizes privacy and security for everyone. It also mandates the removal of content deemed inappropriate for children, potentially resulting in politicized censorship decisions, and its age-verification requirements may infringe on anonymity and free speech. How these powers will be used remains a serious concern, and encrypted services may withdraw from the UK rather than compromise their users' security.
Israeli software maker Insanet has developed a commercial product called Sherlock that can infect devices via online adverts in order to snoop on targets and collect data about them for the company's clients. This is the first time details of Insanet and its surveillanceware have been made public. Sherlock is capable of drilling its way into Microsoft Windows, Google Android, and Apple iOS devices. Insanet received approval from Israel's Defense Ministry to sell Sherlock globally as a military product, albeit under tight restrictions such as selling only to Western nations.
To market its snoopware, Insanet reportedly teamed up with Candiru, an Israel-based spyware maker that has been sanctioned in the US, to offer Sherlock along with Candiru's spyware.
The Electronic Frontier Foundation's Director of Activism Jason Kelley said Insanet's use of advertising technology to infect devices and spy on clients' targets makes it especially worrisome.
There are some measures netizens can take to protect themselves from Sherlock and other data-harvesting technologies:

* use an ad blocker or a privacy-aware browser
* avoid clicking on advertisements
* push for the passage of consumer data privacy laws
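To make the first item above concrete: ad blockers that work at the DNS or hosts-file level simply refuse to resolve domains found on a blocklist. A minimal sketch of that matching logic (the function name and blocklist entries here are illustrative, not taken from any particular tool):

```python
def is_blocked(domain: str, blocklist: set[str]) -> bool:
    """Return True if the domain, or any parent domain, is on the blocklist.

    Subdomain-aware DNS blockers typically match this way (so listing
    "tracker.example" also blocks "ads.tracker.example"); a plain hosts
    file, by contrast, matches exact names only.
    """
    labels = domain.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))
```

Advert-delivered malware like Sherlock depends on the ad request being made at all, which is why blocking the ad domain outright is more robust than merely not clicking.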
* A federal judge has dismissed a lawsuit challenging a rule that requires visa applicants to disclose their social media accounts to the U.S. government.
* The rule, which went into effect in 2019, applies to visa applicants from all countries and requires them to disclose their social media identifiers, including pseudonymous accounts, for the past five years.
* The plaintiffs, two U.S.-based documentary film organizations, argued that the rule violates visa applicants' First Amendment rights by chilling free speech and association: applicants would be less likely to express themselves on social media knowing the government could see their posts.
* It's unclear whether the plaintiffs plan to appeal the ruling.
* The ruling is a reminder of the challenges faced by people who want to protect their privacy online.
This is not me. I just found the article to be interesting.
This post discusses personal privacy and security for Chief Information Security Officers (CISOs) and their families. The author shares their journey of enhancing safety, which was prompted by a potential breach of their personal life and their wife's celebrity status. They outline a two-phase approach: lockdown and disappearing.
In the lockdown phase, the author secured their digital life by creating an ultra-secure root account, implementing two-factor authentication (2FA) for all accounts, managing SMS and email recovery, and taking various safety measures, including the use of specific tools.
The disappearing phase involves maintaining privacy online by creating different personas for various aspects of life. The author explains how they established these personas, set up prerequisites like virtual credit cards and private mailboxes, and used VOIP services and email forwarding to manage different contact information.
The results of these efforts include increased security through privacy, making it challenging for attackers to target the author. The post also highlights an advanced experiment in purchasing a car anonymously and the importance of being cautious about potential privacy leaks even with careful planning.
* Customs and Border Protection (CBP) is increasing its target for scanning passengers with facial recognition as they leave the U.S. from 40% to 75%.
* The new goal will be implemented at the end of this month.
* CBP is changing its metric for measuring progress from the percentage of flights that have at least one biometrically processed traveler, to the percentage of passengers who are biometrically processed.
* CBP says that the change in metric is more accurate and provides a more complete picture of how robust biometric exit processing is on a national level.
* Congress has mandated that CBP achieve 97% or greater biometric exit compliance.
* Airlines are increasingly using facial recognition systems to confirm travelers' identities when boarding aircraft.
* Passengers who do not want to participate in facial recognition can opt out, but they may be asked to present travel documents or other proof of identification, *and in some cases, fingerprints*.
* CBP says that it will store facial images for no more than two weeks, and that it may share entry and exit data with other government agencies for law enforcement purposes.
The article also mentions a case where a privacy attorney was told by airline staff that she had to participate in facial recognition, even though she had a right to opt out. This suggests that there may be some confusion among airline staff about the rules surrounding facial recognition.
> A June 2017 CBP document explains its “Biometric Exit Process” for passengers: “All travelers are required to submit to CBP inspection upon exit. Facial images will be matched and then stored for ***no more than two weeks*** in secure data systems managed by the U.S. Department of Homeland Security in order to further evaluate the technology, ensure its accuracy, and for auditing purposes. In lieu of facial images, travelers may be asked to present travel documents or other proof of identification, and ***in some cases provide fingerprints***.” That document adds that it could share traveler exit and entry data with other government agencies “if the situation warrants, for law enforcement purposes.”
> It seems likely CBP will meet its goal for biometrically-processing 75 percent of passengers. In 2021 I obtained a cache of documents related to the airline JetBlue’s piloting of facial recognition systems. Already back then, JetBlue said it had seen more than 90 percent of customers participate in biometric boarding when it was available.
- The Office of the Director of National Intelligence (ODNI) released a report in June that found that U.S. government intelligence agencies are buying data about us from private surveillance companies.
- This is a dangerous practice because it allows the government to surveil us without following basic constitutional safeguards, like obtaining warrants.
- The report warned that when the government buys data about us, it can be "misused to pry into private lives, ruin reputations, and cause emotional distress and threaten the safety of individuals."
- The government's purchases of corporate surveillance data are pervasive. Intelligence agencies are buying up so much data that ODNI was not able to comprehensively review all the purchases.
- The report called for intelligence agencies to do more to consider the privacy impact of buying and using commercial data.
- It also called for intelligence agencies to conduct a sweeping review to understand how they are buying and using commercial surveillance data.
- However, the report does not solve the problems it identifies. It is not binding on the Director of National Intelligence or her successors, and it fails to disavow government use of commercial surveillance data.
- Instead, we need changes across government. Legislatures need to pass strong consumer data privacy legislation, so that data brokers have less data to sell the government.
- Legislatures also need to place statutory limits on the government, to prevent agencies from using data brokers to dodge search warrant requirements and to stop the use of reverse warrants.
- Courts should uphold Fourth Amendment precedent by refusing to allow the government to buy personal data it would otherwise need a warrant to obtain.
In conclusion, the government's purchase of corporate surveillance data is a dangerous practice that threatens our privacy and civil liberties. We need changes across government to prevent this from happening.