• 10 Posts
  • 14 Comments
Joined 10M ago
Cake day: Jul 11, 2023

**Summary:** The Government Accountability Office (GAO) has issued a report finding that federal agents are using face recognition software without training, policies, or oversight. The GAO reviewed seven agencies within the Department of Homeland Security and the Department of Justice and found that none of the seven fully complied with their own policies on handling personally identifiable information (PII), like facial images. The GAO also found that thousands of face recognition searches have been conducted by federal agents without training or policies. In the period the GAO studied, at least 63,000 searches took place, and this number is a known undercount; a complete count of face recognition use is not possible because some systems used by the Federal Bureau of Investigation (FBI) and Customs and Border Protection (CBP) don’t track these numbers. The report is a reminder of the dangers of face recognition technology, particularly when used by law enforcement and government: it can be used to facilitate covert mass surveillance, make judgments about how we feel and behave, and track people automatically as they go about their day. The report underscores the need for the federal government to immediately put guardrails around who can use face recognition technology and for what purposes, or, better yet, to cease its use of this technology altogether.

**Summary** The Electronic Frontier Foundation (EFF) filed an amicus brief urging the Michigan Supreme Court to find that warrantless drone surveillance of a home violates the Fourth Amendment. The EFF argues that drones are fundamentally different from helicopters or airplanes, and that their silent and unobtrusive capabilities make them a formidable threat to privacy. The EFF also points out that the government is increasingly using drones for surveillance, and that communities of color are more likely to be targeted. The EFF calls on the court to recognize the danger that governmental drone use poses to our Fourth Amendment rights.

**Summary** A recent privacy study from Cornell University reveals that Amazon Alexa, the virtual assistant found in smart speakers, collects user data for targeted advertising both on and off its platform, a practice that has raised concerns about privacy violations. The study also highlights that Amazon and third-party skill developers are often not transparent about these practices in their privacy policies.

Amazon Alexa is designed to respond to voice commands and is present in various Amazon devices, offering a wide range of functionalities, including controlling smart devices, providing information, and playing music. While Amazon claims that Alexa only records when activated by its wake word ("Alexa"), research has shown that it can sometimes activate accidentally, leading to unintended recordings. Amazon employees listen to and transcribe these recordings, raising privacy concerns. Amazon links interactions with Alexa to user accounts and uses this data for targeted advertising; advertisers pay a premium for this information, making it highly valuable. Although Amazon allows users to delete their recordings, compliance with this feature has been questioned. Additionally, third-party "skills" on Alexa can access user data, and many developers violate Amazon's privacy policies by collecting voice data and sharing it with third parties without proper oversight. The recent FTC fine against Amazon highlights its failure to delete certain data, including voice recordings, after users requested their removal, in violation of the Children's Online Privacy Protection Act (COPPA).

While Amazon Alexa offers convenience, it comes at the cost of privacy. Users looking for more privacy-friendly alternatives can consider Apple's Siri, which offers stronger privacy protection. For those interested in open-source options, Mycroft provides a natural-language voice assistant with an emphasis on privacy, but note that the company may be shutting down soon.

**Summary** The FBI has requested a significant budget increase for 2024, specifically for its DNA database known as CODIS. This request, totaling $53 million, is in response to a 2020 rule that requires the Department of Homeland Security to collect DNA from individuals in immigration detention. CODIS currently holds genetic information from over 21 million people, with 92,000 new DNA samples added monthly. This increase in funding demonstrates the government's commitment to collecting over 750,000 new samples annually from immigrant detainees, raising concerns about civil liberties, government surveillance, and the weaponization of biometrics.

Since the Supreme Court's Maryland v. King decision in 2013, states have expanded DNA collection to cover more offenses, even those unrelated to DNA evidence. The federal government's push to collect DNA from all immigrant detainees represents a drastic effort to accumulate genetic information, despite evidence disproving a link between crime and immigration status. Studies suggest that increasing DNA database profiles does not significantly improve crime-solving rates, with the number of crime-scene samples being more relevant. Additionally, inclusion in a DNA database increases the risk of innocent individuals being implicated in crimes.

This expanded DNA collection worsens racial disparities in the criminal justice system, as it disproportionately affects communities of color. Black and Latino men are already overrepresented in DNA databases, and adding nearly a million new profiles of immigrant detainees, mostly people of color, will further skew the existing 21 million profiles in CODIS. The government's increased capacity for collecting and storing invasive data poses a risk to all individuals. With the potential for greater sample volume and broader collection methods, society is moving closer to a future of mass biometric surveillance where everyone's privacy is at risk.

Since I am not in any way inclined to go read their code, I will probably just trust FF’s “recommended” flag until there is an obvious problem. Of course, by the time that happens, it’s too late. I tried the “Dark theme” on FF for a little bit, but switched back to Dark Reader in no time.


**Summary** The UK Parliament has passed the Online Safety Bill (OSB), claiming it will enhance online safety but actually leading to increased censorship and surveillance. The bill grants the government the authority to compel tech companies to scan all user data, including encrypted messages, to detect child abuse content, effectively creating a backdoor. This jeopardizes privacy and security for everyone. The bill also mandates the removal of content deemed inappropriate for children, potentially resulting in politicized censorship decisions. Age-verification systems may infringe on anonymity and free speech. The implications of how these powers will be used are a cause for concern, with the possibility that encrypted services may withdraw from the UK if their users' security is compromised.

**Summary** Israeli software maker Insanet has developed a commercial product called Sherlock that can infect devices via online adverts to snoop on targets and collect data about them for the biz's clients. This is the first time details of Insanet and its surveillanceware have been made public. Sherlock is capable of drilling its way into Microsoft Windows, Google Android, and Apple iOS devices. Insanet received approval from Israel's Defense Ministry to sell Sherlock globally as a military product, albeit under various tight restrictions, such as only selling to Western nations. To market its snoopware, Insanet reportedly teamed up with Candiru, an Israel-based spyware maker that has been sanctioned in the US, to offer Sherlock alongside Candiru's spyware. The Electronic Frontier Foundation's Director of Activism, Jason Kelley, said Insanet's use of advertising technology to infect devices and spy on clients' targets makes it especially worrisome. There are some measures netizens can take to protect themselves from Sherlock and other data-harvesting technologies:

* not loading JavaScript
* using ad blockers or privacy-aware browsers
* not clicking on advertisements
* pushing for consumer data privacy laws


https://www.404media.co/revealed-the-country-that-secretly-wiretapped-the-world-for-the-fbi/

It’s already behind a paywall. But it was really a sting operation that used fake “secure” phones to catch criminals while skirting constitutional requirements.


I personally think you have to be careful. If they don’t like your application and find that you are not disclosing the information, it might become a justification to reject the application. Remember that there are third parties that massively correlate internet data and sell it to governments and corporations. Unless your accounts definitely cannot be linked to your real identity, there is a chance they will find out what social accounts you have anyway.


More or less, it seems. Have you seen the recent news about the US government’s arrangement to have an Eastern European country run a platform to collect data on its own citizens, skirting warrant requirements? If citizens are treated like that, how are non-citizens being treated?


This article specifically addresses visa applications. So if the person is already applying for citizenship, they most likely already have residency, which doesn’t require a visa on entry. There also seems to be a different set of rules for people already in the country. From the article:

> And while the court recognized the First Amendment rights of noncitizens currently present in the United States who limit their online speech because they may need to renew a visa in the future, it held that the federal government’s regulation of immigration should be granted significant deference.


**Summary**

* A federal judge has dismissed a lawsuit challenging a rule that requires visa applicants to disclose their social media accounts to the U.S. government.
* The rule, which went into effect in 2019, applies to visa applicants from all countries.
* The plaintiffs in the lawsuit, two U.S.-based documentary film organizations, argued that the rule violated the First Amendment rights of visa applicants.
* It's unclear if the plaintiffs plan to appeal the ruling.

**Additional Details**

* The rule requires visa applicants to disclose their social media identifiers, including pseudonymous accounts, for the past five years.
* The plaintiffs argued that the rule would chill free speech and association, as visa applicants would be less likely to express themselves on social media if they knew that the government could see their posts.
* The ruling is a reminder of the challenges faced by people who want to protect their privacy online.

This is not me. I just found the article to be interesting.

**Summary** This post discusses personal privacy and security for Chief Information Security Officers (CISOs) and their families. The author shares their journey of enhancing safety, which was prompted by a potential breach of their personal life and their wife's celebrity status. They outline a two-phase approach: lockdown and disappearing.

In the lockdown phase, the author secured their digital life by creating an ultra-secure root account, implementing two-factor authentication (2FA) for all accounts, managing SMS and email recovery, and taking various safety measures, including the use of specific tools. The disappearing phase involves maintaining privacy online by creating different personas for various aspects of life. The author explains how they established these personas, set up prerequisites like virtual credit cards and private mailboxes, and used VOIP services and email forwarding to manage different contact information.

The results of these efforts include increased security through privacy, making it challenging for attackers to target the author. The post also highlights an advanced experiment in purchasing a car anonymously and the importance of being cautious about potential privacy leaks even with careful planning.

If they knew it wasn’t going to work, why ask for it in the first place? A red herring? Asking for something big to trade for something smaller? Drama for the bored?



With airlines, it still seems to be voluntary and justified by convenience. (https://www.businessinsider.com/american-airlines-facial-recognition-boarding-dfw-aviation-trend-2019-8)

For border-crossing trips involving CBP, it seems you can still explicitly opt out, possibly at the cost of some inconvenience. But it may be less voluntary in the long run (also because of those inconveniences).

Foreigners needing to travel in and out of the US will always be more willing to submit to the whims of airport/US authorities.


They’re trying to push it up to 97% compliance, apparently mandated by Congress. So, generally, don’t cross the US border. Here’s an article from EFF: https://www.eff.org/deeplinks/2017/08/end-biometric-border-screening

> DHS recently took the alarming position that “the only way for an individual to ensure he or she is not subject to collection of biometric information when traveling internationally is to refrain from traveling.”


**Summary**

* Customs and Border Protection (CBP) is increasing its target for scanning passengers with facial recognition as they leave the U.S. from 40% to 75%.
* The new goal will be implemented at the end of this month.
* CBP is changing its metric for measuring progress from the percentage of flights that have at least one biometrically processed traveler to the percentage of passengers who are biometrically processed.
* CBP says that the change in metric is more accurate and provides a more complete picture of how robust biometric exit processing is on a national level.
* The Congress-mandated goal for CBP is 97% or greater biometric exit compliance.
* Airlines are increasingly using facial recognition systems to confirm travelers' identities when boarding aircraft.
* Passengers who do not want to participate in facial recognition can opt out, but they may be asked to present travel documents or other proof of identification, *and in some cases, fingerprints*.
* CBP says that it will store facial images for no more than two weeks and that it may share entry and exit data with other agencies for law enforcement purposes.

The article also mentions a case where a privacy attorney was told by airline staff that she had to participate in facial recognition, even though she had a right to opt out. This suggests that there may be some confusion among airline staff about the rules surrounding facial recognition.

**Interesting Passages**

> A June 2017 CBP document explains its “Biometric Exit Process” for passengers: “All travelers are required to submit to CBP inspection upon exit. Facial images will be matched and then stored for ***no more than two weeks*** in secure data systems managed by the U.S. Department of Homeland Security in order to further evaluate the technology, ensure its accuracy, and for auditing purposes. In lieu of facial images, travelers may be asked to present travel documents or other proof of identification, and ***in some cases provide fingerprints***.” That document adds that it could share traveler exit and entry data with other government agencies “if the situation warrants, for law enforcement purposes.”

> It seems likely CBP will meet its goal for biometrically-processing 75 percent of passengers. In 2021 I obtained a cache of documents related to the airline JetBlue’s piloting of facial recognition systems. Already back then, JetBlue said it had seen more than 90 percent of customers participate in biometric boarding when it was available.

Try 2FAS. Open source. Also works on Android. Has a browser extension that allows automatic 2FA entry when paired with your phone.

OTOH, if you need a Windows client, then Authy may be the way to go. You need to religiously copy the TOTP secret (when setting up) and save it somewhere else, though; because Authy doesn’t officially allow export, it can be a bitch to migrate to other authenticators.
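For reference, this is why saving the secret matters: TOTP codes are derived entirely from that base32 secret plus the current time (RFC 6238), so any authenticator given the same secret produces the same codes. A minimal sketch in Python using the third-party `pyotp` library (the secret value here is just a placeholder, not a real one):

```python
# Minimal TOTP sketch: any RFC 6238 implementation fed the same base32
# secret generates the same rolling six-digit codes, which is why backing
# up the secret at setup time lets you migrate authenticators later.
# Requires the third-party package "pyotp" (pip install pyotp).
import pyotp

# Placeholder base32 secret of the kind a service shows during 2FA setup.
saved_secret = "JBSWY3DPEHPK3PXP"

totp = pyotp.TOTP(saved_secret)
code = totp.now()                      # current 30-second code
print("Current code:", code)
print("Accepted right now?", totp.verify(code))
```

Import that same secret into 2FAS, Authy, or any other TOTP app and they will all show identical codes.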


Forbes: https://www.forbes.com/sites/thomasbrewster/2023/08/08/protonmail-fbi-search-led-to-a-suspect-threatening-a-2020-election-official/?sh=48791539235c

  • Claire Woodall-Vogg, the executive director of the Milwaukee Election Commission, was harassed and threatened after an innocuous email exchange with an election consultant was published by conservative news outlets.
  • One of the harassers used ProtonMail, an encrypted email service, to send Woodall-Vogg a threatening email.
  • The FBI acquired data from Proton Technologies, the owner of ProtonMail, to help them identify the anonymous emailer.
  • The FBI was able to find the suspect’s identity and conduct a sweep across their internet accounts, but the suspect was not charged with any crimes.
  • Woodall-Vogg said that the harassment has not continued recently.
  • ProtonMail said that they employ several teams to handle instances of abuse on their platform and that they only provide metadata to law enforcement agencies.
  • ProtonMail received 6,995 orders for data in 2022, of which it contested 1,038.

They said in the past that to retain anonymity, users should use Tor to access the service.


Also beware that with any web/app client that auto-retrieves image links in a post/comment/message, the other person can embed a tracker that captures your IP address, and possibly your browser and other info as well. A VPN or Tor would at least hide your real IP.

It’s like your email client not retrieving images automatically, to prevent spammers from learning anything about your interactions with their spam emails.
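To make the mechanism concrete, here is a minimal, hypothetical sketch (generic Flask code, not any real tracker) of what the other side can learn the moment your client auto-fetches an embedded image:

```python
# Hypothetical tracking-pixel sketch: when a client auto-loads
# <img src="https://tracker.example/pixel.gif">, the server learns the
# requester's IP address and User-Agent before returning a 1x1 GIF.
# Requires the third-party package "flask" (pip install flask).
from flask import Flask, Response, request

app = Flask(__name__)

# Raw bytes of a 1x1 transparent GIF.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00"        # header + screen descriptor
         b"\x00\x00\x00\xff\xff\xff"                   # two-color palette
         b"\x21\xf9\x04\x01\x00\x00\x00\x00"           # transparency extension
         b"\x2c\x00\x00\x00\x00\x01\x00\x01\x00\x00"   # image descriptor
         b"\x02\x02\x44\x01\x00\x3b")                  # pixel data + trailer

@app.route("/pixel.gif")
def pixel():
    # Everything the tracker learns from the automatic image fetch:
    print("IP:", request.remote_addr,
          "| User-Agent:", request.headers.get("User-Agent"))
    return Response(PIXEL, mimetype="image/gif")

if __name__ == "__main__":
    app.run(port=8080)
```

Behind a VPN or Tor, `request.remote_addr` only shows the exit node or VPN server, which is the point of routing such fetches through them.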


It seems that the IPs are not just logged at the web server level, i.e. they go up to the Lemmy server too. Do you know if both the admins and the mods have access to the users’ IP addresses?
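I don't know Lemmy's actual code, but in a typical reverse-proxy deployment the front-end web server forwards the original client IP to the application in an `X-Forwarded-For` header, so the application layer sees it too. A generic sketch of that pattern (hypothetical endpoint, not Lemmy's API):

```python
# Generic reverse-proxy pattern sketch (not Lemmy's actual code): the
# application behind nginx/Caddy usually receives the original client IP
# via the X-Forwarded-For header, so IPs are visible to the application
# (and to whatever logging or admin tooling it has), not just in the
# front-end web server's access log.
# Requires the third-party package "flask" (pip install flask).
from flask import Flask, request

app = Flask(__name__)

@app.route("/api/comment", methods=["POST"])
def create_comment():
    # Left-most X-Forwarded-For entry is nominally the original client;
    # fall back to the direct peer address if the header is missing.
    forwarded = request.headers.get("X-Forwarded-For", "")
    client_ip = forwarded.split(",")[0].strip() or request.remote_addr
    app.logger.info("comment posted from %s", client_ip)
    return {"ok": True}

if __name__ == "__main__":
    app.run(port=8080)
```

Whether admins or mods can then actually see those addresses depends on what Lemmy stores and exposes in its admin UI, which I can't confirm.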


- The Office of the Director of National Intelligence (ODNI) released a report in June that found that U.S. government intelligence agencies are buying data about us from private surveillance companies.
- This is a dangerous practice because it allows the government to surveil us without following basic constitutional safeguards, like obtaining warrants.
- The report warned that when the government buys data about us, it can be "misused to pry into private lives, ruin reputations, and cause emotional distress and threaten the safety of individuals."
- The government's purchases of corporate surveillance data are pervasive. Intelligence agencies are buying up so much data that ODNI was not able to comprehensively review all the purchases.
- The report called for intelligence agencies to do more to consider the privacy impact of buying and using commercial data.
- It also called for intelligence agencies to conduct a sweeping review to understand how they are buying and using commercial surveillance data.
- However, the report does not solve the problems it identifies. It is not binding on the Director of National Intelligence or her successors, and it fails to disavow government use of commercial surveillance data.
- Instead, we need changes across government. Legislatures need to pass strong consumer data privacy legislation, so that data brokers have less data to sell the government.
- Legislatures also need to pass statutory limits on the government, to prevent them from using data brokers to dodge search warrant requirements, and to stop them using reverse warrants.
- Courts should respect Fourth Amendment precedent by continuing to disallow the government from buying personal data without a warrant.

In conclusion, the government's purchase of corporate surveillance data is a dangerous practice that threatens our privacy and civil liberties. We need changes across government to prevent this from happening.