• 3 Posts
  • 26 Comments
Joined 9d ago
Cake day: Jun 18, 2025


I will also say that what I have listed is for my known digital footprint. If you catch my drift.


You are right. It’s the choice I’ve made. I’ve decided that I would rather have the lockdown, because I no longer think that being anonymous means anything. It’s my opinion that due to the rise and ease of applying AI/ML and computational access, we are all data points. So it’s no longer a matter of blending in.

TLDR, I weighed the two and chose this


They aren’t open. But yes, it would be if they were. They are open within my VPN. :)



sure thing, here you are

services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      # DNS Ports
      - "53:53/tcp"
      - "53:53/udp"
      # Default HTTP Port
      - "8082:80/tcp"
      # Default HTTPs Port. FTL will generate a self-signed certificate
      - "8443:443/tcp"
      # Uncomment the below if using Pi-hole as your DHCP Server
      #- "67:67/udp"
      # Uncomment the line below if you are using Pi-hole as your NTP server
      #- "123:123/udp"
    environment:
      # Set the appropriate timezone for your location from
      # https://en.wikipedia.org/wiki/List_of_tz_database_time_zones, e.g:
      TZ: 'America/New_York'
      # Set a password to access the web interface. Not setting one will result in a random password being assigned
      FTLCONF_webserver_api_password: 'false cat call cup'
      # If using Docker's default `bridge` network, the DNS listening mode should be set to 'all'
      FTLCONF_dns_listeningMode: 'all'
      FTLCONF_dns_upstreams: '127.0.0.1#5335' # Unbound
    # Volumes store your data between container upgrades
    volumes:
      # For persisting Pi-hole's databases and common configuration file
      - './etc-pihole:/etc/pihole'
      # Uncomment the below if you have custom dnsmasq config files that you want to persist. Not needed for most starting fresh with Pi-hole v6. If you're upgrading from v5 and have used this directory before, you should keep it enabled for the first v6 container start to allow for a complete migration. It can be removed afterwards. Needs environment variable FTLCONF_misc_etc_dnsmasq_d: 'true'
      #- './etc-dnsmasq.d:/etc/dnsmasq.d'
    cap_add:
      # See https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
      # Required if you are using Pi-hole as your DHCP server, else not needed
      - NET_ADMIN
      # Required if you are using Pi-hole as your NTP client to be able to set the host's system time
      - SYS_TIME
      # Optional, if Pi-hole should get some more processing time
      - SYS_NICE
    restart: unless-stopped
  unbound:
    container_name: unbound
    image: mvance/unbound:latest # Change to 'mvance/unbound-rpi:latest' on a Raspberry Pi
    # use pihole network stack
    network_mode: service:pihole
    volumes:
      # main config
      - ./unbound-config/unbound.conf:/opt/unbound/etc/unbound/unbound.conf:ro
      # custom config (unbound.conf.d/your-config.conf). unbound.conf includes these via wildcard include
      - ./unbound-config/unbound.conf.d:/opt/unbound/etc/unbound/unbound.conf.d:ro
      # log file
      - /srv/docker/pihole-unbound/unbound/etc-unbound/unbound.log:/opt/unbound/etc/unbound/unbound.log
    restart: unless-stopped
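Once the stack is up, a quick way to sanity-check that Pi-hole is actually answering on port 53 is a raw DNS query. This is my own sketch, not part of the compose file, using only the Python standard library; the 127.0.0.1 address assumes you run it on the Docker host:

```python
import socket
import struct

def build_query(hostname: str, query_id: int = 0x1234) -> bytes:
    """Build a minimal DNS query packet for an A record."""
    # Header: ID, flags (RD=1), 1 question, 0 answer/authority/additional
    header = struct.pack(">HHHHHH", query_id, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in hostname.split(".")
    ) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1)
    return header + qname + struct.pack(">HH", 1, 1)

def query(server: str, hostname: str, timeout: float = 3.0) -> bytes:
    """Send the query over UDP to the given DNS server, return the raw response."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_query(hostname), (server, 53))
        data, _ = s.recvfrom(512)
        return data

# Usage (needs the stack running):
#   resp = query("127.0.0.1", "example.com")
# A domain on a blocklist typically comes back pointing at 0.0.0.0,
# an allowed one with its real A record.
```

Nothing here is pihole-specific; it just confirms something is speaking DNS on that address before you point the rest of the LAN at it.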

I am relatively new to Docker as well, tbh. I did a lot with virtualization and a lot with Linux and never bothered, but I totally get the use case now, ha. Just an FYI: if you use Docker on Windows it runs slower, as it has to go through the Windows Subsystem for Linux (WSL) and a slightly different Docker engine (forget which one). So Linux is your best bet. If you do want to use a full VM, I found QEMU to be the best option for least resource usage.


Yes, you can give fake info. I would say that’s kinda the next step. Harden your browser and associated tech stack so you are secure. Then provide fake data that is generic enough that it blends in: Firefox or Chrome standard agent, Windows 11, etc.

for example https://deviceatlas.com/blog/list-of-user-agent-strings
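As an illustration (my own sketch; the UA string below is just one common-looking example, pick a current one from a list like the link above), sending a generic User-Agent with nothing but the standard library might look like:

```python
import urllib.request

# A generic, widely shared User-Agent string (hypothetical example;
# grab a current common one rather than hardcoding this)
GENERIC_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/120.0.0.0 Safari/537.36"
)

def make_request(url: str) -> urllib.request.Request:
    """Build a request that blends in with a common browser UA
    instead of advertising Python's default urllib agent."""
    return urllib.request.Request(url, headers={"User-Agent": GENERIC_UA})

# Usage (performs a real network request):
#   with urllib.request.urlopen(make_request("https://example.com")) as r:
#       print(r.status)
```

The point isn't this exact string; it's that whatever you send should match what a large crowd sends.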


The problem with hardening your system is that you become more identifiable unless you provide fake data. For example, here are my test results from coveryourtracks.eff.org:

Within our dataset of several hundred thousand visitors tested in the past 45 days, only one in 2054.58 browsers have the same fingerprint as yours.
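To put that number in perspective (my arithmetic, not EFF’s wording): 1 in 2054.58 works out to about 11 bits of identifying information, since each bit halves the crowd you blend into.

```python
import math

def fingerprint_bits(one_in: float) -> float:
    """Convert a '1 in N browsers share your fingerprint' ratio
    into bits of identifying information: log2(N)."""
    return math.log2(one_in)

bits = fingerprint_bits(2054.58)
print(f"{bits:.2f} bits")  # roughly 11 bits
```

For reference, uniquely identifying one person among ~8 billion takes only about 33 bits, so every customization that adds a bit or two matters.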


plugins are definitely detectable. just came across this, worth using to check your browser security.

https://coveryourtracks.eff.org/


everything you do to customize your browser makes your browser fingerprint unique. but you have a mostly unique fingerprint due to things you aren’t considering as well: system-related stuff that your browser tells about you.

you have some options: 1) there are addons that limit privacy issues; 2) use a local web proxy. I’m using squid proxy, for example, just running on an old laptop. Optionally, from a privacy standpoint, I would also say look into DNS blackholing (pihole, unbound, etc.), and there are plenty of other things.

my favorite addons are uBlock, Privacy Badger, and I run NoScript, which is probably more painful than most are willing to put up with, but I have heard that JShelter is a good compromise.
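For the local web proxy option, roughly the privacy-relevant part of my squid.conf looks like this (a sketch, not the full file; directive names are squid’s, but verify them against your squid version):

```conf
# listen on the LAN; point browsers at proxy-host:3128
http_port 3128

# don't advertise the client or the proxy in outgoing requests
via off
forwarded_for delete

# strip headers that leak extra identifying info
request_header_access Referer deny all
request_header_access X-Forwarded-For deny all
```

Every client behind the proxy then presents the same trimmed-down headers, which is the blending-in idea again.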


happy to share my docker-compose with pihole and unbound. I’m not the original author, it’s a compilation from a few people. no issues. normal DNS inside the house, DoT outside.


I have been thinking about this a lot recently. I live a life where OPSEC is relevant. It’s something that I have had to consider always, and it has been for two decades. Even so, I wasn’t as concerned this whole time as I am these days. The fact is that technology is making it such that it’s no longer “I’m not a person of interest, they won’t spend resources on me”, because data crunching is happening to such an extreme, on such a grand scale, that person of interest doesn’t even matter. Do you exist? Yes. Do you have a digital footprint? Yes, you do, even if you don’t do a lot online. Your metrics are being captured and inferenced, and systems are using predictive analysis to determine what you “may” do in a given situation. Depending on who controls those systems, they may decide not to give you a chance to make that choice.

All I can say is that there are a large number of groups that want your data, for a lot of different reasons, and none of them are for your benefit. So, are you going to let them have it, or are you going to take steps to rein in the amount of info you leave about?


Checking out RethinkDNS right now, this looks great! Thanks. Was tracking most of the other stuff, that stuff holds true on computers as well, but on mobile I was kinda drawing a blank.


Interested in learning more about that. I do a lot of dev work with AI, agentic and otherwise. Did a proof of concept for quick fact finding, but of course you run into “where do you source the truth”, and the more I looked the harder it was.


totally arbitrary, lol. I’m used to DNSSEC, saw DoT and DoH about the same time, think I saw a write-up that used DoT and just went for it. Haven’t even compared DoT vs DoH, but DoH reminds me of Homer Simpson cuz I’m old XD


In my particular setup, I have an additional constraint, and that is that my network has to be designed for portability and travel. Not that it affects your design per se. Thank you for the response. Just something that occurred to me that I hadn’t mentioned.

I am living a transient life at the moment. So lots of virtualization and lack of control concerning the WAP and such.

I do like your set up btw.


Yeah, I am pretty close to that: pihole to unbound, unbound DoT to Cloudflare. What I am doing at this point is bypassing the ISP’s DNS, but as I stated in my response above, not yet blocking everything on the net from using the regular stuff. Just feasibility testing at the moment.

Love the dual setup for DNS. I set my primary to this and my secondary to just Cloudflare at the moment, for when I bork my primary DNS while fidgeting with it, haha.


“Dnsbl is only a small component of effective network security. Arguably the firewall is most important and so I have a default deny all for any device on my LAN trying to reach the Internet.” 100%. I decided to break up my posts into sub-components of the total stack, but to your point, currently I’m enforcing a deny all inbound and outbound at the host level, as the network is shared with the fam and they are not ready for that level of learning (pain, lol)

I just learned about unbound, didn’t realize it had a blocklist capability, so that’s great to know. Gotta dig into it.

I like that last bit, blocking DoT except for the one approved path. Much like TLS 1.3, it offers insider threat protection against inspection. So with that in mind, when you said you are using unbound instead of using DoT forwarding, you mean instead of allowing clients to DoT forward, right? That’s what I am doing now as well, though I am not actively blocking it yet. Just currently enabling and testing feasibility on a single host to see the performance and operational impacts of privacy/security implementations.

Curious about your IDS solution. I gotta dig into OPNsense. I know about it, it’s been around a long time, but I haven’t touched it in so long I can’t remember its capabilities.


good point. not a huge fan, but better than no option at all. Actually, that’s probably the best option for now.


I think if you are using any Meta app on your phone, yes. I would assume yes; if they put in the time to figure out the security bypasses, then I can’t see why they would limit it to one app. I removed all Meta apps from my phone.


So DNS black-holing is not new, obviously, and what stands out as the go-to solution? Pihole, probably... and yeah, that’s what I’m using, because hey, it’s a popular choice. Though I am running it in Docker. Combining that with Unbound (also in Docker), and configuring outbound DNS to use DNS over TLS, with a few additional minor tweaks, but otherwise mostly standard configuration on both. Wondering what you guys might be using, and if you are using Pihole and/or Unbound, whether you have any tips on configuration. Happy to share my config if there is interest.
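Since people asked, the DoT part on my unbound side boils down to a forward-zone like this (a sketch, trimmed to the relevant directives; the cert-bundle path assumes a Debian-based image like mvance/unbound, adjust as needed):

```conf
server:
    # CA bundle used to validate the upstream's TLS certificate
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
    name: "."                       # forward everything
    forward-tls-upstream: yes       # speak DoT (port 853) to the upstreams
    # address@port#auth-name; the auth name must match the TLS cert
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 1.0.0.1@853#cloudflare-dns.com
```

Clients inside the LAN still talk plain DNS to pihole; only the unbound-to-upstream leg is encrypted.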

same for me, Codeberg works quite well. no issues at all in comparison to GitHub. slowly moving my code over.


nice. I’m looking to make the transition to GrapheneOS. would go to a Linux daily driver if I could get away from MS Office. I do too much writing collaboration with others and it gets wonky going back and forth with Office users. Though Denmark is saying they are ditching Office, so that might incentivize alternatives and such. exciting times.

I’m currently working on a whole stack: docker pi-hole with unbound using DNS over TLS, squid proxy with maximum privacy, an FF fork with uBlock, Privacy Badger, NoScript. Mullvad and/or Tor depending on where and when I’m using it.


At this point it’s not about passive collection; corporations are going to extreme ends to get our data. https://www.zeropartydata.es/p/localhost-tracking-explained-it-could I am interested in what people are doing to enforce their privacy while using the web. I have some things in place, looking to compare with the community. (btw, I am new here, this is my first post. So uh… Hi)

yeah, and extensions additionally work against you in fingerprinting. Though I’m totally interested in what extensions you are using.


I’ve only started looking into these. GrapheneOS looks cool, but being stuck with only the Pixel is kinda annoying, and Google is being shitty about supporting it: removing drivers and squashing git commits, making it harder to support.

I need to look at the others to see how they fare.


I should mention that DuckDuckGo recently released an Android browser, and it is privacy focused. I can’t tell you how well it does its job, BUT the important thing is that it has an experimental feature that creates a virtual network interface that routes comms, blocks phone-home attempts, and tells you what app is doing what.

I have had it running for a few months and it’s crazy to see how much traffic is going on without your knowledge.


exactly, we can’t be going by vibes here. let’s talk about specific metrics and then say, ok, these metrics are important, and this one has the ones I want.


I would stay away from Chromium forks in general. Google is doing some underhanded stuff with extension Manifest V3, not to mention all the bastard stuff they are doing in general.

I am very curious not only to hear the answer to your question regarding FF forks, but also why they get rated that way.