I’ve been using this search engine and I have to say I’m absolutely in love with it.

Search results are great, Google-level even. Can’t tell you how happy I am after trying multiple privacy-oriented engines and always feeling underwhelmed by them.

Have you tried it? What are your thoughts on it?

@LWD@lemm.ee

removed by mod

@sudneo@lemmy.world

It’s pretty clear that you only draw your conclusions from a predetermined trust in Kagi, a brand loyalty.

As I said before, I also draw this conclusion based on the direction they have currently taken, i.e., the features that actually exist right now. You started this whole thing about a dystopian future when talking about lenses, a feature in which the user chooses to uprank/downrank websites by their own voluntary decision. I am specifically pointing out that this has been the general attitude: providing tools so that users can customize stuff. Therefore I am looking at that vision with this additional element in mind, while you rely only on your own interpretation of that manifesto.
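The lens idea can be sketched as purely user-controlled re-ranking, where the weights come from an explicit user choice rather than anything inferred from behavior (a hypothetical illustration in Python, not Kagi's actual implementation):

```python
def apply_lens(results, boosts):
    """Re-rank results by a user-supplied domain -> weight map.

    results: list of (domain, base_score) tuples.
    boosts: dict mapping domain to a user-chosen multiplier
            (e.g. 2.0 to uprank, 0.1 to bury; absent = neutral).
    """
    return sorted(
        results,
        key=lambda r: r[1] * boosts.get(r[0], 1.0),
        reverse=True,
    )

# The user explicitly decides which domains to boost or bury:
results = [("blogspam.example", 0.9), ("docs.example", 0.8)]
lens = {"blogspam.example": 0.1, "docs.example": 2.0}
reranked = apply_lens(results, lens)
# docs.example (0.8 * 2.0 = 1.6) now outranks blogspam.example (0.9 * 0.1 = 0.09)
```

The point of the sketch is that the only input besides the results is a map the user wrote themselves; nothing is learned from clicks or collected passively.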

Kagi Corp is good, so feeding data to it is done in a good way, but Facebook Corp is bad so feeding data to it is done in a bad way.

You are just throwing up your hands. If you can’t see the difference between me having the ability to submit data, when I want and what I want, and Facebook collecting data, there are only two options: you don’t understand how this works, or you are arguing in bad faith. Which one is it?

@LWD@lemm.ee

removed by mod

The “lens” feature isn’t mentioned in either Kagi manifesto.

So? It exists, unlike the vision in the manifesto. Since the manifesto can be interpreted in many ways (despite what you might claim), I think this feature helps show Kagi’s intentions, since they invested work into it, no? They could have built data collection and automated ranking based on your clicks; they didn’t.

People just submitted it. I don’t know why. They “trust me”. Dumb fucks.

Not sure what the argument is. The fact that people voluntarily give data (for reasons that do not benefit those users directly, but under the implicit blackmail of needing to use the service)? I have no objection anyway to Facebook collecting the data that users submit voluntarily and that is disclosed by the policy. The problem is in the inferred data and the behavioral data collected, which are much sneakier, and in the data collected about non-users (shadow profiles through the pixel, etc.). Putting Facebook and an imaginary future Kagi in the same pot is, in my opinion, completely out of place.

@LWD@lemm.ee

removed by mod

The manifesto is actually a future vision. And again, you are interpreting it in your own way.

At the same time, you are completely ignoring:

  • what the product already does
  • the features they actually invested in building
  • their documentation, in which they stress privacy as a core value
  • their privacy policy, in which they legally bind themselves to that commitment.

Because obviously who cares about facts, right? You have your own interpretation of a sentence that starts with “in the future we will have”, and that counts more than anything.

Also, can you please share with me the quote where I say that I need to blindly trust the privacy policy? Thanks.

Because I remember having said in various comments that the privacy policy is a legally binding document, and that I can report them to a data protection authority if I suspect they are violating it, so that they will be audited. Also, guess what! The manifesto is not a legally binding document they have to answer for; the privacy policy is. Nobody can hold them accountable if “in the future there will not be” all the stuff mentioned in the manifesto, but they are accountable already today for what they put in the privacy policy.

Do you see the difference?

@LWD@lemm.ee

removed by mod

You are really moving the goalposts, eh

Developing AI features does not mean anything in itself. None of the AI features they built does anything in a personalized way. Sure, they seem very invested in integrating AI into their product, but so far no user data is used, and all the AI features are simply summarizers and research assistants. What is this supposed to prove?

I will make it simpler anyway:

What they wrote in the manifesto is a vague expression of what will happen at some unspecified point in the future. If the whole AI fad fades in a year, it won’t happen. In addition, we have no idea what specifically they are going to build, what the impact on privacy will be, what specific implementation choices they will make, and many other things. Without all of this, your dystopian interpretation is purely arbitrary.

And this is rather ironic too:

Ironic how? Saying that a document is binding doesn’t mean blindly trusting it; it means that I know the power it holds, and that it gives anyone who doesn’t trust them the power to get their ass audited and potentially fined on that basis.

Your attempt to twist the meaning of my sentences is honestly gross. Being aware of the fact that a company is accountable has nothing to do with blind trust.


Just to sum it up, your arguments so far are that:

  • the manifesto mentions a “future” in which AI will be personalized and can act as our personal assistant, using our data.
  • they integrated AI features into the current offering

This somehow leads you to the conclusion that they are building some dystopian nightmare in which they collect your data and build a bubble around you.

My arguments are that:

  • the current AI features are completely stateless and don’t depend on user data in any way (this capability is not developed at all, and they use external models).
  • the current features are very user-centric, and users have complete agency over what they can customize; hence we can only assume that similar agency will be implemented in AI features (as opposed to data being collected passively).
  • to strengthen the point above, their privacy policy is not only great, it’s also extremely clear about the implications of the data collected. We can expect that if “personalized” AI features do come up, they will maintain the same standard of clarity, so that users are informed exactly about the implications of disclosing their data. This differentiates the situation from Facebook, where the privacy policy is a book.
  • the company’s business model also gives hope. With no customers to serve other than the users, there is no substantial incentive for Kagi to get data for anything else. If they can be profitable just from users paying, then there is no economic advantage in screwing the users (in fact, the opposite). This is also clearly written in their docs, and the emphasis on the business model and incentives is also present in the manifesto.
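The “stateless” point in the first bullet can be illustrated with a toy contrast (function and parameter names are hypothetical, not any real Kagi API): a stateless assistant call takes only the content to process, while a personalized one would require stored user data as an extra input.

```python
def summarize_stateless(page_text, model):
    # Only the content being summarized is passed in; nothing about
    # the user exists here, so no profile can accumulate across calls.
    return model("Summarize: " + page_text)

def summarize_personalized(page_text, model, user_profile):
    # A personalized variant would need stored user data as an input --
    # exactly the state the current features do not keep.
    return model("Summarize for " + user_profile + ": " + page_text)

# A trivial stand-in "model" that just echoes its prompt:
echo_model = lambda prompt: prompt
print(summarize_stateless("the thread above", echo_model))
# prints: Summarize: the thread above
```

The current features have the first shape: with no user profile in the signature, there is no state to build a bubble from between calls.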

The reality is: we don’t know. It might be that they will build something like you say, but the current track record doesn’t give me any reason to think they will. I, and I am sure a substantial percentage of their user base, use their product specifically because it is good and because they are user-centric and privacy-focused. If they change posture, I will dump them in a second, and a search engine is not inherently something that locks you in (unlike email). At the moment they deliver, and I am all in for supporting businesses whose revenue models stand in opposition to ad-driven models and don’t rely on free labor. I do believe that economic and systemic incentives are the major reason companies are destroying user privacy; I don’t think there is any inherent evil. That’s why I can’t really understand how a business that depends on users paying (Kagi) can be compared to one that depends on advertisers paying (Meta), where users (their data) are just part of the product.

Like, even if we assume that what’s written in the manifesto comes to life: if the data is collected by the company and only, exclusively, used to customize the AI in the way I want (not to tune it to sell me shit I don’t need), within the scope I need, with the data I choose to give, and with full awareness of the implications, where is the problem? This is not a dystopia. The dystopia is Google building the same tool and tuning it automatically so that it benefits whoever pays Google (not users, but the ones who want to sell you shit). If a tool truly serves my own interests, and the company’s interest is simply that I find the tool useful, without additional goals (ad impressions, page visits, products sold), then that’s completely acceptable in my view.

And now I will conclude this conversation, because I’ve said what I had to say, and I don’t see progress.

@LWD@lemm.ee

removed by mod
