If you’re interested in (co-)moderating any of the communities created by me, you’re welcome to message me.

I also have the account @[email protected]. Furthermore, I own the account @[email protected], which I hope to make a small bot out of in the future.

  • 470 Posts
  • 631 Comments
Joined 10 months ago
Cake day: February 27, 2025

  • And, most importantly, it’s about so much more than just the banners. For example:

    (1) A new GDPR loophole via “pseudonyms” or “IDs”. The Commission proposes to significantly narrow the definition of “personal data” – which would result in the GDPR not applying to many companies in various sectors. For example, sectors that currently operate via “pseudonyms” or random ID numbers, such as data brokers or the advertising industry, would no longer be (fully) covered. This would be done by adding a “subjective approach” to the text of the GDPR.

    Instead of an objective definition of personal data (e.g. data that is linked to a directly or indirectly identifiable person), a subjective definition would mean that if a specific company claims that it cannot (yet) or does not currently aim to identify a person, the GDPR ceases to apply. Such a case-by-case decision is inherently more complex and anything but a “simplification”. It also means that data may or may not be “personal” depending on a company’s internal thinking, or on the circumstances it faces at a given point in time. This can also make cooperation between companies more complex, as some would fall under the GDPR and others would not.

    (2) Pulling personal data from your device? So far, Article 5(3) ePrivacy has protected users against remote access to data stored on “terminal equipment”, such as PCs or smartphones. This is based on the right to protection of communications under Article 7 of the Charter of Fundamental Rights of the EU and has ensured that companies cannot “remotely search” devices.

    The Commission now adds “whitelisted” processing operations for access to terminal equipment, which would include “aggregated statistics” and “security purposes”. While the general direction of the changes is understandable, the wording is extremely permissive and would also allow excessive “searches” on user devices for (minor) security purposes.

    (3) AI training by Meta or Google with EU personal data? When Meta or LinkedIn started using social media data for AI training, it was widely unpopular. In a recent study, for example, only 7% of Germans said that they want Meta to use their personal data to train AI. Nevertheless, the Commission now wants to allow the use of highly personal data (such as the content of 15+ years of a social media profile) for AI training by Big Tech.

  • Interesting, thank you. From the text here,

    "As we collectively identify and validate slop across the web, Kagi’s SlopStop initiative will help transform those insights into a comprehensive, structured dataset of AI slop – an invaluable resource for training AI models.

    Access to the database will be shared soon. Use this form to express your interest if you’d like to receive updates.

    especially the part “an invaluable resource for training AI models”, the absence of any community-focused language, and the fact that Kagi, iirc, is still operating at a loss and looking for ways to become profitable, I fear it will be essentially commercial. But who knows, and even if it is, that’s not to say it’s all bad.