Project report

Fakt Up?!

“Fakt Up?!” is a social media campaign that takes a satirical and critical look at the influence of digital technologies on our society. We provide information about data protection, AI, and digital responsibility—for anyone who wants to stay curious and critical. Here we collect our info posts for reference. What's fakt up?

Project status

Ongoing

Collection of links

Click here to go directly to our collection of all links from our Fakt Up posts—from initiatives and studies to sources and app downloads.

#17 Fakt Up from January 15, 2026
  • 🚀 Adobe vs Affinity vs Canva


    Affinity is now free.
    And suddenly, “Adobe is standard” becomes a real decision point.

    Because it's not just about which tool is better—it's about how we design in the future: subscription ecosystem vs. free suite vs. template speed with AI buttons.
    And somewhere in between: your workflow. 👀

    What we would look for when comparing tools:
    ✅ What do you really need? (Print/layouts, photo, UI, social, team?)
    ✅ How much control do you want? (files, export, offline, color management)
    ✅ What is a feature – and what is a paywall?

    Which tool would you work with – and for what:
    Adobe, Affinity, or Canva – and why? 💬 Let's talk tools – without hype, with facts.

    🔗 More info & tools in our Linktree

#16 Fakt Up from December 11, 2025
  • 🎅 Santa knows your search history – and that's no fairy tale.

    This year, he's not just bringing gifts, he's selling your data to the highest bidder. 🎁💸

    You thought you saved money during Black Week? Think again! The real “bargains” are your clicks, likes, and secret searches—packaged as “personalized advertising” and sold to the highest bidder.

    Data is the new gold—and we often give it away without realizing it. But with a few simple steps, you can slow down Santa and his algorithmic helpers:
    ✅ Block trackers (e.g., with uBlock or Privacy Badger)
    ✅ Delete data regularly (Firefox Focus, Brave)
    ✅ Check prices critically (CamelCamelCamel, Keepa)

    Because ultimately, you should decide who knows your wishes—not the algorithm.

    🔗 More information and tools in our Linktree

#15 Fact Up from November 27, 2025
  • 📢 Your AI chat becomes an advertising machine—and Meta is laughing all the way to the bank!

    Starting December 16, 2025, your Meta AI small talk will become currency: what you tell the AI determines which ads you see. And no, “opt-out” is not an option—only “opt-illusion”!

    🎭Bonus fact: WhatsApp isn't secure either! If you link it to your Meta account, your chat history becomes an advertising buffet. 🍿

    What to do?
    ✅ Set your ad preferences to “less personalized” (if you can find them 🔍)
    ✅ Remove WhatsApp from the Meta Accounts Center
    ✅ Use alternatives: Pixelfed, Signal/Telegram, Mistral AI & Co.

    📌 You can find some tools in our Linktree.

#14 Fakt Up from November 13, 2025
  • 🤖 When AI guides the pen...

    Large language models (LLMs, such as ChatGPT) can help with writing or brainstorming. However, studies show that relying too heavily on AI means that certain areas of the brain are exercised less, as measured by EEG (electroencephalogram). Creativity and original ideas can suffer as a result.

    💡 Pro tip:
    - Think for yourself first, then use AI as a sparring partner
    - Practice creatively without AI
    - Question AI results and take breaks from prompt mode

    ⚠️ Critical view: LLMs can be tempting for delegating work—but if you constantly think that the machine knows better, you shift responsibility and lose some of your own thinking.

    📌 You can find the studies in our Linktree.

    #FaktUp #KIBias #DigitalResponsibility #LLM #ChatGPT #AIethics #NeuroScience #CreativityVsAI #DigitalSelfDetermination

#13 Fakt Up from October 30, 2025
  • 💥 Big Tech vs. Europe – an unfair final battle.

    Some have billions, server farms, and your data. Others have data protection, open source, and courage. 💪 While Meta, Google, and others monetize your clicks, European alternatives struggle for visibility with minimal budgets.

    But: They do exist! And they work. Just without spying on you.

    🔹 WhatsApp → Signal / Threema
    🔹 Google Drive → Nextcloud
    🔹 Chrome → Firefox / Brave
    🔹 ChatGPT → Mistral / Aleph Alpha

    💡 What you can do:
    🔁 Switch where you can—even small steps count.
    📣 Talk about alternatives before they disappear.
    🧠 Make data protection sexy again—you don't use the password “1234,” do you?
    📍 You can find more information about European tools, studies, and open-source initiatives in our Linktree.

    Europe can do tech – if we just use it.

#12 Fakt Up from October 16, 2025
  • 🤖 Feminism.exe not found

    When algorithms attempt to explain the world, it often sounds remarkably similar to the 1950s. AI learns from old data – and that data shows that men get the jobs, women are stuck in traditional roles, and diversity only exists on paper. This means that the AI reproduces stereotypes and discrimination. Women, people of color, and queer individuals are often portrayed or evaluated negatively because the training data is historically biased.

    💡 Who is researching this?
    Eva Gengler is co-founder of the Feminist AI initiative and, as part of her doctoral research on “Business & Human Rights,” is investigating how power structures work in AI systems, with a particular focus on marginalized groups.

    📌 What you can do:
    - Critically question AI results, especially in job applications, advertising, or image generation.
    - Report or comment on distorted suggestions to highlight bias.
    - Use different sources and perspectives so you don't rely solely on AI.

    The responsibility lies with all of us—AI only reflects what we give it.

    📍 You can find more information, studies, and tools in our Linktree.

    #FaktUp #KIBias #DigitalResponsibility #feministAI #AlgorithmicBias #DigitalSelfDetermination #TechEthics #AIethics #DigitalSociety

#11 Fakt Up from October 2, 2025
  • ⚠️ Trust is good – debugging is better.

    A recent report on DeepSeek shows that AI can deliberately introduce errors into program code—and that these errors vary depending on the region. For developers, this means that what looks like a helpful code assistant can suddenly contain security vulnerabilities or functional errors.

    Why this is relevant:
    - AI manipulation can cause software to crash or create security risks.
    - Even small bugs cause high costs for companies – globally, errors add up to billions.
    - Trust in AI systems is undermined when users cannot tell whether the results are correct or manipulated.

    What developers can do about it:
    🕵️ Automatic checking: Tools such as SonarQube or ESLint automatically detect many errors.
    👀 Team code reviews: The dual control principle helps to detect AI errors more quickly.
    🔓 Prefer transparency: Open-source AI such as Code Llama or models on Hugging Face offer traceable results.
    📚 Use the community: Stack Overflow, forums, and peer groups are indispensable for validating AI outputs.

    Conclusion: AI can be a powerful tool—but it cannot replace critical thinking and careful review processes.
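    The “automatic checking” and “dual control” points above can be made concrete with a tiny sketch. The function names and the bug below are purely illustrative (not taken from the DeepSeek report): an AI-suggested boundary check with a subtle off-by-one error, which a one-line test exposes before it ships.

```python
# Hypothetical example: an AI assistant suggests a check that should
# accept days 1..7 inclusive, but gets the upper boundary wrong.

def in_discount_window_ai(day: int) -> bool:
    """AI-suggested version: intended to accept days 1 through 7."""
    return 1 <= day < 7  # bug: day 7 is wrongly rejected


def in_discount_window_fixed(day: int) -> bool:
    """Reviewed version with the boundary corrected."""
    return 1 <= day <= 7


def accepted_days(fn) -> list[int]:
    """Small harness: which days in 0..8 does the function accept?"""
    return [d for d in range(0, 9) if fn(d)]


# A single assertion on the specification catches the manipulation:
# the AI version silently drops day 7.
assert accepted_days(in_discount_window_fixed) == [1, 2, 3, 4, 5, 6, 7]
```

    The same idea scales up: linters such as ESLint or SonarQube automate exactly this kind of boundary and consistency checking, and a human code review catches what the tools miss.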

    📌 More information, studies, and tools in our Linktree.

#10 Fakt Up from September 18, 2025
  • A violent video is still online – other content disappears immediately? 🤔

    Censorship on the internet is opaque and inconsistent. Platforms, algorithms, and laws decide every day what remains visible and what gets deleted.

    🔹 Overblocking: Content is blocked incorrectly, e.g., satire or scientific texts.
    🔹 Regional differences: In the US, AI systems are sometimes used, while in Europe, the DSA (Digital Services Act) regulates what is permitted—implementation varies greatly.

    ✅ What you can do:
    - Learn about guidelines and demand transparency
    - Question algorithms: Organizations such as AlgorithmWatch advocate for fair algorithms and AI
    - Use diverse sources and share responsibly

    You can find the studies and exciting organizations in our Fakt Up Linktree.

#9 Fakt Up from September 4, 2025
  • 📝 Terms and conditions pitfalls – more than just the fine print

    You click “Accept” and think, “All right, let's keep scrolling.” But did you know that by doing so, you may be giving away worldwide, free rights to use your content—including reproduction, editing, distribution, and AI training?

    💡 CapCut is just one example – almost every social media platform has similar clauses. You remain the copyright holder, but often no longer decide what happens to your videos, photos, or texts. Characteristics of T&C pitfalls:

    - Worldwide, royalty-free license for your content
    - Permission to edit, redistribute, and use for AI
    - Obligation to agree - no access without “accepting”
    - Texts so convoluted that hardly anyone understands them

    ✋ How to protect yourself:
    - Check the terms and conditions (keywords: “license,” “worldwide,” “free of charge”)
    - Only upload sensitive content to platforms that have fairer rights conditions
    - Use open-source alternatives (Shotcut, DaVinci Resolve, OpenShot, GIMP)
    - Check for updates to the terms and conditions regularly; tools can help you with this

    You can find the tools and links in our Fakt Up Linktree.

#8 Fakt Up from August 21, 2025
  • 🌀 Forced break or forced advertising?

    Think Instagram will give you a break? Nope. Your feed stops—but only so you'll obediently look at ads. 🤳✨ And while you try to keep scrolling, it happens: doomscrolling.

    👉 More stress.
    👉 Poorer sleep.
    👉 More advertising than content.
    👉 Negative news sticks around longer.

    It sounds like satire, but it's reality: studies show that endless scrolling increases stress, anxiety, and depressive symptoms.

    ✋ What you can do:
    🕒 Set app limits – with Screen Time (iOS) or Digital Wellbeing (Android).
    🚫 Use scroll stoppers – apps like Freedom or One Sec slow you down when you open social media.
    🌳 Positive distraction – Forest lets you plant virtual trees, Focus Friend lets you knit a little bean when you stay offline – try out what helps you switch off.
    🌿 Offline breaks – Turn off push notifications, consciously put your phone away, schedule fixed screen times.

    You can find the tools and links in our Fakt Up Linktree.

#7 Fakt Up from July 31, 2025
  • 🎬 Leak from the Avengers shoot at the Externsteine?

    Sure. And right after that, Iron Man drinks a Fritz-Kola in Detmold.🫠 Of course, the image is a deepfake—AI-generated, deceptively real, 100% fake. And that's exactly the point:

    🎭 Fakt Up: Deepfakes
    They look like reality, feel real—and are often completely fabricated. From fake popes to deepfake pornography to voices that persuade CEOs to transfer money: deepfakes are becoming a real problem.

    In our new post, you will learn:
    ⚠️ Why deepfakes are dangerous
    🔍 How you can recognize them (spoiler: not always)
    🛠️ Which tools can help you expose fakes

    📲 You can find more background information and useful links in our Linktree.
    👉 Ask yourself with every picture/video: What part of this is actually real?

#6 Fakt Up from July 17, 2025
  • 🧨 “It's just a few emojis, don't be so silly!”😂

    Okay—then let's just laugh about it when you get 💩 sent to you 200 times a day. When your name spreads as a meme. When your DMs look like a digital pillory. When no one says anything. But everyone likes it. 😂😂😂

    Cyberbullying is not a problem of comments—it is a systemic problem.
    And spoiler alert: if you don't say anything, you are saying something. 🫵 If you look away or laugh, you are part of it.

    Because algorithms love reach—even that of hate.
    Likes, shares, comments: every reaction fuels the next blow.

    What you can do:
    ✋ Speak up—civil courage is possible even in the digital world.
    📲 Report, block, document—you're not just protecting others.
    💌 Write to those affected: "I saw it. I've got your back."
    🧠 Get help – for others or yourself.
    📍You can find help in our Linktree:
    🔗 #NichtEgal – for digital moral courage
    🔗 Cybermobbing-Hilfe e. V. – with information & support
    📞 Or by phone: Nummer gegen Kummer (“number against sorrow”) at 116 111 – anonymous & free of charge

    👉 Because bullying isn't digital. It's real.
    Cyberbullying doesn't disappear with the chat history.
    Once something is online, it stays there—saved, shared, repeated.
    24/7. Publicly. Anonymously. Painfully.

#5 Fakt Up from July 3, 2025
  • 🧠💥 In the past, a heart was just a heart. Today, it's a political statement.

    ...And suddenly I sympathize with the AfD, just because I sent my crush a 💙?!

    Welcome to the internet in 2025—where emojis no longer just express feelings, but entire worldviews.
    In the series Adolescence, a chat with 🧨 and 🫘 is enough to suddenly trigger an incel alert. What once meant “really good” with 💯 now stands for misogynistic math myths from the manosphere. And the ☕️? Not a latte girl moment – but derogatory slang for “typical woman.”

    🚨 Time to learn the codes—before you inadvertently become part of them.

    👉 Don't want to be part of these invisible “secret societies”? Then check out what the emojis really mean! You can find more information in our Linktree.
    ❌ Don't use emojis that are codes for hate or exclusion.
    🤔 Question it if you suddenly get ☕️, 💯, or 🧨.
    💬 Talk openly about these codes—that's the only way to expose them.
    ❤️ Show real respect—it doesn't need secret languages or toxic memes.

    And yes: the 💙 on your crush can stay—but maybe not everywhere. Context is important.

    🎯 Swipe through our emoji quiz and find out what you didn't know (yet) about the dark side of DMs.

#4 Fakt Up from June 19, 2025
  • 🏳️‍🌈🤖 PRIDE, PIXEL & PINK PROFIT

    Truly queer or just coded queer? While AI influencers such as Lil Miquela, Imma, and Shudu perform rainbow vibes for brands like Versace, Amazon Fashion, and Puma, real queer people fight every day for visibility—and against discrimination.

    We say: #QueernessIsNotAFilter – look behind the façade and pay attention to who is really making a difference.

    Virtual influencers are available around the clock, never say anything “wrong” – and don't cost anything. Their (sometimes queer) personas? Scripted by creative agencies, monetized by corporations. Behind Lil Miquela, Shudu & Co. are agencies and brands that play at diversity – but don't live it. While real queer people are rejected for jobs, AI characters sell “diversity” stripped of real identity, origin, or resistance – at the expense of real voices.

    What you can do – beyond Pride:

    💬 Listen to real queer voices. Not those programmed for brands, but those who tell their own stories—loud, vulnerable, angry, courageous.

    💸 Support queer artists, creators, and activists. Book them, donate to their projects, share their content—not just in June.

    🏳️‍⚧️ Take a stand against queerphobia—even when it gets uncomfortable. In your family. At work. Online. No algorithm can replace real moral courage.

    📚 Learn, question, show solidarity. Queerness is more than an aesthetic trend. It is life, resistance, and community.

    👉 Pride doesn't end at the end of the month—neither should your solidarity.
    You can find the most important links in our Linktree.

#3 Fakt Up from June 5, 2025
  • 🧙‍♀️✨ Attention at platform 9 3/4: The Pinkwashing Express is arriving! 🏳‍🌈

    J.K. Rowling, author of the Harry Potter series, presents herself as rainbow-friendly, yet at the same time she publicly supports groups such as For Women Scotland, which campaigns against the recognition of trans women in Scottish equality law. She has also set up her own Women's Fund to promote legal action against trans-inclusive policies.

    🪄 The irony of it all? The Harry Potter community itself is diverse, vocal, and queer. Numerous fans and fan projects advocate for trans rights, distance themselves from Rowling's statements, and make it clear: You can love Hogwarts without tolerating discrimination.

    🚂 Welcome to the pinkwashing express: glitter on the outside, gatekeeping on the inside. Rowling isn't the only one doing it: in June, rainbow flags sprout up in the profile pictures of various companies. Queerness and diversity are trendy—but only as long as they fit the image and don't become uncomfortable!

    🔍 How to recognize genuine commitment (even without a magic spell):
    ✅ Is there support outside of June?
    ✅ Are queer people actively involved, made visible, and paid?
    ✅ Are rainbow products used to support queer projects and not just to generate profit?
    ✅ Are there clear statements against queerphobia—or just colorful logos?
    ✅ Does tolerance also include trans, inter*, non-binary, BIPoC, people with disabilities, etc.?

    👉 If you want more than just marketing magic: Get involved locally—for example, at CSD Lippe. There you'll find genuine solidarity instead of rainbow packaging.

#2 Fakt Up from May 8, 2025
  • Are we currently witnessing the greatest art theft of all time?

    While the internet revels in AI-generated Barbie figures and starter packs, a gigantic raid is taking place in the background: welcome to digital colonialism.

    Our action figure in the form of Sam Altman represents a system in which companies such as OpenAI, Midjourney, and other AI providers access billions of creative works. They help themselves to content from forums, blogs, portfolios, and social networks. In most cases, this is done without consent, payment, or transparency.

    🧑🏻‍🎨And now? New works are being generated from this.
    AI systems produce content every day in the style of real artists. Their signatures, aesthetics, and ideas are replicated in seconds. However, the economic profits flow to platforms, not to those who laid the creative foundation. Many creative professionals are experiencing a decline in commissions because AI is supposedly replacing their work.

    🤔 Between fascination and responsibility.
    Yes, we ourselves are fascinated by AI. We use it. We experiment with it. But that is precisely where the conflict lies: how do we deal with technologies that seemingly expand our creativity but are based on the work of others? When is inspiration legitimate, and when does it become exploitation? This is not about fear of technology. It's about fairness, respect, and equitable conditions for those whose work makes these systems possible in the first place.

    👉 How can you protect yourself and defend against unsolicited AI training? We'll show you in our slides – with tools, tips, and concrete steps. Just click to view!

#1 Fakt Up from May 1, 2025
  • Mark is hungry again – for your data.

    Starting May 27, 2025, Meta will begin using public content from Facebook and Instagram—such as posts, comments, and photos—to train its AI (Meta AI) in Europe. Unless you explicitly object, your data will be part of this process.

    📌 What does this mean in concrete terms?
    - Opt-out instead of opt-in: Meta relies on a questionable procedure in which you must actively object to prevent the use of your data.
    - Irrevocable: Once used for AI training, your data cannot be deleted or retrieved.
    - Legal gray area: Data protectionists criticize Meta for invoking a “legitimate interest” instead of obtaining explicit consent from users.

    🤔 Where do you draw the line?
    Is the promise of technological progress enough for you in exchange for your data? Do you trust a corporation like Meta when it comes to your digital identity? Or do you say quite clearly: Not with me.

    🛑 What can you do?
    You can object to the use of your data—the objection covers at least the data you share from that point on. Meta provides online forms for this on Facebook and Instagram, which you can access while logged in to each service. You can find more information at Verbraucherzentrale.de. And you can share this information to educate others.