Random Minds by Katherine Brodsky
How AI can get you cancelled
If you thought humans weren’t particularly great when it comes to cancel culture, well, just you wait until artificial intelligence gets into the mix. And you’re probably not going to be waiting long.
Here are some things to be concerned about:
Loss of Anonymity
Many people these days are hesitant to express their honest point of view under their own identity because they fear repercussions: loss of employment, reputational damage, bullying, or losing their community and friends.
That’s why many use anonymous accounts to engage with the outside world on social media, whether by voice or text. However, as AI algorithms grow increasingly sophisticated, they can analyze vast amounts of data, including online behaviors and posts, to build detailed profiles of individuals, and enough pattern recognition may eventually reveal the identities of anonymous accounts. AI can analyze patterns in writing style, grammar, and vocabulary. These algorithms can aggregate data from multiple sources, compare content written anonymously with content published under your real name, and predict with some degree of certainty how likely the two are to be a match.
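To make that concrete, here is a minimal, illustrative sketch of the kind of writing-style comparison described above: it builds character-trigram frequency profiles of two texts and scores their similarity. The sample texts, the choice of trigrams as features, and the scoring method are my own toy assumptions; real stylometric systems use far richer features and far more data.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=3):
    """Frequency profile of character n-grams (a common stylometric feature)."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two sparse frequency profiles (0.0 to 1.0)."""
    dot = sum(a[g] * b[g] for g in set(a) & set(b))
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical samples: a post under a real name, an anonymous post by the
# same (imaginary) author, and an unrelated text by someone else.
known = "Honestly, I reckon the plan is half-baked; honestly, who signed off on this?"
anon = "Honestly, the proposal looks half-baked to me; who signed off on it, honestly?"
unrelated = "Quarterly revenue grew on strong demand in the Asia-Pacific region."

score_match = cosine_similarity(char_ngrams(known), char_ngrams(anon))
score_other = cosine_similarity(char_ngrams(known), char_ngrams(unrelated))
print(f"known vs anon:      {score_match:.2f}")
print(f"known vs unrelated: {score_other:.2f}")
```

Even this crude version scores the stylistically similar pair higher than the unrelated pair, which is the core idea behind matching anonymous accounts to named ones at scale.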
Voice data is of particular concern: a voice is a highly distinctive signature, and it is relatively easy to compare voice samples to establish whether one recorded in an anonymous setting matches one that is not.
This means that even if you believe you are anonymous, you should always be prepared to be doxxed.
Since artificial intelligence can collect extensive data and analyze users’ online activity, there is growing potential for its weaponization by various entities as these practices become more prevalent, including governments with draconian laws or employers with strict policies. Your digital footprint will likely follow you around like a hybrid of social credit score and psychological profile, whether you know it or not. This invasive monitoring could have a chilling effect on individual expression, hindering open discourse and diversity of thought.
AI is also able to analyze your social connections and track relationships and interactions—which can be a problem if you happen to be linked to the “wrong” person or people.
In the future, AI could be deployed on social media platforms to target users based on perceived transgressions. While some of this can help moderate genuinely problematic content and enforce rules, there is also a risk of biased enforcement and the ‘cancellation’ of both individuals and ideas. For example, too much wrongthink or posting about controversial topics could get you de-boosted by the algorithm, or these tools could even be used to find users who engage in the wrong kind of rhetoric and single them out for public shaming.
Deep fakes
These days we have a difficult enough time disproving text versions of ‘fake news’ making ridiculous claims, let alone videos and images that let people see with their own eyes public figures or private individuals engaging in dubious acts or making statements that can get them in trouble.
These “deep fakes” are getting increasingly convincing and can easily be fabricated for malicious purposes. It’s easier than ever to use AI technology to produce a video or audio recording of someone saying something offensive enough to spark public outrage, tarnish their reputation, and even get them cancelled entirely. They may try to deny it, but it’s much harder to correct a lie than to spread one.
These technologies are developing at a rapid pace and are accessible to the average person, whereas the technologies meant to detect and combat deep fakes are not. This is a significant problem. Whenever a deep fake is created, there should be an identifier automatically labelling it as such to help avoid scenarios of mass confusion or intentional subterfuge.
AI bot campaigns
Bullying no longer has to be limited to humans. Imagine a powerful AI-powered bot army whose sole purpose is to target specific individuals: automatically generating and disseminating cruel messages about them, harassing them, spreading false information, and publishing their personal details (doxxing). These AI bot bullies don’t have to eat, sleep, watch Netflix, or even sip cheap white wine; they can bully around the clock. And because these aren’t ordinary bots, spam filters have a harder time detecting and taming them. This can happen on a small scale or a large one, and the result is the same: ruined reputations. In fact, such a campaign can rewrite the online narrative entirely, even manipulating search engine results.
The future of artificial intelligence is going to bring about many changes. Some for the better, some for the worse. But if we’re not careful, it could get us cancelled.
What is the solution? Do you want to see more regulation for the AI tools being made available? Leave a comment below.
☕️ Thoughtful writing takes time. Want to support my work by making a donation and buying me a coffee? Here’s how.
NOTE TO READERS:
Thank you for keeping me company. Although I try to make many posts public and available for free access, to ensure sustainability and future growth—if you can—please consider becoming a paid subscriber. In addition to supporting my work, it will also give you access to an archive of member-only posts. And if you’re already a paid subscriber, THANK YOU. Please also share, like, and comment. Got ideas for future posts? Email me.
Who am I? I’m a writer with an overactive imagination and a random mind. Outside of Substack, you’ll find my work in publications such as Newsweek, WIRED, Variety, The Washington Post, The Guardian, Esquire, Playboy, Mashable, CNN Travel, The Independent, and many others.