If you thought humans weren’t particularly great when it comes to cancel culture, well, just you wait until artificial intelligence gets into the mix. And you’re probably not going to be waiting long.
Here are some things to be concerned about:
Loss of anonymity
Many people these days are hesitant to express their honest point of view under their own identity because they are afraid of repercussions such as loss of employment, reputational damage, bullying, or losing their community and friends.
That’s why many use anonymous accounts to engage with the outside world on social media, whether by voice or text. However, as AI algorithms grow increasingly sophisticated, they can analyze vast amounts of data, including online behaviors and posts, to create detailed profiles of individuals, and that pattern recognition may eventually be enough to reveal the identities behind anonymous accounts. AI can analyze patterns in writing style, grammar, and vocabulary. These algorithms can aggregate data from multiple sources, comparing content written anonymously with content published under your actual name, and predict with some degree of certainty whether the two are a match.
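To make that pattern-matching concrete, here is a minimal, hypothetical sketch of one classic stylometric technique: representing each text as character trigram counts and scoring candidate authors by cosine similarity. The sample texts are invented for illustration; real attribution systems use far richer features and much more data.

```python
import math
import re
from collections import Counter

def char_ngrams(text, n=3):
    """Extract overlapping character n-grams, a common stylometric feature."""
    text = re.sub(r"\s+", " ", text.lower())
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two n-gram count vectors (0.0 to 1.0)."""
    shared = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in shared)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Invented examples: an anonymous post, a text known to be by some author,
# and an unrelated text for comparison.
anonymous_post = "Honestly, the whole debate is exhausting; nobody listens anymore."
known_writing = "Honestly, this whole thread is exhausting. Nobody reads anymore."
unrelated = "BUY NOW!!! limited offer, click the link, huge discounts today!!!"

# The anonymous post scores far closer to the known writing than to the
# unrelated text, hinting at a possible authorship match.
print(cosine_similarity(char_ngrams(anonymous_post), char_ngrams(known_writing)))
print(cosine_similarity(char_ngrams(anonymous_post), char_ngrams(unrelated)))
```

Scale that comparison across millions of scraped posts and the odds of staying anonymous shrink quickly.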
Voice data is of particular concern: a voice is a highly distinctive data signature, and it is relatively easy to compare voice samples to establish whether one recorded in an anonymous setting matches one that is not.
This means that even if you believe you are anonymous, you should always be prepared to be doxxed.
AI surveillance
Artificial intelligence can collect extensive data and analyze the online activity of users, and as this surveillance becomes more prevalent, so does the potential for its weaponization by various entities, including governments with draconian laws or employers with strict policies. Your digital footprint will likely follow you like a hybrid of a social credit score and a psychological profile, whether you know it or not. This invasive monitoring could have a chilling effect on the freedom of individual expression, hindering open discourse and diversity of thought.
AI is also able to analyze your social connections and track relationships and interactions—which can be a problem if you happen to be linked to the “wrong” person or people.
Algorithmic control
In the future, AI can be deployed on social media platforms to target users based on perceived transgressions. Of course, while some of this can help moderate justifiably problematic content and enforce rules, there is also a risk of biased enforcement and the ‘cancellation’ of both individuals and ideas. For example, posts about controversial topics, or too much wrongthink, can be de-boosted by the algorithm. These tools can even be used to find users who engage in the wrong kind of rhetoric so they can be called out and shamed.
Deep fakes
These days we have a difficult enough time disproving text versions of ‘fake news’ making ridiculous claims, let alone videos and images that allow people to see with their own eyes public figures or private individuals engaging in dubious acts or making statements that can get them in trouble.
These “deep fakes” are getting increasingly convincing and can be easily fabricated for malicious purposes. It’s easier than ever to use AI technology to make a video or audio recording of someone saying something offensive enough to spark public outrage, tarnish their reputation, and even get them cancelled entirely. They may try to deny it, but it’s much harder to correct a lie than it is to spread one.
These technologies are developing at a rapid pace and are accessible to the average person, whereas the technologies meant to detect and combat deep fakes are not. This is a significant problem. Whenever a deep fake is created, there should be an identifier automatically labelling it as such to help avoid scenarios of mass confusion or intentional subterfuge.
AI bot campaigns
Bullying no longer has to be limited to humans. Imagine a powerful AI-powered bot army whose sole purpose is to target specific individuals by automatically generating and disseminating mean messages about them, harassing them, spreading false information, and publishing their personal details (doxxing). These AI bot bullies don’t have to eat, sleep, watch Netflix, or even sip cheap white wine; they can bully around the clock. And since these aren’t just regular bots, they are harder for spam filters to detect and tame. This can be done on a small scale or a large one, and the result is the same: ruined reputations. In fact, such a campaign can rewrite the online narrative entirely, even manipulating search engine results.
The future of artificial intelligence is going to bring about many changes. Some for the better, some for the worse. But if we’re not careful, it could get us cancelled.
What is the solution? Do you want to see more regulation for the AI tools being made available? Leave a comment below.
Who am I? I’m a writer with an overactive imagination and a random mind. Outside of Substack, you’ll find my work in publications such as Newsweek, WIRED, Variety, The Washington Post, The Guardian, Esquire, Playboy, Mashable, CNN Travel, The Independent, and many others.
Don't forget that our intel agencies (NSA, CIA, etc.) probably all have automated backdoors on all our devices, likely at the CPU level. I wish I could find that article from years back, maybe a decade or more ago, where someone explained that computer CPUs had hidden code that would allow these agencies to access your computer below the CPU/OS level. And corporations like Razer and other keyboard manufacturers might already be, or might eventually be, using AI to parse through everything you type on your computer or other devices.
"AI can analyze patterns in writing styles, grammar, and vocabulary."
Not to mention that with information such as described in the first paragraph, your typing rhythm/tempo/style might also be used to identify the writer/typist.
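That idea can be sketched too. The toy example below (all keys and timestamps invented) compares typists by their average latency between consecutive key pairs, a simplified version of what keystroke-dynamics systems measure:

```python
from statistics import mean

def digraph_latencies(keystrokes):
    """keystrokes: list of (key, timestamp_ms) pairs in typing order.
    Returns the mean latency for each consecutive key pair (digraph)."""
    latencies = {}
    for (k1, t1), (k2, t2) in zip(keystrokes, keystrokes[1:]):
        latencies.setdefault((k1, k2), []).append(t2 - t1)
    return {pair: mean(ts) for pair, ts in latencies.items()}

def rhythm_distance(a, b):
    """Mean absolute latency difference over the digraphs both samples share."""
    shared = set(a) & set(b)
    if not shared:
        return float("inf")
    return mean(abs(a[p] - b[p]) for p in shared)

# Invented samples: two from the same typist, one from a different typist.
sample_user = [("t", 0), ("h", 90), ("e", 175), ("t", 400), ("h", 495), ("e", 585)]
sample_same = [("t", 0), ("h", 95), ("e", 180), ("t", 410), ("h", 500), ("e", 588)]
sample_other = [("t", 0), ("h", 220), ("e", 470), ("t", 900), ("h", 1150), ("e", 1390)]

# The same typist's samples sit close together; the stranger's is far away.
print(rhythm_distance(digraph_latencies(sample_user), digraph_latencies(sample_same)))
print(rhythm_distance(digraph_latencies(sample_user), digraph_latencies(sample_other)))
```

Real systems add hold times, error rates, and per-user statistical models, but the principle is the same: your rhythm is a fingerprint.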
Here the NSA was inserting backdoors into public encryption: https://www.theglobeandmail.com/technology/business-technology/the-strange-connection-between-the-nsa-and-an-ontario-tech-firm/article16402341/
And in 2013, Lavabit e-mail's founder was threatened with jail time by the US government because he would not give these spy agencies access to his customers' encrypted e-mails:
https://www.forbes.com/sites/kashmirhill/2013/08/09/lavabits-ladar-levison-if-you-knew-what-i-know-about-email-you-might-not-use-it/
I also remember reading how these spook agencies were putting backdoors on routers, or was it that all data running through the routers is also being sent to the NSA? Forget now. Oh, and back in the mid-2000s, an AT&T employee in San Francisco blew the whistle when he found out that the NSA had equipment tapping the Internet backbone, sending all data flowing through it over to the NSA.
On top of that we can think of the Snowden revelations about all of the other "collect it all, store it all" operations of the various Intel agencies of the G7 or G20 countries.
Over these last 20+ years one can only imagine how much more lawless and pervasive all these kinds of things have become, and besides, a lot of these prior-illegal activities might have since been codified into legality.
In a lot of rhetorical questions about "AI" you can basically just replace "AI" with "powerful people" to get to the actual question. The tech is just a multiplier of power imbalances already in full force.
https://prada.substack.com/p/if-sarah-oconnor-meet-chatbot-sydney