I had just read an email from Google telling me that my daughter had installed an app called “Undetectable AI”. Intrigued by the name, I went to check it out while patiently waiting for her to get back from school so I could ask her point blank.
Undetectable AI is a service that lets you copy and paste your AI-generated content and rewrite it so that it appears human, effectively bypassing AI detectors. The irony was not lost on me: the same technology that detects AI-generated content is being used to disguise it as non-AI-generated text. This is another example of something I heard on the podcast “Your undivided attention”1 over the weekend: the same AI model can easily be used for a good action and for its opposite. The same AI that helps us create new vaccines faster than we ever imagined can also be used to create new viruses.
For some reason, this AI detection cycle immediately reminded me of the “Sneetches on the beaches” of Dr. Seuss2, endlessly cycling through machines that either added or removed their star-belly marks until they could no longer tell who had been born with stars and who hadn’t. Just like the Sneetches going through one machine to take off their stars, only to go through another to put them back on, we can have AI create content, then give it to another AI to make it look human, and then feed it to another AI for training to create more content again, until we no longer know which content is of human origin and which is not. And while a world in which all Sneetches are good Sneetches, whether they have stars on their bellies or not, is unquestionably good for Sneetches, or people, I am not sure I could say the same about a world in which we cannot distinguish AI-generated content from content generated by humans.
Let’s assume for a moment that we agree on the need for transparency, and therefore that AI-generated content should be flagged as such. As we say in Spanish, “Hecha la ley, hecha la trampa” - loosely translated as “When there’s a law, there’s a loophole” - and the ones making the rules will always be chasing the ones taking advantage of the loopholes. Drawing from my decade in the ad tech industry, I have seen this pattern before. While some companies developed sophisticated anti-fraud systems for digital advertising, others could use the same technology to get around those protections and make money. A lot of money. The key difference? No company would dare openly market a service to “help your fraudulent activities go undetected”, as it is unethical and against industry regulations. Yet a simple search will show how many companies proudly advertise their services for creating truly undetectable AI.
So many questions come to my mind:
Do we want to live in a world in which AI-generated content is indistinguishable from human-generated content?
Would services that “humanize AI text” face penalties under emerging regulations like the EU AI Act3?
Are our kids' teachers adapting fast enough to the ubiquity of this type of technology, so that grades reflect what kids have learned and not their proficiency in using these tools?
As parents, how do we teach responsible use of these technologies while making sure we do not put our kids at a disadvantage?
In case you were curious, I did ask my daughter. She explained that she found Undetectable AI while searching for tools to detect what she had copied and pasted from the web for her school project, as she wanted to rewrite things in her own words… And then she didn’t use it because she would have had to pay.
And as a disclaimer, I did run the text I originally wrote through AI to help me find a title, so credit goes to Claude where it is due.
If you do not know this story, you can read a quick summary on Wikipedia and, if you have kids, I absolutely recommend the book with this and other stories!
https://artificialintelligenceact.eu/
As a uni professor, what I struggle with the most now is this question you pose: “Are our kids' teachers adapting fast enough to the ubiquity of this type of technology, so that grades reflect what kids have learned and not their proficiency in using these tools?”
We need to know what exactly we are assessing! We can go about this in many ways, I am sure: maybe change assessments to something where AI cannot be used, such as an in-class presentation where we can discuss the arguments presented to better gauge learning, or learn ourselves how to detect AI-generated content!
My opening statement is already going to firmly plant me on one side (the unpopular one, I suspect), as it is my strong belief that *all* content is human-generated content. Machines just repeat patterns humans have created (by looking at statistical correlations), yet the patterns they copy can only be human in origin. Even now that "synthetic data" is a thing, all synthetic tokens originated in human-generated data at some point in the layer cake of training sets.
They do mix up content in surprising, some say "emergent", new patterns. But even those come from correlations that were at one point human and simply got reflected into a matrix of statistical weights.
The core question for me is: are we willing to place a higher (market) value on the processing done by humans? I surely would hope so, as long as that output improves upon what a machine can generate at lower marginal cost. Putting a higher value on "direct from human" content also means the bar for quality content has suddenly been raised: generating derivative "art" is no longer as complicated as it used to be, and anybody can aspire to make "art", just like anybody can think themselves the next Ansel Adams when it is in fact machine learning fixing the horrible exposure of their landscape shots.
What's truly hard to come to grips with is that we are, after all, just stochastic parrots in one way or another. What makes us special, as biological organisms? What is the so-called soul? Let me go ask Claude ... BRB!