I come from Galicia, a magic land in Spain with a rich culture and some deeply ingrained superstitions. We have a saying - “Non creo nas meigas, pero habelas hailas” - which roughly translates to English as “I don’t believe in witches, but they are out there”. Meigas, though, is another one of those words you cannot easily translate into other languages. They are not just “witches”. They are our very own blend of good witches, healers, and fortune tellers.
And I will soon get to the point of why I am bringing them up.
I’ve never really believed in witches or meigas. As a physicist and engineer, I was too focused on what I could scientifically explain, and more drawn to data and numbers. But I’ve always been so aware of how much I don’t know or understand that I could not completely dismiss them either. People from Galicia are also known for never being sure of anything and, in that sense, I am a true gallega. But I digress… let me get back to my point. A friend once told me about a fortune teller - one of these meigas - who was very good. Hearing my friend’s stories, I was tempted to go and see her. What stood between me and the fortune teller was not my lack of belief (habelas hailas!), but something else. Regardless of her accuracy at predicting my future, I knew that whatever she said would influence me, whether consciously or unconsciously, and I did not want that influence on my life.
For some reason, I thought about meigas when I came across a paper1 by Dr. Arvind Narayanan and his team about machine learning algorithms that predict our behaviour. We talk a lot lately about generative AI and not so much about the “traditional” predictive algorithms all around us, as present as mycelium in a forest. You don’t see them, and yet they are there, doing all sorts of things: from predicting what will entertain us on a digital screen to determining whether we can get a loan from our bank. But are the decisions they make delivering on their promise? The paper argues that when these predictions are used to make decisions about individuals, they are not: such systems are inherently flawed and should be presumed illegitimate unless proven otherwise, because despite their appeal of increased efficiency and objectivity, they fail to be accurate, fair, or efficient. Quite a claim for a computer scientist!
And it got me thinking… We interact with this type of AI every day: you are shown any given ad based on the prediction that you are going to click on it2, you are shown certain movies in your TV menu based on what is likely to draw your attention, and you may be recommended for a medical trial based on what will likely succeed for your specific condition. All of this may be good: you may see more relevant ads that pique your interest, you may save time searching for movies you like, and you may take the medicines that are most effective for you, potentially saving your life.
However, I can’t help but wonder.
How accurate can these predictions ever be? The degree to which the predictions reflect what you would have done in their absence is limited by the information the predictive algorithms were trained on. Even in a world where digital data is concentrated in the hands of just a few big tech companies (or governments) that hold a lot of data on you, so much of what goes on in our minds and our world never becomes data in the digital realm. Or as Harari puts it in his latest book, Nexus3: can these algorithms discover something new about ourselves, or are they imposing order on us?
And even if those predictions were 100% accurate, would they drive better outcomes overall? I think that will always be debatable. We know so little about how the universe works! Maybe predictive recommendations work well for Netflix, but if I did not have recommendations so aligned with my preferences, I might watch less TV and read more books, or I might open up to other experiences and become more open-minded. Or maybe, if I had better recommendations, we would avoid heated discussions at home when trying to pick a movie and spend more enjoyable rainy days as a family.
I don’t know whether we will, soon or ever, get to a stage with almost perfect information for algorithms to make the most accurate predictions. And I don’t know whether more accurate predictive AI will turn us into miserable human beings. But just in case - habelas hailas! - I would like to have the choice, as an individual, to decide whether I want to take advantage of those predictions or not. Just as I had the choice of whether or not to go to the fortune teller back in the day.
Regulation may force predictive AI applications to give users the option to choose and, although this is a first step, it is not a sufficient one. Are we, as individuals, empowered to make the choice? Are we aware of our options in the first place, and of what they entail? Of course, movie consumption is an example where the stakes are low in terms of impact (and I have not taken the time to figure out how to turn off my recommendations on Netflix, although I’d bet there’s a somewhat convoluted way of doing it). But our institutions, both public and private, are relying more and more on these types of technology, and each of us should be aware and have a choice.
And now, my confession: even if I did not want the influence of a future prediction in my life, I did go to the fortune teller. We are complicated human beings, are we not?4
1. Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4238015
2. As a disclaimer, this was the seed of the company I started back in 2012.
3. https://www.penguin.co.uk/books/451878/nexus-by-harari-yuval-noah/9781911717089
4. Credit for the drawing you have seen if you come from the home page goes to Aymar de Villele, visual artist and cognitive therapist.