Around 7 years back, I switched to reading my books on an e-reader. It makes it much easier to read in bed without waking my wife and much easier to carry several books and articles around with me. Initially I used a Kindle, but then I moved over to Kobo because… Amazon. (So now my wagon is hitched to a lesser Satan.) Anyway, I was recently doing a search to get a sense of the Kobo Sage, a newer e-reader I was considering upgrading to. As is now the norm, the first thing to pop up was an AI (Artificial Intelligence) synopsis, which stated that the Sage had good battery life. Luckily, just below, I saw a snippet from an actual review that said the Sage has terrible battery life. That sent me down a rabbit hole to figure out what the reality was (terrible battery life).
This, however, is relatively easy to verify in the grand scheme of things: Kobo put a smaller battery in a larger e-reader with writing capabilities than it did in a smaller reader without them. (Goodness knows why.) In a world where people are used to charging their phones every day, some would say it does not matter, but others want their e-reader to more closely approximate a book, so one can just pick it up and barely have to think about battery life. Either way, verifiable facts are just that.
Until they are not.
I am currently in the midst of reading Tom Holland’s Persian Fire: The First World Empire and the Battle for the West. In the preface, Holland takes time to remind the reader of a valuable point: much of what we consider “known” history is, in fact, very murky. The Persians in the title of the book left no written history, so what we know of them is gleaned from the writings of the Greeks…their long-time foes. If we took these records at face value, the Persians would appear to be (as Holland translates) “effete cowards that somehow, inexplicably, conquered the world.” And so, only by comparing multiple sources and every shred of evidence, and then discounting for bias, does a history emerge.
Will that be possible in the future? Because the reality is, for almost anyone who may read this (forget ancient Persia), you have no actual evidence that Eisenhower or JFK ever lived. You take it as fact from records and history, and, increasingly, from AI.
As the NY Times recently reported, an emerging problem is that various AI engines are scraping their knowledge from other AI engines, so that small or even large inaccuracies are promulgated widely. The inaccurate “great battery life!” can just as easily be the inaccurate “immigrants eating pets!” And if the search companies are then promoting this at the top of their search pages as a verified amalgam of available knowledge and truth, how many people will question it? And, as time passes, what resources will there be to do so?
It seems quaint that we were once worried about Wikipedia, which, except for very esoteric pages, has great, reliable, crowd-based and constant fact-checking. AI, by contrast, is becoming more like a photocopy of a photocopy of a photocopy, with no easy, open way to correct inaccuracies.
It's also important to understand that your needs and priorities may differ from those of the AI companies feeding you information. Whereas your primary motive may be a search for the truth, theirs may well be a quest for market share, revenue and volume. Couple that with the tech community’s comfort with “moving fast and breaking things” and you have a combustible mix, one massively open to manipulation. The things they break may include your children, your friendships, your country.
A highly experienced graphic designer I know recently got half the answers wrong on a test to discern AI-generated video from real-life video. And AI video is still in its very early days.
Already, the Google Pixel folks boast that their camera can seamlessly manipulate photos to show that you were somewhere you weren’t, and they suggest that people should be perfectly fine with doing so.
So, we have AI companies in charge of the warehousing and retrieval of facts. We have AI companies creating visual evidence out of thin air. And we have AI companies promoting the concept that truth is a completely malleable commodity.
What could possibly go wrong?