Hello, my name is Maurice. I am an alum of Webster University in Ohio. I live in Missouri and am very interested in “Jewish in St. Louis” community activities. These all are lies about me, and I can’t correct them. This misinformation, likely the result of human input errors, is embedded in computer data and is what some algorithms “think” is true when they send me emails. In this case, the consequences are benign; I actually enjoy learning about the Jewish community in St. Louis. What’s scary is how computer algorithms, AI, and all our technology can take data that is perfectly true and turn it into “facts” that are frighteningly false.
Algorithms might not lie, but they cannot act with integrity or parse information the way humans can. Two of my own experiences could have led to bad consequences. The first was in the late 1990s, when the internet was very young. A client of mine invented a little plastic strip that, when placed in an autoclave, changed color when medical instruments and the like were thoroughly sterilized. We were having a lot of success marketing the product to dental offices and were brainstorming other industries in which the product could be used.
Someone suggested researching tattoo and piercing parlors, which had been given a lot of attention because of issues with non-sterile needles and implements. At home I went online and began searching, coming across lots of disturbing information, odd tattoo art, details about intimate piercings (I don’t ever want to know what a “Prince Albert” is), and links to porn sites. Luckily the algorithms weren’t as sophisticated then, and my identity wasn’t associated with these sites.
By 2012, the online world was a lot more sophisticated. My lovely 18-year-old Siamese mix cat was diagnosed with diabetes, and because there is no synthetic feline insulin, I had to give her shots of human insulin. I bought the insulin and syringes at the local Costco. Soon after, when I went online, I began seeing ads for all sorts of diabetes products, definitely for people, not cats. A period of anxiety followed when I expected the world to think I was diabetic, with possible impact on insurance and the like. In this case, the inputs the algorithms received were perfectly accurate, but the conclusion was not, because algorithms can't understand context.
My favorite thought experiment would be to test an AI against a person with the question, “Do these pants make me look fat?” Possible answers:
“No, they are flattering.” Presumably the AI could provide this answer only if it were true. For a human, this answer could be true (they really like the way the pants look), or it could be a total lie, with the intent to spare the inquirer’s feelings.
“Yes, they do make you look fat.” Again, the AI would answer this way if it were true. The human could as well, but the motivation would be more complex: to deliberately hurt the inquirer as a “bad” act, to be “honest,” or possibly to try to spare the inquirer embarrassment in public, which could be interpreted as a kind act.
“You look fabulous in anything.” For a human, this could be technically true, but more likely a classic white lie. I can’t see how an AI could come up with this answer.
As we are finding out in our current political environment, context and integrity mean a lot more than whatever “facts” are thrown around. It’s going to be a puzzling world for some time to come.
I have recently retired from a marketing and technical writing and editing career and am thoroughly enjoying writing for myself and others.
Marian, this is so interesting! I hear the word “algorithms” thrown around all the time, but I don’t really know what it means. You obviously do, which is impressive! How did you discover the lies about you that are embedded in computer data? Did you start getting a lot of messages addressed to Maurice? Would love to talk further with you about this at some point.
Thanks, Suzy. An algorithm is essentially a set of computer code instructions that tells the system what to do with the input it gets. If the input is bad, garbage in, garbage out applies. However, as is being discussed with a lot of “intelligent” systems, if the people who make the algorithms are biased in some way (think the “bro” culture in Silicon Valley), then the algorithms might incorporate those biases. Algorithms don’t understand context; hence, when I ordered a toilet seat riser for my 91-year-old mother from Amazon, I got a lot of recommendations for products I didn’t need. Yes, I did get emails starting “Dear Maurice.” I knew something was up after I signed a sheet during an Obama fundraiser in 2008, and apparently someone misread my name and captured it on a computer as Maurice. From there a Democratic group in Ohio must have bought the list, and Maurice received emails from the senator there. Then, the Webster University communications started. I have no idea how the Jewish in St. Louis group got my name, or why they think I live there, but at least I’m not being addressed as Maurice!
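For readers who like to see what “a set of instructions with no sense of context” actually looks like, here is a toy sketch in Python. Everything in it is invented for illustration; no real ad system works this simply. It just shows how a rule that is applied literally turns accurate input (a purchase) into a wrong conclusion (what the buyer must be like):

```python
# Hypothetical purchase-to-ad rules, invented for illustration.
# The "algorithm" below applies them blindly: every purchase is
# assumed to describe the buyer, with no notion of context
# (insulin bought for a diabetic cat, a riser bought for one's mother).
AD_RULES = {
    "human insulin": ["glucose monitors", "diabetic snacks"],
    "toilet seat riser": ["mobility aids", "senior living services"],
}

def recommend_ads(purchases):
    """Return ads implied by the literal purchase history."""
    ads = []
    for item in purchases:
        # Accurate input, context-free inference: garbage out.
        ads.extend(AD_RULES.get(item, []))
    return ads

print(recommend_ads(["human insulin"]))
# → ['glucose monitors', 'diabetic snacks']
```

The input here is perfectly true, and the code does exactly what it was told; the false “fact” (that the buyer is diabetic) comes from the missing context, not from any lie in the data.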
“AI”: what a concept. Artificial, yes, but intelligence? Reminds me of a scene in The Imitation Game when Turing responds to the inspector’s open-ended question “can machines think?” I suspect that our notions of “thinking” and “intelligence” will always be evolutionary. But you may have missed an opportunity with your sample query. I asked the question of Siri and she said, “To me you are perfect.” But, of course, that’s Siri. The same virtual soul who admonished me when I inadvertently triggered her while fumbling and ultimately dropping my phone. I let out a good Oh Fudge and she said, “Tom! Your language!”
That’s great, Tom. I have a feeling Siri and Alexa will evolve to pass more Turing-type questions. Can’t imagine what our world will be like then!
Marian, I have had the same thoughts when I do Internet searches for my writing. I know I should do them using incognito mode, but I’m lazy and probably careless. It’s frightening to see ads popping up because I looked at something or bought a gift online. I shudder to think what algorithms assume to be true about me.
Yes, we probably should be searching incognito, but why should we have to make such an effort? The deck is stacked for the advertisers.