After relying on printed maps for years, most of us now relish GPS systems that talk us through to our destinations. We appreciate that our homes are "smart" and that just about anything electric can be controlled by our voice. That's the value of artificial intelligence, or AI.
But what if AI goes rogue? What if the engineers who design AI introduce unwanted biases, purposefully or by accident, into some of their systems?
A study released earlier this year reported that driverless cars detect fair-skinned pedestrians more reliably than those with dark skin. It found that the pedestrian-detection algorithm had been trained on roughly three times as much data from light-skinned people as from dark-skinned people. As a result, driverless cars were quicker to sense fair-skinned pedestrians. Facial recognition software has come under the same criticism for similar reasons.
How Does AI Relate To PR?
If your organization already uses AI, put together a crisis communication plan that considers the worst-case scenario.
Say you’re with a law enforcement agency and AI alerts you that a certain person is likely the suspect you’re seeking in a high-profile felony. The suspect is arrested and charged amidst lots of publicity. It’s subsequently learned that the data fed into the program was weighted heavily toward one ethnic group and that the suspect had an unimpeachable alibi.
Imagine the long-lasting stain on the department's image and the credibility of its facial recognition program, not to mention the possibility of a big lawsuit. Consider how many cities today are abandoning or suspending their red-light cameras at heavily trafficked intersections.
If your AI was indeed weighted with disproportionate data, the crisis communication plan would likely contain more "don'ts" than "dos." One of the "don'ts" is throwing AI under the proverbial bus by saying, "It's not our fault. We relied on AI."
It would be wiser to respond that the department is investigating the facial recognition program and talking with its manufacturer about any issues it may have.
If your company is considering an investment in AI, get yourself a seat at the table and raise questions that may head off problems in the future. These questions should revolve around whether the data fed into your potential investment is representative of today's reality. Be on the lookout for unintentional bias.
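One concrete version of that question is whether each demographic group's share of the training data roughly matches its share of the real-world population the system will serve. Here is a minimal sketch of such a check; the group names, counts, and benchmark shares are purely illustrative, not drawn from any real dataset:

```python
# Illustrative training-data counts and a reference population share
# (hypothetical numbers, for demonstration only).
training_counts = {"light-skinned": 7500, "dark-skinned": 2500}
population_share = {"light-skinned": 0.60, "dark-skinned": 0.40}

def representation_gaps(counts, benchmark, tolerance=0.05):
    """Return groups whose share of the training data differs from
    the benchmark population share by more than `tolerance`."""
    total = sum(counts.values())
    flagged = {}
    for group, n in counts.items():
        gap = n / total - benchmark[group]
        if abs(gap) > tolerance:
            flagged[group] = round(gap, 3)
    return flagged

print(representation_gaps(training_counts, population_share))
# → {'light-skinned': 0.15, 'dark-skinned': -0.15}
```

Here the data over-represents one group by 15 percentage points, exactly the kind of skew described in the driverless-car study above. A check like this won't prove a system is fair, but it gives non-engineers at the table a concrete question to ask.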
Even the language fed into algorithms can be biased. In training data, words slanted toward engineering and weapons tend to be associated with men, while words relating to flowers and fashion skew toward women. When the right words are loaded into an algorithm, the results can be invaluable. Loaded improperly, they're meaningless and, in some cases, dangerous.
One of the latest developments in AI is its use in wealth management. It will be interesting to see if investors have confidence in it over a “live” advisor.
These are important things to consider, at least until AI is programmed to make its own ethical decisions.
Ronn Torossian is the CEO and Founder of 5WPR