
The Humanity In Artificial Intelligence

Algorithms, artificial intelligence, and machine learning are not new concepts. But they are finding new applications. Wherever there is data, engineers are building systems to make sense of that data. Wherever there is an opportunity for a machine to make a decision, engineers are building that machine. Sometimes it handles simple, low-risk decisions to free up a human for more complicated ones. Sometimes there is too much data for a human to decide at all. Data-driven algorithms are making more decisions in more areas of our lives.

Algorithms already decide what search results we see. They determine our driving routes or assign us the closest Lyft and, soon, they will enable self-driving cars and other autonomous vehicles. They’re matching employers with job candidates. They’re recommending the next movie you should watch or product you should buy. They’re figuring out which houses to show you and whether you can make the mortgage payment. The more data we feed them, the more they learn about us, and the better they get at judging our mood and intention to predict our behavior.

I’ve been thinking a lot about these systems lately. My son has epilepsy, and I’m working on a project to gauge the sentiment towards epilepsy on social media. I’m scraping epilepsy-related tweets from Twitter and feeding them to a sentiment analyzer. The system calculates a score that represents whether an opinion expressed is positive, negative, or neutral.
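Concretely, the scoring step looks something like the sketch below. I won’t name my actual tools here, so take NLTK’s VADER analyzer and the hard-coded tweets as stand-ins; a real pipeline would pull the text from the Twitter API.

```python
# A minimal sketch of the scoring step, using NLTK's VADER analyzer as a
# stand-in for whichever sentiment library you prefer. The tweets are
# hypothetical; a real pipeline would collect them from the Twitter API.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-time download of VADER's word list

tweets = [
    "Proud of my son for not letting epilepsy hold him back!",
    "Another sleepless night in the ER. Epilepsy is exhausting.",
    "New seizure medication study published today.",
]

sia = SentimentIntensityAnalyzer()
for tweet in tweets:
    scores = sia.polarity_scores(tweet)
    # 'compound' is a normalized score in [-1, 1]; a common convention is
    # >= 0.05 positive, <= -0.05 negative, and neutral in between.
    print(f"{scores['compound']:+.2f}  {tweet}")
```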

Companies already use sentiment analysis to understand their relationship with customers. They analyze reviews and social media mentions to measure the effectiveness of an ad. They inspect negative comments to find ways to make a product better. They can see when a public relations incident turns against them.

For the epilepsy project, my initial goal was to track sentiment over time. I wanted to see why people were using Twitter to talk about epilepsy. Were they sharing positive stories or were they sharing hardship and challenges? I also wanted to know whether people responded more to the positive or negative tweets.

While the potential is there, the technology may not be quite ready. These systems aren’t perfect. Context and the complexities of human expression confuse even humans. “I [expletive] love epilepsy” may seem to express a positive sentiment to an immature algorithm. And the effectiveness of any system built on top of these algorithms is limited by the algorithms themselves.
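To see why, consider a toy word-counting scorer, the most naive form of lexicon-based sentiment analysis. This is my own invention for illustration, not any real library, and it uses a milder stand-in for the expletive:

```python
# A deliberately naive lexicon-based scorer, invented here to illustrate the
# failure mode: it counts positive and negative words with no sense of
# context, sarcasm, or subject matter.
POSITIVE = {"love", "great", "happy", "proud"}
NEGATIVE = {"hate", "awful", "sad", "scared"}

def naive_score(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# "love" is the only lexicon hit, so the sarcastic complaint scores positive.
print(naive_score("I freaking love epilepsy"))   # 1  -> "positive"
print(naive_score("Epilepsy makes me so sad"))   # -1 -> "negative"
```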

I thought about this as I compared two different sentiment analyzers. They gave me different answers for tweets that expressed a negative sentiment. Of course, which answer was “right” could be subjective. But most reasonable people would have agreed that the tone of the text was negative.
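That kind of disagreement is easy to reproduce with any two off-the-shelf analyzers. The sketch below uses VADER and TextBlob as stand-ins (not necessarily the analyzers I compared); both report polarity in [-1, 1], yet they can score the same text quite differently.

```python
# Comparing two off-the-shelf analyzers on the same text. VADER and TextBlob
# are stand-ins here; both report polarity in [-1, 1] but often disagree,
# especially on mixed or sarcastic wording.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from textblob import TextBlob

nltk.download("vader_lexicon")

tweet = "Seizure-free for a year, but I still can't drive. Thanks, epilepsy."

vader_score = SentimentIntensityAnalyzer().polarity_scores(tweet)["compound"]
textblob_score = TextBlob(tweet).sentiment.polarity

print(f"VADER:    {vader_score:+.2f}")
print(f"TextBlob: {textblob_score:+.2f}")
```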

Like a child, a system sometimes gets a wrong answer because it hasn’t learned enough to know the right one. That was probably the case in my example: the wrong answer came down to limitations in the algorithm. Still, imagine if I had built my system to predict a patient’s mood on top of that immature algorithm. When the foundation is wrong, the house will crumble.

But, also like a child, a system sometimes gives an answer because a parent taught it that answer. Whether through explicit coding choices or biased data sets, systems can “learn wrong”. After all, people created these systems. People, with their logic and ingenuity, but also their biases and flaws. A human told the system that an answer was right or wrong. A human with a viewpoint. Or a human with an agenda.

We create these systems with branches of code and then teach them which branch to follow. We let them learn until they show enough proficiency, and then we trust them to keep getting better. We create new systems and give them more responsibility. But somewhere, back in the beginning, a fallible human wrote that first line of code. It is impossible for those first choices not to influence every outcome.

These systems will continue to be pervasive, reaching into new areas of our lives. We’ll continue to depend on them and even trust them because they make our lives easier. And because they get it right most of the time. The danger is assuming they always get it right and not questioning an answer that feels wrong. “The machine gave me the answer, so it must be true” is a dangerous statement, now more than ever.

We dehumanize these programs once they make contact with the cold metal box they run in. But they are extensions of our humanity, and it’s important to remember their human origins.