Wednesday, August 13, 2014

11986: Algorithm Optimizes Ignorance.

Digiday published a perspective titled, “Help, my algorithm is a racist.” The post proved at least two points:

1. Digital practitioners are not very bright.

2. White people in our field tend to be culturally clueless.

Essentially, the author noted that an online message featuring a White kid received more clicks than a similar version featuring a Black kid—prompting his “racist” algorithm to optimize the campaign to the creative that performed better.

Um, to call the author’s conclusion culturally clueless is an understatement, as he’s professionally ignorant too. As someone in the comments section pointed out, it’s difficult—if not impossible—to draw any conclusions based on the data provided in the story. The biggest questions include:

1. Who is the target audience?

2. What was being sold?

3. Was race/ethnicity a relevant part of the message?

4. Were the White kid and Black kid images completely identical except for the race component?

5. Why did the author even have two different images to begin with?

To simply make a judgmental statement about the algorithm—even if the author’s true intent was amusement and entertainment—is ignorant. Plus, calling the algorithm a racist is really stupid, as well as insensitive. After all, if one were to label the author a racist for having conceived and published the piece, there would be a backlash from the PC-haters—and the author would probably take offense too. Racist is a combustible tag for White folks, especially when it’s tossed in their direction. The culturally clueless among us—and let’s be honest, there are lots of culturally clueless critters in our ranks—should take care when using the word. Better yet, just delete it from your vocabulary.
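The commenter's point — that no conclusion can be drawn from the data as reported — can be made concrete: even a visibly higher click-through rate may not be statistically meaningful. Here is a minimal sketch of a two-proportion z-test in Python, using invented click and impression counts, since the article reports no actual numbers:

```python
from math import sqrt, erf

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Two-sided z-test for a difference between two click-through rates."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled proportion under the null hypothesis (no real difference)
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 60 clicks on 1,000 impressions vs. 45 on 1,000.
# A 6.0% vs. 4.5% gap looks decisive, but the test may say otherwise.
z, p = two_proportion_z_test(60, 1000, 45, 1000)
```

With counts like these, the p-value comes out well above the conventional 0.05 threshold — meaning the "better-performing" creative could easily be noise, which is exactly why the questions above matter before labeling anything.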

Help, my algorithm is a racist

By Nate Carter

Nate Carter is managing director of eEffective, a digital trading desk.

The other week I found out that my algorithm is a racist.

Don’t get me wrong, it wasn’t birthed this way. In fact, we can be sure that in this case the racism is a product of nurture, not nature. You see, I was running two creative sets. Both were pictures of children, their mere image beckoning the web browser to click on them. Click on them people did. The problem is that, over time, they clicked on one creative more than the other, and when they converted on the landing page, they converted on that same creative with higher frequency. Doing what it was designed to do, my algorithm jumped in, optimizing the campaign to the better-performing creative: the one with the white child, not the black child.
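The dynamic described here — an optimizer shifting impressions toward whichever creative shows the higher observed rate — can be sketched in a few lines. This is a generic greedy allocation loop in Python with invented rates, not the author's actual system; it illustrates how even a small underlying difference gets amplified into a lopsided allocation:

```python
import random

def greedy_optimizer(true_rates, rounds=10_000, explore=0.05):
    """Allocate impressions greedily to the creative with the best
    observed click rate, keeping a small exploration share."""
    clicks = [0, 0]
    shows = [0, 0]
    for _ in range(rounds):
        if random.random() < explore or 0 in shows:
            arm = random.randrange(2)  # occasional random exploration
        else:
            # Exploit: pick the creative with the higher observed CTR
            arm = max(range(2), key=lambda i: clicks[i] / shows[i])
        shows[arm] += 1
        if random.random() < true_rates[arm]:
            clicks[arm] += 1
    return shows

# Two creatives whose true click rates differ only slightly
shares = greedy_optimizer([0.050, 0.045])
# The loop funnels the bulk of impressions to one creative even though
# the underlying gap is tiny -- amplification, not judgment.
```

The takeaway: the optimizer has no notion of why one creative wins; it mechanically compounds whatever preference the audience exhibits, which is the "nurture, not nature" point.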

An awkward moment arose. What do we do? After all, this is a results business and the Caucasian creative was [bringing] in the goods. Still something didn’t feel quite right. It also made me wonder, are we racist? Had our racism poisoned my algorithm and turned it into a monster?

These were difficult ethical questions. On the surface it appeared that I may have uncovered statistical proof of underlying racism. But what if the motives of the audience clicking were less devious? What if the creative with the Caucasian child was simply more appealing, without regard to skin color? Also, there was the question of: now what? Do I reprogram my algorithm? Do we take the learnings and run with the better-performing creative? What are the ethical ramifications of the latter?

Overall it was a healthy conversation to have. It also showcased that in an age where it is easy to let the machine make all of the decisions, there are things worth debating, considering and pondering that go beyond simple numerical analysis. You see, there is a danger that our algorithms can end up racist or bigoted, for they are by their very function prejudiced. If we allow them to optimize unencumbered, they become a reflection of us, all of our best and all of our worst.

As we continue to make strides in the customization and individualization of our messaging, it is important that we look at what we are telling people, give clients insight into campaign bias and consider the ethical ramifications.
