It's Time To Recognize That Machines Are Learning All The Wrong Things

We need to be more cautious and aware of how data can be used and misused.
Nov 9, 2016 7:55 AM ET

Originally posted on Fast Company.

Data-driven algorithms govern many aspects of life: university admissions, resume screening, and a person’s ability to get a car or home loan. Often, using data leads to more efficient allocation of resources and better outcomes for everyone. But algorithms can come with unintended consequences—and without care, their application can result in a society we don’t want.

Typically, we think of algorithms as being neutral and objective, but when software is written and trained by humans, it often encodes the biases and prejudices of the people who make and shape it. Ultimately, the biases built into algorithms can be racist and can marginalize people in lower socioeconomic groups. What’s truly worrying is that, unlike with people, the biases in algorithms are sometimes difficult to detect, undo, and fix.
