Computing human bias with AI technology


The artificial intelligence we're building is biased. And the short explanation for that is that it's our fault: we're biased, and our machines are learning from us. But how do we become biased? Joanna Bryson thinks we might be able to learn that from machines.

— Well, the only reason that AI is so powerful at understanding ourselves is the fact that we are predictable.

Joanna is a computer scientist who studies artificial intelligence. Last year she basically fed the entire internet into a machine. OK, not everything, but about 840 billion words of it. Tons of stuff: your tweets, the Declaration of Independence, our League of Legends threads, and more. She wanted to see if the machine could form biases just based on the patterns it could see in our language.
— So basically, words mean more like the same thing if you use them more like the same way. Kind of obvious. So you can talk about going home and feeding your dog, or going home and feeding your cat. And you're never going to talk about, you know, going home and feeding an orange or something.

Joanna fed the machine all this language and then told it to create clusters of words that were related to each other.
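That pattern-of-use idea is what word embeddings capture: each word becomes a vector, and words that appear in similar contexts end up close together in vector space. As a rough illustration, here is a Python sketch of the dog/cat/orange example; the gensim library and its small pretrained "glove-wiki-gigaword-50" model are assumptions chosen for convenience, not the study's exact setup.

```python
# A minimal sketch of "words used the same way mean more like the same thing".
# Assumes gensim is installed; "glove-wiki-gigaword-50" is a small pretrained
# embedding from gensim's downloader, standing in for the much larger corpus
# described in the article.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# Cosine similarity: closer to 1.0 means "used more like the same way".
print(vectors.similarity("dog", "cat"))     # high: both get fed at home
print(vectors.similarity("dog", "orange"))  # much lower: nobody feeds an orange
```

Grouping words by these similarities is essentially the clustering step described here.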
Then she compared her language-association data to the results of a hidden bias test. This is a test people take to measure the biases we might not be aware of. Joanna was completely taken aback by how closely her machine's bias tracked the results of that test, which is called the Implicit Association Test and has been taken over 17 million times. That means Joanna's machine's bias matched millions of people's aggregate biases almost perfectly.

— Even though it's a giant spreadsheet that counted a lot of words, it has this knowledge about the fact that flowers are more pleasant and insects are less pleasant.

We think flowers are more pleasant than insects. So did the machine. But we also think "woman" when someone says "nurse." So did the machine. Anything. Any bias that we have. So did the machine.
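The flower/insect result can be approximated with a few lines of vector arithmetic: compare each target word's average similarity to pleasant words versus unpleasant words. The sketch below is a stripped-down stand-in for the study's measure (the published method adds effect sizes and significance tests), and the tiny word lists are illustrative, not the study's stimuli.

```python
# Stripped-down word-embedding association test: do flower words sit
# closer to "pleasant" words than insect words do?
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

flowers    = ["rose", "tulip", "daisy", "lily"]
insects    = ["cockroach", "mosquito", "wasp", "flea"]
pleasant   = ["love", "peace", "pleasure", "friend"]
unpleasant = ["hatred", "filth", "agony", "poison"]

def association(word, attr_a, attr_b):
    """Mean cosine similarity to attribute set A minus mean similarity to set B."""
    sim_a = sum(vectors.similarity(word, a) for a in attr_a) / len(attr_a)
    sim_b = sum(vectors.similarity(word, b) for b in attr_b) / len(attr_b)
    return sim_a - sim_b

flower_score = sum(association(w, pleasant, unpleasant) for w in flowers) / len(flowers)
insect_score = sum(association(w, pleasant, unpleasant) for w in insects) / len(insects)

# Expect flower_score > insect_score: the "giant spreadsheet" reproduces
# the human preference for flowers over insects.
print(f"flowers: {flower_score:+.3f}   insects: {insect_score:+.3f}")
```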
— Most of us believe that we are consciously in control of our beliefs, our attitudes, our behavior, our actions, our judgements.

That's Tony Greenwald. He co-created the Implicit Association Test in 1995.

— And the research says there is something going on automatically behind the scenes, unconsciously.

The I.A.T. involves showing test takers a series of images and words and measuring how quickly they associate the two. The idea is that the test can reveal biases we've unwittingly picked up, and that may have nothing to do with what we actually believe.

— So explicitly we can say that men and women both deserve careers. But implicitly, you are exposed constantly to the Brady Bunch or whatever. And the men are more likely to have careers. And so it's a little bit easier, a little bit faster, to talk about men's names in professional positions than women's names.
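That "a little bit faster" is exactly what the I.A.T. quantifies. A common scoring approach in I.A.T. research (the D score) divides the difference in mean response latencies between the stereotype-compatible and stereotype-incompatible pairings by a pooled standard deviation. Here is a minimal sketch with made-up reaction times:

```python
import statistics

def iat_d_score(compatible_ms, incompatible_ms):
    """Simplified I.A.T. D score: latency difference over pooled SD.

    compatible_ms:   reaction times (ms) when the pairing matches the
                     stereotype (e.g. men's names + career words share a key).
    incompatible_ms: reaction times (ms) for the reversed pairing.
    """
    mean_diff = statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)
    pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
    return mean_diff / pooled_sd

# Hypothetical latencies: this test taker responds slightly faster when the
# pairing matches the stereotype, which yields a positive D score.
compatible = [612, 580, 645, 598, 630, 575]
incompatible = [701, 688, 720, 675, 710, 695]
print(f"D = {iat_d_score(compatible, incompatible):.2f}")
```

Scores near zero suggest no measured preference; larger positive values mean the stereotyped pairing was reliably faster.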
Since its invention, the Implicit Association Test has been criticized, partly because it's not good at predicting an individual's prejudiced behavior. But Bryson was shocked to see how closely her machine's bias tracked the results of the test on a large scale.

— The fact that the signal was so strong was just astounding. And it made us realize this is a really important result.

It got her thinking. Maybe our brains are constantly processing language in the background, like a computer. And maybe that process creates our biases.

— Well, the only reason AI is so powerful at understanding ourselves is the fact that we are predictable. We are algorithmic. And we can be explained in these kinds of ways.

In other words, maybe our biases aren't responsible for our language. Maybe it's the other way around.

— Everyone thought, oh, this means machines are prejudiced, and that's what got all the headlines. But the people in science recognize that this is a new possible explanation for why people are prejudiced, or how we transmit prejudice to our children. Not intentionally, but just by letting them hear the language. What's really a weird thing — you started out writing a technical paper, and suddenly you're talking about fundamental things of ethics.

Her work also suggests that AI's bias is not just inherited from the people who built it.

— There were some people saying, of course AI is all biased, because it's written by white guys in California. It is true that you have things that pass you by because it's not your lived experience. But even if you just completely, fairly take a sample of all the words out there, you're going to wind up with a biased machine, which is just that you've absorbed the biased culture.

And that means just making sure developers are more diverse (a good thing for a bunch of reasons) won't necessarily make our machines fair. Bryson's research tells us a lot about how machines learn, and it opens the door for a lot more research on bias and AI. But she says it also tells us a lot about how we learn. And that bias can be harder to shake than we think.

13 Comments

  • DJ Programer

    So they made a social/psychological mirror of the route total.

    It's miscommunication…kind of.

    What we want to express vs. what we actually express.

  • garet claborn

    All neural networks are biased, both digital and biological. This is a good thing, and a major aspect of making minds possible.

  • λ

    "But even if technology can’t fully solve the social ills of institutional bias and prejudicial discrimination, the evidence reviewed here suggests that, in practice, it can play a small but measurable part in improving the status quo. This is not an argument for algorithmic absolutism or blind faith in the power of statistics. If we find in some instances that algorithms have an unacceptably high degree of bias in comparison with current decision-making processes, then there is no harm done by following the evidence and maintaining the existing paradigm. But a commitment to following the evidence cuts both ways, and we should to be willing to accept that — in some instances — algorithms will be part of the solution for reducing institutional biases. So the next time you read a headline about the perils of algorithmic bias, remember to look in the mirror and recall that the perils of human bias are likely even worse."

    Source: https://hbr.org/2018/07/want-less-biased-decisions-use-algorithms

  • Ernst Jünger

    I fail to see how associating nursing as a profession with women is bias. It's a logical outcome of the fact that women vastly outnumber men in the profession. And probably always will, because they are more biologically predisposed towards maternalism.

  • Jim Frans

    IMHO, I don't think this sense in my brain would be very useful in the future, since there'd be plenty of technologies that would help me with directions, except (perhaps) when I am in a place where those technologies could not reach me.

    Instead, I think it would be better if they made the same technology that could enhance my sense of time. It'd be great if I could always know how long I've been doing a certain activity, so I could always control my sense of time.

  • Yadisf Haddad

    Could you please link to the scientific paper with Mrs. Bryson's research, and also the bias test? Really interesting topic; it relates to the research I'm currently procrastinating on with this video.
