would like to cross. But these kinds of ‘unwritten rules’ in human-to-human communication are not easy to transfer to AI systems.

The way humans and cobots work together on the workfloor can take the form of the human demonstrating how something is done and the cobot learning from it so that it can then perform a particular action perfectly. In a single, repetitive process it may be that the human worker will only have to show the robot how to do something a few times and the robot will then take it from there. But in more complex situations, the cobot may always need a human workmate on hand to give it instructions and show it how to do things. One example of this would be collecting waste in a city. It can be a complex business distinguishing what is waste and what isn’t. It’s also hard to know how to react if someone waves to the waste truck driver and then runs up behind with a bag of waste to be picked up. A robot would not know how to respond, whereas a human knows that the friendliest thing to do is wait. This means that in some situations, humans and robots will always have to work together, with the robot taking on the heavier work and its human workmate having more time for interaction with other people and knowing how to respond to unexpected situations.
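To make the ‘demonstrate, then repeat’ idea concrete, the sketch below records the joint positions of an arm while a worker physically guides it, then replays the motion. It is only an illustration: the arm object and its read_joints() and move_to() methods are hypothetical stand-ins for whatever control interface a real cobot exposes.

```python
import time

class DemonstrationRecorder:
    """Record a hand-guided demonstration and replay it.

    The `arm` object and its read_joints()/move_to() methods are
    hypothetical placeholders for a real cobot's control interface.
    """

    def __init__(self, arm, sample_rate_hz=10):
        self.arm = arm
        self.period = 1.0 / sample_rate_hz
        self.trajectory = []

    def record(self, duration_s):
        # Sample joint positions while the worker guides the arm by hand.
        end = time.time() + duration_s
        while time.time() < end:
            self.trajectory.append(self.arm.read_joints())
            time.sleep(self.period)

    def replay(self):
        # Reproduce the demonstrated motion point by point.
        for joints in self.trajectory:
            self.arm.move_to(joints)
            time.sleep(self.period)

# Typical use: record a 10-second hand-guided demonstration, then repeat it.
# recorder = DemonstrationRecorder(arm)
# recorder.record(duration_s=10)
# recorder.replay()
```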

AI systems need to be tested regularly

As we have already said, the human always needs to understand how a robot arrives at a certain conclusion or action – and must always be able to make adjustments where necessary. Recent examples of problems with artificial systems have demonstrated exactly that. For example, there is the instance of the chatbot Tay, which began posting racist messages on Twitter after certain other Twitter users left politically incorrect posts. The chatbot had not been given any instructions to recognize these types of statements as being inappropriate.
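What such ‘instructions’ could look like in practice is sketched below: a crude gate that vets each incoming message before a learning chatbot is allowed to absorb it. The blocklist and the learn_from() call are purely illustrative assumptions, not Tay’s actual design; real systems would use trained moderation classifiers, but the principle of checking data before learning from it is the same.

```python
# Purely illustrative: vet user messages before an online-learning chatbot
# incorporates them. The blocklist and the model's learn_from() method are
# hypothetical, not Tay's actual implementation.
BLOCKLIST = {"offensive phrase", "another flagged phrase"}

def is_acceptable(message: str) -> bool:
    # Reject any message containing a flagged phrase.
    text = message.lower()
    return not any(term in text for term in BLOCKLIST)

def maybe_learn(model, message: str) -> None:
    if is_acceptable(message):
        model.learn_from(message)   # hypothetical online-learning step
    else:
        print("Message rejected; not used for training.")
```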

‘Norman’ also made the news in 2018. Norman is an AI system that displayed psychopathic characteristics when doing a well-known test with Rorschach inkblots. It happened because Norman had previously been shown mainly sensational and violent images from Reddit and he had built up a picture of the world based on those images. MIT researchers wanted to use the experiment to demonstrate the danger of ‘false data’ being used as input for AI systems.
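The mechanism behind Norman can be shown with a toy experiment: train two identical text classifiers on differently skewed caption sets and ask both to score the same ambiguous caption. Everything in the sketch is invented for illustration (the captions, the labels and the test sentence), and the model is an ordinary scikit-learn classifier, not MIT’s actual system.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy captions standing in for training data.
neutral = ["a cake on a table", "flowers in a vase",
           "a bird on a branch", "two people shaking hands"]
violent = ["a man struck by a car", "a person falling from a height",
           "an explosion near a building", "a man struck down in the street"]

# Model A sees mostly neutral captions, model B mostly violent ones.
model_a = make_pipeline(CountVectorizer(), MultinomialNB()).fit(
    neutral + violent[:1], ["neutral"] * 4 + ["violent"])
model_b = make_pipeline(CountVectorizer(), MultinomialNB()).fit(
    violent + neutral[:1], ["violent"] * 4 + ["neutral"])

# The model trained on mostly violent captions gives the same ambiguous
# caption a noticeably higher probability of being 'violent'.
ambiguous = ["a man near a car on the street"]
for name, model in [("mostly neutral data", model_a),
                    ("mostly violent data", model_b)]:
    idx = list(model.classes_).index("violent")
    print(f"{name}: P(violent) = {model.predict_proba(ambiguous)[0][idx]:.2f}")
```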

And finally, there is also the example of the COMPAS algorithm that was used by the judicial system in America to make predictions about the recidivism of convicts. What happened? Based on the historical data used as input for the algorithm, it reached the conclusion that black convicts were more likely to reoffend than white convicts.
Fig 2: The cobot developed for Audi can be operated using hand gestures and has a screen in the form of a face to communicate with the worker.

Fig 3: This is Norman, the ‘psychopathic’ AI algorithm that MIT scientists trained to demonstrate the danger of AI when ‘false’ data is used as input (copyright “Thunderbrush on Fiverr”, https://www.fiverr.com/thunderbrush). At right is one of the Rorschach inkblot tests that Norman was given to look at. In it, he saw a man knocked down and killed by a speeding car, whereas standard AI systems see it as a close-up of a wedding cake on a table.
