
Fig 2: Should we make digital twins of the factory in the cloud to achieve reliable communication between humans and machines? Although this ‘dictator model’ seems an ideal solution for dealing with complex situations on the factory floor, there are two caveats: competitors working in the same factory don’t want to share data, and a human employee needs to be able to intervene.

competitors). In such a context, protection of data, privacy and information is enormously important – which does not fit the ‘dictator model’ scenario, where the central brain must have access to all possible types of data (including competitively sensitive data) to do its job properly. For many business leaders, having to share that data would be the ultimate nightmare. And the second caveat? Human unpredictability! Even if we can operate a factory in which the commercial interests of only one party are involved, the centrally controlled scenario falls apart as soon as one person walks around the factory: a person with their own autonomy and authority. Imagine, for example, that the human employee (the ‘creative architect’, as we labeled them earlier) notices that a robot is doing something wrong and steps in to rectify the fault… At that moment, the whole system would come to a standstill, as the virtual brain would have lost all control.

Hence, this model might only be valid for industrial facilities that focus on the production of bulk goods, where the role of humans is minimal (or – in the long run – perhaps even non-existent).

A new form of artificial intelligence: complex reasoning

In other words: whenever man and machine do have to work together, we will need different methods to cater for human unpredictability, and to ensure that robots can anticipate it. “A particularly promising principle is that of ‘complex reasoning’ – a new form of artificial intelligence that can be used to teach machines how to reason autonomously and to anticipate the actions of something (or someone) else. However, there is still a long way to go before we can put the principle of complex reasoning into practice.” After all, artificial intelligence as we know it today is based on ‘deep learning’ – a powerful technology for recognizing patterns in huge amounts of data. Now that we have mastered this technology, the goal is to take the next step and have machines ask themselves the question: “How do my actions affect the actions of the people around me?”

To make things even more complicated, there is an extra consideration to throw into the mix: in an industrial setting, the foremost requirement is transparency (to make sure production targets can be met). Deep learning, however, is the opposite of transparent – a ‘black box’: you train the system to recognize patterns, but you lose control over how it reaches its conclusions. Hence, an extra requirement for complex reasoning is that it must be sufficiently transparent (or ‘explainable’) for people to accept it, meaning that in the future we will be talking about ‘explainable AI’.
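To make the ‘black box’ versus ‘explainable’ contrast concrete, here is a minimal sketch – not from the article, and using scikit-learn on a toy dataset purely as an assumption – in which the same classification task is solved once with an opaque neural network and once with a shallow decision tree whose decision rules can be printed and audited:

```python
# Illustrative sketch only (not the article's system): contrasting a "black box"
# classifier with a model whose decision process can be inspected and explained.
# Assumes scikit-learn is installed; dataset and model choices are arbitrary.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# "Black box": a neural network learns the patterns, but its internal weights
# do not tell a human operator *why* a particular decision was made.
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
black_box.fit(X_train, y_train)
print("black-box accuracy:", black_box.score(X_test, y_test))

# "Explainable": a shallow decision tree trades some flexibility for a decision
# path that can be printed and audited rule by rule.
explainable = DecisionTreeClassifier(max_depth=3, random_state=0)
explainable.fit(X_train, y_train)
print("tree accuracy:", explainable.score(X_test, y_test))
print(export_text(explainable, feature_names=list(data.feature_names)))
```

Explainable AI in an industrial setting goes well beyond decision trees, of course; the sketch only illustrates the kind of transparency – a decision process a human can follow step by step – that the article argues complex reasoning will need.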
