Britta Hilt

Is AI Too "Spooky" to be Trusted? - Explainable AI with semantic networks at ZF, Knauf & Co.


Tuesday, Oct 4, 2022


On demand



Can you trust "artificial brains" and, thus, "artificial" decisions or recommendations? Well-known deep learning with neural networks is normally a black box: people cannot understand why the AI decided the way it did. Therefore, new AI methods are on the rise: deep learning with semantic networks. This explainable AI lets subject matter experts understand, and thus trust, AI. This presentation shows case studies from discrete manufacturing (e.g., ZF) and the process industry (e.g., Knauf).

Ready to attend?

Register now! Join your peers.
