Climate change is a systemic risk that affects all business sectors. It increases uncertainty and investment risk and endangers entire business models. Professional investors and asset managers are taking climate change into account more and more, and corporates are starting to quantify climate risk in their mid- and long-term strategies. Predictive analytics turns out to be key to making quantified assessments in mostly unexplored terrain: How do extreme weather risks affect a firm's production sites and physical assets? How does upcoming carbon taxation affect the company now and in the future? And how vulnerable is the company's global supply chain to business interruption risks? This deep dive explores methods and tools for climate risk quantification.
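To give a flavor of what such a quantified assessment can look like, here is a minimal sketch of carbon-cost exposure under assumed price scenarios; the emission figure and price paths are purely illustrative assumptions, not material from the talk.

```python
# Minimal sketch: carbon-tax exposure under assumed price scenarios.
# All numbers below are illustrative assumptions, not data from the talk.

annual_emissions_tco2e = 120_000           # assumed Scope 1+2 emissions
price_scenarios = {                        # assumed carbon price paths (EUR/tCO2e)
    "low":  {2025: 45, 2030: 80},
    "high": {2025: 60, 2030: 150},
}

for scenario, path in price_scenarios.items():
    for year, price in path.items():
        cost = annual_emissions_tco2e * price
        print(f"{scenario:4s} {year}: {cost / 1e6:6.1f} M EUR carbon-cost exposure")
```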

Time series describe many business, technical and industrial processes; they can be generated by different devices and can characterize consumer behaviour, e.g. energy consumption. Time series-based predictive analytics is therefore an important part of decision-making processes across industrial domains. Bohdan is going to consider different use cases, including energy consumption, which depends on many exogenous factors such as weather, building characteristics, etc., and different approaches: linear models, Bayesian inference and machine learning. These approaches can be applied to a wide range of business and technical problems. Probabilistic models based on Bayesian inference can take expert opinion into account via prior distributions for parameters and can be used for different kinds of risk assessments. Multilevel predictive ensembles based on bagging and stacking will also be covered. Finally, Bohdan is going to consider a Q-learning approach, which can be used in many industrial problems where sequences of optimal decisions are needed.
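To make the idea of encoding expert opinion via priors concrete, here is a minimal sketch of a conjugate Bayesian linear regression for an energy-consumption model; the feature, prior values and simulated data are illustrative assumptions, not material from the talk.

```python
import numpy as np

# Minimal sketch: Bayesian linear regression with a conjugate Gaussian prior.
# An expert belief (e.g. "consumption rises ~2 kWh per degree-day") enters
# through the prior mean m0; all numbers here are illustrative assumptions.

rng = np.random.default_rng(0)
n = 50
degree_days = rng.uniform(0, 20, n)                   # exogenous weather factor
X = np.column_stack([np.ones(n), degree_days])        # intercept + feature
y = 5.0 + 2.2 * degree_days + rng.normal(0, 1.5, n)   # simulated consumption

sigma2 = 1.5 ** 2                  # assumed known noise variance
m0 = np.array([4.0, 2.0])          # expert prior means (baseline, slope)
S0 = np.diag([4.0, 0.25])          # prior covariance: slope belief is firm

# Standard conjugate update (e.g. Bishop, PRML, eqs. 3.50-3.51):
#   S_n = (S0^-1 + X^T X / sigma^2)^-1
#   m_n = S_n (S0^-1 m0 + X^T y / sigma^2)
S0_inv = np.linalg.inv(S0)
Sn = np.linalg.inv(S0_inv + X.T @ X / sigma2)
mn = Sn @ (S0_inv @ m0 + X.T @ y / sigma2)

print("posterior mean (intercept, slope):", np.round(mn, 2))
print("posterior std devs:", np.round(np.sqrt(np.diag(Sn)), 3))
```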

In this presentation, Terry will present a unique method for accessing and processing industrial data to optimize asset and process control using statistical modeling and visualization on the Edge. Specifically, he will show how to securely stream data from a simulated wastewater treatment plant flow loop in order to model a control-valve failure condition called “stiction”, and how to predict the presence or absence of stiction using linear regression. In addition, Terry will discuss best practices of asset and process control using scored models and visualizations on the Edge. The key takeaways of this deep dive are: remote Data Science teams can, and should, leverage innovative cybersecurity tools to access industrial data in near real time via machine-to-cloud and machine-to-edge architectures for predictive and explanatory analytics; industrial Data Science teams should contribute to process and asset control via predictive and explanatory analytics; and Edge analytics should incorporate the best practices of statistical modeling and visualization development to define industrial operations procedures.
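As a hedged illustration of the modeling step (not Terry's actual pipeline), the sketch below fits a linear regression to simulated flow-loop features and thresholds the score to flag likely stiction; the feature names and the 0.5 threshold are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Minimal sketch: score a control valve for "stiction" with linear regression.
# Features and labels are simulated; in the talk the data would come from a
# securely streamed wastewater-treatment flow loop. Names are illustrative.

rng = np.random.default_rng(1)
n = 200
valve_travel = rng.uniform(0, 100, n)               # commanded valve position (%)
flow_error = rng.normal(0, 1, n)                    # setpoint-vs-measured flow gap
sticky = (np.abs(flow_error) > 1.2).astype(float)   # crude simulated label

X = np.column_stack([valve_travel, np.abs(flow_error)])
model = LinearRegression().fit(X, sticky)

scores = model.predict(X)      # continuous stiction score
flags = scores > 0.5           # assumed alert threshold
print(f"flagged {flags.sum()} of {n} cycles as possible stiction")
```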

There has been an exponential increase in the number of electronic sensors, which now drive every industry, from manufacturing to aerospace. These sensors produce vast amounts of data, but they are also notorious for their jitter, which often results in false alarms that can be troublesome for plant operators. In this session, Rohit will walk through the traditional methods and then take a deep dive into a state-of-the-art approach: auto-encoders. Auto-encoders are a powerful product of deep neural networks; used correctly, they have the potential to save substantial cost on the asset.
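For orientation, here is a minimal sketch of the kind of auto-encoder Rohit refers to, built with Keras (our framework choice; the talk does not name one): it learns to reconstruct normal sensor windows, and a high reconstruction error flags a reading as a likely genuine anomaly rather than jitter. All sizes and thresholds are assumptions.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Minimal sketch: dense auto-encoder for sensor anomaly detection.
# Framework choice (Keras) and all sizes/thresholds are assumptions.

rng = np.random.default_rng(2)
X_normal = rng.normal(0, 1, (1000, 16)).astype("float32")  # simulated windows

autoencoder = keras.Sequential([
    layers.Input(shape=(16,)),
    layers.Dense(8, activation="relu"),   # encoder
    layers.Dense(3, activation="relu"),   # bottleneck
    layers.Dense(8, activation="relu"),   # decoder
    layers.Dense(16),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_normal, X_normal, epochs=20, batch_size=32, verbose=0)

# Reconstruction error on new data; anything far above the training-error
# distribution is flagged as a genuine anomaly rather than sensor jitter.
X_new = rng.normal(0, 1, (100, 16)).astype("float32")
errors = np.mean((autoencoder.predict(X_new, verbose=0) - X_new) ** 2, axis=1)
threshold = np.percentile(errors, 95)     # assumed alerting threshold
print(f"{np.sum(errors > threshold)} windows flagged above threshold")
```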

The potential of machine learning in industrial applications is huge. But a big challenge in many projects is the availability of sufficient training data. Often the large number of product variants means that only a handful of examples are available for a given variant. Sometimes the parameters of interest are seldom changed, which reduces the amount of information in the training data. To make the most of such data we need to include process knowledge, and a natural framework for this is Bayesian networks. Guided by examples, Maksim will explain the basic principles of how to encode process knowledge in a Bayesian network, how such a network is trained, and how it helps you make accurate predictions or optimize an industrial process.
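To illustrate the encoding step, here is a minimal sketch using the pgmpy library (our choice, not necessarily Maksim's): a two-node network whose conditional probability table captures the process knowledge that a worn tool raises the defect rate. All probabilities are illustrative assumptions, and the network class has been renamed across pgmpy versions.

```python
from pgmpy.models import BayesianNetwork   # BayesianModel in older pgmpy
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Minimal sketch: encode process knowledge "tool wear raises defect rate"
# as a two-node Bayesian network. All probabilities are illustrative.

model = BayesianNetwork([("ToolWear", "Defect")])

cpd_wear = TabularCPD("ToolWear", 2, [[0.8], [0.2]])   # P(ok)=0.8, P(worn)=0.2
cpd_defect = TabularCPD(
    "Defect", 2,
    [[0.98, 0.70],   # P(no defect | ok), P(no defect | worn)
     [0.02, 0.30]],  # P(defect | ok),    P(defect | worn)
    evidence=["ToolWear"], evidence_card=[2],
)
model.add_cpds(cpd_wear, cpd_defect)
assert model.check_model()

# Query: probability of a defect once we observe a worn tool.
infer = VariableElimination(model)
print(infer.query(["Defect"], evidence={"ToolWear": 1}))
```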

Delivering ad-hoc insights from data and conducting data science proof-of-concepts have become common practice in organizations across industries. In contrast, sustaining value by scaling such initiatives through a strategic approach remains a challenge. Julia and Michael present their point of view on three crucial components for the successful implementation of analytics work: cross-functional data science capabilities that combine functional and technical expertise with the right technical stack; user-centric design to gain acceptance and adoption of analytics solutions; and an operating model that facilitates the transition towards an industrialized approach for scaling analytics solutions. They will discuss these factors by means of real examples, focusing on the right stack for supply chain, operations and mobility capabilities.

Many manufacturing companies are not yet very data-driven but are willing to use their data to optimize production. In that case, one of the most challenging tasks for data experts is to establish an understanding, from top management down to all employees, of the prerequisites for working with data. Managers often tend to think in terms of tools, assuming a well-designed front end delivers the analytics solution, and ignore that proven data quality and data reliability are key. In this session, Sandra explains some of the actions that have triggered a steady change in thinking at REHAU (thinking in solutions instead of tools). Developing a useful data-driven solution for the manufacturing industry requires strong collaboration between industry, technical and data experts. Right from the beginning, all important stakeholders have to be involved and highly motivated, even alongside their daily business. Sandra will explain the challenges they faced within interdisciplinary teams, and how they successfully integrated the different experts and created efficient communication channels to avoid misunderstandings. The last part of the session focuses on the steps to follow to develop long-term, scalable data-driven solutions. Even though REHAU still has some way to go, Sandra will share her learnings on setting up a sustainable data strategy and on building a shared understanding among stakeholders who inevitably focus on different targets.
