
Best practices for enabling industrial digital transformation: Jacques Durand, IIC

What works is not just defined by the improvements sought in one area, but also by assessing and minimizing undesirable side effects

Pradeep Chakraborty

The Industrial Internet Consortium (IIC), the world’s leading organization transforming business and society by accelerating the adoption of the Industrial Internet of Things (IIoT), announced the publication of the Enabling Digital Transformation with IoT Performance and Properties Measurement White Paper.


Written for all IIoT stakeholders, the white paper explains why measuring the efficiency and efficacy of industrial digital transformation solutions across industries is essential.

Jacques Durand is technical director, IoT technologies, at Fujitsu America Inc., and a member of the steering committee of the Industrial Internet Consortium (IIC). Here, he discusses the white paper. Excerpts from an interview:

DQ: How essential is measuring the efficiency and efficacy of industrial digital transformation (DX) solutions across industries today?


Jacques Durand: KPIs and performance measurements are not new, but a culture of measurement becomes essential for DX solutions. In many cases, DX is about transformations taking place in an existing industrial system that already has its established processes. A proposed upgrade is in the challenger position: there will be uncertainty about the value, concerns about risks, skepticism and resistance. A consensus between the different stakeholders involved in a DX solution can only be reached on the basis of measurable, agreed-upon objectives.


Assessing “efficiency” has several dimensions: while improving the performance of a particular operation or the quality of its output (speed, productivity, throughput, error and defect rate, etc.) may be at the forefront, other aspects need to be measured as well. These aspects have their own objectives: non-functional properties such as security, safety, robustness and flexibility. These properties are harder to evaluate and take more time. Finally, the impact (or “side effects”) of a solution on other operations must be assessed as well. Only then can we assess, in a well-rounded (as well as consensual) manner, the value and viability of a solution.


There is an investigation side to deploying DX. The case for digital transformation is rarely clear from the start. Digital solutions rely on recent or emerging technologies that may or may not work for the situation at hand. A problem to be resolved may be agreed upon, but its causes may remain obscure.

An investigation may be necessary to understand these causes (the Kaizen process in Japan). Collecting IoT data on various parameters of a process or product to improve, including its context (e.g. who is operating this machine, what is the inventory of parts to be used), and then correlating these with actual performance and quality variations has been observed to speed up this research phase.
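To make that investigation step concrete, here is a minimal, hypothetical Python sketch: it groups collected readings by one context parameter (the operator, in this example) and compares defect rates to surface a possible correlation. All field names and values are illustrative, not taken from the white paper.

```python
# Hypothetical sketch of the "investigation" step: correlate collected
# IoT/context data with observed quality variations. Field names
# (operator, spindle_temp_c, defective) are illustrative only.
from collections import defaultdict

readings = [
    {"operator": "A", "spindle_temp_c": 61.2, "defective": False},
    {"operator": "A", "spindle_temp_c": 63.0, "defective": False},
    {"operator": "B", "spindle_temp_c": 72.5, "defective": True},
    {"operator": "B", "spindle_temp_c": 70.1, "defective": False},
    {"operator": "B", "spindle_temp_c": 74.8, "defective": True},
]

# Group defect counts by a context parameter (here, the operator).
totals, defects = defaultdict(int), defaultdict(int)
for r in readings:
    totals[r["operator"]] += 1
    defects[r["operator"]] += int(r["defective"])

for op in sorted(totals):
    rate = defects[op] / totals[op]
    print(f"operator {op}: defect rate {rate:.0%} over {totals[op]} parts")
```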

In several instances, a solution or technology has been used with success elsewhere by other users. Even if a solution is replicated, many parameters and requirements differ from one implementation to another. We have observed in the IIC that one size does not fit all: every case is different and needs to be adjusted, based on measurements and targets specific to that case.


DQ: How can one objectively assess what works and what does not?

Jacques Durand: That is where metrics and targets need to be precisely defined. These are two different but complementary things. You may set a target of 98% service uptime per month, but for this to be meaningful you need to define precisely what downtime is: do we count as downtime a case where the cause of disruption is external and not the responsibility of the provider?

Does a service degraded beyond some threshold (e.g. an acceptable response-time limit) qualify as downtime? Should downtime not be counted during scheduled maintenance periods? Significant differences exist between providers of the same type of service based on such details.
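As an illustration of why the counting rules matter, here is a minimal Python sketch of a monthly uptime calculation under one possible set of rules (exclude scheduled maintenance and externally caused outages, count badly degraded response time as downtime). The rules, fields and figures are assumptions for illustration only, not the white paper's definitions.

```python
# Minimal sketch: the "98% uptime" target only becomes meaningful once the
# counting rules are fixed. The interval fields and the specific rules below
# are assumptions for illustration.
MONTH_MINUTES = 30 * 24 * 60
RESPONSE_LIMIT_MS = 500          # degraded beyond this counts as downtime
TARGET_UPTIME = 0.98

incidents = [
    # (duration_min, scheduled_maintenance, external_cause, worst_response_ms)
    (120, True,  False, None),   # planned maintenance window
    (45,  False, True,  None),   # upstream network outage, not provider's fault
    (30,  False, False, None),   # hard outage
    (60,  False, False, 900),    # service up but badly degraded
]

downtime = 0
for duration, scheduled, external, response_ms in incidents:
    if scheduled or external:
        continue                 # excluded by the agreed metric definition
    if response_ms is None or response_ms > RESPONSE_LIMIT_MS:
        downtime += duration

uptime = 1 - downtime / MONTH_MINUTES
print(f"measured uptime: {uptime:.3%}, target met: {uptime >= TARGET_UPTIME}")
```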


The definition of a metric includes more than the quantity to measure. It includes all the modalities and the precise conditions of the measurements to be made. Only then is defining targets for these metrics meaningful.

The process of defining these metrics and their targets is not incidental: it leads to deeper thinking about what people really want and value. Without this, some stakeholders will always see the glass half empty and others will see it half full.

Only after defining the metrics that capture the different aspects of a solution, and their targets, can all parties (stakeholders) interested in a solution get a common, unambiguous understanding of their goals and of what is defined as “success.”
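One way to picture that shared understanding, as a hypothetical sketch: record each metric together with its unit, its measurement conditions and its target, so every stakeholder evaluates "success" against the same definition. The metric names and figures below are illustrative.

```python
# Hypothetical metric registry: each metric carries not just a quantity and a
# target but the conditions under which it is measured. All fields illustrative.
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    name: str
    unit: str
    conditions: str      # how, where and when the measurement is taken
    target: float
    higher_is_better: bool = True

    def met(self, measured: float) -> bool:
        return measured >= self.target if self.higher_is_better else measured <= self.target

metrics = [
    MetricDefinition("monthly uptime", "%", "excl. scheduled maintenance, per calendar month", 98.0),
    MetricDefinition("defect rate", "%", "per 1,000 finished parts, final inspection", 0.5, higher_is_better=False),
]

observed = {"monthly uptime": 99.1, "defect rate": 0.7}
for m in metrics:
    value = observed[m.name]
    print(f"{m.name}: {value}{m.unit} (target {m.target}{m.unit}) -> {'met' if m.met(value) else 'missed'}")
```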


Finally, “what works” is not just defined by the improvements sought in one area, but also by assessing and minimizing undesirable side effects. Is the introduction of a new technology or of a process change creating trouble in other parts of the system? Will the proposed solution still prove valuable over time, beyond a successful initial demonstration, after it is deployed in real conditions? This kind of assessment relies on measurements made over time after the solution is deployed.

DQ: How do you determine the best practices for developing and deploying IIoT solutions?

Jacques Durand: We believe that this question has two facets. We assume here that IIoT is understood as an essential part of digital transformation:


(a) Best practices for technologies supportive of IIoT and, more generally, DX. These are often emerging technologies that are mature enough to be deployed (such as AI, real-time analytics, digital twins, and time-sensitive networks), although still evolving. Yet, they still lack best practices for their usage in industrial or business conditions.

(b) Best practices for developing IIoT systems. Systems supportive of digital transformation come in many shapes and forms. Different requirements dictate very different architectures, even if they involve similar technologies. The IIC has identified some architecture patterns for IIoT systems, and is now attempting to associate best practices with these by developing a tool (“project explorer”) that captures the profile of an IIoT solution by collecting requirements along a set of indicators, such as the expected type and volume of data, the characteristics of the physical assets to be connected, or the networking constraints. These indicators require their own measures to be established (they are useful to assess the initial profile of a solution, but also to track how the solution evolves over time after deployment); a hypothetical sketch of such a profile follows.
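The tool itself is only described at a high level here, so the following is a hypothetical Python sketch of what a solution profile captured along such indicators could look like. The indicator names and values are illustrative, not the tool's actual schema.

```python
# Hypothetical solution profile along a few requirement indicators
# (data, physical assets, networking). All names and values are illustrative.
solution_profile = {
    "data": {"type": "time-series sensor readings", "volume_gb_per_day": 40},
    "physical_assets": {"count": 250, "kind": "CNC machines", "retrofit_needed": True},
    "networking": {"constraint": "intermittent factory Wi-Fi", "latency_budget_ms": 200},
}

def summarize(profile: dict) -> None:
    """Print the profile so it can be compared against architecture patterns
    and re-assessed as the solution evolves after deployment."""
    for indicator, values in profile.items():
        details = ", ".join(f"{k}={v}" for k, v in values.items())
        print(f"{indicator}: {details}")

summarize(solution_profile)
```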

Establishing best practices for both of these IIoT aspects was a primary motivation behind the testbed program in IIC. In short, closely monitoring a well-rounded set of pilot projects is in our view the best way to collect such best practices. Another source is the collection of case studies and use cases from IIC members and partners.

DQ: Will the process or product enhancements empower the digital transformation journey or create risks for it?

Jacques Durand: Risks come with an incomplete awareness or capture of the unexpected side effects and undesirable impacts of deploying a solution. Some trade-offs will need to be managed. The operational expectations that motivate DX are relatively easy to formulate: performance, throughput, productivity, cost reduction, response and lead times, defect or error rates.

Yet an understanding of adverse effects is key to the long-term viability of a solution: are there undesirable side effects, unexpected operating costs, overhead, disruption, rigidity and fragility of a process, or other risks tied to personnel (skills needed to operate or maintain a sophisticated system, loss of human expertise with automation)?

Establishing the right metrics to capture both positive and negative aspects is crucial to controlling the impact of DX choices in a complex environment.

Also, the context of DX solutions evolves over time. Requirements, objectives and constraints may change. Not recognizing this is another source of risk. Measurements are needed to validate performance over time. Continuous assessment (outcome evaluation) of a solution is expected.

Finally, the non-functional properties of a solution deserve attention up front: scalability, manageability and, more importantly, those defining the notion of “trustworthiness,” for which the notion of risk has been well defined: security, safety, reliability, resilience and privacy. Objectives for these have to be stated and managed, and they may conflict with operational objectives. Again, trade-offs need to be managed (see: “managing trustworthiness in practice…”). Metrics and targets are an important tool to manage these trade-offs.
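A hypothetical sketch of how such trade-offs can be made visible: evaluate operational and trustworthiness targets side by side, so that a gain on one axis that causes a miss on another shows up explicitly. The metric names and figures are illustrative.

```python
# Hypothetical sketch: check operational and trustworthiness targets together,
# so conflicts between them surface as explicit trade-offs. Values illustrative.
targets = {
    "throughput_parts_per_hour": (120, "min"),   # operational
    "mean_time_to_recover_min":  (15,  "max"),   # resilience
    "unpatched_cve_count":       (0,   "max"),   # security
}

measured = {
    "throughput_parts_per_hour": 135,
    "mean_time_to_recover_min":  22,
    "unpatched_cve_count":       0,
}

for name, (target, direction) in targets.items():
    value = measured[name]
    ok = value >= target if direction == "min" else value <= target
    print(f"{name}: {value} (target {direction} {target}) -> {'ok' if ok else 'MISSED'}")
```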

DQ: What are the gains in efficiency or efficacy, and are they worth the investment costs and process changes?

Jacques Durand: The gains expected for a DX solution include well-known business indicators or KPIs. However, in an industrial context such as manufacturing, DX solutions are often about improving existing operations, and are expressed and judged in operational terms: speed, lead time, productivity, throughput, personnel error rate, product quality and defect rate, etc.

Because operations personnel are the first line of experts to make or break the success of a solution, these operational goals deserve a lot of attention, more than financial indicators, which often come too late in the process to inform agile decisions.

Again, as mentioned above, it is essential to track both the positive factors in a solution (the improvements we seek) and the negative aspects (costs and risks of all kinds, related to process changes). These effects are harder to assess and may need to be measured over a longer period of time.

Also, non-functional properties and their objectives need to be documented and tentatively measured (metrics, targets), although that is harder and often requires a longer period to assess.

DQ: What are the metrics for performance as digital transformation solutions evolve in complexity and interdependency?

Jacques Durand: System complexity and interdependency are managed with an architecture based on sub-systems and the use of services, often shared and managed by third parties. The contractual aspect between parties or between components/services – whether technical or business-oriented, such as SLAs – becomes important.

When a service depends on underlying services, committing to its quality is only possible if there is a clear understanding of the quality of the underlying services. We see IoT solutions relying on a third-party data collection system usable by all kinds of smart city applications: for example, smart cities where all monitoring data about traffic, road conditions, parking and transportation schedules is collected through a separate platform and offered as a service – a data marketplace – to an open-ended set of applications that provide services consumed by end users (citizens and city agents).

So, the metrics for performance will not change much on the end-user side, but metrics for subcomponents and underlying services become important to establish these contracts, whether internal to a system (to control the behavior of components) or commercial, for example to obtain some assurance on the availability of a cloud service.
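To illustrate why the metrics of underlying services matter, here is a rough sketch that estimates the best achievable end-to-end availability of a service composed of serial dependencies, assuming independent failures. The service names and figures are invented for the example.

```python
# Sketch of composing availability: if a smart-city application depends
# serially on underlying services, its best achievable availability is
# (roughly, assuming independent failures) the product of theirs.
# Service names and figures are illustrative.
underlying_slas = {
    "data marketplace platform": 0.995,
    "cloud hosting":             0.999,
    "cellular connectivity":     0.990,
}

composite = 1.0
for service, availability in underlying_slas.items():
    composite *= availability

print(f"best achievable end-to-end availability: {composite:.3%}")
# ~98.4% here, so committing 99.5% to end users would be unrealistic without
# stronger guarantees (or redundancy) on the underlying services.
```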

DQ: What happens after a deployment is changed, or a new, better solution replaces the existing one?

Jacques Durand: Changes are expected over a solution's lifecycle. The new solution needs to be monitored and evaluated. The metrics used for the initial solution can serve as a reference, although they (or their targets) will need to be adjusted. It is important to monitor the adverse effects of a solution upgrade and to ensure they are acceptable. Only then can the solution be said to be “better.”

As a solution always evolves over time (or its context changes, thus requiring an evolution), it is useful to keep a record of the solution's performance in relation to the context of the time. A record of adverse effects (or of the absence of these) should be kept as well, so that the personnel in charge of deploying the new solution know what to pay attention to. Again, metrics – both their definitions and their monitoring records – are very helpful to keep track of this history.
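A hypothetical sketch of such record keeping: store dated metric values together with the solution version, the context of the time and any adverse effects observed, so the team deploying the next version knows what to watch. All fields and values are illustrative.

```python
# Hypothetical performance/adverse-effect history across solution versions.
# Field names and values are illustrative only.
history = [
    {"date": "2019-03", "version": "v1", "monthly_uptime_pct": 97.8,
     "context": "single production line", "adverse_effects": ["operator training overhead"]},
    {"date": "2019-09", "version": "v2", "monthly_uptime_pct": 99.1,
     "context": "rolled out to three lines", "adverse_effects": []},
]

for record in history:
    effects = "; ".join(record["adverse_effects"]) or "none observed"
    print(f"{record['date']} ({record['version']}): uptime {record['monthly_uptime_pct']}%, "
          f"context: {record['context']}, adverse effects: {effects}")
```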
