
Controlling Supervised Industry 4.0 Processes through Logic Rules and Tensor Deformation Functions

Abstract

Industry 4.0 solutions are composed of autonomous engineered systems where heterogeneous agents act in a choreographed manner to create complex workflows. Agents work at low level in a flexible and independent manner, and their actions and behaviour may be only sparsely manipulated. Besides, agents such as humans tend to show a very dynamic behaviour, and processes may be executed in a very anarchic, but correct, way. Thus, innovative and more flexible control techniques are required. In this work, supervisory control techniques are employed to guarantee the correct execution of distributed and choreographed processes in Industry 4.0 scenarios. At prosumer level, processes are represented using soft models where logic rules and deformation indicators are used to analyse the correctness of executions. These logic rules are verified using specific engines at business level. These engines are fed with deformation metrics obtained through tensor deformation functions at production level. To apply the deformation functions, processes are represented as discrete flexible solids in a phase space, under external forces representing the variations in every task’s inputs. The proposed solution presents two main novelties and original contributions. On the one hand, the innovative use of soft models and deformation indicators allows the implementation of this control solution not only in traditional industrial scenarios where rigid procedures are followed, but also in other future engineered applications. On the other hand, the original integration of logic rules and events makes it possible to control any kind of device, including those which do not have an explicit control plane or interface. Finally, to evaluate the performance of the proposed solution, an experimental validation using a real pervasive computing infrastructure is carried out.

1 Introduction

Industry 4.0 (Lasi et al., 2014) refers to a new industrial revolution where current production solutions and robots are being replaced by Cyber-Physical Systems (Bordel et al., 2017b) and other innovative engineered systems such as pervasive computing or sensing infrastructures (Ebling and Want, 2017). Traditionally, in industrial scenarios, managers define, at a very high abstraction level, the processes to be executed and supported by production systems (Sánchez et al., 2016). Dashboards and other graphic environments are employed for this purpose. In these models, the actions to be performed at low level are indicated, together with their input parameters. Connections among tasks and their temporal organization are strict and cannot be modified. Moreover, valid ranges for task outputs may sometimes also be defined, as common industrial systems and robots are very predictable and precise (Bordel et al., 2018c). This top-down approach employs technologies such as YAWL (Van Der Aalst and Ter Hofstede, 2005) or BPMN (Geiger et al., 2018) to create and validate processes, and assumes that every low-level infrastructure follows a request-actuation design paradigm. In this paradigm, every low-level system is provided with an interface through which requests may be received. After each request, the hardware system performs a certain action or actuation and then stops its operation, waiting for a new request (Bordel et al., 2018b). These systems, then, are totally controllable by external agents and match perfectly the request sequences created through modelling technologies such as YAWL (Bordel et al., 2017a).

However, Industry 4.0 scenarios are different. Cyber-Physical Systems and pervasive computing infrastructures are not typically provided with open interfaces, and they tend to act autonomously according to deterministic algorithms or, even, learning technologies (Bordel et al., 2020b). Dense environments, where thousands of agents with heterogeneous capabilities are deployed and working together, are the most common application scenarios for Industry 4.0. This context, furthermore, gets even more complex if humans are considered (Bordel et al., 2017c). In fact, Industry 4.0 is inclusive, and many manufacturing companies produce handmade products where human labour is essential. Production systems including humans are even more heterogeneous, as people’s behaviour is very variable and dynamic. And, obviously, human agents are not externally controllable (Bordel et al., 2017c). Thus, in Industry 4.0, complex tasks and services are provided through the choreographed coordination of heterogeneous agents acting in an autonomous way (Bordel et al., 2018a). This bottom-up approach is also compatible with computational processes defined at high level, if decomposition and transformation engines are considered. However, it is a very costly and inefficient approach, as many false negative alarms are triggered: when autonomous agents are included, processes may be executed in a very anarchic but still correct manner.

Therefore, traditional industrial control mechanisms are not valid for these scenarios, and innovative technologies are needed (Bordel et al., 2020a). In this work, then, we propose a new supervisory control mechanism for Industry 4.0 scenarios, matching the special characteristics of distributed processes supported by coordinated autonomous and heterogeneous agents. This new technology defines (at prosumer level) processes using soft models, where flexible logic rules and general deformation metrics guarantee the correctness of executions. These rules are verified in a passive manner at business level using specific engines. These engines receive information (events) from lower layers and run a validation procedure to analyse whether minimum rules are being met by low-level agents. These events are enriched with information at production level, describing whether tasks are being correctly executed or not. Finally, at production level, the physical parameters of tasks under execution are monitored. Using tensor deformation functions (Bordel et al., 2017a), the global similarity between the expected executions and the real actions is measured. To allow these calculations, processes are represented as discrete flexible solids in a multidimensional phase space. This generalized approach is focused on reducing the false negative alarms observed in traditional control systems. All information is acquired through observation and recognition mechanisms, which are not described in this paper but already exist (Bordel et al., 2019).

The rest of the paper is organized as follows: Section 2 introduces the state of the art on control mechanisms for Industry 4.0 scenarios. Section 3 presents the proposed technology, including the three abstraction levels (prosumer, business, and production) and their associated mechanisms. Section 4 provides an experimental validation of the proposal. Finally, Sections 5 and 6 explain some results of this experimental validation and the conclusions of our work.

2 State of the Art

Industry 4.0 is one of the most popular research topics nowadays (Lu, 2017); thus, many different control solutions for these new scenarios have been reported. In fact, almost every existing control mechanism has already been applied and integrated into Industry 4.0 technologies and scenarios: from traditional monolithic instruments (Kretschmer et al., 2016) to the most modern intelligent algorithms (Meissner et al., 2017). Besides, a large catalogue of specific control solutions for Industry 4.0 scenarios has also been reported (Dolgui et al., 2019).

Two basic types of control solutions for Industry 4.0 have been described: supervisory control (Wonham and Cai, 2019) and embedded control (Aminifar, 2016) solutions. In supervisory control mechanisms, a surveillance system monitors the behaviour of hardware infrastructures and actuates and intervenes in the production process if it goes outside an acceptable variation margin. On the other hand, embedded control mechanisms are integrated into the production processes themselves and are continuously regulating the evolution and behaviour of the hardware platform.

One of the most popular supervisory control mechanisms in Industry 4.0 are SCADA (Supervisory Control And Data Acquisition) systems (Mohammad et al., 2019). SCADA solutions can manage heterogeneous infrastructures through complex and heavy modular software tools (Calderón Godoy and González Pérez, 2018). Traditionally, SCADA solutions were built as monolithic platforms where specific industrial protocols, such as OPC, were employed (Boyer, 2016). Initial applications for Industry 4.0 also followed this paradigm (Merchan et al., 2018). Nevertheless, new and different modules for slightly distributed SCADA solutions have recently been reported (Branger and Pang, 2015), and even cloud-based platforms may be found (Sajid et al., 2016). Furthermore, some SCADA mechanisms for industrial scenarios based on the Internet of Things have been described (Wollschlaeger et al., 2017). The main problem of SCADA systems is their security weaknesses: many different security problems of SCADA systems in the context of Industry 4.0 scenarios have been reported (Igure et al., 2006; Chhetri et al., 2017). Moreover, in largely distributed production systems, communication delays usually create complex malfunctions in SCADA control functions. Thus, different stability analyses (Foruzan et al., 2016), mathematical models to compensate delays (Silva et al., 2018) and evolution analyses to detect problems (Gu et al., 2019) have been reported. In any case, these problems are still present, and only distributed solutions including a small number of devices are working nowadays.

Other supervisory control solutions for future industrial scenarios based on autonomous agents have been described. These mechanisms are typically defined as discrete event systems (Wonham et al., 2018) and are focused on autonomous robots (Gonzalez et al., 2018) and similar mobile machines (Roszkowska, 2002), although other applications to real-time solutions (Sampath et al., 1995) and fault diagnosis (Moreira and Basilio, 2014) may be found. Logic rules have also been employed to implement robot navigation frameworks (Kloetzer and Mahulea, 2016), and different mechanisms to compute optimal (Fabre and Jezequel, 2009) or clean (Iqbal et al., 2012) paths in robotized Industry 4.0 scenarios have also been reported. Works in this area are also evaluating, in a formal way, the scalability (Hill and Lafortune, 2017) and software characteristics (Goryca and Hill, 2013) of supervisory software solutions for Industry 4.0.

Contrary to all these previous works, the problem and scenario addressed in this work are more general. First, we consider not only robots and similar devices but also pervasive infrastructures and humans, which introduces an important challenge. Besides, all previous supervisory control solutions assume there is an interface through which the surveillance system can intervene and act in the production process; however, in most future engineered solutions (and, of course, when humans are considered), that is not a realistic assumption.

Embedded control solutions can be classified into two different groups: vertical and horizontal architectures (Dolgui et al., 2019). Vertical architectures are, probably, the genuine approach for Industry 4.0 applications. In this approach, computational processes are transformed and decomposed (Ivanov et al., 2016a; Bagheri et al., 2015), so executable units may be transferred and delegated to remote production infrastructures or, even, cloud services (Bordel et al., 2018c). On the contrary, horizontal architectures are traditional embedded control paradigms adapted to Industry 4.0 (Lalwani et al., 2006). Feedback control systems are the most traditional approach. In these mechanisms, a complex production system is represented through a block diagram where different key indicators are calculated at each step (Disney et al., 2006). Feedback loops guarantee that, if any deviation is detected, that information is fed back to previous steps to correct the situation. Feedback control solutions for traditional linear production schemes (Bensoussan et al., 2009; Lin et al., 2018) are very popular, but additional proposals for new non-linear schemes (Spiegler and Naim, 2017; Zhang et al., 2019) may also be found. Furthermore, different studies about how randomness (Garcia et al., 2012), disturbances (Scholz-Reiter et al., 2011), and fluctuations (Yang and Fan, 2016) affect the global behaviour and performance of these feedback control mechanisms have been published. On the other hand, optimal control is the most common technique in horizontal control architectures for Industry 4.0 (Dolgui et al., 2019), and cloud production systems are the most common application for these technologies (Frazzon et al., 2018; Rossit et al., 2019). Optimal control is characterized by a process evolution that is not allowed to enter certain states or areas of the phase space.
With this view, solutions for optimal planning (Sokolov et al.2018) and efficient activity scheduling (Ivanov et al.2019) in Industry 4.0 may be found. As in previous topics, works on robustness and resilience analyses have also been reported (Aven, 2017; Ivanov et al.2016b).

The main problem of embedded control mechanisms in Industry 4.0 applications is that they require total access to every component in the industrial system, as control modules must be integrated in every component and all of them must be interconnected to generate the global expected behaviour. Contrary to these systems, the approach proposed in this paper is also valid for proprietary solutions (very common in industrial applications) which cannot be accessed or easily modified, as a single transversal supervisory component is able to support the whole control policy.

Finally, control mechanisms based on modern technologies such as artificial intelligence and fuzzy logic may be found (Diez-Olivan et al., 2019). Although fuzzy logic is not a recent technology (Nguyen et al., 2018), new solutions for industrial scenarios have recently been reported. These solutions are sparse, but some new proposals for nonlinear and event-driven processes may be found (Pan and Yang, 2017). In this context, case studies about real implementations are also a relevant contribution (Theorin et al., 2017; Golob and Bratina, 2018). These technologies are very powerful but are not flexible enough to deal with the anarchic executions caused by human behaviour, as learning algorithms need to observe beforehand every execution model that is to be accepted.

Table 1 summarizes the main advantages and disadvantages of the previously described works, and their differences with respect to the problem addressed in this paper.

Table 1

Systematized state of the art review.

Supervisory SCADA-based solutions.
  Main advantages: They do not require computationally heavy algorithms to perform control.
  Main disadvantages: Devices must be provided with a control interface. Stability problems appear in large and complex architectures.
  Improvements in the proposed solution: Autonomous devices without a control interface, as well as humans, can be controlled. A wider range of applications can be considered (not only classic industrial scenarios).

Discrete event systems.
  Main advantages: Autonomous agents are supported.
  Main disadvantages: Solutions are not flexible and are only compatible with one kind of agent (robots, vehicles, etc.).
  Improvements in the proposed solution: The solution can be applied to heterogeneous scenarios where several types of agents are present.

Logic-rule-supported systems.
  Main advantages: Autonomous agents are supported. Configurable control policies may be defined.
  Main disadvantages: They only show a good performance with rigid, typical industrial processes.
  Improvements in the proposed solution: Deformation metrics and soft models allow controlling flexible and “anarchic” processes.

Horizontal embedded control solutions.
  Main advantages: Complex systems can be easily controlled in a very precise manner.
  Main disadvantages (shared with vertical solutions): Devices must be provided with a fully functional control interface. Proprietary closed solutions are not compatible with these schemes.
  Improvements in the proposed solution (shared with vertical solutions): Autonomous devices without a control interface, as well as humans, can be controlled. Compatible with all kinds of devices. Any future engineered system is controlled without a transversal adaptation middleware.

Vertical embedded control solutions.
  Main advantages: Complex and distributed processes can be controlled through several decomposition and transformation phases.
  Main disadvantages and improvements: as for horizontal embedded control solutions.

Fuzzy logic and artificial-intelligence-based control.
  Main advantages: They can support very complex control schemes through powerful algorithms.
  Main disadvantages: These solutions cannot be flexible and precise at the same time. Typically, they are specifically designed for individual applications.
  Improvements in the proposed solution: The proposed solution can be deployed in a large catalogue of scenarios, as it balances pattern recognition techniques with other more traditional approaches.

3 New Supervisory Control Solution for Distributed Industry 4.0 Processes

This section describes the new proposal for supervisory control in Industry 4.0 scenarios, where processes are described using soft models, logic rules and deformation functions and metrics. Section 3.1 describes the global overview of the proposed solution. Section 3.2 presents the new soft models to represent processes at prosumer level. Section 3.3 analyses how logic rules (at business level) may be employed to verify in a flexible manner the process execution performed by autonomous agents and humans. And Section 3.4 describes proposed technologies for production level, where deformation functions and metrics are deployed to evaluate workflows and feed verification engines at business level.

3.1 General Overview

Figure 1 shows the proposed architecture for the described new control mechanism. At the highest level, managers define processes ① using any of the existing process description technologies, such as BPMN or YAWL. Industrial production processes are usually very large and complex, including many activities and tasks which may (or may not) be partially related. Besides, some technologies enable defining quality indicators for tasks’ outputs, so the activity execution may be rejected if those indicators are not met. Process definitions at this level (named the prosumer level, as managers act as producers and consumers in the proposed technology) are typically graphic, and only relations among tasks and quality measurements are described.

Fig. 1

Proposed architecture for new supervisory control solution in Industry 4.0 scenarios.


The process model based on standard technologies (hereinafter we consider the YAWL language) is then analysed and decomposed ② to soften and relax the model. The idea is to transform a hard or rigid process definition, where deformations and variations are not expressed (and, then, not accepted by the control solution), into a soft model ③ where global quality and organization restrictions are equivalent but dynamic changes, variations and deformations are admissible.

To perform this transformation, a specific engine ② analyses the YAWL process model and identifies the key branches or subprocesses whose exterior structure cannot be modified. For example, a subprocess generating a subproduct as output which is the input of a second subprocess must be executed strictly before the second subprocess. However, tasks inside both subprocesses could be executed in any order, and the same subproduct is created.

After this analysis and decomposition procedure, two different soft process descriptions are generated. The first one is a global description based on logic rules ③. To generate this description, each subprocess is represented by a discrete state, and the global state map is regulated by a set of logical rules extracted and instantiated from a logic predicate repository ④, where different general conditions are stored (temporal order, necessary conditions, etc.). Those logical rules represent the minimum restrictions that the production process must fulfill, while all additional restrictions artificially introduced by the limitations of graphic process representations are removed. The second one is a set of reports ⑤ where deformation metrics about each subprocess are indicated. Quality indicators about each task are considered together, and three global metrics for the whole subprocess are obtained: stiffness, strength and ductility.

The global soft description is transferred to a verification engine ⑥, where logical rules are validated (or not) in real time. This global understanding of the process takes place at business level, where information from managers (process models) and from autonomous hardware platforms is matched to determine whether production processes are being correctly executed or not. This verification engine takes as input two information sources: events directly generated by hardware devices ⑦ following an event-driven paradigm, and events generated ⑧ by control components in lower layers (the production layer). These events are employed to validate logical rules and determine whether the minimum conditions for considering the process execution valid are being met.

All reports about subprocesses ⑨ are transferred to a lower level (the production level), where low-level (physical) information is collected to evaluate whether the process deformation exceeds the proposed global metrics. To do that, information from different recognition technologies (human-oriented, device-oriented, etc.) is collected ⑩. These recognition technologies are not addressed in this paper, as any of the previously existing mechanisms may be integrated (Bordel et al., 2019; Bordel and Alcarria, 2017). All information sources are integrated ⑪ to perform a global evaluation of the process deformation. To obtain those global metrics, a mathematical algorithm ⑫ is employed. This algorithm is derived by modelling production processes as discrete flexible solids in a generalized phase space. The dynamic behaviour of autonomous devices is represented as external forces acting on the solid (process) and deforming it. Using different deformation functions (and the provided deformation metrics), it is evaluated whether the final process execution is similar enough to the model to be considered a valid execution or not. A decision-making engine ⑬ performs these functions. A report ⑭ and an event-like result (whose format may be adapted in an event generation module) are sent to the rule verification engine ⑥ (as one of the information sources described before).
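
The production-to-business flow described above can be sketched as a minimal event pipeline; all function names, event formats and thresholds below are illustrative assumptions, not part of the proposed architecture:

```python
# Minimal sketch of the information flow in Fig. 1: a production-level
# deformation check feeding an event to the business-level rule engine.
# Names, the event dictionary and the threshold are illustrative assumptions.

def deformation_event(measured_deformation, ductility_limit):
    """Production level: compare the measured deformation against the metric."""
    return {"type": "deformation", "valid": measured_deformation <= ductility_limit}

def verify(event, rules):
    """Business level: here a rule is simply a predicate over events."""
    return all(rule(event) for rule in rules)

rules = [lambda e: e["valid"]]   # minimum restriction: execution within limits
print(verify(deformation_event(0.3, 0.5), rules))  # True
print(verify(deformation_event(0.8, 0.5), rules))  # False
```

In this sketch, only the event (not the raw physical data) crosses the level boundary, mirroring how the decision-making engine ⑬ feeds the rule verification engine ⑥.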

3.2 Process Description at Prosumer Level

Using the YAWL language, managers may define processes in a very easy manner with graphic instruments (see Fig. 2). At the highest level (prosumer level), managers (prosumers) define a complex production process or workflow W. This workflow is an ordered sequence T of M_T tasks, connected through M_E oriented edges E (1). Each task t_i is labelled with a discrete timestamp n_i indicating the planned temporal order for the workflow. On the other hand, each edge e_i ∈ E is labelled with a collection SP_i indicating the list of subproducts flowing (as inputs and/or outputs) between tasks along the edge (2). Labelling applications λ_T and λ_E are employed to relate labels to tasks and edges. To learn which tasks t_i are connected through oriented edges e_i, an incidence application γ_W is defined (3). This application relates each edge e_i with the ordered pair of tasks (t_i, t_j) this edge connects; the edge is outgoing from t_i and incoming in t_j:

(1)
$W=\{T,E\},\quad T=\{t_i,\ i=1,\dots,M_T\},\quad E=\{e_i,\ i=1,\dots,M_E\},$
(2)
$\lambda_T(t_i)=n_i,\quad \lambda_E(e_i)=SP_i=\{sp_i^j,\ j=1,\dots,M_{SP}\},$
(3)
$\gamma_W:E\to T\times T,\quad \gamma_W(e_i)=\{t_i,t_j\}.$

Besides, in YAWL, tasks may be labelled with a list I_i of M_I quality indicators i_j, employed to accept or reject the actual task execution (4). These indicators may refer to maximum delays, jitter, errors in physical dimensions, etc. A labelling application λ_ind is employed to relate indicators and tasks (5). This list must be exhaustive: every possible indicator must be present in every task. If an indicator does not apply to a particular task, its value in the list should be empty but still present.

(4)
$I_i=\{i_j,\ j=1,\dots,M_I\},$
(5)
$\lambda_{ind}(t_i)=I_i,$
(6)
$C_i=\{c_i^j,\ j=1,\dots,M_C\},\quad \lambda_{cond}(e_i)=C_i.$
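
For illustration, the labelled workflow of Eqs. (1)-(6) could be encoded as follows; the class names and field layout are hypothetical, not taken from the paper:

```python
from dataclasses import dataclass, field

# Simplified, hypothetical encoding of the workflow W = {T, E} with the
# labelling applications lambda_T, lambda_E, lambda_ind and the incidence
# application gamma_W.

@dataclass
class Task:
    name: str
    timestamp: int                                   # n_i (lambda_T)
    indicators: dict = field(default_factory=dict)   # I_i (lambda_ind)

@dataclass
class Edge:
    source: str                                      # t_i (outgoing task)
    target: str                                      # t_j (incoming task)
    subproducts: list = field(default_factory=list)  # SP_i (lambda_E)
    conditions: list = field(default_factory=list)   # C_i (lambda_cond)

@dataclass
class Workflow:
    tasks: dict     # name -> Task
    edges: list     # list of Edge

    def incidence(self, edge):
        """gamma_W: maps an edge to the ordered pair of tasks it connects."""
        return (self.tasks[edge.source], self.tasks[edge.target])

# Example: two tasks connected by one edge carrying a subproduct; the
# indicator list is exhaustive, so a non-applicable indicator stays as None.
w = Workflow(
    tasks={"t1": Task("t1", 1, {"max_delay_s": 5}),
           "t2": Task("t2", 2, {"max_delay_s": None})},
    edges=[Edge("t1", "t2", subproducts=["sp1"])],
)
src, dst = w.incidence(w.edges[0])
print(src.name, dst.name)  # t1 t2
```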

Moreover, bifurcation conditions can be considered in YAWL process definitions (see Fig. 2). These conditions are modelled as lists C_i of semantic annotations associated to edges through a labelling application λ_cond (6). These annotations c_i^j are logical predicates referring to output characteristics, time, etc. This whole process description (including all labels) is then analysed and decomposed in a specific engine. The first step is to calculate the key process branches or subprocesses ω_i (7):

(7)
$W=\bigcup_i \omega_i.$

Fig. 2

Graphic representation of a YAWL process.


A key subprocess is characterized by only one input point (only one task receives inputs from the physical world) and only one output point (only one task generates a physical service or product). Besides, relations among tasks within the subprocess do not have associated subproducts (SP_i = ∅), but input and output edges to the subprocess do have associated subproducts. Inside a subprocess, then, the temporal order is artificially induced by the YAWL language, as it requires all tasks to be connected in sequences; but the process does not require tasks to be executed in that order, and it could be altered. When a key subprocess is detected, it is extracted to an independent report to be analysed separately using deformation metrics, and in the global description the subprocess is replaced by a state s_j, taken from a set Σ_W of possible states for this workflow. Then, a transition relation ρ_W is built to connect all states s_j according to the restrictions in the original process (8). Algorithm 1 describes the proposed mechanism to calculate the subprocesses, the new states and the transition relation.
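
Since Algorithm 1 itself is given as a figure, the following is only a simplified sketch of the idea just described: edges carrying subproducts delimit key subprocesses, so merging tasks along subproduct-free edges yields the candidate subprocesses, each of which becomes a state. The (source, target, subproducts) edge layout is an assumption:

```python
# Simplified sketch of the decomposition behind Algorithm 1: tasks joined by
# subproduct-free (internal) edges are merged into one subprocess; boundary
# edges (with subproducts) induce the transition relation between states.

def decompose(tasks, edges):
    # Union-find over tasks, merging along internal (SP_i empty) edges.
    parent = {t: t for t in tasks}
    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]  # path halving
            t = parent[t]
        return t
    for src, dst, sps in edges:
        if not sps:                        # internal edge: no subproducts
            parent[find(src)] = find(dst)
    components = {}
    for t in tasks:
        components.setdefault(find(t), []).append(t)
    subprocesses = list(components.values())
    # Each component becomes a state s_j; boundary edges connect states.
    states = {t: i for i, comp in enumerate(subprocesses) for t in comp}
    transitions = {(states[s], states[d]) for s, d, sps in edges if sps}
    return subprocesses, transitions

tasks = ["a", "b", "c", "d"]
edges = [("a", "b", []), ("b", "c", ["sp1"]), ("c", "d", [])]
subs, trans = decompose(tasks, edges)
print(len(subs), trans)  # 2 {(0, 1)}
```

Here the subproduct-carrying edge b → c splits the workflow into two key subprocesses, {a, b} and {c, d}, connected by a single state transition.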

On the other hand, the annotations C_i must be transformed into logic predicates referring to the states s_j and/or the discrete timestamps n_i. To do that, a semantic algorithm may be employed; this algorithm is not described in this paper, but similar solutions may be found in the state of the art [8]. A repository of generic logic predicates is considered, and these generic predicates are instantiated according to the annotations C_i to generate a set R of atomic propositions r_i, or logical rules. An interpretation function L_W is finally defined, indicating which predicates from R are true for each state s_j (8).

The set of states Σ_W, together with the interpretation function L_W and the transition relation ρ_W, defines a Kripke structure K (9), which is transferred to the business layer for further processing. This Kripke structure is formally defined if only one additional element is considered: the set of states from which the workflow may be initiated, Σ_init.

(8)
$\rho_W(s_i)=s_{i+1},\quad L_W:\Sigma_W\to 2^R,\quad L_W(s_j)=\{r_i\},$
(9)
$K=\langle \Sigma_W,\Sigma_{init},\rho_W,L_W\rangle.$
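
A minimal encoding of the Kripke structure of Eq. (9) might look as follows; the class name and layout are illustrative assumptions:

```python
# Minimal Kripke structure K = <Sigma_W, Sigma_init, rho_W, L_W>.

class Kripke:
    def __init__(self, states, init, rho, labeling):
        self.states = states      # Sigma_W
        self.init = init          # Sigma_init: possible initial states
        self.rho = rho            # transition relation: s_i -> s_{i+1}
        self.labeling = labeling  # L_W: state -> set of true predicates

    def holds(self, state, predicate):
        """True iff predicate r_i belongs to L_W(state)."""
        return predicate in self.labeling.get(state, set())

k = Kripke(states={"s0", "s1"}, init={"s0"},
           rho={"s0": "s1"},
           labeling={"s0": {"order_ok"}, "s1": {"order_ok", "done"}})
print(k.holds("s1", "done"))  # True
print(k.rho["s0"])            # s1
```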

Algorithm 1

Key subprocess identification and Kripke structure creation


A second element is generated in the analysis and decomposition engine. For each key subprocess ω_i, each task t_i is labelled with an exhaustive set of indicators I_i. These indicators are split into two different vectors, I_i^a and I_i^r (10). On the one hand, absolute indicators I_i^a, referring to state variables (such as time, temperature, etc.), are preserved (including empty positions). On the other hand, relative indicators I_i^r, referring to tolerances, errors, etc., are processed to calculate three basic and global deformation metrics: stiffness, strength and ductility.

(10)
$I_i=I_i^a\cup I_i^r,\quad I_i^a=\{ia_j\mid j=1,\dots,M_{IA}\},\quad I_i^r=\{ir_j\mid j=1,\dots,M_{IR}\}.$

  • Stiffness (F) represents the ability of a process to resist deformations. In other words, it indicates how much a process may absorb variations in the input parameters and conditions while still keeping the planned values of the state variables.

  • Strength (G) represents the ability of a process to prevent unsatisfactory executions. It describes how resistant the internal organization of the process is, producing correct executions even if inputs and/or state variables suffer great changes with respect to the original model.

  • Ductility (D) represents the ability of a process to be deformed and, still, produce correct executions. This parameter indicates how flexible a process is, so even large changes in the state variables are considered as valid executions.

Stiffness is related to the security margins and safeguards between consecutive tasks in the process model (11). The larger these security margins are, the easier it is for the process to resist global deformations, as the effect of inputs on the first tasks is absorbed by the security margins. The function calculating the stiffness of a process, σ_f, must then be strictly increasing: linear, logarithmic or exponential functions could be employed depending on the scenario.

Strength is related to the number of restrictions and indicators in the list I_i^r (12). The more restrictions are considered, the more probable it is that the process generates an unsatisfactory execution. Besides, if we consider all restrictions as independent effects, the probability of unsatisfactory executions grows exponentially with the number of restrictions. Then, the function calculating the process strength, σ_g, is exponentially decreasing.

Finally, ductility is obtained from the tolerances and relative errors in the list I_i^r (13). As tolerances go up, processes can be deformed to a higher degree while the process execution is still valid. Then, the function calculating the ductility of the process, σ_d, is strictly increasing: linear, logarithmic or exponential functions could be employed depending on the scenario.

(11)
$F_i=\sigma_f\left(|ia_{j+1}-ia_j|,\ j=1,\dots,M_{IA}\right),$
(12)
$G_i=\sigma_g(M_{IR}),$
(13)
$D_i=\sigma_d\left(|ir_j|,\ j=1,\dots,M_{IR}\right).$
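
As a concrete illustration of Eqs. (11)-(13): the text only constrains the monotonicity of σ_f, σ_g and σ_d, so the specific functions chosen below (linear sums and a decaying exponential) are assumptions, not the paper's choices:

```python
import math

# Illustrative sigma_f, sigma_g, sigma_d: strictly increasing for stiffness
# and ductility, exponentially decreasing for strength, as required above.

def stiffness(absolute_indicators):
    # F_i: increasing in the security margins |ia_{j+1} - ia_j|
    margins = [abs(b - a) for a, b in
               zip(absolute_indicators, absolute_indicators[1:])]
    return sum(margins)                  # linear, strictly increasing

def strength(n_relative_indicators):
    # G_i: exponentially decreasing in the number of restrictions M_IR
    return math.exp(-n_relative_indicators)

def ductility(relative_indicators):
    # D_i: increasing in the tolerances |ir_j|
    return sum(abs(ir) for ir in relative_indicators)

ia = [1.0, 3.0, 6.0]    # absolute indicators (e.g. planned timestamps)
ir = [0.1, 0.05, 0.2]   # relative indicators (tolerances)
print(stiffness(ia))                 # 5.0
print(round(strength(len(ir)), 3))   # 0.05
print(round(ductility(ir), 2))       # 0.35
```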

Then, the subprocess descriptions {ω_i}, together with the vectors {I_i^a} and the deformation indicators {(F_i, G_i, D_i)}, are transferred to the production layer for further processing.

3.3 Global Process Control and Rule Validation at Business Level

At business level, a Kripke structure (9) describing the process model is received. On the other hand, from this level, the control solution is viewed as a discrete event system (DES) Z (14), where X is the set of possible states in the system; E is the finite set of events; f is the transition function indicating the next state for a given current state and a received event (15); Γ is the function indicating which events are active for a given state (16), with P(E) the power set of E; X_0 is the initial state of the system; and X_m is the set of marked states (i.e. the allowed final states).

(14)
$Z=(X,E,f,\Gamma,X_0,X_m),$
(15)
$f:X\times E\to X,$
(16)
$\Gamma:X\to P(E).$
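
The tuple of Eq. (14) can be represented directly; the dictionary-based transition function and all names below are illustrative assumptions:

```python
# Minimal discrete event system Z = (X, E, f, Gamma, X0, Xm) of Eq. (14).

class DES:
    def __init__(self, states, events, f, x0, marked):
        self.states = states   # X
        self.events = events   # E
        self.f = f             # transition function f: X x E -> X (Eq. 15)
        self.x0 = x0           # initial state X0
        self.marked = marked   # X_m: allowed final states

    def gamma(self, x):
        """Gamma(x): events active (i.e. defined) in state x (Eq. 16)."""
        return {e for e in self.events if (x, e) in self.f}

    def run(self, string):
        """Follow a string of events from X0 and return the reached state."""
        x = self.x0
        for e in string:
            x = self.f[(x, e)]
        return x

z = DES(states={"x0", "x1"}, events={"a", "b"},
        f={("x0", "a"): "x1", ("x1", "b"): "x0"},
        x0="x0", marked={"x1"})
print(z.gamma("x0"))             # {'a'}
print(z.run(["a"]) in z.marked)  # True
```

A string of events is marked exactly when `run` ends in a state of `marked`, which is the property the two languages defined next formalize.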

In this DES, E* is the Kleene closure of E, i.e. the set of all possible strings in the system (including the empty string), a string being a juxtaposition of events describing the evolution of the system; in other words, E* is the set of all possible evolutions the system may follow. Two different languages (sets of strings) can then be defined. The marked language (17), L_m, contains all strings that finish with a marked state. The generated language (18), L_g, contains all possible strings, i.e. all possible system evolutions, although they may not be acceptable. In traditional supervisory control systems, the underlying event-driven infrastructure is assumed to be fully controllable (all events are controllable) or partially controllable (only some events are uncontrollable). However, in future Industry 4.0 all events tend to be uncontrollable (as devices and people are totally dynamic and autonomous). In that case, common supervisory control theory is not applicable, as its objective is to avoid certain undesired controllable events (and, in Industry 4.0, they are all uncontrollable). On the contrary, the objective of the proposed supervisory mechanism is to create reports when undesired states are reached.

(17)
Lm(Z) = {m} = {ε ∈ E* | the last element of ε ∈ Xm},
(18)
Lg(Z) = {g} = {ε ∈ E*}.
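To make the DES notation concrete, the following minimal Python sketch (with hypothetical state and event names, not taken from the paper) models Z = (X, E, f, Γ, X0, Xm) and checks membership in the marked language Lm(Z):

```python
# Illustrative sketch (not the paper's implementation) of a DES
# Z = (X, E, f, Γ, X0, Xm). All names here are hypothetical.

class DES:
    def __init__(self, states, events, trans, x0, marked):
        self.states = states          # X: set of states
        self.events = events          # E: finite set of events
        self.trans = trans            # f: dict (state, event) -> next state
        self.x0 = x0                  # X0: initial state
        self.marked = marked          # Xm: set of marked (allowed final) states

    def gamma(self, x):
        """Γ(x): events active (defined) in state x."""
        return {e for (s, e) in self.trans if s == x}

    def run(self, string):
        """Follow a string (sequence of events); return final state or None."""
        x = self.x0
        for e in string:
            if (x, e) not in self.trans:
                return None           # string is not in the generated language Lg(Z)
            x = self.trans[(x, e)]
        return x

    def in_marked_language(self, string):
        """True iff the string belongs to Lm(Z): generated and ending in Xm."""
        return self.run(string) in self.marked

# Toy two-state process: 'start' then 'finish' returns to the marked state.
z = DES({'idle', 'busy'}, {'start', 'finish'},
        {('idle', 'start'): 'busy', ('busy', 'finish'): 'idle'},
        'idle', {'idle'})
print(z.in_marked_language(['start', 'finish']))  # True
print(z.in_marked_language(['start']))            # False: ends in 'busy', not in Xm
```

The same structure is reused below when the DES is built from the Kripke structure.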

Moreover, in this scenario, events and states in the low-level platform are unknown and, probably, infinite. Then, supervisory control cannot be performed directly over the workers and hardware devices, but over the Kripke structure. Actually, this structure is a fair Kripke structure, as two additional conditions are met in every process model:

  • Justice or weak fairness requirement: many states sj activate the same logical predicate.

  • Compassion or strong fairness requirement: if many states sj activate the same logical predicate, many states sk do not activate that logical predicate.

Fig. 3

Supervisory control solution.

In that way, a two-level verification is performed in the “rule verification engine” (see Fig. 3). In this solution, the event set E in the DES Z is built from the state set ΣW of the Kripke structure K, extended with a special state snull representing unknown states (19). This state snull does not activate any logical predicate (20). The initial states are also the same (21). Discrete events ε in the DES Z are words g of the generated language Lg(Z) (where the empty string corresponds to the unknown state in the Kripke structure), describing the states the Kripke structure has reached (22) before the current state scurrent. Then, the transition function in the DES may be easily constructed from the transition relation in K (23):

(19)
E = ΣW ∪ {snull},
(20)
LW(snull) = ∅,
(21)
Σinit = X0,
(22)
εi = gi = {sj | sj ∈ ΣW ∧ sj ∈ ρW^(−1) ∘ ⋯ ∘ ρW^(−1)(scurrent)},
(23)
f(si, εi) = ρW(si) = si+1.

In those conditions, the proposed supervisory control algorithm operates as follows. First, physical events Ephy (obtained through the event generation module from physical information and reports) are employed to determine which states sj in the Kripke structure are allowed in the production process (several of them, because of the justice and compassion requirements). This action is performed using the interpretation function (24). If only one state is allowed, a new DES event εi+1 is generated, juxtaposing the new state to the previous string prefix gi (25). This information is introduced as a string prefix in the DES Z, and the supervisory control module determines which future evolutions A(E) are allowed by the business model (26) and, eventually, reports if the executed process is incompatible with business requirements (27). Algorithm 2 describes the proposed control solution.

(24)
{sj} = LW^(−1)({rj}),
(25)
εi+1 = εi|scurrent = gi|scurrent = gi+1,
(26)
A(E) = {εi ∈ E* such that εi+1|εi is a prefix of m ∈ Lm(Z)},
(27)
A(E) = ∅.
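The reporting behaviour described by (24)–(27) can be outlined as follows. This is a hedged sketch, with hypothetical predicates, states and transitions: the supervisor does not disable events, it only reports when the observed prefix can no longer be extended to a marked string, i.e. when A(E) = ∅.

```python
# Sketch (hypothetical names throughout) of the reporting supervisor outlined
# in the text: map physical events to Kripke states, extend the string prefix,
# and report when no marked continuation exists (A(E) = ∅).

def allowed_states(predicate, labels):
    """L_W^(-1): states of the Kripke structure activating a predicate."""
    return {s for s, preds in labels.items() if predicate in preds}

def can_reach_marked(state, trans, marked, seen=None):
    """True iff some continuation from `state` ends in a marked state."""
    seen = seen or set()
    if state in marked:
        return True
    seen.add(state)
    return any(can_reach_marked(n, trans, marked, seen)
               for (s, _), n in trans.items() if s == state and n not in seen)

def supervise(physical_events, labels, trans, marked, s0):
    prefix, current = [], s0
    for predicate in physical_events:
        candidates = allowed_states(predicate, labels)
        if len(candidates) != 1:
            continue                      # ambiguous: wait for more evidence
        current = candidates.pop()
        prefix.append(current)            # juxtapose the new state to the prefix
        if not can_reach_marked(current, trans, marked):
            return ('REPORT', prefix)     # A(E) = ∅: incompatible execution
    return ('OK', prefix)

labels = {'s1': {'loaded'}, 's2': {'baked'}, 'err': {'dropped'}}
trans = {('s1', 'e'): 's2', ('s2', 'e'): 's1'}
print(supervise(['loaded', 'dropped'], labels, trans, {'s2'}, 's1'))
# ('REPORT', ['s1', 'err'])
```

Note the design choice, taken from the text: ambiguous observations (several allowed states) simply delay event generation instead of guessing a state.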

Algorithm 2

Supervisory control

Different solutions to validate logic rules have been reported (Leucker, 2017), and any of them may be integrated into the proposed mechanism.

3.4 Subprocess Control Using Deformation Functions at Production Level

Finally, at production level, a collection of labelled key subprocesses {ωi} and deformation metrics {(Fi,Gi,Di)i} are received. At this point, every task ti in any subprocess ωi is totally defined by its set of absolute quality indicators Iia. This set is an MIA-dimensional vector which, eventually, can be represented as a point in a generalized MIA-dimensional phase space. To do that, it is enough to consider the position vector xi of that point (28) in a general Euclidean space. Besides, in the most common case, every subprocess ωi is composed of a very large set of tasks, so if the same procedure is repeated for every task, the whole subprocess is finally represented as a point cloud in the phase space. This point cloud may be understood as a discrete solid Ωi in the phase space (29). This solid, moreover, is flexible, as from the beginning we have assumed that tasks inside each subprocess may be altered (see Fig. 4). In this context, Mωi is the number of tasks in the subprocess ωi. In those conditions, the subprocess execution may be understood as a function TD (30) applied over the solid Ωi, transforming it into a new and deformed solid Ω′i in the original phase space (see Fig. 5). Rotations, extrusions, and any other deformation could appear.

Fig. 4

Subprocess representation in a general phase space.

The question to address, then, is whether the suffered deformation is large enough to consider the execution unsatisfactory.

(28)
xi = Iia − O = (iaj, j = 1, …, MIA),
(29)
Ωi = {Iia − O, i = 1, …, Mωi},
(30)
TD : Ωi ⊂ R^MIA → Ω′i ⊂ R^MIA.
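As a minimal illustration of (28)–(29), assuming hypothetical indicator values, a subprocess can be built as a point cloud in the phase space:

```python
# Minimal sketch (hypothetical numbers): each task t_i becomes a point x_i in an
# M_IA-dimensional phase space via its absolute quality indicators, and the
# subprocess becomes the point cloud Ω_i, a discrete flexible solid.

import numpy as np

M_IA = 3                                  # indicators per task (e.g. time, temp, yield)
tasks = {                                 # hypothetical planned indicator vectors
    't1': [30.0, 180.0, 0.95],
    't2': [12.0, 175.0, 0.90],
    't3': [45.0, 182.0, 0.97],
}

# Ω_i: one row per task; each row is the position vector x_i = I_i^a - O.
omega = np.array([tasks[t] for t in sorted(tasks)])
print(omega.shape)                        # (M_ω, M_IA) = (3, 3)

# After an execution, the observed indicators form the deformed solid Ω'_i;
# here a hypothetical perturbation stands in for the transformation T_D.
omega_exec = omega * np.array([1.1, 1.0, 0.98])
```

Each row of `omega_exec` plays the role of a point of Ω′i, so deformations can later be computed column by column (per indicator) or as a whole.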

Fig. 5

Subprocess representation in a general phase space.
Fig. 6

External forces acting on the flexible solid in the phase space.

In the proposed model, a set of external forces Δik (31) acts on the solid Ωi at the point tk. These forces represent the dynamic, variable, and anarchic behaviour of autonomous devices and humans, which tend to deform the original process model (see Fig. 6). If forces were acting only in one dimension δikj, the deformation suffered in that dimension ϕikj could be easily calculated through Hooke's law (32), where Fi is the stiffness parameter calculated in the prosumer layer. Although it is not completely correct in terms of physical meaning, we generalize this law by taking modular functions, so the global aggregated deformation in all directions of the phase space may also be estimated through Hooke's law (33) and, even, the global deformation for the entire solid, by integrating along the entire solid's surface (34). Considering now ductility Di and strength Gi, we can extend the common Hooke's law to include inelastic areas and break zones (where executions are considered unsatisfactory). This new deformation function includes three different branches (35), see Fig. 7.

(31)
Δik = (δikj, j = 1, …, MIA),
(32)
δikj = Fi · ϕikj,
(33)
‖Δik‖ = Fi · Φik,
(34)
Δi = ‖∮Ωi Δik‖ = (1/Mωi) ∑k=1..Mωi ‖Δik‖ = Fi · (1/Mωi) ∑k=1..Mωi Φik = Fi · Φi,
(35)
Δi = Fi · Φi,  if Φi ≤ Gi/Fi,
Δi = Gi,       if Gi/Fi < Φi ≤ Di,
break,         otherwise.
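The three branches of (35) can be sketched directly. The stiffness, strength and ductility values below are hypothetical placeholders for the indicators computed at the prosumer layer:

```python
# Sketch of the three-branch generalized Hooke law (35). F, G and D are
# hypothetical values for the stiffness, strength and ductility indicators;
# the execution is classified from the measured global deformation Φ_i alone.

def classify(phi, F, G, D):
    """Return the branch of the generalized Hooke law for deformation phi."""
    if phi <= G / F:
        return 'elastic'      # transitory deformation: execution valid
    if phi <= D:
        return 'inelastic'    # permanent but still acceptable deformation
    return 'break'            # execution unsatisfactory

F, G, D = 2.0, 1.0, 1.5       # hypothetical prosumer-layer indicators
print(classify(0.3, F, G, D))  # elastic
print(classify(0.8, F, G, D))  # inelastic
print(classify(2.0, F, G, D))  # break
```

The classification only needs Φi, which (as discussed below) can be obtained from the monitored quality indicators, never from the unobservable forces Δi.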

Fig. 7

Generalized Hooke's law for process verification.

As can be seen in Fig. 7, it is not necessary to evaluate the external forces Δi to learn whether the global subprocess deformation is too high for the execution to be considered valid. These forces are extremely complicated to analyse, as almost no data about them is available. On the contrary, the global deformation Φi can be evaluated through the absolute quality indicators Iia, which are monitored through the recognition technologies at low level. For every deformation value Φi in the break area, the execution is considered unsatisfactory. Otherwise, it is considered valid. For deformation values in the elastic area, external forces are only deforming the process in a transitory manner, i.e. only some tasks are affected (those which are directly affected by forces). Security margins absorb the secondary effects over other tasks, and no permanent global deformation is suffered. On the contrary, when deformation values are in the inelastic area, security margins are not enough to compensate the external forces, and permanent and global deformation appears. Nevertheless, executions are still valid, although deformed. Moreover, two contradictory requirements must be considered before deformation calculation.

On the one hand, in the inelastic area, small variations in the estimated global deformation Φi may cause the subprocess execution to be rejected, so high-precision algorithms are needed to perform those analyses. On the other hand, Industry 4.0 scenarios present real-time requirements, so algorithms must be fast and efficient, and complex high-precision mechanisms cannot always be employed.

Thus, we propose a two-phase deformation evaluation: in the first step, a very simple algorithm where all state variables and quality indicators are considered independent is employed; only if this algorithm places the global deformation in the inelastic area, a second step with a much more complex and precise algorithm is triggered.

To calculate the first, approximate global deformation Φi^ind of the subprocess ωi after an execution, we consider all quality indicators Iia to be totally independent. Then, the deformation ϕikj along each indicator and for each task can be calculated independently. This calculation is based on the ratio μikj between the planned value of each quality indicator iaj and the finally obtained result i′aj (36). Using this ratio, the deformation ϕikj can be obtained through different expressions: unitary deformation (37), Almansi deformation (38), Green deformation (39), Hencky deformation (40) or any other application-specific algorithm. Each one of these expressions is adequate for a range of deformation values (see Table 2). Then, we always initially calculate the unitary deformation and, after that, recalculate the value using a more adequate expression if needed.

(36)
μikj = i′aj / iaj,
(37)
ϕikj = μikj − 1,
(38)
ϕikj = (1/2)(1 − 1/(μikj)^2),
(39)
ϕikj = (1/2)((μikj)^2 − 1),
(40)
ϕikj = ln(μikj).
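Expressions (37)–(40) are straightforward to implement for a single indicator; the ratio value below is hypothetical:

```python
# Sketch of expressions (37)-(40) for one quality indicator: the deformation is
# computed from the ratio μ between executed and planned indicator values.
# The input ratio is a hypothetical example.

import math

def unitary(mu):  return mu - 1.0                     # (37)
def almansi(mu):  return 0.5 * (1.0 - 1.0 / mu**2)    # (38)
def green(mu):    return 0.5 * (mu**2 - 1.0)          # (39)
def hencky(mu):   return math.log(mu)                 # (40)

mu = 1.05             # executed value 5% above the planned one
for f in (unitary, almansi, green, hencky):
    print(f.__name__, f(mu))
```

For ratios close to 1 the four expressions nearly coincide, which is why the cheap unitary deformation is a reasonable first estimate before selecting a more adequate expression from Table 2.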

As can be seen in Table 2, using the unitary deformation we can evaluate the subprocess rotation and absolute deformation. Then, for processes where rotation and large deformation values are present, the Hencky function is the most precise calculation method. Equally, for null rotation and large deformation values, specific logarithmic algorithms and expressions may be proposed depending on the scenario. Finally, for small deformation values we can use the Green or Almansi function (as desired) if rotation is present, or the unitary deformation if no rotation and only small deformation values are observed.

After calculating the global deformation Φi for the subprocess, if this value is in the elastic area, the execution is accepted. If it is in the break area, the execution is rejected; and if the value is in the inelastic area, a more precise evaluation is needed. In this more precise calculation mechanism, we assume the realistic scenario where quality indicators are dependent on each other. Then, deformation parameters are obtained through the deformation tensor Ψik (41), where differential expressions are approximated using numerical expressions (42). Basically, for the initial task we employ the previous expressions (37)–(40) to start the calculation algorithm. Other tasks are evaluated through backward differences. There exist also tensor expressions for the Almansi deformation (43) and for the Green–Lagrange deformation (44), which can be employed depending on the scenario under study (selecting the one which is closer to reality in all cases).

Algorithm 3

Deformation evaluation
Table 2

Different deformation calculation expressions.

  • Rotation, large deformation (0.1 < |ϕikj| < 10): Hencky deformation, ϕikj = ln(μikj).

  • Rotation, small deformation (|ϕikj − ϕik+1j| < 10^(−4), ∀k): Green deformation, ϕikj = (1/2)((μikj)^2 − 1), or Almansi deformation, ϕikj = (1/2)(1 − 1/(μikj)^2).

  • No rotation, large deformation: ad hoc logarithmic definitions.

  • No rotation, small deformation: unitary deformation, ϕikj = μikj − 1.

Module calculation in (34), in this case, refers to the tensor modulus, contrary to the previous case, where the vector modulus was employed.

(41)
Φik = Ψik = ∇TD =
[ ∂i′a1/∂ia1    ⋯   ∂i′a1/∂iaMIA  ]
[      ⋮         ⋱        ⋮        ]
[ ∂i′aMIA/∂ia1  ⋯  ∂i′aMIA/∂iaMIA ],
(42)
∂i′ak/∂iar ≈ (i′ak − i′ak−1)/(iar − iar−1), for r, k > 1,
(43)
Φik = (1/2)(1 − (Ψik^T Ψik)^(−1)),
(44)
Φik = (1/2)(Ψik^T Ψik − 1).
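A sketch of (41)–(44) with backward differences follows, under the simplifying assumption that each entry of Ψik is approximated by the independent difference quotient in (42); all indicator values are hypothetical:

```python
# Sketch of the numerical deformation gradient (41)-(42) and the Green-Lagrange
# tensor (44). Each tensor entry is approximated by a backward-difference
# quotient between consecutive tasks, as in (42). Indicator values are
# hypothetical; numpy handles the tensor algebra.

import numpy as np

planned = np.array([[30.0, 180.0], [32.0, 178.0], [35.0, 176.0]])   # ia per task
executed = np.array([[31.0, 181.0], [34.0, 177.0], [38.0, 174.0]])  # i'a per task

def gradient(k):
    """Ψ_ik approximated by backward differences (42), for task k > 0."""
    d_exec = executed[k] - executed[k - 1]      # Δ i'a
    d_plan = planned[k] - planned[k - 1]        # Δ ia
    # Ψ[r, c] ≈ Δ i'a_r / Δ ia_c: independent backward-difference quotients.
    return np.outer(d_exec, 1.0 / d_plan)

psi = gradient(1)
green_lagrange = 0.5 * (psi.T @ psi - np.eye(2))    # (44)
print(psi)
print(green_lagrange)
```

For the initial task, where no backward difference exists, the scalar expressions (37)–(40) would seed the algorithm, as indicated in the text.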

With this much more precise value, the caused deformation is finally evaluated. If it remains in the inelastic area, the execution is approved; otherwise, it is rejected. Algorithm 3 describes the entire solution at production level.

4 Experimental Validation

In order to evaluate the performance of the proposed solution, an experimental validation was designed and carried out. The experiments were based on pervasive sensing and computing platforms deployed in a laboratory, emulating an Industry 4.0 manufacturing environment. In this environment, workers and work automation devices were expected to perform a certain workflow throughout the day, composed of production activities taken from food manufacturing companies such as baking companies (see Table 4). These production activities, besides, are composed of a quite large collection of tasks, which are finally monitored.

The scenario was deployed at Universidad Politécnica de Madrid, where a 30 m2 space was conditioned as a working environment. Because of sanitary restrictions during 2020 in Europe, only three people could be in the laboratory at the same time. Then, when a larger number of people were involved in the experiment, different experiment realizations (each one with only three participants) were performed. Electronic devices present in this installation were configured to simulate a food production environment.

Three different information sources and devices were considered in order to monitor people and device activities: RFID tags, infrared barriers, and accelerometers and general sensors (temperature, humidity, light, etc.). Table 3 shows the composition of the deployed infrastructure. All these elements were built around an Arduino Nano platform and connected through Bluetooth to a gateway supported by a Raspberry Pi device. This gateway, finally, communicates all information to a central server in the cloud for data storage and back-end deployment. The server was deployed on a Linux architecture (Ubuntu 18.04 LTS) with the following hardware characteristics: Dell R540 Rack 2U, 96 GB RAM, two Intel Xeon Silver 4114 2.2 GHz processors, 2 TB SATA HD at 7.2K rpm.

Autonomous devices in the proposed scenario belonged to three different types (see Table 3):

  • Autonomous robots: cleaning robots and software-based robots, acting in an autonomous manner to help workers in the management of the production facility. Software-based robots consisted of agents interacting with virtual PLCs simulating bakery processes, service interruptions, etc.

  • Electronic ink displays: low-cost and low-consumption devices employed to display information about the next activities to be performed, warnings and alerts, etc. All these displays were controlled from the local Raspberry Pi gateway.

  • Cyber-physical systems and pervasive systems to control living conditions: all environmental conditions, such as temperature or humidity, were monitored by pervasive computing platforms and traditional feedback control loops.

Table 3

Infrastructure composition during the experimental validation.

Infrastructure subsystem   Device                               Quantity
Monitoring subsystem       RFID tags                            19
                           Infrared barriers                    6
                           Accelerometers and general sensors   33
Pervasive platform         Autonomous robots                    4
                           Electronic ink displays              12
                           CPS for the living environment       4

In order to recognize the activities being performed, two different technologies were employed. To recognize human activities, we used a two-phase solution composed of different machine learning and pattern recognition layers (Wonham and Cai, 2019). To recognize activities performed by autonomous devices, we employed previous works on artificial intelligence mechanisms for Internet-of-Things applications based on signal processing (Bordel et al., 2020b). These components were deployed together with the proposed solution in the referred cloud server. This server also offered a prosumer webpage, where YAWL-based workflows of production activities could be generated. Workflows are based on bakery production activities from the state of the art (Katz et al., 1963), and the most common activities are shown in Table 4.

The experiment considered four different workflows with a variable number of tasks and production activities. All workflows were executed by autonomous devices and twelve people in an eight-hour session (standard labour schedule). People were selected as a homogeneous community with respect to gender and age parity. The experiment included three different phases:

  • Training phase: Participants received some training about the activities and workflows they had to perform and the context and experiment conditions.

  • Process execution phase: each participant was interviewed in this phase and asked to execute a certain process, described in a short document, as any company would do.

  • Evaluation phase: in this phase, the records obtained by the system about the validity of executions were compared to observations made by experts (which finally determined whether an execution was correct or not).

Table 4

Most common production activities in the experimental validation.

Workflow                                 Most important activities
Supervising bakery products              Visual inspection of products, quality assessment and product discard.
Supervising bakery production            Monitoring of production processes, operating status, controlling speed, stops and notifications.
Pastry packaging                         Dispense, grouping, labeling and packaging.
Auxiliary operations in food industry    Customer service, warehouse cleaning, etc.

Two different experiments were performed using the described infrastructure. In the first experiment, the precision and success rate of the proposed supervisory control mechanism were evaluated. The number (percentage) of activities and workflows correctly detected as successful or unsatisfactory executions was calculated, including the percentages of false positives and false negatives. Experiments were repeated for workflows with different numbers of tasks, and they were also compared to traditional top-down control approaches (Bordel et al., 2018c). In the second experiment, the scalability of the proposed control solution with respect to the complexity (number of tasks) of the executed workflow and the number of quality indicators in the task description was analysed. The main indicator analysed in this second experiment was the processing delay.

5Results and Discussion

Figure 8 shows the results of the first experiment, comparing the success rate of the proposed control mechanism (rule verification engine) ⑥ to previously existing traditional solutions [5], for workflows with different numbers of tasks MT.

Fig. 8

Execution correctly valuated and accepted (or rejected).

As can be seen, the proposed supervisory control mechanism reaches a higher success rate in all cases. In a first zone, for workflows where MT < 400, the success rate is improved by around 10% with respect to traditional control solutions, reaching a success rate of approximately 82%. However, as the number of activities in the workflow goes up, the success rate goes down for both approaches. In fact, a higher number of activities implies a higher variability and makes it much more complex to control anarchic executions. In any case, traditional solutions degrade faster than the proposed mechanism. Thus, for workflows with MT = 600 tasks, the proposed solution presents a success rate of approximately 58%, while traditional mechanisms perform almost 300% worse. After this point, both mechanisms reach their minimum precision and remain stable.

An interesting result to evaluate is how the wrongly evaluated process executions are split into false positive and false negative detections. Figure 9 shows those results.

Fig. 9

Distribution of errors in the first experiment.

In general, as can be seen, traditional approaches present a much higher number of false negative detections (valid executions considered unsatisfactory), while the proposed solution tends to produce false positive evaluations (executions that are not valid but are finally accepted) at a higher rate than state-of-the-art technologies.

Finally, Fig. 10 shows the results of the second experiment. The scalability in terms of processing delay (from deformation calculation ⑫ to rule verification ⑥) is analysed, considering different numbers of tasks MT and of absolute state variables and quality indicators MIA. As can be seen, the evolution of the processing delay with respect to the number of tasks in the workflow is almost linear; the temporal order of the proposed algorithm in that variable is therefore O(n). On the other hand, the processing delay according to the number of quality indicators describing each task MIA follows a function with a higher growth rate than linear, which can be approximated by O(n·log(n)). In both cases, exponential or high-order polynomial temporal orders, which could make the proposed solution impractical for high-complexity Industry 4.0 production scenarios, are avoided. Processing delay, in any case, is in the order of hundreds of milliseconds.

Fig. 10

Processing delay evolution: scalability. (a) Number of tasks in the workflow. (b) Number of quality indicators describing each task.

6 Conclusions

Industry 4.0 solutions are composed of autonomous engineered systems where heterogeneous agents act in a choreographed manner to create complex workflows. In this work, supervisory control techniques are employed to guarantee a correct execution of distributed and choreographed processes in Industry 4.0 scenarios. At prosumer level, processes are represented using soft models where logic rules and deformation indicators are used to analyse the correctness of executions. These logic rules are verified using specific engines at business level. These engines are fed with deformation metrics obtained through tensor deformation functions at production level. To apply deformation functions, processes are represented as discrete flexible solids in a phase space, under external forces representing the variations in every task's inputs.

Experimental validation shows that the proposed mechanism improves the performance of traditional control solutions by between 10% and 300%, depending on the workflow to be executed. Besides, the scalability of the proposed solution is almost linear with respect to the number of tasks in the workflow, and n·log(n) with respect to the number of quality indicators or state variables describing tasks.

As future work, we intend to carry out a proof of concept in a relevant environment: a real food processing facility, monitoring production activities in a non-intrusive way. For this, it is necessary to consider, in addition to the technical challenges of integration with the sensorization devices present in this environment, ethical and privacy considerations for the workers in these facilities. For these tests and the continuation of this research line, we have a research framework with relevant food processing companies, established in accordance with the DEMETER project.

References

1 

Aminifar, A. ((2016) ). Analysis, Design, and Optimization of Embedded Control Systems, Vol. 1746: . Linköping University Electronic Press, Linköping, Sweden. https://doi.org/10.3384/diss.diva-124319.

2 

Aven, T. ((2017) ). How some types of risk assessments can support resilience analysis and management. Reliability Engineering and System Safety, 167: , 536–543. https://doi.org/10.1016/j.ress.2017.07.005.

3 

Bagheri, B., Yang, S., Kao, H.A., Lee, J. ((2015) ). Cyber-physical systems architecture for self-aware machines in Industry 4.0 environment. In: IFAC-PapersOnLine, Vol. 28: . Elsevier, Berlin, pp. 1622–1627. https://doi.org/10.1016/j.ifacol.2015.06.318.

4 

Bensoussan, A., Çakanyildirim, M., Sethi, S.P. ((2009) ). Optimal ordering policies for inventory problems with dynamic information delays. Production and Operations Management, 16: (2), 241–256. https://doi.org/10.1111/j.1937-5956.2007.tb00178.x.

5 

Bordel, B., Alcarria, R. ((2017) ). Assessment of human motivation through analysis of physiological and emotional signals in Industry 4.0 scenarios. Journal of Ambient Intelligence and Humanized Computing, 1: , 1–21. https://doi.org/10.1007/s12652-017-0664-4.

6 

Bordel, B., Alcarria, R., Jara, A. ((2017) a). Process execution in humanized cyber-physical systems: Soft processes. In: Iberian Conference on Information Systems and Technologies, CISTI. IEEE Computer Society, Washington DC, USA. https://doi.org/10.23919/CISTI.2017.7975901.

7 

Bordel, B., Alcarria, R., Robles, T., Martín, D. ((2017) b). Cyber-physical systems: extending pervasive sensing from control theory to the internet of things. Pervasive and Mobile Computing, 40: , 156–184. https://doi.org/10.1016/j.pmcj.2017.06.011.

8 

Bordel, B., Alcarria, R., Martín, D., Robles, T., de Rivera, D.S. ((2017) c). Self-configuration in humanized cyber-physical systems. Journal of Ambient Intelligence and Humanized Computing, 8: (4), 485–496. https://doi.org/10.1007/s12652-016-0410-3.

9 

Bordel, B., Miguel, C., Alcarria, R., Robles, T. ((2018) a). A hardware-supported algorithm for self-managed and choreographed task execution in sensor networks. Sensors, 18: (3), 812. https://doi.org/10.3390/s18030812.

10 

Bordel, B., Alcarria, R., Sanchez De Rivera, D., Martín, D., Robles, T. ((2018) b). Fast self-configuration in service-oriented smart environments for real-time applications. Journal of Ambient Intelligence and Smart Environments, 10: (2), 143–167. https://doi.org/10.3233/AIS-180479.

11 

Bordel, B., Alcarria, R., de Rivera, D.S., Robles, T. ((2018) c). Process execution in cyber-physical systems using cloud and cyber-physical internet services. Journal of Supercomputing, 74: (8), 4127–4169. https://doi.org/10.1007/s11227-018-2416-4.

12 

Bordel, B., Alcarria, R., Sánchez-de-Rivera, D. ((2019) ). A two-phase algorithm for recognizing human activities in the context of industry 4.0 and human-driven processes. In: Advances in Intelligent Systems and Computing, Vol. 931: . Springer-Verlag, Berlin, pp. 175–185. https://doi.org/10.1007/978-3-030-16184-2_18.

13 

Bordel, B., Alcarria, R., Robles, T. ((2020) a). Supervising industrial distributed processes through soft models, deformation metrics and temporal logic rules. In: Advances in Intelligent Systems and Computing, Vol. 1160 AISC: . Springer, Berlin, pp. 125–136. https://doi.org/10.1007/978-3-030-45691-7_12.

14 

Bordel, B., Alcarria, R., Robles, T., You, I. ((2020) b). A predictor-corrector algorithm based on laurent series for biological signals in the internet of medical things. IEEE Access, 8: , 109360–109371. https://doi.org/10.1109/ACCESS.2020.3001275.

15 

Boyer, S.A. ((2016) ). SCADA: Supervisory Control and Data Acquisition, Fourth Edition. International Society of Automation, Pennsylvania, USA, pp. 257.

16 

Branger, J., Pang, Z. (2015). From automated home to sustainable, healthy and manufacturing home: a new story enabled by the Internet-of-Things and Industry 4.0. Journal of Management Analytics, 2(4). https://doi.org/10.1080/23270012.2015.1115379.

17 

Calderón Godoy, A., González Pérez, I. ((2018) ). Integration of sensor and actuator networks and the SCADA system to promote the migration of the legacy flexible manufacturing system towards the Industry 4.0 concept. Journal of Sensor and Actuator Networks, 7: (2), 23. https://doi.org/10.3390/jsan7020023.

18 

Chhetri, S.R., Rashid, N., Faezi, S., Faruque, M.A.A. ((2017) ). Security trends and advances in manufacturing systems in the era of Inxdustry 4.0. In: IEEE/ACM International Conference on Computer-Aided Design, Digest of Technical Papers, ICCAD, Vol. 2017-November: . Institute of Electrical and Electronics Engineers Inc., New Jersey, USA, pp. 1039–1046. https://doi.org/10.1109/ICCAD.2017.8203896.

19 

Diez-Olivan, A., Del Ser, J., Galar, D., Sierra, B. ((2019) ). Data fusion and machine learning for industrial prognosis: trends and perspectives towards Industry 4.0. Information Fusion, 50: , 92–111. https://doi.org/10.1016/j.inffus.2018.10.005.

20 

Disney, S.M., Towill, D.R., Warburton, R.D.H. ((2006) ). On the equivalence of control theoretic, differential, and difference equation approaches to modeling supply chains. International Journal of Production Economics, 101: (1 spec. iss.), 194–208. https://doi.org/10.1016/j.ijpe.2005.05.002.

21 

Dolgui, A., Ivanov, D., Sethi, S.P., Sokolov, B. ((2019) ). Scheduling in production, supply chain and Industry 4.0 systems by optimal control: fundamentals, state-of-the-art and applications. International Journal of Production Research, 57: (2), 411–432. https://doi.org/10.1080/00207543.2018.1442948.

22 

Ebling, M.R., Want, R. (2017). Pervasive Computing Revisited. Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/MPRV.2017.2940959.

23 

Fabre, E., Jezequel, L. ((2009) ). Distributed optimal planning: an approach by weighted automata calculus. In: Proceedings of the IEEE Conference on Decision and Control. IEEE, New Jersey, USA, pp. 211–216. https://doi.org/10.1109/CDC.2009.5400084.

24 

Foruzan, E., Asgarpoor, S., Bradley, J.M. ((2016) ). Hybrid system modeling and supervisory control of a microgrid. In: NAPS 2016 – 48th North American Power Symposium, Proceedings. Institute of Electrical and Electronics Engineers Inc., New Jersey, USA. https://doi.org/10.1109/NAPS.2016.7747840.

25 

Frazzon, E.M., Kück, M., Freitag, M. ((2018) ). Data-driven production control for complex and dynamic manufacturing systems. CIRP Annals, 67: (1), 515–518. https://doi.org/10.1016/j.cirp.2018.04.033.

26 

Garcia, C.A., Ibeas, A., Herrera, J., Vilanova, R. ((2012) ). Inventory control for the supply chain: an adaptive control approach based on the identification of the lead-time. Omega, 40: (3), 314–327. https://doi.org/10.1016/j.omega.2011.07.003.

27 

Geiger, M., Harrer, S., Lenhard, J., Wirtz, G. ((2018) ). BPMN 2.0: the state of support and implementation. Future Generation Computer Systems, 80: , 250–262. https://doi.org/10.1016/j.future.2017.01.006.

28 

Golob, M., Bratina, B. ((2018) ). Web-based control and process automation education and Industry 4.0. International Journal of Engineering Education, 34: (4), 1199–1211.

29 

Gonzalez, A.G.C., Alves, M.V.S., Viana, G.S., Carvalho, L.K., Basilio, J.C. ((2018) ). Supervisory control-based navigation architecture: a new framework for autonomous robots in Industry 4.0 environments. IEEE Transactions on Industrial Informatics, 14: (4), 1732–1743. https://doi.org/10.1109/TII.2017.2788079.

30 

Goryca, J., Hill, R.C. ((2013) ). Formal synthesis of supervisory control software for multiple robot systems. In: Proceedings of the American Control Conference. Institute of Electrical and Electronics Engineers Inc., New Jersey, USA, pp. 125–131. https://doi.org/10.1109/acc.2013.6579825.

31 

Gu, C., Wang, X., Li, Z. ((2019) ). Synthesis of supervisory control with partial observation on normal state-tree structures. IEEE Transactions on Automation Science and Engineering, 16: (2), 984–997. https://doi.org/10.1109/TASE.2018.2880178.

32 

Hill, R.C., Lafortune, S. ((2017) ). Scaling the formal synthesis of supervisory control software for multiple robot systems. In: Proceedings of the American Control Conference. Institute of Electrical and Electronics Engineers Inc., New Jersey, USA, pp. 3840–3847. https://doi.org/10.23919/ACC.2017.7963543.

33 

Igure, V.M., Laughter, S.A., Williams, R.D. ((2006) ). Security issues in SCADA networks. Computers and Security, 25: (7), 498–506. https://doi.org/10.1016/j.cose.2006.03.001.

34 

Iqbal, J., Afzal Khan, S., Ahmad Zafar, N., Ahmad, F. (2012). Modeling Supervisory Control of Autonomous Mobile Robots Using Graph Theory, Automata and Z Notation. Technical Report 12.

35 

Ivanov, D., Dolgui, A., Sokolov, B., Werner, F., Ivanova, M. ((2016) a). A dynamic model and an algorithm for short-term supply chain scheduling in the smart factory Industry 4.0. International Journal of Production Research, 54: (2), 386–402. https://doi.org/10.1080/00207543.2014.999958.

36 

Ivanov, D., Dolgui, A., Sokolov, B., Werner, F. ((2016) b). Schedule robustness analysis with the help of attainable sets in continuous flow problem under capacity disruptions. International Journal of Production Research, 54: (11), 3397–3413. https://doi.org/10.1080/00207543.2015.1129467.

37. Ivanov, D., Dolgui, A., Sokolov, B. (2019). The impact of digital technology and Industry 4.0 on the ripple effect and supply chain risk analytics. International Journal of Production Research, 57(3), 829–846. https://doi.org/10.1080/00207543.2018.1488086.

38. Katz, S., Ford, A.B., Moskowitz, R.W., Jackson, B.A., Jaffe, M.W. (1963). Studies of illness in the aged: the index of ADL: a standardized measure of biological and psychosocial function. JAMA: The Journal of the American Medical Association, 185(12), 914–919. https://doi.org/10.1001/jama.1963.03060120024016.

39. Kloetzer, M., Mahulea, C. (2016). Multi-robot path planning for syntactically co-safe LTL specifications. In: 2016 13th International Workshop on Discrete Event Systems, WODES 2016. Institute of Electrical and Electronics Engineers Inc., New Jersey, USA, pp. 452–458. https://doi.org/10.1109/WODES.2016.7497887.

40. Kretschmer, F., Friedl, S., Lechler, A., Verl, A. (2016). Communication extension for cloud-based machine control of simulated robot processes. In: Proceedings of the IEEE International Conference on Industrial Technology, Vol. 2016-May. Institute of Electrical and Electronics Engineers Inc., New Jersey, USA, pp. 54–58. https://doi.org/10.1109/ICIT.2016.7474725.

41. Lalwani, C.S., Disney, S.M., Towill, D.R. (2006). Controllable, observable and stable state space representations of a generalized order-up-to policy. International Journal of Production Economics, 101(1, Special Issue), 172–184. https://doi.org/10.1016/j.ijpe.2005.05.014.

42. Lasi, H., Fettke, P., Feld, T., Hoffmann, M. (2014). Industry 4.0. Business & Information Systems Engineering, 6(4).

43. Leucker, M. (2017). Runtime verification for linear-time temporal logic. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol. 10215. Springer-Verlag, Berlin, pp. 151–194. https://doi.org/10.1007/978-3-319-56841-6_5.

44. Lin, J., Spiegler, V.L.M., Naim, M.M. (2018). Dynamic analysis and design of a semiconductor supply chain: a control engineering approach. International Journal of Production Research, 56(13), 4585–4611. https://doi.org/10.1080/00207543.2017.1396507.

45. Lu, Y. (2017). Industry 4.0: a survey on technologies, applications and open research issues. Journal of Industrial Information Integration, 6, 1–10. https://doi.org/10.1016/j.jii.2017.04.005.

46. Meissner, H., Ilsen, R., Aurich, J.C. (2017). Analysis of control architectures in the context of Industry 4.0. Procedia CIRP, 62, 165–169. https://doi.org/10.1016/j.procir.2016.06.113.

47. Merchan, D.F., Peralta, J.A., Vazquez-Rodas, A., Minchala, L.I., Astudillo-Salinas, D. (2018). Open source SCADA system for advanced monitoring of industrial processes. In: Proceedings – 2017 International Conference on Information Systems and Computer Science, INCISCOS 2017, Vol. 2017-November. Institute of Electrical and Electronics Engineers Inc., New Jersey, USA, pp. 160–165. https://doi.org/10.1109/INCISCOS.2017.9.

48. Mohammad, U., Yee Low, C., Abd Rahman, R., Akmal Johar, M., Theng Koh, C., Dumitrescu, R., Rabe, M., Asmar, L., Kamaruddin, S. (2019). Smart factory reference model for training on Industry 4.0. Journal of Mechanical Engineering, 16(2), 129–144.

49. Moreira, M.V., Basilio, J.C. (2014). Bridging the gap between design and implementation of discrete-event controllers. IEEE Transactions on Automation Science and Engineering, 11(1), 48–65. https://doi.org/10.1109/TASE.2013.2281733.

50. Nguyen, H.T., Walker, C.L., Walker, E.A. (2018). A First Course in Fuzzy Logic, 4th ed. Chapman and Hall/CRC, United Kingdom, 448 pp.

51. Pan, Y., Yang, G.H. (2017). Event-triggered fuzzy control for nonlinear networked control systems. Fuzzy Sets and Systems, 329, 91–107. https://doi.org/10.1016/j.fss.2017.05.010.

52. Rossit, D.A., Tohmé, F., Frutos, M. (2019). Industry 4.0: smart scheduling. International Journal of Production Research, 57(12), 3802–3813. https://doi.org/10.1080/00207543.2018.1504248.

53. Roszkowska, E. (2002). Supervisory control for multiple mobile robots in 2D space. In: Proceedings of the 3rd International Workshop on Robot Motion and Control, RoMoCo 2002. Institute of Electrical and Electronics Engineers Inc., New Jersey, USA, pp. 187–192. https://doi.org/10.1109/ROMOCO.2002.1177106.

54. Sajid, A., Abbas, H., Saleem, K. (2016). Cloud-assisted IoT-based SCADA systems security: a review of the state of the art and future challenges. IEEE Access, 4, 1375–1384. https://doi.org/10.1109/ACCESS.2016.2549047.

55. Sampath, M., Sinnamohideen, K., Lafortune, S., Teneketzis, D. (1995). Diagnosability of discrete-event systems. IEEE Transactions on Automatic Control, 40(9), 1555–1575. https://doi.org/10.1109/9.412626.

56. Sánchez, B.B., Alcarria, R., Sánchez-De-Rivera, D., Sánchez-Picot, Á. (2016). Enhancing process control in Industry 4.0 scenarios using cyber-physical systems. Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications, 7(4), 41–64. https://doi.org/10.22667/JOWUA.2016.12.31.041.

57. Scholz-Reiter, B., Dashkovskiy, S., Görges, M., Naujok, L. (2011). Stability analysis of autonomously controlled production networks. International Journal of Production Research, 49(16), 4857–4877. https://doi.org/10.1080/00207543.2010.505215.

58. Silva, D.R.C., Oliveira, G.M.B., Silva, I., Ferrari, P., Sisinni, E. (2018). Latency evaluation for MQTT and WebSocket protocols: an Industry 4.0 perspective. In: Proceedings – IEEE Symposium on Computers and Communications, Vol. 2018-June. Institute of Electrical and Electronics Engineers Inc., New Jersey, USA, pp. 1233–1238. https://doi.org/10.1109/ISCC.2018.8538692.

59. Sokolov, B., Dolgui, A., Ivanov, D. (2018). Optimal control algorithms and their analysis for short-term scheduling in manufacturing systems. Algorithms, 11(5), 57. https://doi.org/10.3390/a11050057.

60. Spiegler, V.L.M., Naim, M.M. (2017). Investigating sustained oscillations in nonlinear production and inventory control models. European Journal of Operational Research, 261(2), 572–583. https://doi.org/10.1016/j.ejor.2017.02.010.

61. Theorin, A., Bengtsson, K., Provost, J., Lieder, M., Johnsson, C., Lundholm, T., Lennartson, B. (2017). An event-driven manufacturing information system architecture for Industry 4.0. International Journal of Production Research, 55(5), 1297–1311. https://doi.org/10.1080/00207543.2016.1201604.

62. Van Der Aalst, W.M.P., Ter Hofstede, A.H.M. (2005). YAWL: yet another workflow language. Information Systems, 30(4), 245–275. https://doi.org/10.1016/j.is.2004.02.002.

63. Wollschlaeger, M., Sauter, T., Jasperneite, J. (2017). The future of industrial communication: automation networks in the era of the Internet of Things and Industry 4.0. IEEE Industrial Electronics Magazine, 11(1), 17–27. https://doi.org/10.1109/MIE.2017.2649104.

64. Wonham, W.M., Cai, K. (2019). Supervisory Control of Discrete-Event Systems. Communications and Control Engineering. Springer International Publishing, Cham. https://doi.org/10.1007/978-3-319-77452-7.

65. Wonham, W.M., Cai, K., Rudie, K. (2018). Supervisory control of discrete-event systems: a brief history. Annual Reviews in Control, 45, 250–256. https://doi.org/10.1016/j.arcontrol.2018.03.002.

66. Yang, T., Fan, W. (2016). Information management strategies and supply chain performance under demand disruptions. International Journal of Production Research, 54, 8–27. https://doi.org/10.1080/00207543.2014.991456.

67. Zhang, Q., Liu, P., Pannek, J. (2019). Combining MPC and integer operators for capacity adjustment in job-shop systems with RMTs. International Journal of Production Research, 57(8), 2498–2513. https://doi.org/10.1080/00207543.2018.1521022.