2022 № 4(57)

CONTENTS


A.Y. Zubarev

A. P. Ershov Institute of Informatics Systems, 630090, Novosibirsk, Russia

COMPARISON OF TRACE AND BISIMULATION EQUIVALENCES ON TIME PETRI NETS WITH WEAK TIME POLICY

DOI: 10.24412/2073-0667-2022-4-5-27

EDN: BQQUXA

The security of many information and computer systems is critically important, since their failure can lead to large losses. Behavioral equivalences play an important role in specification and verification, both to compare distinct systems and to reduce the structure of a system without affecting its behavior. Various behavioral equivalences for concurrent and distributed systems have been proposed in the literature, and the relationships between them have been fairly well studied. Two dichotomies are usually considered when comparing the equivalences. The first dichotomy is the ‘linear time - branching time’ spectrum. Here, various ways of taking into account the choice points between the alternative actions of systems are considered.

Alternative choice points are completely ignored in linear time semantics; as a result, the system behavior is described by the set of all sequences of system actions. Trace equivalence is a typical example of a linear time equivalence. The standard example of a branching time equivalence is bisimulation, which requires that systems behave in the same way, mutually simulating each other; bisimulation carefully takes into account the branching structure of system behaviors. The second dichotomy is the so-called ‘interleaving - partial order’ spectrum. Here, the question is how to take into account the partial order representing the causal dependencies between system actions. The essence of the interleaving approach is to represent concurrency not directly but as a nondeterministic choice between executions of linearly ordered actions. The advantages of the interleaving interpretation of concurrency are its simplicity and mathematical elegance, and the fact that it allows the analysis of some safety and liveness properties of systems. However, the difference between concurrency and nondeterminism is lost. A way to overcome the limitations of the interleaving approach is to model concurrency as the absence of causal dependence between system actions, which is represented, as a rule, by a partial order.

In step semantics, which lies between the two ends of the spectrum, concurrency is represented by steps, i.e., sets of concurrent actions.

Petri nets (PNs) are one of the generally accepted models for the analysis of concurrent and distributed systems. A PN consists of two disjoint sets of elements, places and transitions, together with a labeling function mapping each transition to an action. A state of a PN is called a marking: a subset of places that hold tokens as the net operates. A transition is enabled at a marking if all of its input places contain tokens. The firing of a transition enabled at a marking results in a new marking, in which tokens are consumed from the input places and produced in the output places of the transition.

When verifying systems, there is an obvious need to take time into account, and different timed extensions of Petri nets have been proposed. Time Petri Nets (TPNs) are now a well-established model for describing and studying safety-critical systems that require the verification of real-time (quantitative) characteristics in addition to functional (qualitative) properties. In a TPN, each transition is associated with a time interval and is assumed to have its own local clock. A state of a TPN consists of a current marking and the readings of the local clocks of the enabled transitions. A transition can fire from a state only if it is enabled at the corresponding marking and its clock reading lies within the associated interval. So the firing of an enabled transition can be delayed for a certain time, while the firing itself takes no time. State changes are of two types: either time elapses, i.e., the clocks of enabled transitions advance, or a transition fires, i.e., the current marking is changed to a new one and the clocks of the transitions that become enabled at the new marking (newly enabled transitions) are reset to zero.

There are two policies of time elapsing in TPNs, defining strong and weak semantics. In the former, time elapsing cannot exceed the upper bounds of enabled transitions, so an enabled transition must fire no later than when the upper bound of its time interval is reached. In the latter, any time elapsing is allowed, so enabled transitions are never forced to fire. It is known that the two semantics are incomparable w.r.t. timed weak bisimulation.

Memory policies in TPNs determine when the local clocks of enabled transitions are reset. Intermediate and atomic memory policies have been put forward in the literature. The former considers the intermediary marking, i.e., the marking after tokens are consumed from the input places and before they are produced in the output places of a firing transition t. A transition t' is regarded as newly enabled, and its clock is reset to zero after the firing of t, whenever t' is disabled at the intermediary marking and becomes enabled at the new marking, i.e., after tokens are produced in the output places of t. The latter policy instead treats a firing as a single step: the clock of t' is reset to zero only if t' is disabled at the marking before t fires and becomes enabled at the new marking after t fires. It is known that the marking reachability/coverability and boundedness problems are undecidable for time Petri nets with strong semantics and any memory policy, whereas they are decidable for TPNs with weak intermediate semantics but not with weak atomic semantics.
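
As an illustration only, here is a minimal Python sketch of the semantics just described (all names are ours, not the paper's): markings are sets of places, a state pairs a marking with the clock readings of enabled transitions, time elapsing follows the weak policy, and clock resets follow the intermediate memory policy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    name: str
    pre: frozenset    # input places
    post: frozenset   # output places
    interval: tuple   # (earliest firing time, latest firing time)

def enabled(marking, t):
    return t.pre <= marking

def elapse(state, d):
    """Weak time policy: any delay d >= 0 is allowed; the clocks of
    enabled transitions simply advance, even past their upper bounds."""
    marking, clocks = state
    return marking, {t: c + d for t, c in clocks.items()}

def fire(state, t, transitions):
    """Fire t if it is enabled and its clock lies within its interval;
    clocks are reset under the intermediate memory policy."""
    marking, clocks = state
    a, b = t.interval
    assert enabled(marking, t) and a <= clocks[t] <= b
    intermediate = marking - t.pre           # tokens consumed first ...
    new_marking = intermediate | t.post      # ... then produced
    new_clocks = {}
    for u in transitions:
        if enabled(new_marking, u):
            # newly enabled (clock reset) iff disabled at the
            # intermediary marking; otherwise the clock is kept
            newly = (u is t) or not enabled(intermediate, u)
            new_clocks[u] = 0.0 if newly else clocks[u]
    return new_marking, new_clocks
```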

The aim of this paper is to define trace and bisimulation equivalences in the dichotomy of ‘interleaving — partial order’ in terms of time Petri nets with weak time and intermediate memory policies.

Key words: time Petri nets, weak time semantics, intermediate memory policy, behavioral equivalences, interleaving semantics, step semantics, partial order semantics, time processes, trace and bisimulation equivalences.


Bibliographic reference:  Zubarev A. Y. Comparison of trace and bisimulation equivalences on time Petri nets with weak time policy //Problems of informatics. 2022, № 4. P.5-27. DOI:10.24412/2073-0667-2022-4-5-27. EDN: BQQUXA


A. M. Kalney

Technograd Plus Ltd, 630087, Novosibirsk, Russia, Institute of Computational Mathematics and Mathematical Geophysics SB RAS, 630090, Novosibirsk, Russia

OPTIMIZING PLACEMENT OF CONTROL DEVICES ON CHANNELS IN MONITORING NETWORKS

DOI: 10.24412/2073-0667-2022-4-28-38

EDN: FBQEUC

The main task of any information control in a network is to analyze the behavior of the network and to simulate options for its development based on real information. The more complex the network topology and configuration, the more information is required for an adequate analysis. When analyzing or designing large monitoring networks, the problem of choosing control devices (b-nodes) for gathering information often arises. The b-nodes transfer information, either directly or after some pre-processing, to a central node (c-node) over reliable channels. One of the main quality indices for such networks is the size of the area under reliable monitoring, which may be estimated by MENC, the mathematical expectation of the number of nodes connected with one special node.
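
For illustration, a straightforward Monte Carlo estimate of the MENC index as defined above (a sketch under our own naming; the paper itself relies on the exact methods discussed below):

```python
import random

def menc_estimate(nodes, edges, c, trials=10_000):
    """MENC: expected number of nodes connected with the special node c.
    edges maps a pair (u, v) to its presence probability."""
    total = 0
    for _ in range(trials):
        # sample a random subnetwork: keep each edge with its probability
        adj = {v: [] for v in nodes}
        for (u, v), p in edges.items():
            if random.random() < p:
                adj[u].append(v)
                adj[v].append(u)
        # count the nodes reachable from c in the sample
        seen, stack = {c}, [c]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        total += len(seen)
    return total / trials
```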

This problem of the optimal placement of control devices was described earlier in [1], but the model was limited to a graph, and no optimization algorithm was offered. In the present paper, a random hypernet with unreliable branches is considered, and a simulated annealing algorithm is applied to optimize the cost of placing b-nodes.

Since the need for an adequate representation of multi-layered networks is urgent, many researchers have proposed their own models, mainly while solving special tasks. Yet there are some papers that consider the problem in general terms. The best known models are sandwiching graphs [2], graphs with different kinds of edges [3], and layered complex networks (LCN) [4]. All these models lack a fundamental and systematic mathematical description. At the same time, more than 30 years ago the Russian scientist Vladimir Popkov introduced the concept of hypernets, which allows multi-layered networks of different kinds to be described on a common basis. Unfortunately, the first mention of this model in English occurred only in 2005 [12]. Later, several papers were published that solve special problems based on the hypernet model: optimal placement of monitoring devices on the channels of a communication network [6]; design of utility networks [7, 8]; optimization of transport networks [9]. The last two use random and fuzzy hypernets, respectively.

The task of exactly calculating the connectivity probability of a random network with unreliable elements belongs to the class of NP-hard problems; therefore, various approximate methods were previously used in practice, while exact methods were mostly of purely academic interest. However, the development of computer technology has revived interest in the practical use of exact methods, since it has become possible to calculate, in reasonable time, the reliability of small and medium-sized networks (up to tens of nodes). Among the exact methods for determining the connectivity probability of a network with unreliable elements, the factorization (Moore-Shannon) method is the most widely known. The method recursively splits a hypernet into simpler ones: one in which a chosen vertex is made "reliable" (its presence probability is set to one) and one in which it is removed. The recursion continues until a reliable path connecting the selected vertices is obtained, until a disconnected secondary network is obtained, or until a two-vertex hypernet is reached. Since the number of recursive calls grows exponentially with the number of vertices, additional techniques are required to accelerate the method. Two such techniques are used. First, before each recursive call, edges in hanging trees and in connected components of the secondary network that do not contain the selected vertices are deleted. Second, when a vertex is deleted, the network connectivity is checked: if both selected vertices are in the same connected component, the reliability is calculated within this component; if they lie in different components, the network is disconnected. Both techniques decrease the number of hypernet elements without changing the reliability index.
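
The following sketch shows the classical edge-factorization recursion on an ordinary graph with unreliable edges (our simplification; the paper applies the analogous recursion to hypernets with unreliable vertices, together with the reduction techniques above):

```python
def connected(edge_set, s, t):
    """Is t reachable from s over the given edges?"""
    adj = {}
    for u, v in edge_set:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for w in adj.get(u, ()):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return s == t

def reliability(edges, s, t):
    """Moore-Shannon factoring: R = p * R(e up) + (1 - p) * R(e down).
    edges maps frozenset({u, v}) to its presence probability."""
    if not connected([tuple(e) for e in edges], s, t):
        return 0.0                       # s-t disconnected: reliability 0
    pending = [e for e, p in edges.items() if p < 1.0]
    if not pending:
        return 1.0                       # all remaining edges are certain
    e, p = pending[0], edges[pending[0]]
    up = dict(edges); up[e] = 1.0        # branch 1: e is operational
    down = dict(edges); down.pop(e)      # branch 2: e has failed
    return p * reliability(up, s, t) + (1.0 - p) * reliability(down, s, t)

# Bridge network between s = 0 and t = 3, each edge present with p = 0.9:
bridge = {frozenset(e): 0.9 for e in [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]}
print(reliability(bridge, 0, 3))         # ≈ 0.97848
```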

The classic cost-reliability model is considered for optimization. Each placement of a b-node on a branch changes the hypernet elements: the new b-node splits the branch and also creates new edges in the secondary network.

It is required to design a hypernet taking channel throughputs and route lengths into account. To solve this problem, we search for the shortest route between each pair of nodes from X along the branches of the primary network, using a breadth-first search algorithm. Solving this problem yields the embedding of the secondary network into the primary one, and hence the hypernet.

Various heuristic algorithms have been considered. The simulated annealing (SA) algorithm seems quite appropriate because it does not require evaluating MENC in condition (3) too often; as a negative example, a genetic algorithm can be cited, which requires frequent evaluation of this condition. At every step of the SA algorithm, a solution satisfying condition (3) is found by a one-point random change.
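
A generic simulated annealing skeleton of the kind described (a sketch: the cost function, the one-point change, and constraint (3) are stand-ins supplied by the caller, since the paper's formulas are not reproduced here):

```python
import math
import random

def anneal(x0, cost, one_point_change, feasible,
           t0=1.0, cooling=0.995, steps=10_000):
    """Minimize cost(x) over feasible placements by simulated annealing."""
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = one_point_change(x)          # random one-point modification
        if not feasible(y):              # e.g. the MENC constraint (3)
            continue
        fy = cost(y)
        # accept improvements always, worsenings with probability e^(-Δ/t)
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                     # geometric cooling schedule
    return best, fbest
```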

This paper continues research on finding optimal placements in monitoring networks. Instead of the well-known graph model, a hypernet model is considered, and an appropriate optimization algorithm is offered. Future work may concern parallel implementations of the algorithm, extensions of the model (e.g., adding channel lengths), and the reliability maximization problem with a limited number of b-nodes.

Key words: Network reliability, hypernets, optimizing placement of control devices.

Bibliographic reference:  Kalney A. M. Optimizing Placement of Control Devices on Channels in Monitoring Networks //Problems of informatics. 2022, № 4. P.28-38. DOI:10.24412/2073-0667-2022-4-28-38. EDN: FBQEUC


A. S. Rodionov

Institute of Computational Mathematics and Mathematical Geophysics SB RAS, 630090, Novosibirsk, Russia

IS IT POSSIBLE TO ACHIEVE FURTHER ACCELERATION OF THE CALCULATION OF THE CONNECTIVITY CHARACTERISTICS OF A RANDOM GRAPH?

DOI: 10.24412/2073-0667-2022-4-39-52

EDN: FMEYBE

The article discusses new methods for accelerating the calculation of some connectivity characteristics of a random graph: the probability that a subset of vertices is connected (all-terminal reliability, ATR), the average probability of pairwise connectivity (ARC), the mathematical expectation of the size of the connected subgraph containing a selected vertex (MENC), and some others. These problems are proven to be NP-hard and, as a rule, approximate solutions are sought. However, with the development of computer technology and of parallel algorithms, finding exact solutions has become possible for graphs of dimensions sufficiently large for practical problems (up to hundreds of vertices in the case of a small average degree). In addition, the solutions found over the years, including various reduction and decomposition methods due to the author of this article [1-10], have made it possible to further increase the dimension of the graphs that can be handled. Exact solutions are also needed to assess the quality of approximate algorithms.

The most explored tasks are finding the ATR and the two-terminal reliability (pairwise, or s-t, connectivity), while various applications may require other characteristics of network reliability [11] and consider different constraints, the graph diameter in particular [12].

Various techniques are proposed for developing the well-known factorization method: instead of considering one resolving edge, a certain small subset of edges chosen in a special way is considered. It is shown how to use cut edges as such a subset, which leads to the effective use of known decomposition formulas for cuts of 1, 2, or 3 vertices [13].
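
For reference, the single-edge factoring step and its generalization to a small edge subset E' can be written as follows (notation ours, not the paper's):

```latex
R(G) = p_e \, R(G \mid e \text{ up}) + (1 - p_e) \, R(G \mid e \text{ down}),
\qquad
R(G) = \sum_{S \subseteq E'} \Bigl( \prod_{e \in S} p_e \Bigr)
       \Bigl( \prod_{e \in E' \setminus S} (1 - p_e) \Bigr)
       R\bigl(G \mid S \text{ up},\, E' \setminus S \text{ down}\bigr).
```

Factoring on a subset of k edges produces 2^k subproblems at once; choosing cut edges as E' lets many of them collapse via the decomposition formulas for 1-, 2-, and 3-vertex cuts mentioned above.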

Key words: random graph, network reliability, algorithm.

This work was carried out under the state contract with ICM&MG SB RAS (0251-2021-0005).

 

Bibliographic reference:  Rodionov A. S. Is it possible to achieve further acceleration of the calculation of the connectivity characteristics of a random graph? //Problems of informatics. 2022, № 4. P.39-52. DOI:10.24412/2073-0667-2022-4-39-52. EDN: FMEYBE


E. Kh. Gimadi*,**, A. A. Shtepa**

*Sobolev Institute of Mathematics SB RAS, 630090, Novosibirsk, Russia
**Novosibirsk State University, 630090, Novosibirsk, Russia

ASYMPTOTICALLY OPTIMAL APPROACH FOR THE MAXIMUM SPANNING TREE PROBLEM WITH GIVEN DIAMETER IN A COMPLETE UNDIRECTED GRAPH ON UNI(0; 1)-ENTRIES

DOI: 10.24412/2073-0667-2022-4-53-62

EDN: GMGWHI

We consider two well-known optimization problems: the Minimum Spanning Tree Problem and the Maximum Spanning Tree Problem. These problems have several extensions, for example, finding an extremal spanning tree with a bounded maximum vertex degree, or with a diameter bounded from above or from below by some integer. The diameter of a tree is the number of edges in the longest simple path within the tree connecting a pair of vertices. In the current work, we consider the intractable problem of finding a maximum-weight spanning tree with a given diameter in a complete undirected graph. We construct an O(n²)-time approximation algorithm for the Maximum Spanning Tree Problem with a given diameter in a complete undirected n-vertex graph and prove sufficient conditions for its asymptotic optimality in the case of independent, uniformly distributed UNI(0; 1)-entries. The algorithm uses an algorithm for the Minimum Spanning Tree Problem with a given diameter in a complete undirected graph.
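
To make the diameter definition above concrete, here is the standard double-BFS computation of a tree's diameter (an illustration of the definition only, not the authors' algorithm):

```python
from collections import deque

def bfs_farthest(tree, start):
    """Return the vertex farthest from start and its distance in edges."""
    dist = {start: 0}
    queue = deque([start])
    far = start
    while queue:
        u = queue.popleft()
        for v in tree[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > dist[far]:
                    far = v
                queue.append(v)
    return far, dist[far]

def tree_diameter(tree):
    """Number of edges in the longest simple path of the tree."""
    u, _ = bfs_farthest(tree, next(iter(tree)))
    _, d = bfs_farthest(tree, u)
    return d

# A simple path on 4 vertices has diameter 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
assert tree_diameter(path) == 3
```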

Key words: Maximum Spanning Tree, Minimum Spanning Tree, approximation algorithm, probabilistic analysis, asymptotic optimality.

The study was carried out within the framework of the state contract of the Sobolev Institute of Mathematics SB RAS (project FWNF-2022-0019) and supported by the Russian Foundation for Basic Research (project 20-31-90091).

Bibliographic reference:  Gimadi E. Kh., Shtepa A. A. Asymptotically Optimal Approach for the Maximum Spanning Tree Problem with Given Diameter in a Complete Undirected Graph on UNI(0; 1)-Entries //Problems of informatics. 2022, № 4. P.53-62. DOI:10.24412/2073-0667-2022-4-53-62. EDN: GMGWHI


M. P. Bakulina

Institute of Computational Mathematics and Mathematical Geophysics SB RAS, 630090, Novosibirsk, Russia

EFFICIENT LOSSLESS COMPRESSION OF LARGE INFORMATION ARRAYS

DOI: 10.24412/2073-0667-2022-4-63-69

EDN: HIQWPK

The problem of efficient lossless compression of large information arrays is considered. The use of efficient coding for such data allows not only reducing their physical size but also increasing query execution speed. In this paper, a new method for coding large information arrays is offered, which allows both numeric and string data to be compressed efficiently. Experimental data confirm the increased compression ratio and coding speed of the proposed method.

Key words: lossless coding, information array, compression ratio, coding time, method efficiency.


Bibliographic reference:  Bakulina M. P. Efficient lossless compression of large information arrays //Problems of informatics. 2022, № 4. P.63-69. DOI:10.24412/2073-0667-2022-4-63-69. EDN: HIQWPK


S. V. Bredikhin, V. M. Lyapunov, N. G. Scherbakova

Institute of Computational Mathematics and Mathematical Geophysics SB RAS, 630090, Novosibirsk, Russia

THE HYPER-NETWORK OF SCIENTIFIC CO-AUTHORSHIP. DB REPEC DATA ANALYSIS

DOI: 10.24412/2073-0667-2022-4-70-83

EDN: MJFKPC

The article deals with modeling a complex co-authorship network represented in the form of a hypergraph. The formal apparatus necessary to describe multiple relationships between coauthors is given, and two models of the analyzed object are presented. Based on real information extracted from the bibliographic database RePEc, a hypergraph of the co-authorship network is constructed.

Most of the previous studies consider the co-authorship relation between two authors as collaboration, so a network is represented as a simple graph in which a link relates a pair of authors who are coauthors of at least one scientific paper (SP). These pairwise networks have been studied from many aspects, such as degree distribution analysis, community extraction, and author ranking; see, for example, [4-8]. Such networks do not provide a complete description of collaboration: we only know whether scientists have collaborated, but not whether a group of authors linked together in the network were coauthors of the same paper. As a variant of representation that takes into account n-ary relations between authors, a bipartite graph may be considered, in which one partite set represents the authors and the other the SPs prepared by these authors. This makes it possible to use the apparatus of graph theory, but at the same time the heterogeneity of the nodes complicates the study of such topological properties as connectivity and clustering. Therefore, in [10] it is proposed to represent a complex system by a generalization of a graph, a hypergraph [11], and to call it a hyper-network. Edges of a hypergraph can relate groups of more than two nodes.
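
A tiny sketch (data and names ours) of the information loss just described: three pairwise papers and one three-author paper produce the same simple graph but different hypergraphs.

```python
from itertools import combinations

def pairwise_projection(papers):
    """Simple-graph view: one edge per coauthor pair."""
    return {frozenset(pair) for authors in papers.values()
            for pair in combinations(sorted(authors), 2)}

def hyperedges(papers):
    """Hypergraph view: each paper is a hyperedge over its author group."""
    return {frozenset(authors) for authors in papers.values()}

three_pairs = {"p1": {"a", "b"}, "p2": {"b", "c"}, "p3": {"a", "c"}}
one_triple = {"q1": {"a", "b", "c"}}

# The pairwise projections coincide, so the simple graph cannot tell the
# two collaboration histories apart; the hyperedge sets differ.
assert pairwise_projection(three_pairs) == pairwise_projection(one_triple)
assert hyperedges(three_pairs) != hyperedges(one_triple)
```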

Key words: complex networks, scientific co-authorship, hypergraph, bipartite graph, bibliometry.

This work was carried out under state contract with ICMMG SB RAS (0251-2021-0005).

Bibliographic reference:  Bredikhin S. V., Lyapunov V. M., Scherbakova N. G. The hyper-network of scientific co-authorship. DB RePEc data analysis //Problems of informatics. 2022, № 4. P.70-83. DOI:10.24412/2073-0667-2022-4-70-83. EDN: MJFKPC


R. Milihat*, M. Kalimoldaev**, A. Abdildaeva**, M. Otman***,****

*Higher School of Information Technologies and Engineering, Astana International University, Astana, Kazakhstan
**Institute of Information and Computational Technologies, Almaty, Kazakhstan
***Department of Communication Technology and Networks, University Putra Malaysia, 43400 UPM Serdang, Selangor Darul Ehsan, Malaysia
****Laboratory of Computational Science and Mathematical Physics, Institute for Mathematical Research, University Putra Malaysia, 43400 UPM Serdang, Selangor DE, Malaysia
 

HOW ARTIFICIAL INTELLIGENCE IS REVOLUTIONIZING THE MARKETING

DOI: 10.24412/2073-0667-2022-4-84-106

EDN: NSTTIT

The power of artificial intelligence (AI) combined with machine learning algorithms is rapidly revolutionizing the business world and generating new trends in marketing. Many high-tech companies have already started benefitting from the power of AI in marketing. Tech giants such as Alphabet (Google), Amazon, Apple, and Meta (Facebook) are earning multi-billion-dollar incomes with the assistance of AI-based algorithms. In recent years, the impressive results of AI in marketing have attracted many scientists around the globe.

In this paper, we review the role of AI in marketing and its strategic implementation framework, the main research directions of AI in business, and the key challenges and opportunities of state-of-the-art technologies in the time of the fourth industrial revolution. We argue that AI will substantially change both marketing strategies and customer behaviors and open new avenues for industries and businesses. We also describe the key challenges of AI in marketing. In the future, AI will become a new branch of business management and commerce, and the integration of computer science and business management will be a novel trend.

Key words: Artificial intelligence, Machine learning, Marketing, Digital Marketing, Marketing strategy.

Bibliographic reference:  Milihat R., Kalimoldaev M., Abdildaeva A., Otman M. How Artificial Intelligence is Revolutionizing the Marketing //Problems of informatics. 2022, № 4. P.84-106. DOI:10.24412/2073-0667-2022-4-84-106. EDN: NSTTIT


V. E. Malyshkin*,**,***, A. V. Chmil**, V. A. Perepelkin*,**,***

*Novosibirsk State University, 630090, Novosibirsk, Russia
**Institute of Computational Mathematics and Mathematical Geophysics SB RAS, 630090, Novosibirsk, Russia
*** Novosibirsk State Technical University, 630073, Novosibirsk, Russia

DYNAMIC LOAD BALANCING ALGORITHMS APPLICATION AUTOMATION SUBSYSTEM DEVELOPMENT FOR LUNA SYSTEM

DOI: 10.24412/2073-0667-2022-4-107-119

EDN: SKJWTU

The imbalance of computational load is one of the key problems of parallel programming. It arises from various causes, both hardware and software. Hardware imbalance occurs due to the heterogeneity of computing system resources. Program imbalance is associated with such factors as the dynamics of the simulated phenomenon and an inefficiently parallelized program containing excessive communications, an unsuccessful distribution of computations between nodes, etc.

The parallel implementation on supercomputers of large numerical models that require dynamic load balancing across computing nodes is a complex task of system parallel programming. Solving it requires particular qualifications, which ordinary supercomputer users in the field of scientific numerical modeling usually do not have; this makes it difficult to use supercomputers for the relevant tasks. The problem is partially solved by specialized software in which dynamic load balancing has already been implemented. However, using such software is not always possible, especially for new numerical models.

Dynamic load balancing is a relatively well-developed topic: there are many publications, algorithms, methods, software implementations, and studies of their properties. Nevertheless, ensuring the efficient and balanced use of supercomputer computing resources is still time-consuming and requires relevant qualifications. Even a "simple" adjustment of the parameters of a load balancing algorithm can become an insurmountable problem in practice. The problem is especially relevant for modern supercomputers of the petaflops and exaflops ranges, since providing a sufficiently full load of their computing resources is non-trivial even for simple tasks.

The elimination of the imbalance is a non-trivial task, for which there is no single method. There are many algorithms aimed at eliminating the imbalance, but none of them is universal.

The principal solution is the automation of dynamic load balancing. Automation here refers to a situation where various methods, algorithms, and programs that perform dynamic load balancing are accumulated in some database or library in a form that allows their automatic application. The user creates his program in such a way that the corresponding methods, algorithms, and programs are applied automatically, without the user needing a deep understanding of dynamic load balancing. A specific case is the support for dynamic load balancing in software such as Charm++ or PICADOR. A general case would be a situation where the programming system is not specialized, and all the knowledge about dynamic load balancing accumulated by researchers is available and automatically applied from the library.
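
As a toy sketch of that automation idea (all names are ours, not LuNA's API): balancing algorithms accumulate in a registry together with an applicability predicate, and the system, not the user, selects one from runtime metrics.

```python
BALANCERS = []

def register(applies_to):
    """Store a balancing algorithm with its applicability predicate."""
    def wrap(fn):
        BALANCERS.append((applies_to, fn))
        return fn
    return wrap

@register(lambda m: m["imbalance"] > 0.3)
def diffusion(loads, edges):
    """Shed half of the load difference along edges toward lighter nodes."""
    avg = sum(loads.values()) / len(loads)
    return [(u, v, (loads[u] - loads[v]) / 2)
            for u, v in edges if loads[u] > avg >= loads[v]]

def pick_balancer(metrics):
    """Automatic application: the runtime chooses, the user does not."""
    for applies_to, fn in BALANCERS:
        if applies_to(metrics):
            return fn
    return None
```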

It is significant that there are no universal dynamic balancing algorithms, due to the algorithmic complexity of this problem in its general formulation. Therefore, various particular and heuristic methods and algorithms used in practical tasks are being researched in the field. Accordingly, the automation of dynamic load balancing is based on the accumulation of these particular and heuristic algorithms, as well as of information about their appropriate usage and about how to determine which algorithm suits a particular situation.

The LuNA system for the automatic construction of parallel programs is being developed with this circumstance in mind. One of its tasks is to ensure the accumulation and automatic application of knowledge about dynamic load balancing. The paper examines how well the LuNA system is fundamentally suited to provide this accumulation and automatic application, and reports the extent to which the approach is currently implemented. In particular, the results of an experimental study of the performance characteristics of LuNA programs using various dynamic load balancing algorithms are presented.

Key words: load balancing automation, fragmented programming technology, LuNA system.


Bibliographic reference:  Malyshkin V. E., Chmil A. V., Perepelkin V. A. Dynamic Load Balancing Algorithms Application Automation Subsystem Development for LuNA System //Problems of informatics. 2022, № 4. P.107-119. DOI:10.24412/2073-0667-2022-4-107-119. EDN: SKJWTU


V. V. Shakhov*,**, H. Chen***, A. N. Yurgenson**, A. V. Loshkarev****

* Novosibirsk State Technical University, 630073, Novosibirsk, Russia,
**Institute of Computational Mathematics and Mathematical Geophysics SB RAS, 630090, Novosibirsk, Russia,
***China University of Petroleum, 266580, Qingdao, China,
****Siberian State University of Telecommunications and Informatics, 630102, Novosibirsk, Russia

ON RELIABILITY OF LINEAR WIRELESS SENSOR NETWORKS

DOI: 10.24412/2073-0667-2022-4-120-128

EDN: XDGSRY

Many solutions in the field of Internet of Things architecture are based on the results of wireless sensor network (WSN) research. Currently, corporations and government agencies, especially in the US, Europe, and the Middle East, are making serious efforts to research, develop, and patent WSN-based technologies for monitoring long pipelines of various purposes. WSNs are able to carry out continuous monitoring of critical infrastructure objects, detect anomalies in real time, and deliver the necessary data to the decision-making center. However, this requires solving a number of problems, such as the rational placement of nodes, the optimization of energy consumption, and the efficient organization of data flows, which in turn require methods for assessing the reliability and lifetime of the WSN. After all, these networks may be operated in harsh climatic conditions, and network nodes may be located in places that are difficult to access for regular inspection.

The main role of sensor nodes in a WSN used for monitoring is to periodically collect and transmit data to an intelligent central base station (sink), where the collected data are processed to detect various anomalous events. In large-scale WSNs used to monitor long objects, such as pipelines, most sensor nodes are geographically distant from the base station and are usually equipped with self-contained low-cost batteries of relatively small capacity. This circumstance largely determines the main disadvantages of extended WSNs: the periodic transmission of raw data over long distances through several hops to the base station rapidly discharges the batteries of sensor nodes and reduces the lifetime of the network. Other disadvantages include the low reliability of wireless channels, the high cost of bandwidth, and the low level of data security. Thus, there is a need to solve optimization problems for extended WSNs in which the objective function or the constraints contain a reliability indicator.

The authors propose an approach to solving a number of problems that arise when monitoring long pipelines using wireless sensor networks.

A typical scenario used in a number of recent publications on this topic is considered. The WSN consists of several sensor nodes located on the surface of the pipe; the topology is a simple chain, at the end of which there is a base station. The sensors are responsible for collecting data and periodically sending packets to the base station. All nodes play an important role in data forwarding: the node closest to the base station transmits data directly to the sink, while intermediate nodes relay packets from other nodes, i.e., data sent toward the sink is relayed by the nodes located between the sender and the sink. The main power consumption is caused by traffic transmission. Obviously, this leads to excessive energy expenditure at the sensor nodes located closer to the base station due to the high asymmetric load on these nodes.
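
A back-of-the-envelope illustration (ours) of this asymmetry: in a chain of n sensors, with node n adjacent to the base station, node i forwards its own packet plus those of the i - 1 nodes behind it, so the node next to the sink carries the whole traffic of the chain.

```python
def packets_sent_per_round(n):
    """Packets transmitted by each node of an n-node chain per round,
    numbering nodes 1 (farthest from the sink) to n (next to the sink)."""
    return {i: i for i in range(1, n + 1)}

print(packets_sent_per_round(5))   # {1: 1, 2: 2, 3: 3, 4: 4, 5: 5}
# With battery capacity C and energy cost c per packet, node n survives
# about C / (c * n) rounds and is the lifetime bottleneck of the network.
```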

Each sensor node performs periodic monitoring within its sensitivity range. All sensor nodes are initially in the same conditions, have similar communication capabilities and power consumption, and their behavior is described by the same conceptual models. Each node transmits its packet to the neighboring node in the direction of the base station. The corresponding software module deployed on the base station receives data from all sensor nodes and either decides on the presence or absence of a problem on its own or transmits the data via a highly reliable IP network to the decision center.

To formalize the reliability criteria, the corresponding mathematical models based on Markov processes have been developed. The obtained results make it possible to find a compromise between the reliability of the monitoring network and the costs of its deployment and maintenance.
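
One of the simplest Markov models of this kind (our assumptions, not necessarily the authors' model): nodes fail independently at a constant rate, and the chain delivers data only while all nodes are alive, so the time to network failure is exponential.

```python
import math

def chain_reliability(n, lam, t):
    """P(an n-node chain is still operational at time t), with each node
    failing independently at rate lam (a series system)."""
    return math.exp(-n * lam * t)

def mttf(n, lam):
    """Mean time to failure of the chain."""
    return 1.0 / (n * lam)

# Doubling the chain length halves the mean time to failure, which is
# one way deployment cost (more nodes) trades off against reliability.
assert abs(mttf(10, 0.01) - 2 * mttf(20, 0.01)) < 1e-9
```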

Key words: wireless sensor networks, Internet of Things, structural health monitoring.


Bibliographic reference:  Shakhov V. V., Chen H., Yurgenson A. N., Loshkarev A. V. On reliability of linear wireless sensor networks //Problems of informatics. 2022, № 4. P.120-128. DOI:10.24412/2073-0667-2022-4-120-128. EDN: XDGSRY