Volume 2(59)

CONTENTS

  1. Gurieva Yu., Vasiliev E., Smirnov L. Conservation laws in a neural network approach to numerical solving of the nonlinear Schrodinger equation
  2. Silenko D. I., Lebedev I. G. Global optimization algorithm that uses decision trees to find local extrema
  3. Korotysheva A. A., Zhukov S. N., Milov V. R., Yegorov Y. S., Chekusheva A. Y., Dubov M. S. Classification of unlabeled battery on X-ray images using machine learning methods
  4. Kudryavtsev A. A., Malyshkin V. E., Nushtaev Yu. Yu., Perepelkin V. A., Spirin V. A. Efficient fragmented implementation of the two phase fluid boundary value problem
  5. Matolygina N. A., Gromov M. L., Matolygin A. K. Application of the tensor approach to the software implementation of the cellular automaton flow model
  6. Mikulik I., Blagoveshchenskaya E. Parallel implementation of the ant colony algorithm with parameters update using the genetic algorithm

Yu. Gurieva, E. Vasiliev, L. Smirnov

Lobachevsky State University, 603022, Nizhny Novgorod, Russia

CONSERVATION LAWS IN A NEURAL NETWORK APPROACH TO NUMERICAL SOLVING OF THE NONLINEAR SCHRODINGER EQUATION

DOI: 10.24412/2073-0667-2023-2-5-20

EDN: LFBNWY

We consider a possible modification of a neural network approach to the numerical solving of nonlinear partial differential equations (PDEs) describing physical systems with integrals of motion. In this approach, solutions of the equations are approximated by deep neural networks using the physics-informed method.

The physics-informed neural network (PINN) approach builds nonlinear function approximators that integrate observational data, initial and boundary conditions, and the description of the physical system in the form of a PDE by embedding the corresponding residuals into the loss function of a neural network. The problem of solving a nonlinear differential equation thus turns into the problem of minimizing the squared residuals over the domain, which is achieved by automatic differentiation and stochastic gradient descent.

The proposed modification of this method incorporates the corresponding conservation laws into the training of the neural networks and is expected to improve the physical properties of the trained nonlinear regression models. The purpose of this work is to modify the neural network with a conservation-law constraint so that the predicted solution satisfies the continuity equation better and faster, as well as to speed up convergence and provide better accuracy. The conservative properties of the approximation are improved by a specific loss-function regularization: the residuals of the conserved quantities are added to the loss function used to train the neural network.

To test this method, we considered the one-dimensional nonlinear Schrodinger equation and its conservation laws in integral form. The number of quanta and the energy were used as the conserved physical quantities. In our experiments, their values were calculated at several equidistant time moments and compared with the reference values to find the corresponding residuals and implement the conservation constraint in the loss function. The average residuals of the number of quanta and of the energy for the prediction are therefore used as quality metrics, along with the pointwise difference between the predicted and reference solutions (validation error). Reference functions for the validation datasets are derived from analytical expressions for the exact solutions.
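
For illustration, the sketch below shows how such a conservation-regularized loss could be assembled in PyTorch for the one-dimensional nonlinear Schrodinger equation i u_t + 0.5 u_xx + |u|^2 u = 0, with the number of quanta N = ∫|u|^2 dx penalized at several time slices. The network interface net(x, t), the weight lam_c, the reference value N_ref and the collocation grids are illustrative assumptions, not the authors' implementation.

```python
# A minimal PyTorch-style sketch of a conservation-regularized PINN loss;
# all names (net, lam_c, N_ref, grids) are assumptions for illustration.
import torch

def derivatives(f, x, t):
    """First time derivative and second space derivative of f(x, t) via autograd."""
    f_x, f_t = torch.autograd.grad(f, (x, t), grad_outputs=torch.ones_like(f),
                                   create_graph=True)
    f_xx = torch.autograd.grad(f_x, x, grad_outputs=torch.ones_like(f_x),
                               create_graph=True)[0]
    return f_t, f_xx

def pinn_loss(net, x, t, x_ic, u0, lam_c, t_slices, x_grid, N_ref):
    # PDE residual of i*u_t + 0.5*u_xx + |u|^2 u = 0 with u = p + i*q.
    p, q = net(x, t)                       # network predicts Re(u), Im(u)
    p_t, p_xx = derivatives(p, x, t)
    q_t, q_xx = derivatives(q, x, t)
    amp2 = p**2 + q**2
    res_re = -q_t + 0.5 * p_xx + amp2 * p
    res_im =  p_t + 0.5 * q_xx + amp2 * q
    loss_pde = (res_re**2 + res_im**2).mean()

    # Initial-condition residual at t = 0 against a complex-valued u0.
    p0, q0 = net(x_ic, torch.zeros_like(x_ic))
    loss_ic = ((p0 - u0.real)**2 + (q0 - u0.imag)**2).mean()

    # Conservative regularization: the number of quanta N = \int |u|^2 dx,
    # evaluated on a uniform x grid at several time slices, must stay at N_ref.
    dx = x_grid[1] - x_grid[0]
    loss_cons = 0.0
    for ts in t_slices:                    # ts are plain floats
        ps, qs = net(x_grid, torch.full_like(x_grid, ts))
        N_pred = ((ps**2 + qs**2) * dx).sum()
        loss_cons = loss_cons + (N_pred - N_ref)**2
    loss_cons = loss_cons / len(t_slices)

    return loss_pde + loss_ic + lam_c * loss_cons
```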

This modified neural network approach is applied to different classes of analytical solutions of the nonlinear Schrodinger equation: a single soliton, the interaction of two solitons (in breather form), and a first-order rogue wave. For each solution, we apply three forms of the conservative regularization: the number-of-quanta constraint, the energy constraint, and their sum. The training curves and predictions are compared with the solution obtained with the original loss function (baseline).

It is shown that the introduction of the additional conservative constraints into the loss function reduces the residuals of the conserved quantities for training and prediction in all cases. For the simplest one-soliton solution, the regularizations improve not only the conservation quality metrics but also the pointwise difference from the reference within the same training time. The best result was obtained by combining the constraints: the validation error is reduced by more than three times. However, for more complex solution forms, such as two solitons and the rogue wave, the results are not as good. The conservative constraints significantly change the shape of the loss function, so the training curves start to plateau and the training process becomes more unstable. For the most complex case, the two-soliton interaction, about two times more optimization steps are required to converge. The validation error is improved only with the energy constraint in both cases: for the two-soliton solution it is reduced by 13 %, and for the rogue wave by 67 %. Therefore, the effect of the conservative modification of the deep learning approach for nonlinear partial differential equations is individual for different systems and conserved quantities. The generalization ability of such neural networks should be further investigated and tested on different problems.

Key words: deep learning, neural networks, nonlinear Schrodinger equation, conservation laws, solitons.

Bibliographic reference: Gurieva Yu., Vasiliev E., Smirnov L. Conservation laws in a neural network approach to numerical solving of the nonlinear Schrodinger equation //journal “Problems of informatics”. 2023, № 2. P.5-20. DOI:10.24412/2073-0667-2023-2-5-20



D. I. Silenko, I. G. Lebedev

Lobachevsky State University of Nizhny Novgorod, 603022, Nizhny Novgorod, Russia

GLOBAL OPTIMIZATION ALGORITHM THAT USES DECISION TREES TO FIND LOCAL EXTREMA

DOI: 10.24412/2073-0667-2023-2-21-33

EDN: MLGKOX

The paper considers algorithms for solving multidimensional global optimization problems that use a decision tree to reveal the attraction regions of local minima. We suppose that the target function is defined as a “black box” and satisfies the Lipschitz condition with an unknown constant. We propose a method for identifying the neighborhoods of local extrema of the target function based on an analysis of the accumulated search information using machine learning methods. This allows us to decide when to run a local method, which can speed up the convergence of the algorithm. The proposition is confirmed by the results of numerical experiments demonstrating the speedup when solving a series of test problems.
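
As a schematic illustration (not the authors' algorithm), the sketch below shows how accumulated search information could be fed to a decision tree to detect that the incumbent point lies in a well-sampled, low-spread leaf, treated as a plausible attraction region, and to trigger a local method from it. The function names, thresholds and tree settings are assumptions.

```python
# Schematic sketch: a decision tree over accumulated trial points suggests when
# to launch a local method. All names and thresholds are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from scipy.optimize import minimize

def maybe_run_local_search(points, values, objective, bounds,
                           min_leaf_size=10, spread_tol=1e-2):
    """points: (n, d) evaluated trial points; values: (n,) objective values."""
    tree = DecisionTreeRegressor(max_depth=6, min_samples_leaf=min_leaf_size)
    tree.fit(points, values)

    best = points[np.argmin(values)]
    leaf_id = tree.apply(best.reshape(1, -1))[0]        # leaf of the best point
    same_leaf = tree.apply(points) == leaf_id           # points in that leaf
    spread = np.ptp(values[same_leaf])                  # value spread in the leaf

    # If the leaf around the incumbent looks like a single attraction region
    # (enough samples, small spread), refine it with a local method.
    if same_leaf.sum() >= min_leaf_size and spread < spread_tol:
        res = minimize(objective, best, method="L-BFGS-B", bounds=bounds)
        return res.x, res.fun
    return best, values.min()
```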

Key words: global optimization, multiextremal functions, parallel computing, decision tree.

The work was supported by the Ministry of Science and Higher Education of the Russian Federation (project no. FSWR-2023-0034), and by the Research and Education Mathematical Center (project no. 075-02-2022-883).

Bibliographic reference: Silenko D. I., Lebedev I. G. Global optimization algorithm that uses decision trees to find local extrema //journal “Problems of informatics”. 2023, № 2. P.21-33. DOI:10.24412/2073-0667-2023-2-21-33



A. A. Korotysheva*, S. N. Zhukov*, V. R. Milov**, Y. S. Yegorov**, A. Y. Chekusheva**, M. S. Dubov***

*Lobachevsky State University of Nizhny Novgorod, 603022, Nizhny Novgorod, Russia
**Nizhny Novgorod State Technical University n. a. R. E. Alekseev, 603950, Nizhny Novgorod, Russia
***LLC “Mabex”, 603122, Nizhny Novgorod, Russia

CLASSIFICATION OF UNLABELED BATTERY ON X-RAY IMAGES USING MACHINE LEARNING METHODS

DOI: 10.24412/2073-0667-2023-2-34-44

EDN: ACWWCK

The problem of identifying and classifying hazardous and valuable types of municipal solid waste (MSW), especially unlabeled cell batteries, has become increasingly important in the light of current global environmental policies, which emphasize the need for increased recycling and utilization of waste. With the introduction of a variety of environmental initiatives, proper identification and classification of MSW is essential to reduce its environmental impact. This includes identifying and classifying hazardous and valuable materials, such as cell batteries, so that they are reused and recycled rather than disposed of in landfills. Furthermore, effective strategies for the detection and classification of MSW are needed to maximize the economic and environmental benefits of recycling and waste utilization. This article describes a computer-vision approach to the identification of standard-size, unlabeled cylindrical cell batteries. To achieve this goal, a video camera and an X-ray machine are used to acquire and process images. The images captured by the video camera are processed in a series of steps involving data preprocessing, feature extraction and model training. The extracted features are then combined into a model which can be used to accurately detect and recognize cell batteries in the MSW stream on the conveyor belt. The developed procedures ensure a sufficiently high-quality classification of batteries with intact labels and can therefore be used to identify batteries in multiple scenarios. An additional step of digital radiography image processing is proposed, which allows recognition even when the marking is significantly damaged. This approach offers a dependable and accurate method for classifying batteries even when their markings are no longer clearly visible or are completely obscured, which is a great benefit, as previous techniques relied on the clarity of the markings and struggled when those markings were faint or absent. The core of the battery identification system is a neural network trained on a data set containing X-ray images of various types of batteries along with the associated classes. This MobileNetV2 neural network is used to extract features from the images, allowing the system to correctly classify the batteries for further sorting. The proposed method of neural network battery classification, including the processing of optical and X-ray images, thus forms the backbone of a software and hardware complex for automated MSW sorting lines. The use of this system to identify and sort batteries would greatly reduce manual labor, improve accuracy and increase the efficiency of the sorting process. Additionally, it could reduce the time required to sort the batteries, since the neural network processes images much faster than a human can, making the MSW sorting process more accurate and efficient.
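
A minimal sketch of the kind of MobileNetV2-based classifier described above is given below; the dataset path, image size, number of battery classes and training settings are placeholders rather than the authors' actual configuration.

```python
# Minimal Keras sketch of a MobileNetV2-based battery-image classifier.
# The dataset directory and NUM_CLASSES are assumed placeholders.
import tensorflow as tf

NUM_CLASSES = 5            # assumed number of battery types
IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "xray_batteries/train", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False     # use the pretrained backbone as a feature extractor

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # map [0, 255] to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```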

Key words: machine learning, X-ray images, neural network, batteries, image classification.

This work was funded by the Fund for the Development of Small Forms of Enterprises in the Scientific and Technical Sphere (Agreement No. 57GS1IIS12-D7/72200 of 21.12.2021).

Bibliographic reference: Korotysheva A. A., Zhukov S.N., Milov V.R., Yegorov Y.S., Chekusheva A.Y., Dubov M.S. Classification of unlabeled battery on X-ray images using machine learning methods //journal “Problems of informatics”. 2023, № 2. P.34-44. DOI:10.24412/2073-0667-2023-2-34-44



A. A. Kudryavtsev*, V. E. Malyshkin*,**,***, Yu. Yu. Nushtaev*, V. A. Perepelkin*,**,***, V. A. Spirin*

*Novosibirsk State University, 630090, Novosibirsk, Russia,
**Institute of computational mathematics and mathematical geophysics SB RAS, 630090, Novosibirsk, Russia,
***Novosibirsk State Technical University, 630073, Novosibirsk, Russia

EFFICIENT FRAGMENTED IMPLEMENTATION OF THE TWO PHASE FLUID BOUNDARY VALUE PROBLEM

DOI: 10.24412/2073-0667-2023-2-45-73

EDN: IWCDKX

Program construction automation is an approach that can potentially reduce the complexity and laboriousness of development, debugging and modification of numerical parallel programs for multicomputers. In high performance computing it is important not just to construct a valid program, but also to make it efficient, which is a challenging problem with no satisfactory general solution. Thus, various programming systems are only capable of providing high efficiency of constructed programs for a limited range of applications. To achieve this, the systems employ various heuristics and particular effective solutions. The evolution of parallel program construction automation tools consists in accumulating such heuristics and particular solutions in order to improve the efficiency of constructed programs, as well as to widen the range of applications a system can handle effectively. It is therefore important to investigate particular manual implementations of numerical programs from the perspective of further automating such construction. Fragmented programming technology is an approach to the development and automated construction of numerical parallel programs. It is based on the theory of parallel program synthesis on the basis of computational models and is partially supported by the LuNA system, a system for automated construction of numerical parallel programs for distributed memory systems (multicomputers). The paper is devoted to the study of a particular application, a solver for a two-phase fluid boundary value problem in the 3D case with wells. The application is implemented as a fragmented program in two versions: the first one is based on conventional means (MPI and OpenMP), and the second one uses the LuNA system.

The basic idea behind fragmented programming is to consider a parallel program as an aggregate of sequential parts called computational fragments (CFs). Each CF is implemented by a conventional sequential subroutine with no side effects. The input and output arguments of CFs are immutable pieces of data called data fragments (DFs). Execution is treated as execution of a set of CFs in a data-flow manner, where each CF is ready for execution once all its input DFs have been computed; a CF's execution produces a number of output DFs. If the program is represented as a set of CFs and DFs, a system can perform the execution and provide dynamic properties of the execution, such as dynamic load balancing.
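
As a toy illustration of this execution model (not LuNA itself or its API), the sketch below runs a small set of computational fragments in a data-flow manner: a fragment is submitted for execution as soon as all of its input data fragments have been produced.

```python
# Toy data-flow executor illustrating the CF/DF model: each computational
# fragment (CF) runs once all of its input data fragments (DFs) exist.
from concurrent.futures import ThreadPoolExecutor

# Each CF: name -> (input DF names, output DF name, pure function of inputs)
cfs = {
    "init_a": ((), "a", lambda: 2.0),
    "init_b": ((), "b", lambda: 3.0),
    "add":    (("a", "b"), "s", lambda a, b: a + b),
    "square": (("s",), "s2", lambda s: s * s),
}

def run_dataflow(cfs):
    dfs, pending = {}, dict(cfs)
    with ThreadPoolExecutor() as pool:
        while pending:
            # CFs whose input DFs are all computed are ready to execute.
            ready = {n: cf for n, cf in pending.items()
                     if all(i in dfs for i in cf[0])}
            futures = {n: pool.submit(cf[2], *(dfs[i] for i in cf[0]))
                       for n, cf in ready.items()}
            for n, fut in futures.items():
                dfs[ready[n][1]] = fut.result()   # DFs are immutable results
                del pending[n]
    return dfs

print(run_dataflow(cfs))   # {'a': 2.0, 'b': 3.0, 's': 5.0, 's2': 25.0}
```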

The LuNA system offers a domain-specific language, LuNA, for describing the set of CFs and DFs as a LuNA program. The system then translates the program into an intermediate representation executable by the runtime subsystem. The runtime subsystem is essentially a distributed virtual machine which executes CFs in a data-flow manner. Such an approach significantly simplifies parallel program construction, since the programmer does not do parallel programming as such: he only describes the set of CFs and DFs and provides conventional sequential C++ subroutines that implement the CFs. No programming of communications, synchronization, memory management or other low-level details is required. However, the efficiency of execution of LuNA programs may be significantly lower than that of a program developed manually with conventional parallel programming means, because constructing an efficient parallel program from its high-level specification is algorithmically hard in the general case. To help the LuNA system construct more efficient programs, the programmer is provided with means to tune the construction process, called recommendations and directives. Using them can significantly increase the efficiency of the constructed program by supplying the system with the programmer's insight into how the fragments should be executed. Such information includes hints on the distribution and redistribution of CFs and DFs over nodes, the order of CF execution, garbage collection directives, etc.

In the paper, an in-depth analysis of the considered application is provided to elaborate an efficient parallel implementation of the numerical algorithm in a multi-core distributed environment. An efficient conventional distributed program is then developed, using MPI and OpenMP, and described in the paper. After that, a LuNA program is developed and optimized; the process of its development and optimization is presented to allow the experience to be reused in future development of similar fragmented programs. Finally, an experimental study of the efficiency of the constructed programs is presented. The implementations were examined on a representative set of parameters in three different hardware environments, namely the Novosibirsk State University Computing Center and the Joint Supercomputer Center of RAS with Ethernet and InfiniBand interconnects. The conventional distributed program showed a speedup of 2.3x on 6 nodes, which is a satisfactory result for this class of applications. The LuNA program showed about a 3x slowdown on up to 16 computing nodes compared to the MPI implementation, which is a good result for an automatic parallel program construction system.

To conclude, the research has resulted in an efficient MPI implementation of the application based on an in-depth analysis of the numerical algorithm. The current version of the LuNA system was tested for its ability to construct efficient parallel programs in real-life computations, and the tests showed that it is capable of doing so. All the implementations are described in the paper in detail to allow other programmers to reuse the experience when implementing and optimizing other fragmented programs. The conducted research can also serve as the basis for developing system algorithms capable of automatically optimizing the efficiency of similar LuNA programs.

Key words: fragmented programming, LuNA system, parallel programs construction automation, high performance computing, case study.

This work was carried out under state contract with ICMMG SB RAS 0251-2021-0005.

Bibliographic reference: Kudryavtsev A. A., Malyshkin V.E., Nushtaev Yu. Yu., Perepelkin V. A., Spirin V. A. Efficient fragmented implementation of the two phase fluid boundary value problem //journal “Problems of informatics”. 2023, № 2. P.45-73. DOI:10.24412/2073-0667-2023-2-45-73



N. A. Matolygina, M. L. Gromov, A. K. Matolygin

National Research Tomsk State University, 634050, Tomsk, Russia

APPLICATION OF THE TENSOR APPROACH TO THE SOFTWARE IMPLEMENTATION OF THE CELLULAR AUTOMATON FLOW MODEL

DOI: 10.24412/2073-0667-2023-2-74-85

EDN: JIPOXl

Cellular-automaton models are actively used to model physical processes. The cellular automata in these models are large and require a large number of iterations to observe the effects of interest. Researchers use parallel technologies to organize the calculations in order to obtain results quickly. The choice of technology usually depends on the researcher's level of skill in parallel programming, which is a complex skill that requires learning a lot of theory and even more practice. We have proposed a special tensor approach to the software implementation of cellular automata to free the researcher from these difficulties and help create a parallel software product.

A special framework that automatically parallelizes computations on NVIDIA GPU cores is central to the approach. The chosen framework is TensorFlow. The main data structure of TensorFlow is the tensor (a multidimensional matrix). Thus, in order to implement a cellular automaton using the tensor approach, it is necessary to represent the cellular automaton as a tensor, and the logic of the automaton's transition from one state to another as operations on tensors.

There are two options for applying tensor operations. The first option is to use ready-made operations. The framework implements both simple operations that work with ordinary matrices and more complex operations, such as convolution. In this case, the researcher needs to analyze how the cellular automaton works and select the tensor operations that implement it. The second option is to create a custom tensor operation using CUDA technology. In this case, the structural elements are ordinary two-dimensional arrays that represent tensors, and the user needs to independently allocate the number of threads and blocks required for the calculations. The TensorFlow developer libraries allow operations on data to be implemented both as programs for the central processor and as programs for graphics adapters in the CUDA C programming language.
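
As an illustration of the first option, the sketch below expresses the propagation phase of a simple lattice-gas automaton with ready-made TensorFlow operations (tf.roll shifts applied per direction channel). The square four-direction lattice and the omitted collision phase are deliberate simplifications for illustration and do not reproduce the hexagonal FHP model.

```python
# Illustrative sketch: a lattice-gas state stored as a tensor of per-direction
# occupation channels, with the propagation phase expressed via tf.roll.
import tensorflow as tf

# state[d, y, x] = 1 if a particle moving in direction d occupies cell (y, x)
H, W = 64, 256
state = tf.cast(tf.random.uniform((4, H, W)) < 0.2, tf.int32)

# Unit shifts for the 4 directions: right, left, down, up.
SHIFTS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

@tf.function
def propagate(state):
    """Move every particle one cell along its direction (periodic boundaries)."""
    channels = tf.unstack(state, axis=0)
    moved = [tf.roll(ch, shift=list(s), axis=[0, 1])
             for ch, s in zip(channels, SHIFTS)]
    return tf.stack(moved, axis=0)

state = propagate(state)
```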

In this paper, to demonstrate the performance and capabilities of the tensor approach, the well-known FHP cellular automaton flow model is implemented. There are two phases in this model: collision and propagation. After analyzing the existing TensorFlow operations, it was decided to implement our own operation describing the logic of the automaton. Both phases of the FHP model are implemented with a single custom operation registered in TensorFlow. The operation takes as input a tensor corresponding to the cellular automaton and a tensor of random numbers used to determine the propagation direction. At the output, the operation generates a tensor to whose elements the propagation and collision phases have been applied. Experiments with the model implemented using the tensor approach were carried out: the gas flow in an open longitudinal pipe is simulated, with the particle source located on one side of the pipe. In the first part of the experiment, an obstacle in the form of a small oblong object is placed in the pipe; in the second part, a small round object. The experimental results are visualized, and the picture of the process agrees with the results reported in other literature sources.

Additionally, a comparative experiment was carried out to evaluate the effectiveness of the tensor approach. We compared the number of automaton cells processed per second in our TensorFlow implementation and in an implementation written using CUDA technology alone. The comparison showed that the implementation of the FHP flow model using the tensor approach processes 10 to 100 times fewer cells (depending on the size of the cellular automaton) in the same time than the pure CUDA implementation. Although the TensorFlow implementation of the flow model is inferior to the CUDA one in terms of time, the process of building a parallel software implementation is simpler and does not require the researcher to have a deep understanding of the intricacies of parallel programming.

Key words: cellular automaton, tensor approach, gas flow.

Bibliographic reference: Matolygina N. A., Gromov M. L., Matolygin A. K. Application of the tensor approach to the software implementation of the cellular automaton flow model //journal “Problems of informatics”. 2023, № 2. P.74-85. DOI:10.24412/2073-0667-2023-2-74-85



I. Mikulik, E. Blagoveshchenskaya

Petersburg State Transport University, 190031, Saint Petersburg, Russia

PARALLEL IMPLEMENTATION OF THE ANT COLONY ALGORITHM WITH PARAMETERS UPDATE USING THE GENETIC ALGORITHM

DOI: 10.24412/2073-0667-2023-2-86-97

EDN: HBTPLC

The paper considers a hybrid method that combines ant colony optimization with a genetic algorithm for solving the traveling salesman problem. It is known that ant colony optimization is sensitive to its parameters, so the search for the optimal parameters of the ant colony is a suitable problem for the genetic algorithm. One of the purposes of parallelizing calculations is to reduce execution time, but not every algorithm has an effective parallel implementation. It is known that both the genetic algorithm and ant colony optimization can be parallelized. The paper studies the possibility of constructing parallel computations for the presented hybrid method. The traveling salesman problem, on which the research is conducted, is NP-complete and is often used to test combinatorial optimization algorithms. It is shown that parallelization of the method leads to an increase in the speed of the algorithm.
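
A schematic sketch of such a hybrid scheme is given below: a genetic algorithm evolves the ant colony parameters (alpha, beta, evaporation rate), and the individuals are evaluated in parallel processes. The fitness function aco_tour_length is a placeholder surrogate, not an actual ant colony run or the authors' code.

```python
# Schematic sketch: a GA tunes ACO parameters, with individuals evaluated in
# parallel. `aco_tour_length` is a toy surrogate standing in for a real ACO run.
import random
from concurrent.futures import ProcessPoolExecutor

def aco_tour_length(params, seed=0):
    """Stand-in fitness: would run ACO with (alpha, beta, rho) on the TSP instance."""
    alpha, beta, rho = params
    random.seed(seed)
    return (alpha - 1.0) ** 2 + (beta - 4.0) ** 2 + (rho - 0.5) ** 2  # toy surrogate

def evolve(pop_size=16, generations=20, workers=4):
    pop = [(random.uniform(0, 3), random.uniform(0, 6), random.uniform(0, 1))
           for _ in range(pop_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            fitness = list(pool.map(aco_tour_length, pop))   # parallel evaluation
            ranked = [p for _, p in sorted(zip(fitness, pop))]
            parents = ranked[: pop_size // 2]                # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                child = tuple((x + y) / 2 + random.gauss(0, 0.1)  # crossover + mutation
                              for x, y in zip(a, b))
                children.append(child)
            pop = parents + children
    return min(pop, key=aco_tour_length)

if __name__ == "__main__":
    print("best (alpha, beta, rho):", evolve())
```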

Key words: traveling salesman problem, optimization methods, ant colony optimization, genetic algorithm, parallel computing.

The research was supported by the Russian Science Foundation (project No. 22-21-00267).

Bibliographic reference: Mikulik I., Blagoveshchenskaya E. Parallel implementation of the ant colony algorithm with parameters update using the genetic algorithm //journal “Problems of informatics”. 2023, № 2. P.86-97. DOI:10.24412/2073-0667-2023-2-86-97
