2016 № 3 (32)



Yunicheva N. R.

Institute of Information and Computational Technologies, 125 Pushkin Street, 050010, Alma-Ata, Kazakhstan


UDC 681.5

Mathematical descriptions of natural phenomena and processes are often constructed with some degree of error. Such errors mean that the mathematical model does not fully capture all the qualities of the processes under study. The desire to eliminate this deficiency has contributed to the development of a new scientific direction in modern control theory, one that is becoming increasingly relevant and in demand in practice. Within this direction, modeling errors of various origins are accounted for directly in the mathematical model itself by introducing interval parameters with given lower and upper bounds. This approach makes it possible to judge whether the studied phenomenon or process possesses particular qualities under so-called parametric uncertainty. The class of linear mathematical models has become the most developed in terms of the richness of ideas and methods for studying properties under parametric uncertainty, and the majority of scientific work in this area is devoted to it. In practice, however, there are cases where one cannot be restricted to linear mathematical models alone. At the same time, the arsenal of methods for studying the dynamic qualities of processes described by nonlinear mathematical models with unknown parameters is represented extremely sparingly in the current literature. In this regard, the development of new methods, and the extension of existing ones, for studying the qualities of nonlinear dynamic models of processes under parametric uncertainty is of special relevance.
Studying the properties of nonlinear dynamic systems with unknown parameters of interval type is of great scientific interest. Many questions concerning the stability of nonlinear interval dynamic systems defined in the state space are still open. In this paper we consider nonlinearities of sector type. The study of dynamic systems with sector-type nonlinearity and exactly known mathematical models goes back to the works of A. I. Lur'e and V. M. Popov and comprises two interrelated areas of the modern theory of absolute stability. The presence of interval uncertainty has given new relevance to the study of Lur'e-type nonlinear systems with unknown parameters. For example, frequency-domain criteria of robust absolute stability have been obtained by modification for the case of uncertainty in the linear part of the system. In contrast to such work, where the linear part is given as a family of polynomials, the greatest interest in this area lies in the study of nonlinear systems defined in the state space. In other work, sufficient conditions for the absolute stability of interval nonlinear systems with state delay and sector-type nonlinearity were obtained using Lyapunov–Krasovskii functionals.

Extending Lyapunov's direct method, which has proven successful in solving many problems of control theory, to the class of interval-specified objects leads to the need to study the solution sets of interval matrix Lyapunov and Sylvester equations. The complexity of the mathematical description of such sets leads to exponential growth in the computational cost of solving control-theory problems. In most practical cases, however, it suffices to consider outer or inner interval estimates of these sets.

In this paper, based on Lyapunov's direct method, an algebraic criterion is proposed for the absolute stability of the zero equilibrium position of an interval dynamic system with a vector nonlinearity of sector type.
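The criterion itself is not reproduced in the abstract, but the vertex-checking flavor of Lyapunov-based interval analysis can be illustrated with a deliberately simplified sketch: a common quadratic Lyapunov function V(x) = xᵀx is tested on every vertex matrix of a 2×2 interval family. This is only an illustration of the general idea (the sector nonlinearity is omitted); the function names and the example family are hypothetical.

```python
import itertools
import math

def max_eig_sym2(a, b, c):
    # Largest eigenvalue of the symmetric 2x2 matrix [[a, b], [b, c]].
    return 0.5 * ((a + c) + math.sqrt((a - c) ** 2 + 4 * b ** 2))

def vertices(intervals):
    # Enumerate all vertex matrices of a 2x2 interval matrix family,
    # where each entry is an (lower, upper) bound pair.
    flat = [intervals[i][j] for i in range(2) for j in range(2)]
    for c in itertools.product(*flat):
        yield [[c[0], c[1]], [c[2], c[3]]]

def stable_with_identity_lyapunov(intervals):
    # Sufficient check: V(x) = x'x decays along trajectories of every
    # vertex matrix A, i.e. A + A' is negative definite at each vertex.
    for A in vertices(intervals):
        s00 = 2 * A[0][0]
        s01 = A[0][1] + A[1][0]
        s11 = 2 * A[1][1]
        if max_eig_sym2(s00, s01, s11) >= 0:
            return False
    return True

# Hypothetical interval family: fixed stable diagonal, uncertain coupling.
family = [[(-2.0, -2.0), (-0.1, 0.1)],
          [(-0.1, 0.1), (-3.0, -3.0)]]
print(stable_with_identity_lyapunov(family))  # True
```

Checking a fixed quadratic form on all vertices is only a sufficient condition; the exponential vertex count is exactly the computational-cost growth the abstract mentions as motivation for interval estimates.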

Key words: inaccurate data, stability of a dynamic system, tolerance solution set.

Bibliographic reference: Yunicheva N. R. Sufficient conditions for stability of the dynamic system with inaccurate data //journal “Problems of informatics”. 2016. № 3. P. 4-12.


Achasova S. M.

Institute of Computational Mathematics and Mathematical Geophysics of SB RAS, 630090, Novosibirsk, Russia


UDC 681.32

John von Neumann used the concept of a cellular automaton to present and study the logical form of self-reproduction. His goal was to describe the fundamental principles and algorithms of information processing involved in self-reproduction, in other words, to separate the logical form from the natural process of self-reproduction. It is interesting to note that, a few years before J. Watson and F. Crick discovered the DNA double helix, von Neumann formulated the need for a one-dimensional description (genome) of the self-replicating structure, which is fed onto the input tape and then generates the structure in the cellular-automaton space. In addition, von Neumann formulated the principle of the dual use of the genome fed onto the input tape: it serves as a program for constructing the mother structure (translation of the genome) and is also copied into the mother structure (transcription of the genome) so that a daughter structure can later be produced.

Further study of self-reproduction was associated with the Langton loop, a cellular automaton incapable of universal construction but capable exclusively of self-replication. Originally the Langton loop was a rectangular loop in a two-dimensional cellular-automaton space. The self-description (genome) of the mother loop circulates in the loop as a sequence of cell states. Simultaneously with the construction of the daughter loop the genome is rewritten into it, and then this loop creates its own daughter. The Langton loop was used as a model for verifying hypotheses about the emergence of biological life, and it was endowed with the ability to interact with an external observer. Attempts have been made to create a "useful replicator" on the basis of the Langton loop: a cellular structure that executes a computational program while constructing a copy of itself. If this direction develops successfully, self-replicating structures can be considered a new paradigm for designing fine-grain parallel algorithms and architectures.

In this paper, a self-replicating cellular automaton structure in the form of a matrix of artificial biological cells, "Star", is presented. The simple program for constructing this structure is based on the Parallel Substitution Algorithm (PSA), a spatial system for representing fine-grain parallel algorithms and architectures. An artificial biological cell is constructed from a genome fed onto the input tape. The result is a model of an artificial biological cell that contains the phenotype as a set of fixed data and the genotype as a set of mobile data. Structures of artificial biological cells can serve as components of computer systems that mimic the properties of living organisms: growth, self-replication, and self-repair.

The PSA is an expanded paradigm of the classical cellular automaton (CA) and has some new properties compared with the CA that enhance its functional and expressive abilities. These properties are as follows. An arbitrary substitution template is admitted. At each clock cycle, one substitution can change the states of several cells. A new type of substitution, the functional substitution, is introduced, in which the new states of the cells are functions of the states of the adjacent cells. These properties of the PSA make it possible to create a compact, easily comprehensible, and structured description of the process of building fine-grain models of artificial biological cells. Devices possessing such properties can be used in space research, in radioactive environments, in avionics, etc.
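As a rough illustration of the idea that one substitution may rewrite several cells at once, here is a toy one-dimensional sketch of synchronous template substitution. This is not the PSA formalism itself (no functional substitutions, non-conflicting rules are assumed); all names are hypothetical.

```python
def psa_step(cells, rules):
    """One synchronous step of a toy parallel substitution system.
    Each rule is a (template, replacement) pair of equal-length tuples
    of cell states. Templates are matched against the OLD state, and
    every match is applied at once; one substitution may therefore
    change the states of several cells in a single step."""
    out = list(cells)
    n = len(cells)
    for template, replacement in rules:
        k = len(template)
        for i in range(n - k + 1):
            if tuple(cells[i:i + k]) == template:
                out[i:i + k] = replacement
    return out

# A signal 'x' travels right through empty cells '.':
rules = [(('x', '.'), ('.', 'x'))]
state = ['x', '.', '.', '.']
for _ in range(3):
    state = psa_step(state, rules)
print(state)  # ['.', '.', '.', 'x']
```

Because matching is done against the old configuration while writes go to a fresh copy, the update is synchronous, which is the essential property the PSA shares with classical cellular automata.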

Key words: cellular automaton, self-replicating structure, parallel substitution algorithm, artificial biological cell, artificial multicellular organism.

Bibliographic reference: Achasova S. M. Cellular automata self-replicating matrix of artificial biological cells //journal “Problems of informatics”. 2016. № 3. P. 13-25.


Bredikhin S. V., Lyapunov V. M., Shcherbakova N. G.

Institute of Computational Mathematics and Mathematical Geophysics SB RAS, 630090, Novosibirsk, Russia


UDC 001.12—303.2

Methods for measuring parameters that characterize the structure of the citation network of scientific publications are presented: average distance, density, and transitivity. Values of these parameters are calculated from citation data extracted from the bibliographic database RePEc. Cluster analysis of the co-citation, bibliographic-coupling, and summary graphs corresponding to the main network component is performed using two community-detection algorithms. The results are compared by computing NMI. The analysis allowed us to detect groups of articles joined by a common subject and to characterize them.
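The NMI comparison of two community partitions can be computed directly from cluster label counts. The following minimal sketch uses the sqrt-of-entropies normalization, one common convention; the abstract does not specify which normalization the authors used.

```python
from collections import Counter
from math import log, sqrt

def nmi(labels_a, labels_b):
    """Normalized mutual information between two partitions of the same
    node set, normalized by sqrt(H(A) * H(B)) (one common convention)."""
    n = len(labels_a)
    ca, cb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    # Mutual information from joint and marginal cluster-size counts.
    mi = sum(nij / n * log(n * nij / (ca[a] * cb[b]))
             for (a, b), nij in joint.items())
    ha = -sum(c / n * log(c / n) for c in ca.values())
    hb = -sum(c / n * log(c / n) for c in cb.values())
    if ha == 0 or hb == 0:
        # Degenerate single-cluster partitions; 1.0 if both are trivial.
        return 1.0 if ha == hb else 0.0
    return mi / sqrt(ha * hb)

# Identical partitions (up to label renaming) give NMI = 1;
# independent partitions give 0.
print(round(nmi([0, 0, 1, 1], [1, 1, 0, 0]), 6))  # 1.0
```

NMI is invariant under relabeling of the communities, which is what makes it suitable for comparing the output of two different clustering algorithms.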

Key words: average distance, density, transitivity, clustering coefficient, communities, clustering algorithm, modularity, NMI measure.

Bibliographic reference: Bredikhin S. V., Lyapunov V. M., Shcherbakova N. G. The structure of the citation network of scientific publications //journal “Problems of informatics”. 2016. № 3. P. 26-43.


Zaripova G.I.

Samarkand State University, 140104, Samarkand, Uzbekistan


UDC 658.512.011

The probabilistic structure of information authenticity is the main source of complexity in solving the problems of identifying non-stationary objects, improving the performance of automated control systems for technological processes, and ensuring efficient data processing. Significant factors that reduce the authenticity of data transfer and processing are the non-stationarity of processes, insufficient a priori knowledge, and large parametric uncertainty in the models describing the objects [1, 2].

In this context, the development of methodical bases for constructing methods and software-algorithmic complexes that improve the authenticity of data processing, taking into account the kind, properties, distribution laws, and regular error of the data, is an actual scientific and technical problem [2, 3].

In traditional approaches to developing methods for improving the authenticity of information, solutions are obtained on the basis of statistical and dynamic modeling, algorithmic implementation, and experimental studies with extensive a priori data. The statistical characteristics most typically used to estimate the regular error are the mathematical expectation, mean-squared deviation, distribution laws, autocorrelation functions, coefficients of pair and mutual correlation, and other dynamic characteristics of the random time series (RTS) describing non-stationary objects [3].

A feature of the present work is the development of methodical bases for multivariate analysis of the regular-error function in methods of RTS identification and approximation, based on mechanisms for revealing and using statistical and dynamic characteristics of the information and the probability distributions of errors, given limited retrospective data on the information process.

The offered approach to ensuring data authenticity involves searching for the extremum of an influence function, which subsequently makes it possible to evaluate the minimal regular error of RTS identification. The regular error of identification with respect to the model of a non-stationary object is represented as the sum of additional factor errors arising at each stage of information transformation [4, 5].

A technique of multivariate analysis is developed on the basis of identifying non-stationary objects with estimates of the degree of influence on the regular error at all stages of the information processes. The structural components of the influencing factors are considered. Models and algorithms of RTS identification that account for the regular error according to the given technique are developed.

Functions in the form of regression dependences are used as models for the identification and approximation of a non-stationary object; in them, error estimates are defined by the mathematical expectation and mean-squared deviation, with the normalized level of the influencing factor coefficient's sign.
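The abstract does not give the concrete regression form. As a minimal sketch of the setting, assuming the simplest case of a linear trend, the model error of an identified series can be characterized by its mean (a systematic-error estimate) and mean-squared deviation; all names and data here are illustrative.

```python
from math import sqrt

def fit_line(t, y):
    """Ordinary least-squares fit y ~ a + b*t (closed form)."""
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    b = (sum((ti - mt) * (yi - my) for ti, yi in zip(t, y))
         / sum((ti - mt) ** 2 for ti in t))
    a = my - b * mt
    return a, b

def error_statistics(t, y, a, b):
    """Characterize the model error by its mathematical expectation
    (a systematic-error estimate) and mean-squared deviation."""
    r = [yi - (a + b * ti) for ti, yi in zip(t, y)]
    mean = sum(r) / len(r)
    std = sqrt(sum((ri - mean) ** 2 for ri in r) / len(r))
    return mean, std

t = [0, 1, 2, 3, 4]
y = [0.1, 1.9, 4.1, 5.9, 8.1]   # roughly y = 2t with small perturbations
a, b = fit_line(t, y)
mean, std = error_statistics(t, y, a, b)
print(round(b, 2))  # 2.0
```

For a non-stationary object these statistics would have to be tracked over a moving window rather than over the whole series, which is where the limited-retrospective-data constraint mentioned above becomes binding.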

A promising and effective approach to improving the authenticity of data processing is offered, combining the capabilities of statistical and dynamic identification models with methods for estimating the influence of factors on the regular error. Questions of synthesizing statistical and dynamic models for non-stationary object identification are investigated as the fundamentals of methods for improving the authenticity of non-stationary objects' data.

To reduce the regular error during RTS identification, a technique is offered that checks the observance of balance ratios entered into the structure of the dynamic model of the non-stationary object. To optimize the analysis and processing of data, additional balance ratios are introduced into the structure of the dynamic identification model; they are set on the basis of normative requirements revealed over a sufficiently long time interval. A mathematical model for identifying a non-stationary object with procedures for checking the observance of the balance ratios is formalized. The check is carried out by targeted shifting of the RTS sequence values within the range of the probability distribution, iteratively taking into account the conditions of non-stationarity and parametric uncertainty.

The general solution of the task is obtained in the form of equations that are continuous and differentiable in all variables. An algorithm is developed for identifying non-stationary objects by correcting the balance ratios with linear dynamic equations.

The method of dynamic identification is investigated under various distribution laws of the regular error and various properties of RTS non-stationarity. The composition of specific characteristics of the input variable and of the models and algorithms for controlling the regular error and for adjusting and correcting the model parameters is determined; the obtained estimates of the minimization of RTS dispersion and of the target parameters are also investigated. It is proved that implementing methods that synthesize models of the multivariate influences on the regular error with the dynamic model of a non-stationary object, together with methods that improve information authenticity by checking and correcting the balance ratios, improves the stability of identification as well as the efficiency of data analysis and processing.

Bibliographic reference: Zaripova G.I. Ensuring the authenticity of data processing on the basis of identification of non-stationary objects in conditions of uncertainty of regular error //journal “Problems of informatics”. 2016. № 3. P. 44-58.


Kazancev G. Yu., Omarova G. A.

Institute of Computational Mathematics and Mathematical Geophysics SB RAS, 630090, Novosibirsk, Russia


UDC 519.179.2—512.23

Simulation of transport systems is a task in demand in the control of road networks, helping to make decisions on the further development and extension of a transport system. In particular, simulation makes it possible to determine the need to extend a road network or to add means of regulation. In this paper, traffic is simulated with the Nagel–Schreckenberg cellular-automaton model. It was the first model to take into account the imperfect behavior of human drivers and was thus the first model to explain the spontaneous formation of traffic jams. Cellular automata are now also used to simulate and study various road situations and the behavior of pedestrians. The model satisfies traffic regulations under ideal conditions, when all cars move at the fixed speed and all drivers follow the rules.

The Nagel–Schreckenberg model is a one-dimensional stochastic cellular automaton designed to simulate traffic. The model uses a one-dimensional grid in which each cell is either empty or contains exactly one car. Time is discrete; at each iteration step a car moves forward an integral number of cells. At each step all the rules are applied to each car in turn. The first rule reflects drivers' desire to go as fast as possible without violating the rules, the second rule prevents collisions, and the third introduces an element of randomness into the motion of each driver. This rule set is the minimum necessary to reproduce the basic properties of a traffic flow. The main advantage of the model is its simplicity compared with other approaches, which is important in the transition to more complex structures such as multi-lane models or models including intersections. Both of these generalizations require a transition to a two-dimensional grid and seriously complicate the process of updating the automaton state: multi-lane movement requires, at each step, assessing the need and the possibility of changing lanes, as well as a method of resolving the conflict that arises when cars from two different neighboring lanes attempt to move into the same cell. Creating an intersection leads to the need to process movement in non-parallel directions through the same cells, which can be realized through cells containing only part of a car. Besides, an intersection requires taking road priority into account, which can seriously complicate the transition rules or the procedure of their application. To implement intersections, the following cellular automaton was defined.
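The three rules can be sketched as follows. This is a standard textbook formulation of the Nagel–Schreckenberg update on a circular single-lane road, not the authors' intersection automaton; the parameter values are illustrative.

```python
import random

def nasch_step(road, vmax=5, p=0.3, rng=random):
    """One synchronous update of the Nagel–Schreckenberg model on a
    circular road. Cells hold -1 (empty) or the car's current speed.
    Rules: accelerate toward vmax; brake to the gap ahead so no
    collision is possible; slow down randomly with probability p;
    then advance each car by its speed."""
    n = len(road)
    cars = [i for i, v in enumerate(road) if v >= 0]
    new_road = [-1] * n
    for i in cars:
        v = min(road[i] + 1, vmax)       # rule 1: drive as fast as allowed
        gap = 1                          # count empty cells ahead
        while road[(i + gap) % n] < 0 and gap <= v:
            gap += 1
        v = min(v, gap - 1)              # rule 2: never hit the car ahead
        if v > 0 and rng.random() < p:
            v -= 1                       # rule 3: random driver imperfection
        new_road[(i + v) % n] = v
    return new_road

rng = random.Random(1)
road = [-1] * 20
road[0] = road[5] = road[10] = 0         # three stopped cars
for _ in range(10):
    road = nasch_step(road, rng=rng)
print(sum(v >= 0 for v in road))  # 3: the number of cars is conserved
```

Because every car's braking distance is computed from the previous configuration, the update is synchronous and collision-free by construction, which is the property the second rule guarantees.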

The cellular automaton is CA = <A, C, M, X, T>, where A is the set of cell states, C the set of cells, M the transition function that changes the state of the automaton, X the neighborhood relation, and T the set of clock periods. Each object is characterized by two parameters: the direction of movement and an identifier. The cells belonging to the set C come in two different sizes, full and half. One object occupies one cell of full size or two cells of half size. The neighborhood relation X is not the same for different cells and strongly depends on the position of a cell relative to intersections. To define M, we introduce rules for all cells and then define the application of M to some configuration of the automaton as the application of the cell rules in a certain order.

Functions for operating with the cellular automaton: Rules(ci) is the set of rules of a cell ci, and Name(rij) is the name of rule rij in Rules(ci). Imp(rij) and Next(rij) are the sets of important and following cells for the rule. The basic rules have a common structure. Additional rules for the second part of an object exist only for cells of half size: since such cells must distinguish objects from each other, we introduce the function Id, which returns an object identifier. The rule for an empty cell is trivial: the cell does not influence adjacent cells and does not change its state without the influence of other cells. Each cell has a rule set for movement through it in any allowed direction, according to traffic regulations.

A cellular automaton for traffic simulation on a group of intersections was designed, and a program generating the cellular automaton for different intersections was implemented.

The models can be used to understand, predict, and optimize various traffic situations and as examples for possible extensions and fields of application.

Key words: model, cellular automata, distance, speed, acceleration, regular grid.

Bibliographic reference: Kazancev G. Yu., Omarova G. A. Simulation of traffic flows using cellular automata //journal “Problems of informatics”. 2016. № 3. P. 59-69.


Krutikov N. O., Podakov N. G., Zhilyakova V. A.

Novosibirsk National Research State University, 630090, Novosibirsk, Russian Federation


UDC 004.852

This article describes an approach to developing Russian-language information extraction systems, presented for the subject domain of criminalistics. First, we describe the task in detail. The developed system should extract named entities, such as people and organizations, and events from text. Attributes of the extracted entities should also be filled in: name, gender, and date of birth for individuals; name and type for organizations; time and place for events. Relationships between named entities and events should be extracted, and a semantic role should be defined for each dependent entity (for example, the "subject" and "object" of an event). Different semantic entities that describe a single real object (person, organization, or event) must be merged by resolving coreference, and their attributes must be united.

For text analysis the system uses the RCO FX Ru library. This library follows a rule-based approach and provides the following results: a list of the semantic entities extracted from the text, their morphological and syntactic attributes, and a semantic graph of each sentence. To solve the problem, a corpus of texts has been built, a domain ontology has been developed, and a system of rules and patterns based on RCO FX has been implemented. After processing, texts are translated into an RDF structure and saved in an RDF store. The system also has a visualization module that allows the user to view text analysis results, search the extracted information, and use a variety of filters to focus on the most important information. The approach allows extracting information from texts with a precision of 70–80% and a recall of 30–35%.
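RCO FX itself is a proprietary library, but the RDF-style storage and filtering of extraction results described above can be sketched in a few lines. This is a schematic illustration only; all identifiers, predicates, and attribute names are hypothetical.

```python
def add_entity(store, entity_id, entity_type, attrs):
    """Record an extracted entity and its attributes as
    subject-predicate-object triples (an RDF-like layout)."""
    store.append((entity_id, "rdf:type", entity_type))
    for key, value in attrs.items():
        store.append((entity_id, key, value))

def query(store, predicate=None, obj=None):
    """Filter the triple store, mimicking the visualization module's
    search and filters over the extracted information."""
    return [(s, p, o) for s, p, o in store
            if (predicate is None or p == predicate)
            and (obj is None or o == obj)]

store = []
add_entity(store, "person:1", "Person",
           {"name": "Ivanov", "gender": "male"})
add_entity(store, "event:1", "Event",
           {"subject": "person:1", "place": "Novosibirsk"})
print(len(query(store, predicate="rdf:type")))  # 2 typed entities
```

Linking the event to the person through the entity identifier is also how coreference resolution pays off: once two mentions are merged into one identifier, all relations automatically point to the same node.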

Key words: information extraction, rule-based approach, CAPE, named entity, event extraction, relation extraction, ontology.

Bibliographic reference: Krutikov N. O., Podakov N. G., Zhilyakova V. A. Development of the information extraction system from texts in Russian for the subject domain criminalistics //journal “Problems of informatics”. 2016. № 3. P. 70-84.


Matveev A. S., Nikitin* V. V., Romanenko** A. A., Duchkov A. A.
IPGG SB RAS, 630090, Novosibirsk, Russia
*NSU, 630090, Novosibirsk, Russia
** Center for Mathematical Sciences, Lund University, 22100, Lund, Sweden


UDC 621

This article is devoted to the Unequispaced Fast Fourier Transform (USFFT), a popular analytical tool for solving physics and engineering problems. The most common applications of the transform include seismology, optics, computed tomography, crystallography, etc. Despite the favorable computational complexity of the USFFT algorithm, O(N log N), the execution time remains rather high due to the algorithm structure and large input data sizes. There are two main types of USFFT: the Fourier transform from an equispaced grid to an unequispaced grid, and the Fourier transform from an unequispaced grid to an equispaced grid. The corresponding computational algorithms consist of three main steps: convolution, Fast Fourier Transform (FFT), and deconvolution. Profiling shows that up to 95% of the execution time is spent on the convolution step. In this paper, we propose a parallel USFFT algorithm and its efficient cache-optimized CPU implementation for the one-, two-, and three-dimensional cases. The cache performance optimization is based on sorting the unequispaced grid points. The constructed sorting procedure substantially reduces the number of cache misses; for instance, in the two-dimensional case the number of cache misses is reduced by a factor of 36, which results in a 2x speed-up of the transform evaluation. Next, we propose a parallel block algorithm for the convolution step and implement it using OpenMP, a popular extension of the C programming language supporting multi-platform shared-memory parallel programming. The parallel implementation was tuned in terms of optimal block sizes and the type of scheduling for the convolution step. Numerical tests show high parallel efficiency: the speed-up on 16 processors compared to the sequential implementation is approximately 13. The tests also show that the performance is several times higher than that of a commonly used library for the fast Fourier transform at nonequispaced nodes (NFFT 3.0).
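The key observation behind the cache optimization is that sorting the unequispaced points only reorders the accumulation in the convolution (spreading) step, so the result is unchanged while consecutive points now touch neighboring grid cells. The one-dimensional sketch below illustrates this with a Gaussian kernel; it is not the authors' implementation, and the kernel width and parameters are illustrative.

```python
import math

def spread(points, values, grid_size, tau=0.01, width=4):
    """Gaussian spreading (the convolution step of gridding-based
    USFFT): each nonuniform point in [0, 1) deposits a Gaussian-
    weighted contribution onto nearby cells of a regular grid."""
    grid = [0.0] * grid_size
    for x, c in zip(points, values):
        center = int(round(x * grid_size))
        for m in range(center - width, center + width + 1):
            u = m / grid_size - x
            grid[m % grid_size] += c * math.exp(-u * u / tau)
    return grid

points = [0.91, 0.13, 0.55, 0.12, 0.90]
values = [1.0, 2.0, -1.0, 0.5, 3.0]

# Sorting the points changes only the order of accumulation, not the
# accumulated grid (up to floating-point rounding) -- but in a cache-
# aware implementation consecutive iterations then hit adjacent cells.
order = sorted(range(len(points)), key=lambda i: points[i])
unsorted_grid = spread(points, values, 64)
sorted_grid = spread([points[i] for i in order],
                     [values[i] for i in order], 64)
print(max(abs(a - b) for a, b in zip(unsorted_grid, sorted_grid)) < 1e-12)
```

In the real algorithm the sorted spreading is followed by an FFT of the oversampled grid and a deconvolution that divides out the kernel's Fourier transform; the sketch covers only the step that dominates the profile.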
The USFFT is commonly used for fast evaluation of the Radon transform operator, one of the main mathematical tools in computed tomography. In this paper we consider both the standard reconstruction of tomography data by inversion of the Radon transform and an iterative reconstruction using the expectation-maximization algorithm. The iterative reconstruction is well suited for processing irregularly structured data [21]. Since iterative schemes apply the forward and adjoint Radon operators several times, the computational cost of preprocessing procedures such as sorting the grid points and allocating memory can be amortized. The resulting program for evaluating iterative schemes was tested on synthetic Radon data containing Poisson noise. The program outperforms the implementation via NFFT by 4.4 times at the same accuracy level.

Key words: fast Fourier transform, unequispaeed grids, parallel algorithm, optimization, high performance computing.

Bibliographic reference: Matveev A. S., Nikitin V. V., Romanenko A. A., Duchkov A. A. Effective implementation of USFFT algorithm //journal “Problems of informatics”. 2016. № 3. P. 85-102.