
  • Synthesis of neural networks and system analysis using Socratic methods for managing corporate IT projects

    The article examines a modular structure of interactions between different models based on Socratic dialogue. The research explores the possibilities of synthesizing neural networks and system analysis using Socratic methods for managing corporate IT projects. These methods make it possible to integrate the knowledge stored in pre-trained models without additional training, facilitating the resolution of complex management tasks. The research methodology is based on analyzing the capabilities of multimodal models, integrating them through linguistic interactions, and applying system analysis to the key aspects of IT project management. The results include a structured framework for selecting suitable models and generating recommendations, thereby improving the efficiency of project management in corporate environments. The scientific significance of the study lies in combining modern artificial intelligence approaches to implement system analysis using multi-agent solutions. A schematic sketch of such a dialogue loop is given after the keywords.

    Keywords: neural networks, system analysis, Socratic method, corporate IT projects, multimodal models, project management
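
    The abstract describes dialogue-driven integration of pre-trained models rather than a specific API, so the following Python sketch only illustrates one possible Socratic loop between a "proposer" and a "critic" model; the query() helper and both roles are hypothetical placeholders, not the article's implementation.

      # Hypothetical sketch of a Socratic dialogue loop between two pre-trained models.
      def query(model: str, prompt: str) -> str:
          """Placeholder for a call to a pre-trained model; no real API is assumed."""
          raise NotImplementedError

      def socratic_round(task: str, proposer: str, critic: str, rounds: int = 3) -> str:
          answer = query(proposer, f"Propose a solution to the task: {task}")
          for _ in range(rounds):
              # The critic interrogates the current answer in the Socratic manner...
              questions = query(critic, f"Ask probing questions about: {answer}")
              # ...and the proposer refines its recommendation in response.
              answer = query(proposer, f"Task: {task}\n{questions}\nRefine the solution.")
          return answer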

  • Digital modeling of the technical condition of buildings for maintenance and repair purposes

    This article investigates the application of a digital operational model to enhance the efficiency of maintenance and repair processes for capital construction projects. The study focuses on the operational phase within the building lifecycle, analyzing maintenance procedures and categorizing structural defects. The research identifies limitations in traditional defect reports, which lack quantitative data and spatial referencing to the building structure. These limitations hinder their effectiveness in organizational and technical planning for repair works. The proposed digital model optimizes building condition management, improves the accuracy of technical assessments, and facilitates precise quantification of repair scopes. It also enhances collaboration between facility management and contracting entities. A case study of the educational and laboratory building at the Northern (Arctic) Federal University demonstrates the model's implementation. Key benefits include cost reduction, improved maintenance quality, and streamlined operational workflows. The findings highlight the potential of digital tools to transform building maintenance practices, offering a data-driven approach to facility management in the construction sector.

    Keywords: maintenance and repair, operational digital model, defect, defect list, technical condition

  • Evaluation of the effectiveness of a data set expansion method based on deep reinforcement learning

    The article presents the results of a numerical experiment comparing the accuracy of neural network object recognition in images under various types of data set expansion. It motivates the need for adaptive expansion approaches that minimize the use of image transformations likely to reduce recognition accuracy. The author considers the common approaches of random and automatic augmentation, as well as a newly developed method of adaptive data set expansion based on a reinforcement learning algorithm. The operation of each approach is described, together with its advantages and disadvantages. The developed method, built on the Deep Q-Network algorithm, is described both at the level of the algorithm and of the main module of the software package. Particular attention is paid to reinforcement learning and to the use of a neural network for approximating the Q-function and updating it during training, which underlies the developed method. The experimental results show the advantage of expanding the data set with the reinforcement learning algorithm, using the SqueezeNet v1.1 classification model as an example. Recognition accuracy was compared across expansion methods using the same neural network classifier parameters, with and without pre-trained weights. The resulting gain in accuracy over the other methods ranges from 2.91% to 6.635%. A minimal sketch of the Q-learning loop appears below the keywords.

    Keywords: dataset, extension, neural network models, classification, image transformation, data replacement
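
    As a rough illustration of the adaptive expansion described above, this Python sketch shows a Deep Q-Network choosing among augmentation actions; the state encoding, action list, and reward (e.g. the change in validation accuracy) are assumptions, not the author's software package.

      import random
      import torch
      import torch.nn as nn

      ACTIONS = ["rotate", "flip", "crop", "color_jitter", "noise"]   # assumed action space

      q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, len(ACTIONS)))
      optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
      gamma, epsilon = 0.9, 0.1

      def select_action(state: torch.Tensor) -> int:
          if random.random() < epsilon:                # explore
              return random.randrange(len(ACTIONS))
          with torch.no_grad():                        # exploit: greedy w.r.t. Q
              return int(q_net(state).argmax())

      def update(state, action, reward, next_state):
          # One-step Q-learning target: r + gamma * max_a' Q(s', a')
          target = reward + gamma * q_net(next_state).max().detach()
          loss = (q_net(state)[action] - target) ** 2
          optimizer.zero_grad(); loss.backward(); optimizer.step()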

  • Modern tools used in the formation of intelligent control systems

    Modern intelligent control systems (ICS) are complex software and hardware systems that use artificial intelligence, machine learning, and big data processing to automate decision-making processes. The article discusses the main tools and technologies used in the development of ICS, such as neural networks, deep learning algorithms, expert systems and decision support systems. Special attention is paid to the role of cloud computing, the Internet of Things and cyber-physical systems in improving the efficiency of intelligent control systems. The prospects for the development of this field are analyzed, as well as challenges related to data security and interpretability of models. Examples of the successful implementation of ICS in industry, medicine and urban management are given.

    Keywords: intelligent control systems, artificial intelligence, machine learning, neural networks, big data, Internet of things, cyber-physical systems, deep learning, expert systems, automation

  • Application of variational principles in problems of development and testing of complex technical systems

    The technology of applying the variational principle to problems of developing and testing complex technical systems is described. Suppose a set of restrictions is imposed on random variables in the form of given statistical moments and/or upper and lower bounds on the range of their possible values. The task is then, knowing nothing beyond these restrictions, to construct the probability distribution of the determining parameter of the complex technical system under development, for further research and ultimately for assessing the system's efficiency. The main stages of constructing the distribution density function are described by varying a functional that includes the Shannon entropy and typical restrictions on the density of the determining parameter. It is shown that, depending on the type of restriction, the constructed density can have an analytical form, be expressed through special mathematical functions, or require numerical calculation. Examples of applying the variational principle to find the density function are given (the general construction is sketched below the keywords). It is demonstrated that the variational principle yields both the distribution laws widely used in probability theory and mathematical statistics and the specific distributions characteristic of developing and testing complex technical systems. The technology presented in the article can be used in a model for managing the self-diagnostics process of intelligent control systems with machine consciousness.

    Keywords: variational principle, distribution density function, Shannon entropy, complex technical system
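
    The construction behind the abstract is the classical maximum-entropy variational problem; in its standard form (the article's specific restrictions may differ):

      H[f] = -\int f(x)\,\ln f(x)\,dx \;\to\; \max,
      \qquad \int f(x)\,dx = 1, \qquad \int g_k(x)\,f(x)\,dx = m_k, \quad k = 1,\dots,K;
      % Stationarity of the Lagrangian gives the exponential family:
      f(x) = \exp\!\Bigl(-1 - \lambda_0 - \sum_{k=1}^{K} \lambda_k\, g_k(x)\Bigr),

    where the multipliers are fixed by the constraints. Constraints on the first two moments over the whole real line recover the normal law, while a bounded range with no moment constraints gives the uniform distribution.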

  • Distribution of stresses near underground cylindrical and spherical cavities created by an explosion

    The paper considers the problem of the stress state of a rock mass with continuous inhomogeneity. This type of inhomogeneity can be observed in rock masses with cavities created by an explosion. A dependence was chosen in which the main mechanical characteristics depend on a single coordinate, the radius; this choice also makes relatively simple solution methods possible. The adopted calculation scheme reduces the problem to a one-dimensional one. For the centrally symmetric case, the governing equation is an ordinary inhomogeneous second-order differential equation with variable coefficients. A substitution of variables reduces it to the hypergeometric equation, whose solutions are given in the form of hypergeometric series, which are known to converge (the standard form is recalled below the keywords). The stresses are then recovered by the inverse substitutions. The stress state of the rock mass is determined for different degrees of its heterogeneity, the results are presented as graphs, and a comparison with similar solutions for homogeneous masses is carried out. The results lead to the conclusion that, when solving problems on the stress state of rock masses with cavities, it is necessary to take into account the heterogeneity produced while creating such cavities by explosion.

    Keywords: heterogeneity of the medium, rock mass, spherical cavity, stress state
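
    For reference, the Gauss hypergeometric equation to which the governing equation reduces, and its convergent series solution (the parameters a, b, c induced by the chosen inhomogeneity law are not specified in the abstract):

      z(1-z)\,y'' + \bigl[c - (a+b+1)\,z\bigr]\,y' - ab\,y = 0,
      \qquad y(z) = {}_2F_1(a,b;c;z) = \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n\, n!}\, z^n, \quad |z| < 1,
      % where (q)_n = q(q+1)\cdots(q+n-1) is the Pochhammer symbol.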

  • The actor model in the Elixir programming language: fundamentals and application

    The article explores the actor model as implemented in the Elixir programming language, which builds upon the principles of the Erlang language. The actor model is an approach to parallel programming in which independent entities, called actors, communicate with each other through asynchronous messages. The article details the main concepts of Elixir, such as pattern matching, data immutability, types and collections, and the mechanisms for working with actors. Special attention is paid to the practical aspects of creating and managing actors, their interaction, and maintenance. The article will be valuable for researchers and developers interested in parallel programming and functional programming languages.

    Keywords: actor model, elixir, parallel programming, pattern matching, data immutability, processes, messages, mailbox, state, recursion, asynchrony, distributed systems, functional programming, fault tolerance, scalability

  • Development and Analysis of a Feature Model for Dynamic Handwritten Signature Recognition

    In this work, we present the development and analysis of a feature model for dynamic handwritten signature recognition aimed at improving its effectiveness. The feature model is based on extracting both global features (signature length, average angle between signature vectors, range of dynamic characteristics, proportionality coefficient, average input speed) and local features (pen coordinates, pressure, azimuth, and tilt angle); a sketch of the global-feature extraction is given below the keywords. We used the method of potentials to generate a signature template that accounts for variations in writing style. Experimental evaluation was conducted on the MCYT_Signature_100 database, which contains 2500 genuine and 2500 forged samples. We determined optimal compactness values for each feature, allowing us to accommodate signature writing variability and enhance recognition accuracy. The results confirm the effectiveness of the proposed feature model and its potential for biometric authentication systems, and are of practical interest to information security specialists.

    Keywords: dynamic handwritten signature, signature recognition, biometric authentication, feature model, potential method, MCYT_Signature_100, FRR, FAR
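
    The global features listed above can be computed directly from a sampled pen trajectory; the Python sketch below assumes arrays of x, y coordinates sampled at a fixed interval dt, and the exact definition of the proportionality coefficient is an illustrative guess.

      import numpy as np

      def global_features(x: np.ndarray, y: np.ndarray, dt: float) -> dict:
          dx, dy = np.diff(x), np.diff(y)
          seg_len = np.hypot(dx, dy)                   # lengths of trajectory segments
          turns = np.abs(np.diff(np.arctan2(dy, dx)))  # angles between successive vectors
          speed = seg_len / dt                         # instantaneous input speed
          return {
              "signature_length": seg_len.sum(),
              "mean_angle_between_vectors": turns.mean(),
              "speed_range": speed.max() - speed.min(),
              "proportionality": (x.max() - x.min()) / (y.max() - y.min()),
              "mean_speed": speed.mean(),
          }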

  • The effect of data replacement and expansion using transformations on the recognition accuracy of the deep neural network ResNet-50

    The article examines how replacing the original data with transformed data affects the quality of training of deep neural network models. The author conducts four experiments to assess the impact of data substitution in tasks with small datasets. In the first experiment the model is trained on the original data set without changes; in the second, all images in the original set are replaced with transformed ones; in the third, the number of original images is reduced and the set is expanded with transformed images; and in the fourth, the data set is expanded so as to balance the number of images in each class. A sketch of the replace-versus-expand setup is given below the keywords.

    Keywords: dataset, extension, neural network models, classification, image transformation, data replacement
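
    A minimal sketch of the "replace" and "expand" variants compared above, using torchvision; the dataset path and the transform choice are placeholders.

      import torch
      from torchvision import datasets, transforms

      augment = transforms.Compose([transforms.RandomHorizontalFlip(p=1.0),
                                    transforms.ToTensor()])
      identity = transforms.ToTensor()

      original = datasets.ImageFolder("data/train", transform=identity)
      transformed = datasets.ImageFolder("data/train", transform=augment)

      replaced = transformed                          # experiment 2: originals replaced
      extended = torch.utils.data.ConcatDataset([original, transformed])  # experiments 3-4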

  • Modeling Paid Parking Occupancy: A Regression Analysis Taking into Account Customer Behavior

    The article describes a methodology for constructing a regression model of the occupancy of paid parking zones that takes into account the uneven distribution of sessions during the day and the behavioral characteristics of two groups of clients; the regression model consists of two equations, one per group. The process of creating a data model and of collecting, processing, and analyzing the data, including the distribution of occupancy during the day, is also described, along with a methodology for modeling a phenomenon whose distribution is bell-shaped and depends on the time of day (a fitting sketch is given below the keywords). The results can be used by commercial enterprises managing parking lots, by city administrations, and by researchers modeling similar indicators that follow the normal distribution characteristic of many natural processes (customer flow in bank branches, replenishment and/or withdrawal of funds over the life of replenishable deposits, etc.).

    Keywords: paid parking, occupancy, regression model, customer behavior, behavioral segmentation, model robustness, model, forecast, parking management, distribution
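
    A minimal sketch of the bell-shaped, two-group occupancy model described above, with one Gaussian term per client group; the parameterization and the synthetic stand-in data are illustrative assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      def occupancy(t, a1, mu1, s1, a2, mu2, s2):
          g1 = a1 * np.exp(-((t - mu1) ** 2) / (2 * s1 ** 2))   # e.g. short-stay visitors
          g2 = a2 * np.exp(-((t - mu2) ** 2) / (2 * s2 ** 2))   # e.g. long-stay clients
          return g1 + g2

      hours = np.arange(24.0)
      observed = occupancy(hours, 50, 11, 2, 30, 15, 3)         # synthetic stand-in data
      observed += np.random.normal(0, 1.5, hours.size)
      params, _ = curve_fit(occupancy, hours, observed, p0=[40, 10, 2, 20, 16, 3])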

  • Software for calculating the surface characteristics of liquid media

    Software for evaluating the surface characteristics of liquids, solutions, and suspensions has been developed in the Microsoft Visual Studio environment. The module has a user-friendly interface, requires no special skills from the user, and numerically calculates the energy characteristics of a liquid in about one second: adhesion, cohesion, wetting energy, the spreading coefficient, and the adhesion of the liquid composition to the contact surface (the underlying relations are sketched below the keywords). Calculation of the wetting of a steel surface by liquid media is demonstrated for a test liquid, distilled water, and for a Penta-100 series liquid release lubricant. Optical microscopy has shown that good lubrication of the steel surface ensures the formation of a homogeneous, defect-free coating. The proposed module enables an express assessment of the compatibility of liquid formulations with the protected surface and is of interest to manufacturers of paints and varnishes for product quality control.

    Keywords: computer program, C# programming language, wetting, surface, adhesion
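
    The energy characteristics named above follow from standard wetting relations (Young-Dupre); a Python sketch, where sigma is the surface tension in mN/m and theta the contact angle, with typical reference values rather than the article's measurements:

      import math

      def surface_characteristics(sigma: float, theta_deg: float) -> dict:
          cos_t = math.cos(math.radians(theta_deg))
          adhesion = sigma * (1 + cos_t)          # work of adhesion (Young-Dupre)
          cohesion = 2 * sigma                    # work of cohesion of the liquid
          return {
              "adhesion": adhesion,
              "cohesion": cohesion,
              "spreading_coefficient": adhesion - cohesion,
              "wetting_tension": sigma * cos_t,
          }

      print(surface_characteristics(sigma=72.8, theta_deg=70.0))  # water on steel (illustrative)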

  • Modeling the interaction of a single abrasive grain with the surface of a part

    A review of various approaches used to model the contact interaction between the grinding wheel grain and the surface layer of the workpiece during grinding is presented. In addition, the influence of material properties, grinding parameters and grain morphology on the contact process is studied.

    Keywords: grinding, grain, contact zone, modeling, grinding wheel, indenter, micro cutting, cutting depth

  • Methods for forming quasi-orthogonal matrices based on pseudo-random sequences of maximum length

    Linear feedback shift registers (LFSRs) and the maximum-length pseudo-random sequences (m-sequences) they generate are widely used in mathematical modeling, cryptography, radar, and communications. Their popularity is due to special properties, such as their correlation properties. An interesting property of these sequences, rarely discussed in the recent scientific literature, is the possibility of forming quasi-orthogonal matrices on their basis. This paper studies methods for generating quasi-orthogonal matrices from m-sequences. The existing method, based on a cyclic shift of the m-sequence and the addition of a border to the resulting cyclic matrix, is analyzed. An alternative method is proposed, based on the relationship between m-sequences and quasi-orthogonal Mersenne and Hadamard matrices, which generates cyclic quasi-orthogonal matrices of symmetric structure without a border (a generation sketch is given below the keywords). A comparative analysis of the correlation properties of the matrices obtained by both methods and of the original m-sequences is performed. It is shown that the proposed method inherits the correlation properties of m-sequences, provides more efficient storage, and is potentially better suited to privacy problems.

    Keywords: orthogonal matrices, quasi-orthogonal matrices, Hadamard matrices, m-sequences
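
    A small sketch of the link exploited above: a Fibonacci LFSR generating an m-sequence (primitive polynomial x^4 + x + 1, period 15), whose cyclic shifts in bipolar form give a quasi-orthogonal cyclic matrix; the register size is illustrative.

      import numpy as np

      def m_sequence(nbits=4, taps=(0, 3)):
          """Fibonacci LFSR; taps are the 0-based register indices XORed into feedback."""
          state = [1] * nbits
          out = []
          for _ in range(2 ** nbits - 1):          # m-sequence period: 2^n - 1
              out.append(state[-1])
              fb = 0
              for t in taps:
                  fb ^= state[t]
              state = [fb] + state[:-1]
          return np.array(out)

      seq = 1 - 2 * m_sequence()                   # bits {0,1} -> symbols {+1,-1}
      M = np.array([np.roll(seq, k) for k in range(seq.size)])  # cyclic shift matrix
      print(M @ M.T)   # 15 on the diagonal, -1 off it: rows are quasi-orthogonal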

  • Moving from a university data warehouse to a lake: models and methods of big data processing

    The article examines the transition of universities from data warehouses to data lakes, revealing their potential in processing big data. The introduction highlights the main differences between warehouses and lakes, focusing on the difference in data management philosophy. Data warehouses are typically used for structured data with a relational architecture, while data lakes store data in its raw form, supporting flexibility and scalability. The section "Data Sources Used by the University" describes how universities manage data collected from various departments, including ERP systems and cloud databases. The discussion of data lakes and data warehouses highlights their key differences in data processing and management methods, along with their advantages and disadvantages. The article examines in detail the problems and challenges of the transition to data lakes, including security, scale, and implementation costs. Architectural models of data lakes such as the "Raw Data Lake" and the "Data Lakehouse" are presented, describing different approaches to managing the data lifecycle and business goals. Big data processing methods in lakes cover the use of the Apache Hadoop platform and current storage formats. Processing technologies are described, including the use of Apache Spark and machine learning tools, and practical examples of data processing and machine learning coordinated through Spark are proposed (a short sketch follows the keywords). In conclusion, the relevance of the transition to data lakes for universities is emphasized, security and management challenges are noted, and the use of cloud technologies is recommended to reduce costs and increase productivity in data management.

    Keywords: data warehouse, data lake, big data, cloud storage, unstructured data, semi-structured data
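
    In the spirit of the Spark-coordinated processing mentioned above, a brief PySpark sketch of reading raw lake data and fitting an MLlib model; the bucket path, columns, and prediction task are illustrative placeholders.

      from pyspark.sql import SparkSession
      from pyspark.ml.feature import VectorAssembler
      from pyspark.ml.classification import LogisticRegression

      spark = SparkSession.builder.appName("university-lake").getOrCreate()

      # Raw zone: semi-structured files are stored as-is; schema is applied on read.
      df = spark.read.json("s3a://university-lake/raw/lms_events/")

      features = VectorAssembler(
          inputCols=["logins_per_week", "assignments_submitted"], outputCol="features"
      ).transform(df.na.drop())

      model = LogisticRegression(labelCol="dropout_flag").fit(features)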

  • Determination of the zigzag nature of vehicle trajectories

    The paper presents a method for the quantitative assessment of zigzag vehicle trajectories, which makes it possible to identify potentially dangerous driver behavior. The algorithm analyzes changes of direction between trajectory segments and includes data preprocessing steps: merging closely spaced points and simplifying the trajectory using a modified Ramer-Douglas-Peucker algorithm (a sketch is given below the keywords). Experiments on a balanced data set (20 trajectories) confirmed the effectiveness of the method: precision 0.8, recall 1.0, F1-score 0.833. The developed approach can be applied in traffic monitoring, accident prevention, and hazardous driving detection systems. Further research is aimed at improving accuracy and adapting the method to real-world conditions.

    Keywords: trajectory, trajectory analysis, zigzag, trajectory simplification, Ramer-Douglas-Peucker algorithm, yolo, object detection
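
    A compact sketch of the two stages described above: classic Ramer-Douglas-Peucker simplification followed by counting sharp direction changes; the thresholds, and the absence of the paper's modifications, are assumptions.

      import numpy as np

      def rdp(points: np.ndarray, eps: float) -> np.ndarray:
          """Recursive Ramer-Douglas-Peucker simplification of an Nx2 polyline."""
          start, end = points[0], points[-1]
          seg = end - start
          # perpendicular distance of every point to the start-end chord
          d = np.abs(seg[0] * (points[:, 1] - start[1])
                     - seg[1] * (points[:, 0] - start[0])) / (np.hypot(*seg) + 1e-12)
          i = int(d.argmax())
          if d[i] > eps:
              return np.vstack([rdp(points[: i + 1], eps)[:-1], rdp(points[i:], eps)])
          return np.vstack([start, end])

      def zigzag_score(points: np.ndarray, eps=2.0, sharp_deg=45.0) -> float:
          """Share of direction changes sharper than sharp_deg on the simplified track."""
          v = np.diff(rdp(points, eps), axis=0)
          turns = np.abs(np.diff(np.unwrap(np.arctan2(v[:, 1], v[:, 0]))))
          return float((np.degrees(turns) > sharp_deg).mean()) if turns.size else 0.0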

  • Queuing system with mutual assistance between channels and limited dwell time

    In this paper, a new model of an open multichannel queuing system with mutual assistance between channels and limited waiting time for a request in a queue is proposed. General mathematical dependencies for the probabilistic characteristics of such a system are presented.

    Keywords: queuing system, queue, service device, mutual assistance between channels

  • On the Development of Secure Applications Based on the Integration of the Rust Programming Language and PostgreSQL DBMS

    Currently, key aspects of software development include the security and efficiency of the applications being created, with special attention given to data security and operations involving databases. This article discusses methods and techniques for developing secure applications through the integration of the Rust programming language and the PostgreSQL database management system (DBMS). Rust is a general-purpose programming language that prioritizes safety as its primary objective. The article examines key concepts of Rust, such as strict typing, the RAII (Resource Acquisition Is Initialization) idiom, macro definitions, and immutability, and how these features contribute to the development of reliable and high-performance applications when interfacing with databases. The integration with PostgreSQL is analyzed and shown to be both straightforward and robust, providing efficient data management while maintaining a high level of security, thereby mitigating common errors and vulnerabilities. Rust is still used less than popular languages like JavaScript, Python, and Java, owing in part to its steep learning curve, but major companies see its potential: Rust modules are being integrated into operating system kernels (Linux, Windows, Android), Mozilla develops features of Firefox's Gecko engine in Rust, and Stack Overflow surveys show rising usage of the language. A practical example involving the dispatch of information about class schedules and video content illustrates the advantages of using Rust together with PostgreSQL to create a scheduling management system that ensures data integrity and security.

    Keywords: Rust programming language, memory safety, RAII, metaprogramming, DBMS, PostgreSQL

  • Analysis of existing structural solutions for in-tube inspection robots: selection of the optimal type of movement and chassis for 3D scanning of the weld relief in large diameter welded straight-seam pipes using a laser triangulation sensor

    This article provides an overview of existing structural solutions for in-pipe robots designed for inspection work. The main attention is paid to the analysis of the various motion mechanisms and chassis types used in such robots, and to identifying their advantages and disadvantages with respect to the task of scanning a longitudinal weld. Tracked, wheeled, and helical robots are considered, as well as robots that move under the pressure inside the pipe. Special attention is paid to ensuring stable and accurate movement of the robot along the weld, minimizing lateral displacements, and choosing the optimal positioning system. Based on the analysis, recommendations are offered for choosing the most appropriate type of motion and chassis for constructing a 3D model of a weld using a laser triangulation sensor (LTS).

    Keywords: in-line work, inspection work, 3D scanning, welds, structural solutions, types of movement, chassis, crawler robots, wheeled robots, screw robots, longitudinal welds, laser triangulation sensor

  • Analysis of the directions of application of predictive analytics in railway transport

    The railway transport industry demonstrates significant achievements in various fields of activity through the introduction of predictive analytics. Predictive analytics systems use data from a variety of sources, such as sensor networks, historical data, weather conditions, etc. The article discusses the key areas of application of predictive analytics in railway transport, as well as the advantages, challenges and prospects for further development of this technology in the railway infrastructure.

    Keywords: predictive analytics in railway transport, passenger traffic forecasting, freight optimization, maintenance optimization, inventory and supply management, personnel management, financial planning, big data analysis

  • Simulation modeling of calculation of transient response using Duhamel integral

    A Simulink model is considered that calculates the transient processes of objects described by a transient (step) response, for any type of input action. The operation algorithm of the S-function that performs the calculation via the Duhamel integral is described. Owing to the features of the S-function, it can store values from the previous step of the Simulink calculation. This allows the input signal to be decomposed into step components, storing the occurrence time and value of each step. For each increment of the input signal, the S-function calculates the response by scaling the step response, and at each calculation step the sum of these reactions is formed (a vectorized sketch of this superposition is given below the keywords). The S-function provides a procedure for freeing memory once the end point of the step response is reached, so the memory required for the calculation does not grow above a certain limit and is essentially independent of the length of model time. The S-function uses matrix operations rather than loops, so the model computes quickly. The article presents the results of the calculations, gives recommendations for setting the model parameters, and concludes that the model can be used for calculating dynamic modes.

    Keywords: simulation modeling, Simulink, step response, step function, S-function, Duhamel integral
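
    Outside Simulink, the same superposition can be written in a few vectorized lines; this Python sketch mirrors the S-function's logic (step decomposition, scaled step responses, matrix operations instead of loops) but is not the article's model.

      import numpy as np

      def duhamel_response(t: np.ndarray, u: np.ndarray, step_response) -> np.ndarray:
          du = np.diff(u, prepend=0.0)     # decompose the input into step increments
          # Row k holds h(t - t_k), zeroed before the step's occurrence time t_k.
          shifted = np.where(t[None, :] >= t[:, None],
                             step_response(t[None, :] - t[:, None]), 0.0)
          return du @ shifted              # superposition of the scaled step responses

      t = np.linspace(0.0, 10.0, 501)
      u = np.clip(t, 0.0, 3.0)                                   # ramp-then-hold test input
      y = duhamel_response(t, u, lambda tau: 1 - np.exp(-tau))   # first-order plant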

  • Computer Modeling and Improvement of Extrusion Tooling 

    This paper presents a simulation of square bar extrusion using the finite element method in QForm software. The sequence of simulation stages is described and verified using an example of extruding AD31 alloy. The geometry of the extrusion tool (die) was improved to reduce material damage and eliminate profile distortion. It was found that rounding the corners and adding a calibrating section to the die provides improved performance. The analysis of metal heating from plastic deformation is also provided. The study highlights the importance of computer modeling for optimizing tools and increasing the efficiency of the metal forming process. Specifically, the initial and final die geometries were compared to assess the impact on product properties. Recommendations are provided for using simulation in the practice of developing metal forming process technologies. The benefits of using QForm software over alternative solutions are highlighted, including access to a comprehensive material database and robust technical support. The improved die design resulted in reduced plastic deformation and improved product quality. Finite element analysis significantly accelerates the development and testing of tools, providing the ability to develop optimal designs that account for various factors.

    Keywords: extrusion, die design, finite element method, computer modeling, optimization, metal forming, aluminum alloy, process simulation

  • Numerical study of the effect of dust deposits on heat transfer in porous heat exchangers

    A study of heat transfer in porous heat exchangers with a deposit of dust particles was carried out. The influence of the heat exchanger length and of the presence of the deposit on the Nusselt number was studied. It was revealed that increasing the length of the heat exchanger from 5 to 30 mm increases the Nusselt number by 39.72-81.35%, depending on the Reynolds number, while the formation of the deposit on the heat exchanger surface decreases the Nusselt number by 2.8-6.6%.

    Keywords: heat transfer, hydrodynamics, calculation, Nusselt number, Reynolds number, mathematical modeling, sediment formation, microelectronics, cooling systems, radiator

  • Qualitative study of the emulsion layer formation model

    The article discusses the basic model of the formation of an emulsion layer on a rotating cylinder. Special attention is paid to the influence of various parameters on the internal characteristics of the emulsion formation process. A mathematical description of the displacement layer is given, and functional dependencies between the parameters characterizing the emulsion formation process are derived. Based on experimental data, the qualitative influence of "internal" and "external" factors on the formation of the emulsion layer is studied. The study produced graphs from which the following conclusions can be drawn. An increase in the viscosity of the emulsion decreases such parameters as the boundary of the fracture region of the more viscous liquid layer adjacent to the surface of the rotating cylinder and the viscosity of the emulsion in the transition layer, while the consumption of the emulsion increases. It is also established that growth of the complex of "external" parameters decreases all internal parameters during the formation of the emulsion layer. The dependencies obtained can be taken into account when calculating the operating and design parameters of devices.

    Keywords: emulsion layer, viscosity, density, emulsion, rotating cylinder, liquid, emulsion composition

  • Development of a system for optimizing the planning of the deployment of logging sites using the example of the Republic of Karelia

    The paper is devoted to the application of a reinforcement learning model for automating the planning of the placement of logging sites in forestry. A method for optimizing the selection of cutting areas based on the Proximal Policy Optimization (PPO) algorithm is proposed. An information system has been developed that is adapted to processing forest management data in matrix form and to working with geographic information systems. The experiments conducted demonstrate the ability of the proposed method to find rational options for the placement of cutting areas (an illustrative setup is sketched below the keywords). The results are promising for the use of intelligent systems in the forestry industry.

    Keywords: reinforcement learning, deep learning, cutting areas location, forestry, artificial intelligence, planning optimization, clear-cutting
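
    An illustrative setup in the spirit described above: a toy grid environment in which each action clear-cuts one cell, trained with PPO from stable-baselines3; the environment, reward, and hyperparameters are assumptions, not the developed information system.

      import numpy as np
      import gymnasium as gym
      from gymnasium import spaces
      from stable_baselines3 import PPO

      class CuttingAreaEnv(gym.Env):
          """Toy matrix of forest cells; the agent picks which cell to cut next."""
          def __init__(self, size=8):
              self.size = size
              self.observation_space = spaces.Box(0.0, 1.0, (size * size,), np.float32)
              self.action_space = spaces.Discrete(size * size)

          def reset(self, seed=None, options=None):
              super().reset(seed=seed)
              self.stock = self.np_random.random(self.size ** 2).astype(np.float32)
              self.steps = 0
              return self.stock.copy(), {}

          def step(self, action):
              reward = float(self.stock[action])    # timber value of the chosen cell
              self.stock[action] = 0.0              # the cell is clear-cut
              self.steps += 1
              return self.stock.copy(), reward, self.steps >= 10, False, {}

      model = PPO("MlpPolicy", CuttingAreaEnv(), verbose=0)
      model.learn(total_timesteps=5_000)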

  • Development of an image recognition algorithm for an automated system for monitoring fiberglass defects based on machine learning methods

    The article proposes an image recognition algorithm for an automated fiberglass defect inspection system using machine learning methods. Various types of neural network architectures are considered, such as rate-based neuron models, the Hopfield network, the restricted Boltzmann machine, and convolutional neural networks. A convolutional neural network of the ResNet family was chosen for developing the algorithm (a fine-tuning sketch is given below the keywords). In testing, the resulting network performed correctly and achieved high recognition accuracy.

    Keywords: fiberglass, defects, machine learning, convolutional neural networks, ResNet architecture, testing, accuracy
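
    A minimal sketch of fine-tuning a ResNet classifier for such defect images with torchvision; the class count, the choice of resnet18, and the hyperparameters are placeholders rather than the article's configuration.

      import torch
      import torch.nn as nn
      from torchvision import models

      model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      model.fc = nn.Linear(model.fc.in_features, 2)    # e.g. defect / no defect

      optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
      criterion = nn.CrossEntropyLoss()

      def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
          optimizer.zero_grad()
          loss = criterion(model(images), labels)      # forward pass + loss
          loss.backward()                              # backprop through the new head
          optimizer.step()
          return loss.item()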