# 2023, Volume 27, Issue 1

Using numerical experiments, we demonstrate that the accuracy of numerical singular value decomposition (SVD) is affected not only by the condition number of the matrix but also by the distribution of singular values in its spectrum. We note that the widely used relative norm error of the SVD result may be insufficient, and we propose an alternative metric based on the root-mean-square relative error.

**Keywords:** singular value decomposition, SVD, condition number, matrix spectrum, numerical stability.
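The two metrics contrasted in the abstract can be sketched as follows; the function names and the exact metric definitions are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def svd_errors(a, s_exact):
    """Compare two accuracy metrics for numerically computed singular values
    of the matrix `a`, whose exact singular values `s_exact` are known."""
    s_computed = np.linalg.svd(a, compute_uv=False)
    # Widely used metric: relative error in the Euclidean norm. Errors in
    # the smallest singular values are masked by the largest ones.
    rel_norm_err = np.linalg.norm(s_computed - s_exact) / np.linalg.norm(s_exact)
    # Alternative metric in the spirit of the abstract: root-mean-square of
    # the per-value relative errors, which weights every singular value equally.
    rms_rel_err = np.sqrt(np.mean(((s_computed - s_exact) / s_exact) ** 2))
    return rel_norm_err, rms_rel_err

# A diagonal test matrix with a widely spread spectrum: the exact singular
# values are simply the diagonal entries.
s = np.array([1e8, 1.0, 1e-8])
rel, rms = svd_errors(np.diag(s), s)
```

On an ill-conditioned matrix a tiny `rel` can coexist with a large `rms`, which is the gap the proposed metric is meant to expose.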

Decision trees are widely used in machine learning, statistics, and data mining. Predictive models based on decision trees show outstanding accuracy and training time, especially on heterogeneous tabular datasets. Performance, simplicity, and integrity make this family of algorithms one of the most popular in data science. One important hyperparameter of decision tree training algorithms is the maximum depth of the trees. This paper proves a theoretical result showing how the maximum depth of decision trees limits the expressive power of an ensemble. The result is applicable to tree-based algorithms such as the plain decision tree, random forest, GBDT, and others.

**Keywords:** machine learning, data science, decision tree, random forest, gradient boosting.
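The paper's theorem is not reproduced here, but the elementary counting fact behind any depth limit can be sketched (function names are ours):

```python
import math

def max_leaves(depth):
    """A binary decision tree of depth `depth` has at most 2**depth leaves,
    so a single tree can output at most that many distinct values."""
    return 2 ** depth

def min_depth_for(n_values):
    """Minimum depth a single binary tree needs to produce `n_values`
    distinct outputs, by the leaf-count bound above."""
    return math.ceil(math.log2(n_values))
```

For example, a tree of depth 2 has at most 4 leaves, so no depth-2 tree can exactly fit a target taking 5 distinct values; expressiveness bounds for ensembles refine this kind of counting.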

The paper is devoted to studying the orthogonality graph of the matrix ring over a commutative ring. It is proved that the orthogonality graph of the ring of matrices of size greater than 1 over a commutative ring with zero-divisors is connected and has diameter 3 or 4; a criterion distinguishing the two values is obtained. It is also shown that each of its vertices is at distance at most 2 from some scalar matrix.

**Keywords:** associative ring with identity, commutative ring, zero-divisor, matrix ring, zero-divisor graph, orthogonality graph.
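A concrete adjacency check, under the standard convention that matrices A and B are orthogonal when AB = BA = 0 (the helper names are ours):

```python
def mat_mul_mod(a, b, m):
    """Multiply two 2x2 matrices with entries reduced modulo m."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) % m for j in range(2)]
            for i in range(2)]

def is_orthogonal(a, b, m):
    """A and B are orthogonal in the 2x2 matrix ring over Z/mZ
    when both products AB and BA are the zero matrix."""
    zero = [[0, 0], [0, 0]]
    return mat_mul_mod(a, b, m) == zero and mat_mul_mod(b, a, m) == zero

# Over Z/4Z (a commutative ring with zero-divisors), any two matrices with
# all entries in {0, 2} are orthogonal: every entry of either product is a
# multiple of 4. In particular the scalar matrix 2*I is adjacent to such
# matrices, in line with the "distance at most 2 from a scalar matrix" claim.
a = [[2, 0], [0, 2]]  # the scalar matrix 2*I
b = [[0, 2], [2, 0]]
```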

The paper considers the question of the expressibility of an arbitrary continuous piecewise-linear function of several real variables as a neural circuit over a basis with max-type nonlinearities. The result is then transferred to neural circuits built over a basis with the single nonlinear ReLU function. Before proving the main result, several auxiliary technical lemmas are formulated and proved, extending the known properties of piecewise-linear functions and of the equivalence classes generated by a given set of hyperplanes. The paper also gives estimates of the nonlinear complexity and depth of the constructed neural circuits in the two given bases. Finally, the paper proves the equality of the class of continuous piecewise-linear functions, the class of functions representable by neural circuits over a basis of the first type, and the class of functions representable by neural circuits over a basis of the second type.

**Keywords:** neural networks, architecture, function recovery, function expressibility, convex functions, continuous piecewise-linear functions, ReLU function, maximum function.

Locally recoverable codes (LRC codes) are linear codes with a property important for applications: every symbol of a codeword can be recovered from a small set of other symbols. The paper provides reductions from known decision problems of coding theory to the problem of checking this property, and proves that the problem is NP-complete over an arbitrary fixed finite field.

**Keywords:** erasure coding, locally recoverable codes, NP-complete.
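The property being checked can be illustrated by brute force on a toy binary code (the [4, 3] single-parity-check code; we take "recoverable" to mean that the coordinate is a function of the chosen set of other coordinates across all codewords, which is an assumption of this sketch):

```python
from itertools import product

def parity_codewords():
    """All codewords of the [4, 3] binary single-parity-check code:
    (x1, x2, x3, x1 ^ x2 ^ x3)."""
    return [(x1, x2, x3, x1 ^ x2 ^ x3)
            for x1, x2, x3 in product((0, 1), repeat=3)]

def recoverable(codewords, i, helpers):
    """Coordinate i is recoverable from the coordinates in `helpers` iff
    no two codewords agree on `helpers` but differ at position i."""
    seen = {}
    for c in codewords:
        key = tuple(c[j] for j in helpers)
        if seen.setdefault(key, c[i]) != c[i]:
            return False
    return True

cw = parity_codewords()
```

Here every symbol is recoverable from the other three (it is their XOR), but not from any two of them; the paper shows that deciding such questions for general linear codes is NP-complete.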

This paper considers volume circuits, which are embeddings of Boolean circuits in space. A lower bound on the potential of volume circuits is obtained. The potential is a measure of circuit activity equal to the number of gates that output one on a given input. It is shown that, for almost all partial operators with n inputs and m outputs, the potential of a volume circuit implementing them is at least \[ \frac{m \sqrt[3]{d}}{\min^{2/3}(m, \log_2 d)} \], where d is the size of the domain.

**Keywords:** Boolean circuits, volume circuits, circuit complexity, circuit activity, potential.
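For a feel of how the bound scales, it can be evaluated numerically (reading \(\min^{2/3}(m, \log_2 d)\) as \((\min(m, \log_2 d))^{2/3}\); the function name is ours):

```python
import math

def potential_lower_bound(m, d):
    """Evaluate the stated lower bound m * d**(1/3) / min(m, log2(d))**(2/3)
    on the potential of a volume circuit, where m is the number of outputs
    and d is the size of the operator's domain."""
    return m * d ** (1 / 3) / min(m, math.log2(d)) ** (2 / 3)
```

For instance, with m = 8 outputs and a domain of size d = 2^30, the bound equals 8 · 1024 / 8^(2/3) = 2048, and it grows with the cube root of d.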

We study the proportion of Boolean functions inducing p-reducible functions of various types among all Boolean functions.

**Keywords:** Bernoulli random variable, finite generation, random variable transformation.