Complex systems of interacting components can often be modeled by a simple graph $\mathcal{G}$ that consists of a set of $n$ nodes and a set of $m$ edges. Such a graph can be represented by an adjacency matrix $A\in\R^{n\times n}$, whose $(ij)$th entry is one if there is an edge pointing from node $i$ to node $j$, and zero otherwise. The matrix $A$ and its positive integer powers reveal important properties of the graph and allow the construction of the path length matrix $L$ for the graph. The $(ij)$th entry of $L$ is the length of the shortest path from node $i$ to node $j$; if there is no path between these nodes, then the entry is set to $\infty$. We are interested in how well information flows via shortest paths of the graph, which can be studied with the aid of the path length matrix. This matrix allows the definition of several measures of communication in the network defined by the graph, such as the global $K$-efficiency, which considers shortest paths made up of at most $K$ edges for some $K<n$, as well as the number of such shortest paths. Novel notions of connectivity introduced in this paper help us understand the importance of specific edges for the flow of information through the graph. This is of interest when seeking to simplify a network by removing selected edges, or when assessing the sensitivity of the flow of information to changes due to exterior causes such as a traffic stoppage on a road network.
We present the results of a large-scale computational analysis of mathematical papers from the ArXiv repository, demonstrating a comprehensive system that not only detects mathematical errors but also produces complete referee reports with journal tier recommendations. Our automated analysis system processed over 37,000 papers across multiple mathematical categories, revealing significant error rates and quality distributions. Remarkably, the system identified errors in papers spanning three centuries of mathematics, including works by Leonhard Euler (1707-1783) and Peter Gustav Lejeune Dirichlet (1805-1859), as well as contemporary Fields medalists. In Numerical Analysis (math.NA), we observed an error rate of 9.6\% (2,271 errors in 23,761 papers), while Geometric Topology (math.GT) showed 6.5\% (862 errors in 13,209 papers). Strikingly, Category Theory (math.CT) showed 0\% errors in the 93 papers analyzed, with evidence suggesting that papers in this area are ``easier'' for automated analysis. Beyond error detection, the system evaluated papers for journal suitability, recommending 0.4\% for top generalist journals and 15.5\% for top field-specific journals, and categorizing the remainder across specialist venues. These findings demonstrate both the ubiquity of mathematical errors across all eras and the feasibility of automated, comprehensive mathematical peer review at scale. Although applied here to mathematics, the methodology is discipline-agnostic and could readily be extended to physics, computer science, and other fields represented in the ArXiv repository.
In this paper, we examine the problem of sampling from log-concave distributions with (possibly) superlinear gradient growth under kinetic (underdamped) Langevin algorithms. Using a carefully tailored taming scheme, we propose two novel discretizations of the kinetic Langevin SDE, and we show that they are both contractive and satisfy a log-Sobolev inequality. Building on this, we establish a series of non-asymptotic bounds in $2$-Wasserstein distance between the law reached by each algorithm and the underlying target measure.
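To illustrate the kind of taming the abstract refers to, here is a generic tamed explicit discretization of the kinetic Langevin dynamics $dX_t=V_t\,dt$, $dV_t=-(\gamma V_t+\nabla U(X_t))\,dt+\sqrt{2\gamma}\,dW_t$. This is a minimal sketch only: the taming factor $1/(1+\Delta t\,\|\nabla U(x)\|)$ and all parameter choices are illustrative and are not the two discretizations proposed in the paper.

```python
import numpy as np

def tamed_kinetic_langevin_step(x, v, grad_u, dt, gamma=1.0, rng=None):
    """One explicit step for dX = V dt, dV = -(gamma V + grad U(X)) dt + sqrt(2 gamma) dW.
    The gradient is tamed so a superlinearly growing drift cannot cause blow-up."""
    rng = rng or np.random.default_rng()
    g = grad_u(x)
    g_tamed = g / (1.0 + dt * np.linalg.norm(g))  # taming bounds the effective drift by 1/dt
    x_new = x + dt * v
    v_new = v - dt * (gamma * v + g_tamed) \
            + np.sqrt(2.0 * gamma * dt) * rng.standard_normal(x.shape)
    return x_new, v_new

# Toy target: U(x) = x^4 / 4, whose gradient x^3 grows superlinearly,
# so an untamed explicit scheme with this step size could diverge.
rng = np.random.default_rng(0)
x, v = np.array([3.0]), np.array([0.0])
for _ in range(5000):
    x, v = tamed_kinetic_langevin_step(x, v, lambda z: z**3, dt=0.01, rng=rng)
```

The quartic potential is our own toy example; the point of the sketch is only that the trajectory remains bounded despite the superlinear drift.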
Lorenzo Fabris, Marco Tezzele, Ciro Busiello
et al.
In this work, we focus on the early design phase of cruise ship hulls, where the designers are tasked with ensuring the structural resilience of the ship against extreme waves while reducing steel usage and respecting safety and manufacturing constraints. At this stage the geometry of the ship is already finalized, and the designers choose the thicknesses of the primary structural elements, such as decks, bulkheads, and the shell. Reduced order modeling and black-box optimization techniques reduce the use of expensive finite element analysis to the validation of only the most promising configurations, thanks to the efficient exploration of the domain of decision variables. However, the quality of the final results heavily relies on the problem formulation, and on how the structural elements are assigned to the decision variables. With the increasing demand for alternative fuels and engine technologies, designers are often faced with novel configurations and risk producing ill-suited parameterizations. To address this issue, we enhanced a structural optimization pipeline for cruise ships, developed in collaboration with Fincantieri S.p.A., with a novel data-driven hierarchical reparameterization procedure based on the optimization of a series of sub-problems. Moreover, we implemented a multi-objective optimization module to provide the designers with insights into the efficient trade-offs between competing quantities of interest, and we enhanced the single-objective Bayesian optimization module. The new pipeline is tested on a simplified midship section and on a full ship hull, comparing the automated reparameterization to a baseline model provided by the designers. The tests show that the iterative refinement outperforms the baseline, thus streamlining the initial design phase and helping tackle more innovative projects.
Gabriella Puppo, Matteo Semplice, Giuseppe Visconti
Many interesting physical problems described by systems of hyperbolic conservation laws are stiff, and thus impose a very small time-step because of the restrictive CFL stability condition. In this case, one can exploit the superior stability properties of implicit time integration, which allows the time-step to be chosen from accuracy requirements alone, avoiding the use of unnecessarily small time-steps. We discuss an efficient framework to devise high order implicit schemes for stiff hyperbolic systems without tailoring it to a specific problem. The nonlinearity of high order schemes, due to the space- and time-limiting procedures which control nonphysical oscillations, makes implicit time integration difficult, e.g.~because the discrete system is nonlinear even for linear problems. This nonlinearity of the scheme is circumvented as proposed in (Puppo et al., Comm.~Appl.~Math.~\& Comput., 2023) for scalar conservation laws, where a first order implicit predictor is computed to freeze the nonlinear coefficients of the essentially non-oscillatory space reconstruction, and also to assist limiting in time. In addition, we propose a novel conservative flux-centered a-posteriori time-limiting procedure using numerical entropy indicators to detect troubled cells. The numerical tests involve classical and artificially devised stiff problems based on the Euler system of gas dynamics.
Let a network be represented by a simple graph $\mathcal{G}$ with $n$ vertices. A common approach to investigate properties of a network is to use the adjacency matrix $A=[a_{ij}]_{i,j=1}^n\in\R^{n\times n}$ associated with the graph $\mathcal{G}$, where $a_{ij}>0$ if there is an edge pointing from vertex $v_i$ to vertex $v_j$, and $a_{ij}=0$ otherwise. Both $A$ and its positive integer powers reveal important properties of the graph. This paper proposes to study properties of a graph $\mathcal{G}$ by also using the path length matrix for the graph. The $(ij)$th entry of the path length matrix is the length of the shortest path from vertex $v_i$ to vertex $v_j$; if there is no path between these vertices, then the value of the entry is $\infty$. Powers of the path length matrix are formed by using min-plus matrix multiplication and are important for exhibiting properties of $\mathcal{G}$. We show how several known measures of communication, such as closeness centrality, harmonic centrality, and eccentricity, are related to the path length matrix, and we introduce new measures of communication, such as the harmonic $K$-centrality and global $K$-efficiency, where only (short) paths made up of at most $K$ edges are taken into account. The sensitivity of the global $K$-efficiency to changes of the entries of the adjacency matrix is also considered.
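To fix ideas, the construction of the path length matrix by min-plus matrix multiplication can be sketched in a few lines of NumPy. This is a hedged illustration for unweighted graphs: the function names, the naive $O(n^3)$ min-plus product, and the normalization of the efficiency over ordered vertex pairs are our own choices, not code from the paper.

```python
import numpy as np

def min_plus(P, Q):
    """Min-plus (tropical) product: R[i, j] = min_k (P[i, k] + Q[k, j])."""
    n = P.shape[0]
    R = np.empty((n, n))
    for i in range(n):
        R[i] = np.min(P[i, :, None] + Q, axis=0)
    return R

def path_length_matrix(A, K):
    """Lengths of shortest paths using at most K edges; inf where no such path exists."""
    P1 = np.where(A > 0, 1.0, np.inf)   # lengths of one-edge paths
    np.fill_diagonal(P1, 0.0)
    P = P1.copy()
    for _ in range(K - 1):              # K-th min-plus power of P1
        P = min_plus(P, P1)
    return P

def global_K_efficiency(A, K):
    """Average inverse shortest-path length over ordered vertex pairs (1/inf = 0)."""
    P = path_length_matrix(A, K)
    n = P.shape[0]
    with np.errstate(divide="ignore"):
        inv = np.where(P > 0, 1.0 / P, 0.0)   # zero out the diagonal and unreachable pairs
    return inv.sum() / (n * (n - 1))
```

Here $K=n-1$ recovers the full path length matrix, while smaller $K$ restricts attention to short paths, as in the $K$-centrality measures described above.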
Modeling complex systems that consist of different types of objects leads to multilayer networks, in which vertices are connected by both inter-layer and intra-layer edges. In this paper, we investigate multiplex networks, in which vertices in different layers are identified with each other, and the only inter-layer edges are those that connect a vertex with its copy in other layers. Let the third-order adjacency tensor $\mathcal{A}\in\R^{N\times N\times L}$ and the parameter $\gamma\geq 0$, which is associated with the ease of communication between layers, represent a multiplex network with $N$ vertices and $L$ layers. To measure the ease of communication in a multiplex network, we focus on the average inverse geodesic length, which we refer to as the multiplex global efficiency $e_\mathcal{A}(\gamma)$, and we compute it by means of the multiplex path length matrix $P\in\R^{N\times N}$. This paper generalizes the approach proposed in \cite{NR23} for single-layer networks. We describe an algorithm based on min-plus matrix multiplication to construct $P$, as well as variants $P^K$ that only take into account multiplex paths made up of at most $K$ intra-layer edges. These matrices are applied to detect redundant edges and to determine non-decreasing lower bounds $e_\mathcal{A}^K(\gamma)$ for $e_\mathcal{A}(\gamma)$, for $K=1,2,\dots,N-2$. Finally, the sensitivity of $e_\mathcal{A}^K(\gamma)$ to changes of the entries of the adjacency tensor $\mathcal{A}$ is investigated to determine the edges that should be strengthened to enhance the multiplex global efficiency the most.
Residual deep neural networks (ResNets) can be described mathematically as interacting particle systems. In the limit of infinitely many layers, the ResNet leads to a coupled system of ordinary differential equations known as neural differential equations. For large-scale input data we derive a mean-field limit and show well-posedness of the resulting description. Further, we analyze the existence of solutions to the training process from both a controllability and an optimal control point of view. Numerical investigations based on the solution of a formal optimality system illustrate the theoretical findings.
Mohammed Al Mugahwi, Omar De la Cruz Cabrera, Silvia Noschese
et al.
Matrix functions play an important role in applied mathematics. In network analysis, in particular, the exponential of the adjacency matrix associated with a network provides valuable information about connectivity, as well as about the relative importance or centrality of nodes. Another popular approach to ranking the nodes of a network is to compute the left Perron vector of the adjacency matrix for the network. The present article addresses the problem of evaluating matrix functions, as well as computing an approximation to the left Perron vector, when only some of the columns and/or some of the rows of the adjacency matrix are known. Applications to network analysis are considered, and a sampling scheme that takes the connectivity of the network into account is described. Computed examples illustrate the performance of the methods discussed.
The lack of smoothness is a common feature of weak solutions of nonlinear hyperbolic equations and is a crucial issue in their approximation. This has motivated several efforts to define appropriate indicators, based on the values of the approximate solution, in order to detect the most troublesome regions of the domain. This information helps to adapt the approximation scheme in order to avoid spurious oscillations when using high-order schemes. In this paper we propose a genuinely multidimensional extension of the WENO procedure in order to overcome the limitations of indicators based on dimensional splitting. Our aim is to obtain new regularity indicators for problems in 2D and to apply them to a class of ``adaptive filtered'' schemes for first order evolutive Hamilton-Jacobi equations. Following the usual procedure, filtered schemes are obtained by a simple coupling of a high-order scheme and a monotone scheme. The mixture is governed by a filter function $F$ and by a switching parameter $\varepsilon^n=\varepsilon^n(\Delta t,\Delta x)>0$ which goes to $0$ as $(\Delta t,\Delta x)$ goes to $0$. The adaptivity is related to the smoothness indicators and allows the switching parameter $\varepsilon^n_j$ to be tuned automatically in time and space. Several numerical tests on critical situations in 1D and 2D are presented and confirm the effectiveness of the proposed indicators and the efficiency of our scheme.
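For background, the one-dimensional smoothness indicators that dimensional-splitting approaches apply direction by direction are the classical Jiang-Shu WENO indicators, which can be written down explicitly. This is a sketch on a single five-point stencil; the genuinely multidimensional indicators proposed in the paper are not reproduced here.

```python
import numpy as np

def jiang_shu_betas(u):
    """Classical WENO5 smoothness indicators on the five-point stencil
    u = (u_{i-2}, u_{i-1}, u_i, u_{i+1}, u_{i+2}); a large beta flags a
    rough substencil."""
    b0 = 13/12 * (u[0] - 2*u[1] + u[2])**2 + 1/4 * (u[0] - 4*u[1] + 3*u[2])**2
    b1 = 13/12 * (u[1] - 2*u[2] + u[3])**2 + 1/4 * (u[1] - u[3])**2
    b2 = 13/12 * (u[2] - 2*u[3] + u[4])**2 + 1/4 * (3*u[2] - 4*u[3] + u[4])**2
    return np.array([b0, b1, b2])

def weno_weights(betas, d=(0.1, 0.6, 0.3), eps=1e-6):
    """Nonlinear weights: substencils with large indicators are switched off."""
    alpha = np.asarray(d) / (eps + betas)**2
    return alpha / alpha.sum()
```

On data with a jump, the weight concentrates on the substencil that does not cross the discontinuity, which is exactly the selection mechanism an adaptive filtered scheme needs its indicator to provide.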
Dieter Armbruster, Michael Herty, Giuseppe Visconti
The Ensemble Kalman Filter (EnKF) belongs to the class of iterative particle filtering methods and can be used for solving control-to-observable inverse problems. In this context, the EnKF is known as Ensemble Kalman Inversion (EKI). In recent years several continuous limits in the number of iterations and particles have been performed in order to study properties of the method. In particular, a one-dimensional linear stability analysis reveals possible drawbacks in the phase space of moments provided by the continuous limits of the EKI, which are observed also in the multi-dimensional setting. In this work we address this issue by introducing a stabilization of the dynamics which leads to a method with globally asymptotically stable solutions. We illustrate the performance of the stabilized version on test inverse problems from the literature, comparing it with the classical continuous limit formulation of the method.
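To fix notation, the basic (unstabilized) discrete EKI update can be sketched as follows. This is a generic textbook-style iteration on a linear toy problem; the stabilized dynamics introduced in this work are not reproduced here, and all parameter choices are illustrative.

```python
import numpy as np

def eki_step(U, forward, y, Gamma):
    """One Ensemble Kalman Inversion update.
    U: (d, J) parameter ensemble; forward: columnwise map R^d -> R^m; y: data (m,)."""
    G = forward(U)                                   # (m, J) ensemble of predictions
    du = U - U.mean(axis=1, keepdims=True)
    dg = G - G.mean(axis=1, keepdims=True)
    J = U.shape[1]
    Cup = du @ dg.T / J                              # parameter-observation covariance
    Cpp = dg @ dg.T / J                              # observation covariance
    return U + Cup @ np.linalg.solve(Cpp + Gamma, y[:, None] - G)

# Linear toy inverse problem y = A u*, recovered by iterating the update.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
u_star = np.array([1.0, -1.0])
y = A @ u_star
rng = np.random.default_rng(0)
U = rng.standard_normal((2, 50))                     # initial ensemble of J = 50 particles
for _ in range(200):
    U = eki_step(U, lambda V: A @ V, y, Gamma=1e-2 * np.eye(2))
```

For this linear, well-posed toy problem the ensemble mean approaches the true parameter; the drawbacks analyzed in the paper concern the long-time moment dynamics of such iterations.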
In this paper we consider a system of two coupled nonlinear diffusion-reaction partial differential equations (PDEs) which model the growth of biofilm and the consumption of nutrient. At the scale of interest the biofilm density is subject to a pointwise constraint, thus the biofilm PDE is framed as a parabolic variational inequality. We derive rigorous error estimates for a finite element (FE) approximation to the coupled nonlinear system and confirm experimentally that the numerical approximation converges at the predicted rate. We also show simulations in which we track the free boundary in domains that resemble the pore-scale geometry, and in which we test different modeling assumptions.
We present a comprehensive analysis of the performance of noise-reduction (``denoising'') algorithms to determine whether they provide advantages in source detection on extragalactic survey images. The methods under analysis are Perona-Malik filtering, the Bilateral filter, Total Variation denoising, Structure-Texture image decomposition, Non-local means, Wavelets, and Block-matching. We tested the algorithms on simulated images of extragalactic fields with resolution and depth typical of the Hubble, Spitzer, and Euclid Space Telescopes, and of ground-based instruments. After choosing the best internal parameter configuration of each method, we assess their performance as a function of resolution, background level, and image type, also testing their ability to preserve the fluxes and shapes of the objects. We analyze, in terms of completeness and purity, the catalogs extracted after applying denoising algorithms to a simulated Euclid Wide Survey VIS image and to real H160 (HST) and K-band (HAWK-I) observations of the CANDELS GOODS-South field. Denoising algorithms often outperform the standard approach of filtering with the Point Spread Function (PSF) of the image. Applying Structure-Texture image decomposition, Perona-Malik filtering, the Total Variation method by Chambolle, or Bilateral filtering to the Euclid-VIS image, we obtain catalogs that are both purer and complete to magnitudes 0.2 deeper than those based on the standard approach. The same result is achieved with the Structure-Texture image decomposition algorithm applied to the H160 image. The advantage of denoising techniques over PSF filtering increases with increasing depth. Moreover, these techniques preserve the shapes of the detected objects better than PSF smoothing does. Denoising algorithms thus provide significant improvements in the detection of faint objects and enhance the scientific return of current and future extragalactic surveys.
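Among the methods tested, Perona-Malik filtering admits a particularly compact formulation. The sketch below uses illustrative parameters and periodic boundaries for brevity, not the tuned configuration used in the survey.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.2, dt=0.2):
    """Perona-Malik anisotropic diffusion: smooths flat regions while the
    edge-stopping function g suppresses diffusion across strong gradients."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)      # edge-stopping function
    for _ in range(n_iter):
        # differences toward the 4 neighbors (periodic boundaries via roll)
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u,  1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u,  1, axis=1) - u
        u = u + dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```

In flat regions the differences are small, $g\approx 1$, and the update acts like ordinary diffusion, averaging out noise; across a bright edge the differences are large, $g\approx 0$, and the edge is preserved, which is why such filters can keep object shapes better than PSF smoothing.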
We consider a class of ``filtered'' schemes for first order time dependent Hamilton-Jacobi equations and prove a general convergence result for this class of schemes. A typical filtered scheme is obtained by mixing a high-order scheme and a monotone scheme according to a filter function $F$ which decides where the scheme has to switch from one scheme to the other. A crucial role for this switch is played by a parameter $\varepsilon=\varepsilon(\Delta t,\Delta x)>0$ which goes to $0$ as the time and space steps $(\Delta t,\Delta x)$ go to $0$, and which does not depend on the time $t_n$ at each iteration $n$. The tuning of this parameter in the code is rather delicate and influences the global accuracy of the filtered scheme. Here we introduce an adaptive and automatic choice of $\varepsilon=\varepsilon^n(\Delta t,\Delta x)$ at every iteration, modifying the classical set up. The adaptivity is controlled by a smoothness indicator which selects the regions where we modify the regularity threshold $\varepsilon^n$. A convergence result and some error estimates for the new adaptive filtered scheme are proved; this analysis relies on the properties of the scheme and of the smoothness indicators. Finally, we present some numerical tests to compare the adaptive filtered scheme with other methods.
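The basic filtered-scheme mechanism can be sketched on the linear model problem $u_t+u_x=0$. Our choices of model problem, monotone upwind scheme, centered high-order scheme, and filter $F(r)=r$ for $|r|\le 1$ and $F(r)=0$ otherwise are illustrative; the schemes analyzed in the paper are more general.

```python
import numpy as np

def filtered_step(u, dt, dx, eps):
    """One step of u^{n+1} = S_M u + eps*dt * F((S_A u - S_M u) / (eps*dt)).
    Where the two schemes are close (|r| <= 1) the update equals the
    high-order scheme S_A; elsewhere the filter falls back to the monotone S_M."""
    sm = u - dt / dx * (u - np.roll(u, 1))                     # monotone upwind scheme
    sa = u - dt / (2 * dx) * (np.roll(u, -1) - np.roll(u, 1))  # centered high-order scheme
    r = (sa - sm) / (eps * dt)
    F = np.where(np.abs(r) <= 1.0, r, 0.0)                     # filter function
    return sm + eps * dt * F
```

The sketch makes the role of $\varepsilon$ visible: a large threshold lets the high-order scheme act everywhere, while a vanishing one reduces the update to the monotone scheme, which is exactly the trade-off the adaptive choice of $\varepsilon^n$ automates.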