Juan Tapia-Aguilera, Luis Fernando Grisales-Noreña, Roberto Eduardo Quintal-Palomo
et al.
This work develops a methodology for operating Battery Energy Storage Systems (BESSs) in distribution networks, connected in parallel with a medium- and small-scale photovoltaic Distributed Generator (PMGD), focusing on a real project located in the O’Higgins region of Chile. The objective is to increase energy sales by the PMGD while ensuring compliance with operational constraints related to the grid, PMGD, and BESSs, and optimizing renewable energy use. A real distribution network from Compañía General de Electricidad (CGE) comprising 627 nodes was simplified into a validated three-node, two-line equivalent model to reduce computational complexity while maintaining accuracy. A mathematical model was designed to maximize economic benefits through optimal energy dispatch, considering solar generation variability, demand curves, and seasonal energy sales and purchasing prices. An energy management system was proposed based on a master–slave methodology composed of Particle Swarm Optimization (PSO) and an hourly power flow using the successive approximation method. Advanced optimization techniques such as Monte Carlo (MC) simulation and a Genetic Algorithm (GA) were employed as comparison methods, supported by a statistical analysis evaluating the best and average solutions, repeatability, and processing times to select the most effective optimization approach. Results demonstrate that BESS integration efficiently manages solar generation surpluses, injecting energy during peak demand and high-price periods to maximize revenue, alleviate grid congestion, and improve operational stability, with PSO proving particularly efficient. This work underscores the potential of BESSs in PMGD projects to support a more sustainable and efficient energy matrix in Chile, despite regulatory and technical challenges that warrant further investigation.
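The master–slave structure described above can be illustrated with a minimal sketch: a PSO "master" searches over 24-hour BESS charge/discharge schedules, while a "slave" routine evaluates each candidate. All data (PV surplus profile, prices, battery limits) are assumed values, and a simple revenue simulator stands in for the paper's successive-approximation power flow; this is not the authors' implementation.

```python
import random

# Illustrative hourly data (assumed, not from the paper): PV surplus [kWh]
# and energy sale price [$/kWh], with an evening price peak.
HOURS = 24
pv = [max(0.0, 50.0 * (1.0 - abs(h - 12) / 6.0)) for h in range(HOURS)]
price = [0.10 + (0.08 if 18 <= h <= 21 else 0.0) for h in range(HOURS)]
CAP, P_MAX = 100.0, 25.0  # battery capacity [kWh] and power limit [kW] (assumed)

def revenue(schedule):
    """Slave stage (stand-in for the paper's hourly power flow): simulate the
    BESS over 24 h and return total energy-sale revenue.
    schedule[h] > 0 discharges (sell); schedule[h] < 0 charges from PV surplus."""
    soc, total = 0.5 * CAP, 0.0
    for h in range(HOURS):
        p = max(-P_MAX, min(P_MAX, schedule[h]))   # respect the power limit
        charge = min(max(-p, 0.0), pv[h])          # charge only from PV surplus
        discharge = min(max(p, 0.0), soc)          # sell only stored energy
        soc = min(CAP, soc + charge - discharge)   # respect the capacity limit
        total += price[h] * (discharge + (pv[h] - charge))  # BESS + direct PV sales
    return total

def pso(n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Master stage: standard PSO over 24-hour charge/discharge schedules."""
    rnd = random.Random(0)
    xs = [[rnd.uniform(-P_MAX, P_MAX) for _ in range(HOURS)] for _ in range(n)]
    vs = [[0.0] * HOURS for _ in range(n)]
    pbest = [x[:] for x in xs]
    pfit = [revenue(x) for x in xs]
    g = pbest[max(range(n), key=lambda i: pfit[i])][:]
    gfit = max(pfit)
    for _ in range(iters):
        for i in range(n):
            for d in range(HOURS):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rnd.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rnd.random() * (g[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = revenue(xs[i])
            if f > pfit[i]:
                pbest[i], pfit[i] = xs[i][:], f
                if f > gfit:
                    g, gfit = xs[i][:], f
    return g, gfit

best, best_rev = pso()
```

The schedule found by the master shifts stored PV energy toward the high-price evening hours, which is the qualitative behavior the abstract reports.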
We study a stochastic model of a copolymerization process that has been extensively investigated in the physics literature. The main questions of interest include: (i) what are the criteria for transience, null recurrence, and positive recurrence in terms of the system parameters; (ii) in the transient regime, what are the limiting fractions of the different monomer types; and (iii) in the transient regime, what is the speed of growth of the polymer? Previous studies in the physics literature have addressed these questions using heuristic methods. Here, we utilize rigorous mathematical arguments to derive the results from the physics literature. Moreover, the techniques developed allow us to generalize to the copolymerization process with finitely many monomer types. We expect that the mathematical methods used and developed in this work will also enable the study of even more complex models in the future.
In this chapter, I discuss teaching mathematical tools specifically tailored for economics students. A typical one-semester course in this area seeks to blend a range of topics: from foundational elements of subjects such as linear algebra and multivariate calculus to intermediate areas like real and convex analysis and further into advanced topics such as dynamic optimization in both continuous and discrete time. This breadth of coverage corresponds to material usually spread across multiple years in traditional mathematics programs. Given the comprehensive nature of these courses, careful selection of topics is essential, balancing numerous trade-offs. I discuss potential course sequences and instructional design choices. I then focus on conceptualizing and explaining mathematical modeling in economics. I reflect on three years of teaching an advanced undergraduate course in mathematical methods online. The latter part of the chapter offers examples and visualizations I have found particularly beneficial for imparting intuition to economics students. They cover a range of topics at different degrees of difficulty and are meant as a resource for instructors in Mathematics for Economists. Among these, I use the Ramsey model as a recurring example, especially relevant when designing a mathematical tools course with an orientation towards preparing students for macroeconomic analysis.
This study presents an innovative control strategy for enabling ships to perform automatic U-turns in restricted waters, with a focus on minimizing energy consumption and reducing wear on the steering gear. The strategy integrates a closed-loop gain-shaping algorithm with nonlinear feedback control, applied to a nonlinear motion mathematical model specifically designed for low-speed operations in shallow waters. The simulations, conducted under wind conditions up to Beaufort scale No. 5 and water depths of 15 m, demonstrate that ships can successfully execute automatic U-turns within a distance three times their length. The incorporation of nonlinear feedback technology significantly reduces energy consumption and steering gear wear, with specific improvements including a reduction in the average rudder angle by up to 18.26%, a reduction in the mean absolute error (MAE) by up to 3.6%, a reduction in the mean integrated absolute error (MIA) by up to 13.55%, and a reduction in the mean total variation (MTV) by up to 36.36%. These enhancements not only optimize the control effect but also extend the service life of the steering gear, thereby contributing to more sustainable maritime operations. Theoretical proofs and Matlab-based simulations validate the effectiveness of the controller, highlighting its potential for energy savings and improved navigational efficiency in challenging maritime environments.
This work investigates three-species, one-predator-two-prey ecological models with Lotka-Volterra type functional response, with or without diffusive terms. Without the diffusive effects and under two essential assumptions, we completely classify the generic global dynamics. The global asymptotic stability of three equilibria is shown analytically in each case. With the diffusive terms, we establish the existence of traveling wave solutions by the higher-dimensional shooting method, the Wazewski principle. In particular, there are two critical wave speeds $0<c_2<c_1$. We show the existence of traveling wave solutions with wave speed $c$ if $c>c_1$ and the non-existence of traveling wave solutions if $0<c<c_2$. Finally, a brief discussion, biological interpretations, and numerical simulations are given.
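For readers who want to experiment with the non-diffusive system, a minimal sketch of a one-predator-two-prey Lotka-Volterra model is given below; all parameter values and initial conditions are illustrative assumptions, not those analyzed in the paper.

```python
# Illustrative parameter values (assumed, not from the paper)
r1, r2 = 1.0, 0.8                          # prey intrinsic growth rates
a11, a12, a21, a22 = 1.0, 0.5, 0.4, 1.0    # intra-/inter-specific prey competition
b1, b2 = 0.6, 0.4                          # predation rates on prey 1 and 2
d, c1, c2 = 0.5, 0.4, 0.3                  # predator death rate, conversion rates

def rhs(x1, x2, y):
    """Right-hand side of the non-diffusive one-predator-two-prey system."""
    return (x1 * (r1 - a11 * x1 - a12 * x2 - b1 * y),
            x2 * (r2 - a21 * x1 - a22 * x2 - b2 * y),
            y * (-d + c1 * x1 + c2 * x2))

def simulate(x1, x2, y, dt=0.01, steps=20000):
    """Classical fourth-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = rhs(x1, x2, y)
        k2 = rhs(x1 + dt/2*k1[0], x2 + dt/2*k1[1], y + dt/2*k1[2])
        k3 = rhs(x1 + dt/2*k2[0], x2 + dt/2*k2[1], y + dt/2*k2[2])
        k4 = rhs(x1 + dt*k3[0], x2 + dt*k3[1], y + dt*k3[2])
        x1 += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        x2 += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        y  += dt/6*(k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
    return x1, x2, y

state = simulate(0.5, 0.5, 0.2)
```

Varying the parameters lets one observe convergence toward the different equilibria whose global stability the paper classifies.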
Konstantinos Tsigos, Evlampios Apostolidis, Spyridon Baxevanakis
et al.
In this paper we propose a new framework for evaluating the performance of explanation methods on the decisions of a deepfake detector. This framework assesses the ability of an explanation method to spot the regions of a fake image with the greatest influence on the decision of the deepfake detector, by examining the extent to which these regions can be modified through a set of adversarial attacks in order to flip the detector's prediction or reduce its initial prediction score; we anticipate a larger drop in deepfake detection accuracy and prediction score for methods that spot these regions more accurately. Based on this framework, we conduct a comparative study using a state-of-the-art model for deepfake detection trained on the FaceForensics++ dataset and five explanation methods from the literature. The findings of our quantitative and qualitative evaluations document the advanced performance of the LIME explanation method over the other compared ones, and indicate this method as the most appropriate for explaining the decisions of the utilized deepfake detector.
Adelina Bärligea, Philipp Hochstaffl, Franz Schreier
This paper presents a solution for efficiently and accurately solving separable least squares problems with multiple datasets. These problems involve determining linear parameters that are specific to each dataset while ensuring that the nonlinear parameters remain consistent across all datasets. A well-established approach for solving such problems is the variable projection algorithm introduced by Golub and LeVeque, which effectively reduces a separable problem to its nonlinear component. However, this algorithm assumes that the datasets have equal sizes and identical auxiliary model parameters. This article is motivated by a real-world remote sensing application where these assumptions do not apply. Consequently, we propose a generalized algorithm that extends the original theory to overcome these limitations. The new algorithm has been implemented and tested using both synthetic and real satellite data for atmospheric carbon dioxide retrievals. It has also been compared to conventional state-of-the-art solvers, and its advantages are thoroughly discussed. The experimental results demonstrate that the proposed algorithm significantly outperforms all other methods in terms of computation time, while maintaining comparable accuracy and stability. Hence, this novel method can have a positive impact on future applications in remote sensing and could be valuable for other scientific fitting problems with similar properties.
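The core idea of variable projection with multiple datasets of unequal sizes can be sketched on a toy problem: each dataset has its own linear scale $c_k$, while a nonlinear decay rate $\alpha$ is shared across all datasets. The exponential model, the synthetic data, and the golden-section search below are illustrative assumptions, not the paper's atmospheric-retrieval setup or its generalized algorithm.

```python
import math

# Synthetic multi-dataset example: y = c_k * exp(-alpha * t) with a
# dataset-specific linear scale c_k and a shared nonlinear rate alpha.
TRUE_ALPHA = 0.7
datasets = []
for c, n in [(2.0, 8), (5.0, 12), (1.5, 5)]:   # note the unequal dataset sizes
    t = [i * 0.3 for i in range(n)]
    y = [c * math.exp(-TRUE_ALPHA * ti) for ti in t]
    datasets.append((t, y))

def reduced_residual(alpha):
    """Project out the linear parameters: for fixed alpha, each c_k has a
    closed-form least-squares solution, leaving a residual in alpha only."""
    total = 0.0
    for t, y in datasets:
        phi = [math.exp(-alpha * ti) for ti in t]
        c = sum(yi * pi for yi, pi in zip(y, phi)) / sum(pi * pi for pi in phi)
        total += sum((yi - c * pi) ** 2 for yi, pi in zip(y, phi))
    return total

# Minimize the reduced (nonlinear-only) problem by golden-section search
lo, hi = 0.0, 2.0
g = (math.sqrt(5) - 1) / 2
for _ in range(60):
    a, b = hi - g * (hi - lo), lo + g * (hi - lo)
    if reduced_residual(a) < reduced_residual(b):
        hi = b
    else:
        lo = a
alpha_hat = (lo + hi) / 2
```

The reduction from a 4-parameter problem (three $c_k$ plus $\alpha$) to a 1-parameter problem in $\alpha$ is the source of the computational advantage the abstract reports.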
Julio Benitez, Waldemar W. Koczkodaj, Adam Kowalczyk
Orthogonalization is one of the few mathematical methods conforming to mathematical standards for approximation. Finding a consistent pairwise comparisons (PC) matrix for a given inconsistent PC matrix is the main goal of the pairwise comparisons method. We introduce an orthogonalization for pairwise comparisons matrices based on a generalized Frobenius inner matrix product. The proposed theory is supported by numerous examples and visualizations.
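A minimal sketch of matrix orthogonalization under the standard (not the paper's generalized) Frobenius inner product illustrates the basic mechanism; the two 3×3 reciprocal PC matrices are hypothetical examples.

```python
import math

def frob_inner(A, B):
    """Standard Frobenius inner product <A, B> = sum_ij A_ij * B_ij."""
    return sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def gram_schmidt(mats):
    """Gram-Schmidt orthonormalization of matrices w.r.t. frob_inner."""
    ortho = []
    for M in mats:
        V = [row[:] for row in M]
        for Q in ortho:
            coef = frob_inner(V, Q)            # Q has unit Frobenius norm
            V = [[v - coef * q for v, q in zip(rv, rq)]
                 for rv, rq in zip(V, Q)]
        norm = math.sqrt(frob_inner(V, V))
        if norm > 1e-12:                       # skip linearly dependent inputs
            ortho.append([[v / norm for v in row] for row in V])
    return ortho

# Hypothetical 3x3 reciprocal pairwise-comparison matrices (a_ji = 1/a_ij)
A = [[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]]
B = [[1, 3, 2], [1/3, 1, 5], [1/2, 1/5, 1]]
Q = gram_schmidt([A, B])
```

The generalized inner product in the paper replaces `frob_inner` while the projection structure stays the same.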
Logic has pride of place in mathematics and its 20th century offshoot, computer science. Modern symbolic logic was developed, in part, as a way to provide a formal framework for mathematics: Frege, Peano, Whitehead and Russell, as well as Hilbert developed systems of logic to formalize mathematics. These systems were meant to serve either as themselves foundational, or at least as formal analogs of mathematical reasoning amenable to mathematical study, e.g., in Hilbert's consistency program. Similar efforts continue, but have been expanded by the development of sophisticated methods to study the properties of such systems using proof and model theory. In parallel with this evolution of logical formalisms as tools for articulating mathematical theories (broadly speaking), much progress has been made in the quest for a mechanization of logical inference and the investigation of its theoretical limits, culminating recently in the development of new foundational frameworks for mathematics with sophisticated computer-assisted proof systems. In addition, logical formalisms developed by logicians in mathematical and philosophical contexts have proved immensely useful in describing theories and systems of interest to computer scientists, and to some degree, vice versa. Three examples of the influence of logic in computer science are automated reasoning, computer verification, and type systems for programming languages.
A third-order accurate implicit-explicit (IMEX) Runge-Kutta time marching numerical scheme is proposed and implemented for the Landau-Lifshitz-Gilbert equation, which models magnetization dynamics in ferromagnetic materials with arbitrary damping parameters. This method has three remarkable advantages: (1) only a linear system with constant coefficients needs to be solved at each Runge-Kutta stage, which greatly reduces the time cost and improves efficiency; (2) the optimal-rate convergence analysis does not impose any restriction on the magnitude of the damping parameter, and is consistent with the third-order accuracy in time observed in 1-D and 3-D numerical examples; (3) its unconditional stability with respect to the damping parameter has been verified by a detailed numerical study. In comparison with many existing methods, the proposed method demonstrates better accuracy and efficiency, and thus provides a better option for micromagnetics simulations.
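For context, a generic $s$-stage IMEX Runge-Kutta step for a splitting $u_t = f_E(u) + f_I(u)$ into explicitly and implicitly treated parts has the standard textbook form below; this is not the paper's specific tableau.

```latex
% Generic s-stage IMEX-RK step for u_t = f_E(u) + f_I(u)
\begin{align*}
u^{(i)} &= u^n
  + \Delta t \sum_{j=1}^{i-1} \hat{a}_{ij}\, f_E\bigl(u^{(j)}\bigr)
  + \Delta t \sum_{j=1}^{i} a_{ij}\, f_I\bigl(u^{(j)}\bigr),
  \qquad i = 1,\dots,s, \\
u^{n+1} &= u^n
  + \Delta t \sum_{i=1}^{s} \hat{b}_{i}\, f_E\bigl(u^{(i)}\bigr)
  + \Delta t \sum_{i=1}^{s} b_{i}\, f_I\bigl(u^{(i)}\bigr).
\end{align*}
```

When $f_I$ is linear with constant coefficients and the implicit tableau $(a_{ij})$ is diagonally implicit, each stage reduces to a constant-coefficient linear solve, which is the source of advantage (1) above.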
This narrative review synthesizes and analyzes empirical studies on the adoption and acceptance of ChatGPT in higher education, addressing the need to understand the key factors influencing its use by students and educators. Anchored in theoretical frameworks such as the Technology Acceptance Model (TAM), Unified Theory of Acceptance and Use of Technology (UTAUT), Diffusion of Innovation (DoI) Theory, Technology–Organization–Environment (TOE) model, and Theory of Planned Behavior, this review highlights the central constructs shaping adoption behavior. The confirmed factors include hedonic motivation, usability, perceived benefits, system responsiveness, and relative advantage, whereas the effects of social influence, facilitating conditions, privacy, and security vary. Conversely, technology readiness and extrinsic motivation remain unconfirmed as consistent predictors. This study employs a qualitative synthesis of 40 peer-reviewed empirical studies, applying thematic analysis to uncover patterns in the factors driving ChatGPT adoption. The findings reveal that, while the traditional technology adoption models offer valuable insights, a deeper exploration of the contextual and psychological factors is necessary. The study’s implications inform future research directions and institutional strategies for integrating AI to support educational innovation.
In this paper, a computational method based on parameterizing state and control variables is presented for solving Stochastic Optimal Control (SOC) problems. By using Chebyshev wavelets with unknown coefficients, state and control variables are parameterized, and the stochastic optimal control problem is converted into a stochastic optimization problem. The expected cost functional of the resulting stochastic optimization problem is approximated by sample average approximation, so that the problem can be solved more easily by optimization methods. To facilitate and guarantee convergence of the presented method, a new theorem is proved. Finally, the proposed method is implemented, based on a newly designed algorithm, on one of the well-known problems in mathematical finance, the Merton portfolio allocation problem in finite horizon. The simulation results illustrate the improvement of the constructed portfolio return.
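The sample-average-approximation step can be illustrated on a drastically simplified one-period portfolio problem (not the paper's Chebyshev-wavelet formulation of the finite-horizon Merton problem): a mean-variance objective over the risky-asset weight $w$ is estimated from sampled returns and maximized over a grid. All parameter values are assumptions.

```python
import random

rnd = random.Random(42)
# One-period toy problem (illustrative): choose the risky-asset weight w to
# maximize E[W] - lambda * Var[W], where W = 1 + r_f + w * (R - r_f).
R_F = 0.02                       # risk-free rate (assumed)
MU, SIGMA = 0.08, 0.2            # risky return mean and volatility (assumed)
RISK_AVERSION = 1.0

# Sample average approximation: replace the expectation by an average over
# a fixed set of sampled returns (common random numbers across all w).
samples = [MU + SIGMA * rnd.gauss(0, 1) for _ in range(4000)]

def saa_objective(w):
    """SAA estimate of the mean-variance objective at weight w."""
    wealths = [1 + R_F + w * (r - R_F) for r in samples]
    mean = sum(wealths) / len(wealths)
    var = sum((x - mean) ** 2 for x in wealths) / len(wealths)
    return mean - RISK_AVERSION * var

# The SAA problem is now deterministic, so any optimizer applies;
# here a simple grid search over w in [0, 2].
w_star = max((i / 100 for i in range(201)), key=saa_objective)
```

For these parameters the analytical optimum is $w^\* = (\mu - r_f)/(2\lambda\sigma^2) = 0.75$, and the SAA solution lands close to it, illustrating why the deterministic surrogate "can be solved by optimization methods more easily."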
The double dispersion equation comprising the Lamé coefficient, nonlinear coefficient, and Poisson ratio components describes the uniform and inhomogeneous Murnaghan's rod, as derived by A. M. Samsonov in Samsonov (2001). In this work, we apply the F-expansion method to the double dispersion equation in the uniform and inhomogeneous Murnaghan's rod, extract the Jacobi elliptic function solutions, and classify them into six families of unique solutions. The necessary condition and the degeneration of the Jacobi solutions based upon the elliptic function modulus are given for each solution. The six classifications are formed based on the solutions of the algebraic equations.
The fields of medicine and neuroscience often face challenges in obtaining a sufficient amount of diverse data for training machine learning models. Data augmentation can alleviate this issue by artificially synthesizing new data from existing data. Generative adversarial networks (GANs) provide a promising approach for data augmentation in the context of images and biomarkers. GANs can synthesize high-quality, diverse, and realistic data that can supplement real data in the training process. This study provides an overview of the use of GANs for data augmentation in medicine and neuroscience. The strengths and weaknesses of various GAN models, including deep convolutional GANs (DCGANs) and Wasserstein GANs (WGANs), are discussed. This study also explores the challenges and ways to address them when using GANs for data augmentation in the field of medicine and neuroscience. Future works on this topic are also discussed.
Andrey Kovtanyuk, Alexander Chebotarev
et al.
A non-linear model of oxygen transport from a capillary to tissue is considered. The model takes into account the convection of oxygen in the blood, its diffusive transfer through the capillary wall, and the diffusion and consumption of oxygen in tissue. In the current work, a boundary value problem for the oxygen transport model is studied. An existence theorem is proved, and a numerical algorithm is constructed and implemented. Numerical experiments are conducted to study the effect of low hematocrit and reduced blood flow rate on cerebral hypoxia in preterm infants.
We initiate the study of the cycle structure of uniformly random parking functions. Using the combinatorics of parking completions, we compute the asymptotic expected value of the number of cycles of any fixed length. We obtain an upper bound on the total variation distance between the joint distribution of cycle counts and independent Poisson random variables using a multivariate version of Stein's method via exchangeable pairs. Under a mild condition, the process of cycle counts converges in distribution to a process of independent Poisson random variables.
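One concrete way to explore these cycle counts numerically is sketched below: a uniform parking function is drawn by rejection sampling, the parking process is simulated, and cycles are counted in the resulting outcome permutation (car $i \mapsto$ the spot where it parks). This interpretation of "cycles", the rejection sampler, and the parameter choices are illustrative assumptions, not the paper's combinatorial machinery.

```python
import random

rnd = random.Random(1)

def random_parking_function(n):
    """Uniform parking function of length n by rejection sampling: a
    preference vector is a parking function iff its sorted version
    satisfies a_(i) <= i."""
    while True:
        a = [rnd.randint(1, n) for _ in range(n)]
        if all(v <= i + 1 for i, v in enumerate(sorted(a))):
            return a

def outcome_permutation(a):
    """Park the cars: car i drives to its preferred spot, then takes the
    next free spot. Returns the 0-indexed permutation i -> spot(i)."""
    n = len(a)
    taken, perm = [False] * n, [0] * n
    for i, pref in enumerate(a):
        s = pref - 1
        while taken[s]:
            s += 1
        taken[s] = True
        perm[i] = s
    return perm

def cycle_count(perm):
    """Number of cycles of a permutation given in one-line notation."""
    seen, cycles = [False] * len(perm), 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = perm[j]
    return cycles

n = 8
counts = [cycle_count(outcome_permutation(random_parking_function(n)))
          for _ in range(200)]
```

Rejection sampling is exact here because a uniform preference vector conditioned on being a parking function is uniform over parking functions; the acceptance probability is $(n+1)^{n-1}/n^n$.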
The multi-resolution method, e.g., the Adaptive Particle Refinement (APR) method, has been developed to increase the local particle resolution, and therefore the solution quality, within a pre-defined refinement zone, instead of using a globally uniform resolution for Smoothed Particle Hydrodynamics (SPH). However, the targeted zone of interest can be time-varying with a very complex topology, which the conventional APR method cannot track adaptively. In this study, a novel Block-based Adaptive Particle Refinement (BAPR) method is developed, which provides the necessary local refinement flexibly for any targeted characteristic and tracks it adaptively. In BAPR, the so-called activation status of the block array defines the refinement regions, from which the transition and activated zones are determined. A regularization method for the particles generated in newly activated blocks is developed to render an isotropic distribution of these new particles. The proposed method has been deployed for simulating Fluid-Structure Interaction (FSI) problems: a set of 2D FSI cases have been simulated with the BAPR method, and its performance is quantified and validated comprehensively. In summary, the BAPR method is a viable and promising approach for complex multi-resolution FSI simulations, as it can track any targeted characteristic of interest.