Construction-Verification: A Benchmark for Applied Mathematics in Lean 4
Bowen Yang, Yi Yuan, Chenyi Li
et al.
Recent advances in large language models have demonstrated impressive capabilities in mathematical formalization. However, existing benchmarks focus on logical verification of declarative propositions, often neglecting the task of explicitly synthesizing solutions. This limitation is particularly acute in applied mathematics domains, where the goal is frequently to derive concrete values or executable algorithms rather than solely proving theorems. To address this, we introduce a Lean 4 framework that enforces a construction-verification workflow, compelling the agent to define explicit solutions before proving their correctness. We curate a comprehensive benchmark, AMBER (Applied Mathematics BEnchmark for Reasoning), spanning core domains of applied mathematics, including convex analysis, optimization, numerical algebra, and high-dimensional probability. Beyond theorem proving, our benchmark features complex tasks such as evaluation, algorithm design, and representation transformation. Experiments reveal that current models face significant difficulties with these constructive tasks. Notably, we observe that general-purpose reasoning models consistently outperform specialized theorem provers. We attribute this to a degradation of instruction-following capabilities in specialized models. Fine-tuning on proof corpora appears to induce "tactical overfitting", compromising the ability to adhere to complex constructive requirements, whereas general models retain the versatility needed for multi-task formal reasoning.
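The construction-verification split described in the abstract can be illustrated with a minimal Lean 4 sketch (a hypothetical toy task assuming Mathlib, not an actual AMBER problem): the agent must first produce an explicit `def`, and only then discharge the correctness proof against it.

```lean
import Mathlib

-- Construction: an explicit minimizer of f x = (x - 3)^2 over ℝ.
def xStar : ℝ := 3

-- Verification: prove the constructed value is a global minimizer.
theorem xStar_isMin (x : ℝ) : (xStar - 3) ^ 2 ≤ (x - 3) ^ 2 := by
  simpa [xStar] using sq_nonneg (x - 3)
```

A harness can then inspect or evaluate the `def` independently of the proof, which is what distinguishes such constructive tasks from purely declarative theorem proving.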
Estimation of the heat transfer coefficient for three ranges of reference values under fourth-kind boundary conditions using swarm algorithms
Maria Zych, Robert Dyja, Elzbieta Gawronska
et al.
Applied mathematics. Quantitative methods
Impacts of oxytactic microorganisms and viscous dissipation in Carreau nanoliquid featuring ferromagnetic nanoparticles
Muhammad Tabrez, I. Hussain, W.A. Khan
et al.
Owing to rapid progress in the field of nanotechnology, various mathematical models have been proposed for the flow of nanofluids. Based on their enhanced thermal properties, such materials have found multidisciplinary applications in various engineering and industrial processes. In the current work, a theoretical analysis is performed for the bioconvective flow of a Carreau fluid under a magnetic dipole. The significance of the magnetic dipole is presented through its interaction with the ferrofluid. The effects of viscous dissipation are also considered. The flow is driven by a stretched surface. Similarity variables are used to transform the set of PDEs into a set of nonlinear ODEs. The numerical computation is performed using the well-known numerical method bvp4c. The influences of various nondimensional fluid parameters, such as Brownian motion, viscous dissipation, Curie temperature, Weissenberg number, motile difference parameter, and bioconvection Péclet number, are examined and presented graphically. It is observed that the ferrofluid's concentration declines when the Brownian diffusion parameter is increased, while the opposite behavior is reported for the Schmidt number. The current results have applications in biofuels, heat transport media, fertilizers, catalysts, electronics, paints, semiconductors, speakers, and drug delivery.
Applied mathematics. Quantitative methods
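The abstract's workflow of handing the transformed nonlinear ODE system to a collocation solver can be mimicked in Python with SciPy's `solve_bvp`, which plays the role that MATLAB's bvp4c plays above. The boundary-value problem below is a deliberately simple stand-in (y'' = -y with y(0) = 0, y(π/2) = 1, exact solution sin x), not the Carreau-fluid equations.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Toy BVP illustrating the bvp4c-style collocation workflow only;
# the actual Carreau-fluid system is far larger.
def rhs(x, y):
    # y[0] = y, y[1] = y'
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    # y(0) = 0 and y(pi/2) = 1
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0.0, np.pi / 2, 11)
y0 = np.zeros((2, x.size))          # flat initial guess
sol = solve_bvp(rhs, bc, x, y0)

# The computed profile should closely match the exact solution sin(x).
err = np.max(np.abs(sol.sol(x)[0] - np.sin(x)))
print(f"max error: {err:.2e}")
```

Like bvp4c, `solve_bvp` refines its mesh adaptively until the residual tolerance is met, which is why a coarse 11-point initial mesh suffices here.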
A quantitative comparison of high-order asymptotic-preserving and asymptotically-accurate IMEX methods for the Euler equations with non-ideal gases
Giuseppe Orlando, Sebastiano Boscarino, Giovanni Russo
We present a quantitative comparison between two different Implicit-Explicit Runge-Kutta (IMEX-RK) approaches for the Euler equations of gas dynamics, specifically tailored for the low Mach limit. In this regime, a classical IMEX-RK approach involves an implicit coupling between the momentum and energy balance so as to avoid the acoustic CFL restriction, while the density can be treated in a fully explicit fashion. This approach leads to a mildly nonlinear equation for the pressure, which can be solved according to a fixed point procedure. An alternative strategy consists of employing a semi-implicit temporal integrator based on IMEX-RK methods (SI-IMEX-RK). The stiff dependence is carefully analyzed, so as to avoid the solution of a nonlinear equation for the pressure also for equations of state (EOS) of non-ideal gases. The spatial discretization is based on a Discontinuous Galerkin (DG) method, which naturally allows high-order accuracy. The asymptotic-preserving (AP) and the asymptotically-accurate (AA) properties of the two approaches are assessed on a number of classical benchmarks for ideal gases and on their extension to non-ideal gases.
Context-sensitive norm enforcement reduces sanctioning costs in spatial public goods games
Hsuan-Wei Lee, Colin Cleveland, Attila Szolnoki
Uniform punishment policies can sustain cooperation in social dilemmas but impose severe costs on enforcers, creating a second-order free-rider problem that undermines the very mechanism designed to prevent exploitation. We show that the remedy is not a harsher stick but a smarter one. In a four-strategy spatial public-goods game we pit conventional punishers, who levy a fixed fine, against norm-responsive punishers that double both fine and cost only when at least half of their current group already cooperates. Extensive large-scale Monte Carlo simulations on lattices demonstrate that context-sensitive punishment achieves complete defector elimination at fine levels 15% lower than uniform enforcement, despite identical marginal costs per sanctioning event. The efficiency gain emerges because norm-responsive punishers conserve resources in defector-dominated regions while concentrating intensified sanctions at cooperative-defector boundaries, creating self-reinforcing fronts that amplify the spread of prosocial behavior. These findings reveal that enforcement efficiency can be dramatically improved by targeting punishment at cooperative-defector interfaces rather than applying uniform sanctions, offering quantitative guidelines for designing adaptive regulatory mechanisms that maximize compliance while minimizing institutional costs.
physics.soc-ph, nlin.CD
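The fine-doubling rule can be sketched as a one-group payoff computation (all parameter values below are hypothetical, not the paper's): a norm-responsive punisher doubles the fine only once at least half of the group cooperates.

```python
# Minimal sketch of a defector's payoff in one public-goods group.
# r = enhancement factor, c = contribution, fine = baseline sanction.
def defector_payoff(n_coop, n_group, r=3.0, c=1.0, fine=0.4, responsive=False):
    share = r * c * n_coop / n_group        # equal share of the enhanced pool
    # Norm-responsive rule: double the fine in majority-cooperative groups.
    f = 2 * fine if (responsive and n_coop >= n_group / 2) else fine
    return share - f

# In a cooperative group (3 of 5 cooperate), the responsive punisher
# sanctions the defector twice as hard as the conventional one.
conventional = defector_payoff(3, 5, responsive=False)
responsive = defector_payoff(3, 5, responsive=True)
print(conventional, responsive)
```

In a defector-dominated group (`n_coop < n_group / 2`) both rules charge the same baseline fine, which is how the responsive strategy conserves sanctioning resources.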
On ultrametrics, b-metrics, w-distances, metric-preserving functions, and fixed point theorems
Suchat Samphavat, Thanakorn Prinyasart
In this article, new classes of functions based on new variations of metric-preserving functions are defined. Necessary and sufficient conditions for functions to belong to these classes are also provided. As a result, we can describe the relations between all of the classes and show that all functions in them are weakly separated from 0. We extend fixed point theorems, originally provided by Kirk and Shahzad and later extended by Pongsriiam and Termwuttipong in this journal, by considering all functions that are weakly separated from 0.
Applied mathematics. Quantitative methods, Analysis
Binary fuzzy linear programming problems: a new solution
Malihe Niksirat, Majid Abdolrazzagh-Nezhad
Purpose: In this paper, a Binary Fuzzy Linear Programming Problem (BFLPP) with fuzzy objective function and fuzzy constraints is considered. This paper proposes a new approach that solves the problem based on Kerre's adapted method, which maintains the assumption of being fuzzy in the solving process. Therefore, the solution is more consistent with the uncertainty governing the problem. Methodology: This paper proposes a new fuzzy branch-and-bound approach based on Kerre's adapted method to solve the fuzzy binary integer programming problem. In each node of the branch-and-bound tree, the linear relaxation of the fuzzy problem is solved with a new fuzzy simplex method based on Kerre's adapted method. Findings: Numerical examples are presented to illustrate the proposed method step by step, and the results are compared with those of other approaches that solve fuzzy binary integer programming problems. Originality/Value: Unlike the available defuzzification procedures and fuzzy ranking functions in the research problem literature, the proposed approach considers the assumption of being fuzzy in the solution process and thus offers a more realistic solution.
Management. Industrial management, Applied mathematics. Quantitative methods
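The branch-and-bound skeleton the paper builds on can be sketched in its crisp (non-fuzzy) form: at each node an LP relaxation is solved and a fractional binary variable is branched on. Here the fuzzy simplex / Kerre-comparison step is replaced by an ordinary LP via `scipy.optimize.linprog`, and the problem data are made up.

```python
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub):
    """Maximize c @ x subject to A_ub @ x <= b_ub with x binary."""
    best_val, best_x = -np.inf, None
    stack = [{}]                      # each node fixes a subset of variables
    while stack:
        fixed = stack.pop()
        bounds = [(fixed[i], fixed[i]) if i in fixed else (0, 1)
                  for i in range(len(c))]
        res = linprog(-np.asarray(c), A_ub=A_ub, b_ub=b_ub, bounds=bounds)
        if not res.success or -res.fun <= best_val:
            continue                  # infeasible node, or bound can't beat incumbent
        frac = [i for i, xi in enumerate(res.x) if 1e-6 < xi < 1 - 1e-6]
        if not frac:                  # integral relaxation: new incumbent
            best_val, best_x = -res.fun, np.round(res.x)
        else:                         # branch on the first fractional variable
            stack.append({**fixed, frac[0]: 0})
            stack.append({**fixed, frac[0]: 1})
    return best_val, best_x

# Tiny knapsack: values (4, 3, 5), weights (2, 3, 4), capacity 5.
value, x = branch_and_bound([4, 3, 5], [[2, 3, 4]], [5])
print(value, list(x))
```

The fuzzy version replaces both the node comparison (`-res.fun <= best_val`) and the relaxation solver with Kerre-based fuzzy counterparts, but the tree search itself is unchanged.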
Modeling individual mobility’s impact on COVID-19 transmission: Insights from a two-patch SEIR-V approach
M. Bouziane, M.A. Boubekeur, M.E.B. Keddar
et al.
This research explores the influence of individual mobility on COVID-19 transmission, utilizing a temporal mathematical model to clarify disease spread and vaccination dynamics across diverse regions. Employing a computationally efficient two-patch configuration that emphasizes regional interactions, our study aims to guide optimal disease control strategies. The introduced SEIR-V model with a two-patch setup estimates the vaccination reproduction number, Rv, while equilibrium points and system stability are identified. Visualizations from numerical simulations and sensitivity analyses illustrate key parameters affecting the vaccination reproduction number and COVID-19 control measures. Our findings underscore system responsiveness, emphasizing the intricate relationship between Rv, migration rates, and disease prevalence.
Applied mathematics. Quantitative methods
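A two-patch SEIR-V system of the kind described can be sketched as a coupled ODE right-hand side with a symmetric migration term (all rates below are hypothetical placeholders, not the paper's fitted values).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative rates: transmission, incubation, recovery, vaccination, migration.
beta, sigma, gamma, nu, m = 0.3, 0.2, 0.1, 0.01, 0.05

def seirv_two_patch(t, y):
    out = []
    Y = y.reshape(2, 5)                  # rows: patches; cols: S, E, I, R, V
    for p in range(2):
        S, E, I, R, V = Y[p]
        N = Y[p].sum()
        q = 1 - p                        # the other patch
        mig = m * (Y[q] - Y[p])          # symmetric migration flux per compartment
        dS = -beta * S * I / N - nu * S + mig[0]
        dE = beta * S * I / N - sigma * E + mig[1]
        dI = sigma * E - gamma * I + mig[2]
        dR = gamma * I + mig[3]
        dV = nu * S + mig[4]
        out.extend([dS, dE, dI, dR, dV])
    return out

y0 = [990, 0, 10, 0, 0,   1000, 0, 0, 0, 0]   # outbreak seeded in patch 1 only
sol = solve_ivp(seirv_two_patch, (0, 100), y0, rtol=1e-8)

# Migration only redistributes people, so the total population is conserved.
print(sol.y[:, -1].sum())
```

Because the migration fluxes between the two patches cancel in the sum, the total population is an invariant of the model; checking it numerically is a quick sanity test of the implementation.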
Tree-Cotree-Based Tearing and Interconnecting for 3D Magnetostatics: A Dual-Primal Approach
Mario Mally, Bernard Kapidani, Melina Merkel
et al.
The simulation of electromagnetic devices with complex geometries and large-scale discrete systems benefits from advanced computational methods like IsoGeometric Analysis and Domain Decomposition. In this paper, we employ both concepts in an Isogeometric Tearing and Interconnecting method to enable the use of parallel computations for magnetostatic problems. We address the underlying non-uniqueness by using a graph-theoretic approach, the tree-cotree decomposition. The classical tree-cotree gauging is adapted to be feasible for parallelization, which requires that all local subsystems are uniquely solvable. Our contribution consists of an explicit algorithm for constructing compatible trees and combining it with a dual-primal approach to enable parallelization. The correctness of the proposed approach is proved and verified by numerical experiments, showing its accuracy, scalability and optimal convergence.
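At its combinatorial core, a tree-cotree decomposition splits a graph's edges into a spanning tree and the leftover cotree edges. The toy sketch below grows a BFS spanning tree on a small hand-made graph; the compatibility and dual-primal machinery of the paper is not reproduced.

```python
from collections import deque

def tree_cotree(n_vertices, edges, root=0):
    """Split edge indices into a BFS spanning tree and its cotree."""
    adj = {v: [] for v in range(n_vertices)}
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    visited, tree = {root}, set()
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v, i in adj[u]:
            if v not in visited:       # first edge reaching v joins the tree
                visited.add(v)
                tree.add(i)
                queue.append(v)
    cotree = set(range(len(edges))) - tree
    return tree, cotree

# A square with one diagonal: 4 vertices, 5 edges.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
tree, cotree = tree_cotree(4, edges)
print(len(tree), len(cotree))          # a spanning tree has n - 1 = 3 edges
```

In the magnetostatic setting, degrees of freedom on tree edges are the ones eliminated to gauge away the non-uniqueness, which is why the choice of tree must be coordinated across subdomains.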
Report on the 49th Annual United States of America Mathematical Olympiad
Bela Bajnok
We provide the problems and their solutions to the 2020 USA Mathematical Olympiad.
The Effect of Mathematical Modeling Activities on Students' Mathematical Modeling Skills in the Context of STEM Education
Yaprak Armutcu, A. P. Bal
This study was conducted to examine the effect of mathematical modeling activities on the mathematical modeling skills of secondary school students in the context of STEM education. The study was designed according to the embedded design, one of the mixed research methods. The study group consists of a total of 66 eighth-grade students studying in a public school in the central district of a large province in the south of Turkey in the 2020-2021 academic year. While the criterion sampling method, one of the purposeful sampling methods, was used to determine the quantitative study group, the maximum variation sampling method was used to determine the qualitative study group. Mathematical modeling problems in the context of STEM education, an evaluation rubric, and a semi-structured interview form were used as data collection tools. As a result of the research, it was concluded that mathematical modeling activities in the context of STEM education positively improved the mathematical modeling skills of secondary school students. In addition, it was concluded that students taught with mathematical modeling activities in the context of STEM education gained different interdisciplinary perspectives, experienced positive developments in their thinking skills, adapted to group work more easily, and showed increased interest in engineering and technology.
Improving Foraminifera Classification Using Convolutional Neural Networks with Ensemble Learning
Loris Nanni, Giovanni Faldani, Sheryl Brahnam
et al.
This paper presents a study of an automated system for identifying planktic foraminifera at the species level. The system uses a combination of deep learning methods, specifically convolutional neural networks (CNNs), to analyze digital images of foraminifera taken at different illumination angles. The dataset is composed of 1437 groups of sixteen grayscale images, one group for each foraminifera specimen, that are then converted to RGB images with various processing methods. These RGB images are fed into a set of CNNs, organized in an ensemble learning (EL) environment. The ensemble is built by training different networks using different approaches for creating the RGB images. The study finds that an ensemble of CNN models trained on different RGB images improves the system’s performance compared to other state-of-the-art approaches. The main focus of this paper is to introduce multiple colorization methods that differ from the current cutting-edge techniques; novel strategies like Gaussian or mean-based techniques are suggested. The proposed system was also found to outperform human experts in classification accuracy.
Applied mathematics. Quantitative methods
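The ensemble fusion step described above amounts to averaging per-class scores across networks and taking the fused argmax. The sketch below uses fabricated softmax outputs for a 4-class problem purely to illustrate the mechanics; no real foraminifera data or CNNs are involved.

```python
import numpy as np

# Fake per-class probabilities from three networks, each trained on a
# different colorization of the same specimen (illustrative values only).
preds = np.array([
    [0.70, 0.10, 0.10, 0.10],   # network on colorization A
    [0.40, 0.35, 0.15, 0.10],   # network on colorization B
    [0.20, 0.50, 0.20, 0.10],   # network on colorization C
])
fused = preds.mean(axis=0)       # score-level fusion: average the softmaxes
print(fused, fused.argmax())
```

Note that network C alone would have picked class 1; averaging lets the two networks that agree on class 0 outvote it, which is the error-cancellation effect ensembles rely on.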
Parameter estimation in mixed fractional stochastic heat equation
Diana Avetisian, Kostiantyn Ralchenko
The paper is devoted to a stochastic heat equation with a mixed fractional Brownian noise. We investigate the covariance structure, stationarity, upper bounds and asymptotic behavior of the solution. Based on its discrete-time observations, we construct a strongly consistent estimator for the Hurst index H and prove its asymptotic normality. Then, assuming the parameter H to be known, we deal with the joint estimation of the coefficients of the Wiener process and of the fractional Brownian motion. The quality of the estimators is illustrated by simulation experiments.
Applied mathematics. Quantitative methods, Mathematics
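A standard discrete-observation estimator of the Hurst index uses quadratic variations: for fractional Brownian motion, E[(B_{t+d} − B_t)²] = d^{2H}, so comparing mean squared increments at two lags recovers H. The sketch below (not the paper's estimator) tests this on ordinary Brownian motion, the H = 1/2 special case; simulating general fBm is omitted.

```python
import numpy as np

def estimate_hurst(path):
    """Estimate H by comparing squared increments at lags 1 and 2."""
    v1 = np.mean(np.diff(path) ** 2)           # lag-1 squared increments
    v2 = np.mean(np.diff(path[::2]) ** 2)      # lag-2 squared increments
    return 0.5 * np.log2(v2 / v1)              # v2/v1 = 2^(2H)

rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(100_000))   # Brownian path, true H = 1/2
print(estimate_hurst(bm))
```

For a Brownian path the lag-2 increments have twice the variance of the lag-1 increments, so the log-ratio lands near 0.5 up to sampling noise of order n^{-1/2}.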
Quantitative Methods for Optimizing Patient Outcomes in Liver Transplantation
Raja Al-Bahou, Julia Bruner, Helen Moore
et al.
Liver transplantation continues to be the gold standard for treating patients with end-stage liver diseases. However, despite the huge success of liver transplantation in improving patient outcomes, long-term graft survival continues to be a major problem. The current clinical practice in the management of liver transplant patients is centered around immunosuppressive multidrug regimens. Current research has been focusing on phenotypic personalized medicine as a novel approach to optimizing immunosuppression, using regression-based mathematical modeling of individual patient dose and response via specific markers such as transaminases. A prospective area of study is the development of mechanistic computational models for optimizing immunosuppression to improve patient outcomes and increase long-term graft survival, exploring the intricate immune/drug interactions to further our understanding and management of medical problems such as transplantation, autoimmunity, and cancer therapy. By increasing long-term graft survival, the need for redo transplants will decrease, which will free up organs and potentially help with the organ shortage problem, promoting equity and equal opportunity for transplants, as well as decreasing the medical costs associated with additional testing and hospital admissions. Although long-term graft survival remains challenging, computational and quantitative methods have led to significant improvements. In this article, we review recent advances and remaining opportunities. We focus on the following topics: donor organ availability and allocation with a focus on equity, monitoring of patient and graft health, and optimization of immunosuppression dosing.
Antenna Boosters versus Flexible Printed Circuit Antennas for IoT Devices
Jaume Anguera, Alejandro Fernández, Carles Puente
et al.
Antennas should be small enough to fit in the limited space of IoT devices while, at the same time, providing multi-band operation across several bands and ensuring stability when embedded in a device. In this regard, two different technologies are compared: the antenna booster and the flexible printed circuit antenna. A comparison is made from measured results in terms of efficiency, concluding that although the antenna booster is more than fifty times smaller in area, it provides better efficiency across the frequency ranges of 698–960 MHz and 1710–2690 MHz across three different printed circuit boards (PCB): a big PCB of 131 mm × 60 mm, a medium PCB of 95 mm × 42 mm, and a small PCB of 65 mm × 42 mm. Moreover, the flexible printed antenna depends on the mounting process, whereas the antenna booster does not.
Applied mathematics. Quantitative methods
Robust Quantitative Susceptibility Mapping via Approximate Message Passing with Parameter Estimation
Shuai Huang, James J. Lah, Jason W. Allen
et al.
Purpose: For quantitative susceptibility mapping (QSM), the lack of ground-truth in clinical settings makes it challenging to determine suitable parameters for the dipole inversion. We propose a probabilistic Bayesian approach for QSM with built-in parameter estimation, and incorporate the nonlinear formulation of the dipole inversion to achieve a robust recovery of the susceptibility maps. Theory: From a Bayesian perspective, the image wavelet coefficients are approximately sparse and modelled by the Laplace distribution. The measurement noise is modelled by a Gaussian-mixture distribution with two components, where the second component is used to model the noise outliers. Through probabilistic inference, the susceptibility map and distribution parameters can be jointly recovered using approximate message passing (AMP). Methods: We compare our proposed AMP with built-in parameter estimation (AMP-PE) to the state-of-the-art L1-QSM, FANSI and MEDI approaches on the simulated and in vivo datasets, and perform experiments to explore the optimal settings of AMP-PE. Reproducible code is available at https://github.com/EmoryCN2L/QSM_AMP_PE. Results: On the simulated Sim2Snr1 dataset, AMP-PE achieved the lowest NRMSE, DFCM and the highest SSIM, while MEDI achieved the lowest HFEN. On the in vivo datasets, AMP-PE is robust and successfully recovers the susceptibility maps using the estimated parameters, whereas L1-QSM, FANSI and MEDI typically require additional visual fine-tuning to select or double-check working parameters. Conclusion: AMP-PE provides automatic and adaptive parameter estimation for QSM and avoids the subjectivity from the visual fine-tuning step, making it an excellent choice for the clinical setting.
Family Floer mirror space for local SYZ singularities
Hang Yuan
We give a mathematically precise statement of the SYZ conjecture between mirror space pairs and prove it for any toric Calabi-Yau manifold with the Gross Lagrangian fibration. To our knowledge, this is the first realization of the SYZ proposal with singular fibers beyond the topological level. The dual singular fibration is explicitly written and proved to be compatible with the family Floer mirror construction. Moreover, we discover that the Maurer-Cartan set of a singular Lagrangian is only a strict subset of the corresponding dual singular fiber. This responds negatively to the previous expectation and leads to new perspectives on SYZ singularities. As extra evidence, we also check some computations for a well-known folklore conjecture for the Landau-Ginzburg model.
Forecasting by Machine Learning Techniques and Econometrics: A Review
G. Shobana, K. Umamaheswari
Econometricians deal with a tremendous amount of data to derive the relationships between economic entities. When statistical techniques are applied to economic data to determine relationships between economic entities with verifiable observations, this quantitative analysis is termed Econometrics. Traditional econometric methods employ pure statistical and mathematical concepts to analyze economic data. Applied Econometrics deals with exploring real-world observations such as forecasting, fluctuating market prices, and economic outcomes. In recent years, machine learning models have been applied to quantitative data available in almost all domains. Machine learning models perform very efficiently in the classification process and are used in the field of economics to classify economic data more accurately than traditional econometric models. In this paper, several machine learning methods that are specifically used for economic data are explored. This paper further investigates the various supervised machine learning techniques that contribute effectively, along with the metrics involved in the analysis of econometric models. This study provides deep insight into the machine learning models preferred by econometricians and their future implications.
The Importance of Making Assumptions in Bias Analysis
R. Maclehose, T. Ahern, T. Lash
et al.
Quantitative bias analyses allow researchers to adjust for uncontrolled confounding, given specification of certain bias parameters. When researchers are concerned about unknown confounders, plausible values for these bias parameters will be difficult to specify. Ding and VanderWeele developed bounding factor and E-value approaches that require the user to specify only some of the bias parameters. We describe the mathematical meaning of bounding factors and E-values and the plausibility of these methods in an applied context. We encourage researchers to pay particular attention to the assumption made, when using E-values, that the prevalence of the uncontrolled confounder among the exposed is 100% (or, equivalently, the prevalence of the exposure among those without the confounder is 0%). We contrast methods that attempt to bound biases or effects and alternative approaches such as quantitative bias analysis. We provide an example where failure to make this distinction led to erroneous statements. If the primary concern in an analysis is with known but unmeasured potential confounders, then E-values are not needed and may be misleading. In cases where the concern is with unknown confounders, the E-value assumption of an extreme possible prevalence of the confounder limits its practical utility.
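The E-value discussed above has a closed form: for an observed risk ratio RR > 1, E = RR + sqrt(RR × (RR − 1)), the minimum strength of association an unmeasured confounder would need with both exposure and outcome to fully explain the estimate away. A minimal implementation:

```python
import math

def e_value(rr):
    """E-value of Ding and VanderWeele for an observed risk ratio."""
    if rr < 1:
        rr = 1 / rr              # protective estimates are treated symmetrically
    return rr + math.sqrt(rr * (rr - 1))

# An observed RR of 2 requires a confounder associated with both exposure
# and outcome at RR = 2 + sqrt(2) ≈ 3.41 each to be explained away.
print(e_value(2.0))
```

As the surrounding discussion stresses, this bound is attained only under the extreme assumption that the confounder is present in 100% of the exposed, which is exactly what limits its practical interpretation.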
A new image denoising framework using bilateral filtering based non-subsampled shearlet transform
Sidheswar Routray, Prince Priya Malla, S. Sharma
et al.
In this paper, we propose an advanced framework for image denoising using a bilateral filtering based non-subsampled shearlet transform (NSST). Initially, we apply the NSST to decompose the noisy input image into high- and low-frequency coefficients. The weighted bilateral filter (WBF) is then used to remove noise from the low-frequency coefficients, while thresholding is used to remove noise from the high-frequency coefficients. The outputs of both processes are combined to form the resultant image. Finally, the inverse NSST is applied to the resultant output to estimate the final denoised image. To validate the proposed model, we conduct several experiments on different grayscale images with various noise variances. Qualitative and quantitative comparisons are presented, showing the improved performance of the proposed method compared with other conventional image denoising methods. Mathematical and simulation results are presented to show the validity of our work.
39 citations
Computer Science
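The thresholding applied to the high-frequency coefficients in the pipeline above can be sketched with plain soft thresholding (the paper's exact threshold rule is not reproduced here): coefficients are shrunk toward zero, which suppresses noise while preserving large, signal-bearing values.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t, clamping small ones to 0."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

# Small coefficients (likely noise) vanish; large ones are only shrunk.
noisy = np.array([-3.0, -0.2, 0.1, 0.5, 2.5])
print(soft_threshold(noisy, 0.3))
```

In the full framework this step acts only on the high-frequency NSST subbands, while the low-frequency band goes through the weighted bilateral filter instead.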