Results for "Applied mathematics. Quantitative methods"

Showing 20 of ~6,502,468 results · from DOAJ, CrossRef, arXiv, Semantic Scholar

JSON API
S2 Open Access 2019
Atom search optimization and its application to solve a hydrogeologic parameter estimation problem

Wei-guo Zhao, Liying Wang, Zhenxing Zhang

Abstract In recent years, various metaheuristic optimization methods have been proposed in scientific and engineering fields. In this study, a novel physics-inspired metaheuristic optimization algorithm, atom search optimization (ASO), inspired by basic molecular dynamics, is developed to address a diverse set of optimization problems. ASO mathematically models and mimics the atomic motion model in nature, where atoms interact through interaction forces resulting from the Lennard-Jones potential and constraint forces resulting from the bond-length potential. The proposed algorithm is simple and easy to implement. ASO is tested on a range of benchmark functions to verify its validity, qualitatively and quantitatively, and then applied to a hydrogeologic parameter estimation problem with success. The results demonstrate that ASO is superior to some classic and newly emerging algorithms in the literature and is a promising solution to real-world engineering problems.
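The physics metaphor behind ASO (candidate solutions treated as atoms that attract and repel each other through Lennard-Jones-style forces) can be sketched in a few lines of Python. This is a toy illustration of the idea only, not the published algorithm; the function names, the clipping range, and the velocity damping factor are assumptions made for the sketch:

```python
import numpy as np

def lj_force(r, sigma=1.0, epsilon=1.0):
    """Magnitude of the Lennard-Jones force at separation r:
    positive values are repulsive, negative values attractive."""
    return 24.0 * epsilon * (2.0 * (sigma / r) ** 13 - (sigma / r) ** 7) / sigma

def aso_like_step(positions, velocities, best, dt=0.1):
    """One toy velocity/position update in which each candidate
    solution ('atom') feels an LJ-style force from the current best."""
    diff = best - positions                       # vectors pointing at best
    r = np.linalg.norm(diff, axis=1, keepdims=True)
    r_eff = np.clip(r, 0.9, 2.0)                  # keep the force finite
    # Repulsive (positive) force pushes away from best, attractive pulls toward it.
    force = -lj_force(r_eff) * diff / np.maximum(r, 1e-12)
    velocities = 0.9 * velocities + dt * force    # damped velocity update
    return positions + dt * velocities, velocities
```

At separations beyond the potential minimum the force is attractive, so distant atoms drift toward the best solution, while atoms that get too close are pushed back, which is the exploration/exploitation balance the abstract alludes to.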

622 citations en Computer Science
arXiv Open Access 2026
Quantitative Methods in Finance

Eric Vansteenberghe

These lecture notes provide a comprehensive introduction to Quantitative Methods in Finance (QMF), designed for graduate students in finance and economics with heterogeneous programming backgrounds. The material develops a unified toolkit combining probability theory, statistics, numerical methods, and empirical modeling, with a strong emphasis on implementation in Python. Core topics include random variables and distributions, moments and dependence, simulation and Monte Carlo methods, numerical optimization, root-finding, and time-series models commonly used in finance and macro-finance. Particular attention is paid to translating theoretical concepts into reproducible code, emphasizing vectorization, numerical stability, and interpretation of outputs. The notes progressively bridge theory and practice through worked examples and exercises covering asset pricing intuition, risk measurement, forecasting, and empirical analysis. By focusing on clarity, minimal prerequisites, and hands-on computation, these lecture notes aim to serve both as a pedagogical entry point for non-programmers and as a practical reference for applied researchers seeking transparent and replicable quantitative methods in finance.
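One of the core topics listed, Monte Carlo simulation, can be illustrated with a classic worked example in the spirit of the notes: pricing a European call option under geometric Brownian motion. The parameter values below are illustrative, not taken from the notes:

```python
import numpy as np

def mc_european_call(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0,
                     n=100_000, seed=42):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    # Terminal stock price under the risk-neutral measure (vectorized).
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    # Discount the average payoff back to today.
    return np.exp(-r * t) * payoff.mean()
```

For these parameters the Black-Scholes closed-form price is about 10.45, so the simulated estimate should land nearby; the vectorized draw of all paths at once reflects the notes' emphasis on vectorization.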

en econ.EM
DOAJ Open Access 2025
A novel explicit scheme for stochastic diffusive SIS models with treatment effects

Muhammad Shoaib Arif

In this study, we propose a novel computational scheme for solving deterministic and stochastic partial differential equations (PDEs). The scheme is designed as an explicit two-stage method, where only the time-dependent terms are discretized, ensuring computational efficiency. A compact finite difference scheme is employed to discretize the spatial components, achieving sixth-order accuracy in space. The stability and consistency of the proposed method are thoroughly investigated in the mean square sense, guaranteeing its validity for stochastic PDEs. The scheme's effectiveness is demonstrated by applying it to a stochastic diffusive SIS epidemic model. Furthermore, a comparative analysis is carried out against existing numerical methods for deterministic models, including the Runge–Kutta and Euler schemes. The results indicate that the proposed scheme provides higher accuracy and reduced numerical error, making it a promising approach for solving complex epidemiological models.

Applied mathematics. Quantitative methods
arXiv Open Access 2025
Mathematics of natural intelligence

Evgenii Vityaev

In the course of evolution, the brain has achieved a perfection that artificial intelligence systems lack, and one that requires its own mathematics. The concept of the cognitome, introduced by the academician K.V. Anokhin as the cognitive structure of the mind -- a high-order structure of the brain and a neural hypernetwork -- is considered as the basis for modeling. Consciousness is then a special form of dynamics in this hypernetwork -- a large-scale integration of its cognitive elements. The cognitome, in turn, consists of interconnected COGs (cognitive groups of neurons) of two types -- functional systems and cellular ensembles. K.V. Anokhin sees the task of a fundamental theory of the brain and mind as describing these structures, their origin, their functions, and the processes within them. The paper presents mathematical models of these structures based on new mathematical results, as well as models of various cognitive processes in terms of these models. In addition, it is shown that these models can be derived from a fairly general principle of how the brain works: "the brain discovers all possible causal relationships in the external world and draws all possible conclusions from them". Based on these results, the paper presents models of: "natural" classification; the theory of functional brain systems of P.K. Anokhin; the prototype theory of categorization of E. Rosch; the theory of causal models of B. Rehder; and the theory of consciousness as integrated information of G. Tononi.

en q-bio.NC, cs.AI
arXiv Open Access 2025
Multigrid methods for total variation

Felipe Guerra, Tuomo Valkonen

Based on a nonsmooth coherence condition, we construct and prove the convergence of a forward-backward splitting method that alternates between steps on a fine and a coarse grid. Our focus is on total variation regularised inverse imaging problems, specifically their dual problems, for which we develop in detail the relevant coarse-grid problems. We demonstrate the performance of our method on total variation denoising and magnetic resonance imaging.
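Forward-backward splitting itself is a standard proximal-gradient scheme. A minimal single-grid sketch for an l1-regularised least-squares problem (a simpler stand-in for the TV dual problems studied in the paper, not the authors' multigrid method) looks like this; the step size must satisfy step <= 1 / ||A^T A||:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, b, lam, step, n_iter=300):
    """Forward-backward splitting for min_x 0.5*||Ax - b||^2 + lam*||x||_1:
    a gradient (forward) step on the smooth term, then a proximal
    (backward) step on the nonsmooth l1 term."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                  # forward step direction
        x = soft_threshold(x - step * grad, step * lam)  # backward step
    return x
```

With A the identity the iteration converges to the soft-thresholded data, which makes the shrinkage effect of the nonsmooth term easy to verify.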

en math.OC, eess.IV
S2 Open Access 2020
Applying the triple bottom line in sustainable supplier selection: A meta-review of the state-of-the-art

Kamran Rashidi, A. Noorizadeh, Devika Kannan et al.

Abstract This study conducts a systematic meta-literature review in the field of sustainable supplier selection. The number of published papers within the domain of sustainable supplier selection has grown considerably in recent years. Up until now, there has been no attempt to quantitatively analyze the content of these published papers using bibliometric and network analysis software. Thus, this paper utilizes Gephi and Bibexcel software to conduct a quantitative review. In total, 4,882 documents were reviewed based on 336 combinations searched in Scopus and the Web of Science from 1990 to March 2018. Bibliometric, co-word, and co-citation analyses are applied to quantitatively extract and analyze the content of these papers. The analysis reveals that: 1) There is a gap between industry and academia that needs to be bridged; 2) More studies in the area of global sourcing are needed; 3) Comparing the outcomes of different supplier evaluation methods is required; 4) There has been no major shift or change in the traditional supplier selection practices; 5) The ratio of the applied social criteria is relatively low compared to the total number of criteria; 6) The innovation capability of suppliers needs to be further considered; 7) More studies of sustainable supplier selection are needed in the e-procurement arena, as well as service-based industries such as healthcare; and 8) Evaluating the sustainability of suppliers in a dynamic environment needs to be further studied. The conclusion also reveals that only a limited number of journals exhibit a specific focus on the sustainable supplier selection arena; analytical and mathematical-based methods are the most applied supplier selection tools; and there is a misalignment among the criteria applied across the triple bottom line.

163 citations en Computer Science
S2 Open Access 2024
Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models

Changyu Chen, Xiting Wang, Ting-En Lin et al.

In reasoning tasks, even a minor error can cascade into inaccurate results, leading to suboptimal performance of large language models in such domains. Earlier fine-tuning approaches sought to mitigate this by leveraging more precise supervisory signals from human labeling, larger models, or self-sampling, although at a high cost. Conversely, we develop a method that avoids external resources, relying instead on introducing perturbations to the input. Our training approach randomly masks certain tokens within the chain of thought, a technique we found to be particularly effective for reasoning tasks. When applied to fine-tuning with GSM8K on Llama-2-7B, this method achieved a 5% improvement in GSM8K accuracy and a 10% improvement in GSM-IC accuracy over standard supervised fine-tuning, with only a few lines of code modified. Furthermore, it is complementary to existing methods. When integrated with related explicit data augmentation methods, it leads to improvements across five datasets with various augmentation methods, as well as two different base models. We further investigate the mechanisms behind this improvement through case studies and quantitative analysis, suggesting that our approach may provide superior support for the model in capturing long-distance dependencies, especially those related to questions. This enhancement could deepen understanding of the premises in questions and prior steps. Our code is available at Github.
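The masking step described is simple to state in code. A minimal sketch follows; the mask probability, mask token string, and function name are chosen arbitrarily here and may differ from the paper's actual implementation:

```python
import random

def mask_chain_of_thought(tokens, p=0.2, mask_token="[MASK]", seed=0):
    """Randomly replace a fraction p of the chain-of-thought tokens with
    a mask token; the question and final answer are left untouched by
    applying this only to the reasoning-step span."""
    rng = random.Random(seed)
    return [mask_token if rng.random() < p else t for t in tokens]
```

Fine-tuning then proceeds as usual on the perturbed input, which forces the model to rely on longer-range context rather than only the immediately preceding token.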

26 citations en Computer Science
S2 Open Access 2024
Advanced financial market forecasting: integrating Monte Carlo simulations with ensemble Machine Learning models

Akash Deep

This paper presents a novel integration of Machine Learning (ML) models with Monte Carlo simulations to enhance financial forecasting and risk assessments in dynamic market environments. Traditional financial forecasting methods, which primarily rely on linear statistical and econometric models, face limitations in addressing the complexities of modern financial datasets. To overcome these challenges, we explore the evolution of financial forecasting, transitioning from time-series analyses to sophisticated ML techniques such as Random Forest, Support Vector Machines, and Long Short-Term Memory (LSTM) networks. Our methodology combines an ensemble of these ML models, each providing unique insights into market dynamics, with the probabilistic scenario analysis of Monte Carlo simulations. This integration aims to improve the predictive accuracy and risk evaluation in financial markets. We apply this integrated approach to a quantitative analysis of the SPY Exchange-Traded Fund (ETF) and selected major stocks, focusing on various risk-reward ratios including Sharpe, Sortino, and Treynor. The results demonstrate the potential of our approach in providing a comprehensive view of risks and rewards, highlighting the advantages of combining traditional risk assessment methods with advanced predictive models. This research contributes to the field of applied mathematical finance by offering a more nuanced, adaptive tool for financial market analyses and decision-making.
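The Sharpe and Sortino ratios mentioned have standard textbook definitions; a minimal sketch follows, assuming daily returns and 252 trading days per year (the Treynor ratio additionally requires a market beta, so it is omitted here):

```python
import numpy as np

def sharpe_ratio(returns, rf_daily=0.0, periods=252):
    """Annualized Sharpe ratio from periodic (e.g. daily) excess returns."""
    excess = np.asarray(returns, float) - rf_daily
    return np.sqrt(periods) * excess.mean() / excess.std(ddof=1)

def sortino_ratio(returns, rf_daily=0.0, periods=252):
    """Annualized Sortino ratio: only downside deviation in the denominator."""
    excess = np.asarray(returns, float) - rf_daily
    downside = excess[excess < 0]          # penalize losses only
    return np.sqrt(periods) * excess.mean() / downside.std(ddof=1)
```

Because the Sortino denominator uses only negative excess returns, the two ratios diverge most for strategies with asymmetric return distributions, which is exactly the regime where Monte Carlo scenario analysis adds value over point estimates.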

DOAJ Open Access 2024
Mean-field dynamics of the non-consensus opinion model

Xinhan Liu, M. A. Achterberg, Robert Kooij

Abstract In 2009, Shao et al. (Phys Rev Lett 103(1):018701, 2009) introduced the Non-consensus opinion (NCO) model, which allows different opinions to coexist in the steady state. We propose a mean-field-based dynamical model for the NCO model on networks with low degree correlation, which reveals the mechanism of opinion formation in the NCO model. This mean-field model provides a new way of estimating important system properties such as the fraction F of a certain opinion, the critical threshold f_c, and the size s_1 of the largest connected cluster for a given opinion. It offers an accurate estimation in less time than Monte Carlo simulations. The scale invariance of the NCO model is discussed. The variation in the degree of nodes holding different opinions in the dynamics of the NCO model is investigated. The trends in the dynamics of the NCO model are also revealed. This approach can be applied to real-world social networks, providing a method of analyzing opinion dynamics in human society.

Applied mathematics. Quantitative methods
DOAJ Open Access 2024
Towards a generic agent-based vector-host model: effects of carrying capacity and host mobility

Cyrine Chenaoui, Nicolas Marilleau, Slimane Ben Miled

Abstract The aim of our work is to develop a generic conceptual agent-based model formalizing the interaction of vector and host under climate change. The model consists of creating a hypothetical example of a vector-host system. It simulates the vector's life cycle while considering interactions with hosts and the temperature. It is presented following the ODD protocol and based on parameters and processes that conceptualize the vector-host complexity. It can accommodate a broad spectrum of vector species and different biogeographic regions. Our model can be extended to more ecologically complex systems with multiple species and real-world landscape complexity to test different host- and/or vector-targeted control strategies and identify practical approaches to managing vector population and movement patterns.

Applied mathematics. Quantitative methods
S2 Open Access 2023
3D AFM Nanomechanical Characterization of Biological Materials

S. Kontomaris, A. Stylianou, A. Georgakopoulos et al.

Atomic Force Microscopy (AFM) is a powerful tool enabling the mechanical characterization of biological materials at the nanoscale. Since biological materials are highly heterogeneous, their mechanical characterization is still considered to be a challenging procedure. In this paper, a new approach that leads to a 3-dimensional (3D) nanomechanical characterization is presented based on the average Young’s modulus and the AFM indentation method. The proposed method can contribute to the clarification of the variability of the mechanical properties of biological samples in the 3-dimensional space (variability at the x–y plane and depth-dependent behavior). The method was applied to agarose gels, fibroblasts, and breast cancer cells. Moreover, new mathematical methods towards a quantitative mechanical characterization are also proposed. The presented approach is a step forward to a more accurate and complete characterization of biological materials and could contribute to an accurate user-independent diagnosis of various diseases such as cancer in the future.

17 citations en Medicine
S2 Open Access 2023
Mass Spectrometry-Based Evaluation of the Bland–Altman Approach: Review, Discussion, and Proposal

D. Tsikas

Reliable quantification in biological systems of endogenous low- and high-molecular substances, drugs, and their metabolites is of particular importance in diagnosis and therapy, and in basic and clinical research. Analytical approaches differ in many characteristics, including core features such as accuracy, precision, specificity, and the limits of detection (LOD) and quantitation (LOQ). Several different mathematical approaches have been developed and used for the comparison of two analytical methods applied to the same chemical compound in the same biological sample. Generally, comparisons of results obtained by two analytical methods yield different quantitative results. Yet which mathematical approach gives the most reliable results? Which mathematical approach is best suited to demonstrate agreement between the methods, or the superiority of analytical method A over analytical method B? The simplest and most frequently used method of comparison is linear regression analysis of the data observed by method A (y) against the data observed by method B (x): y = α + βx. In 1986, Bland and Altman indicated that linear regression analysis, notably the use of the correlation coefficient, is inappropriate for method comparison. Instead, they suggested an alternative approach, which is generally known as the Bland–Altman approach. Originally, this method of comparison was applied in medicine, for instance to measure blood pressure by two devices. The Bland–Altman approach was rapidly adopted in analytical chemistry and in clinical chemistry. To date, it is one of the most widely used mathematical approaches for method comparison. With about 37,000 citations, the original paper published in the journal The Lancet in 1986 is among the most frequently cited scientific papers in this area to date. Nevertheless, the Bland–Altman approach has never really been set on a quantitative basis. No criteria have been proposed thus far by which the Bland–Altman approach could demonstrate analytical agreement or the superiority of one analytical method over another. In this article, the Bland–Altman approach is re-evaluated from a quantitative bioanalytical perspective, and an attempt is made to propose acceptance criteria. For this purpose, different analytical methods were compared with Gold Standard analytical methods based on mass spectrometry (MS) and tandem mass spectrometry (MS/MS), i.e., GC-MS, GC-MS/MS, LC-MS and LC-MS/MS. Other chromatographic and non-chromatographic methods were also considered. The results for several different endogenous substances, including nitrate, anandamide, homoarginine, creatinine and malondialdehyde in human plasma, serum and urine, are discussed. In addition to the Bland–Altman approach, linear regression analysis and the Oldham–Eksborg method-comparison approaches were used and compared. Special emphasis was given to the relation of difference and mean in the Bland–Altman approach. Currently available guidelines for method validation were also considered. Acceptance criteria for method agreement are proposed, including the slope and correlation coefficient in linear regression, and the coefficient of variation for the percentage difference in the Bland–Altman and Oldham–Eksborg approaches.
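The core Bland–Altman quantities (the bias and the 95% limits of agreement) are straightforward to compute from paired measurements. A minimal sketch, using the conventional bias ± 1.96 SD limits:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics for paired measurements by methods A and B:
    returns (bias, lower limit of agreement, upper limit of agreement),
    where the limits are bias +/- 1.96 * SD of the differences."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

In the usual plot, each pair's difference is drawn against its mean, with horizontal lines at the bias and the two limits; the acceptance criteria the article proposes would then be thresholds on these quantities.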

16 citations en Medicine
DOAJ Open Access 2023
Unlocking the power of Twitter communities for startups

Ana Rita Peixoto, Ana de Almeida, Nuno António et al.

Abstract Social media platforms offer cost-effective digital marketing opportunities to monitor the market, create user communities, and spread positive opinions. They allow companies with smaller budgets, like startups, to achieve their goals and grow. In fact, studies found that startups with active engagement on those platforms have a higher chance of succeeding and receiving funding from venture capitalists. Our study explores how startups utilize social media platforms to foster social communities. We also aim to characterize the individuals within these communities. The findings from this study underscore the importance of social media for startups. We used network analysis and visualization techniques to investigate the communities of Portuguese IT startups through their Twitter data. For that, a social digraph was created, and its visualization shows that each startup formed a community with some overlap between followers and followed users. We characterized those users using node-level measures. The results indicate that users who are followed by or follow Portuguese IT startups are of these types: "Person", "Company", "Blog", "Venture Capital/Investor", "IT Event", "Incubators/Accelerators", "Startup", and "University". Furthermore, startups follow users who post high volumes of tweets and have high popularity levels, while those who follow them have low activity and are unpopular. The attained results reveal the power of Twitter communities and offer essential insights for startups to consider when building their social media strategies. Lastly, this study proposes a methodological process for social media community analysis on platforms like Twitter.

Applied mathematics. Quantitative methods
DOAJ Open Access 2023
Approximate solutions to the Allen–Cahn equation using rational radial basis functions method

M. Shiralizadeh, A. Alipanah, M. Mohammadi

We apply the rational radial basis functions (RRBFs) method to solve the Allen–Cahn (A.C) equation, particularly when the equation has a solution with a steep front or sharp gradients. We approximate the spatial derivatives by the RRBFs method. Then we apply an explicit, fourth-order Runge–Kutta method to advance the resulting semi-discrete system in time. It is well known that the A.C equation has a nonlinear stability feature, meaning that the free-energy functional decreases over time. The presented method maintains the total energy reduction property of the A.C equation. In the end, five examples are provided to confirm the efficiency and accuracy of the proposed method.
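The explicit fourth-order Runge–Kutta step used to advance the semi-discrete system in time has a standard textbook form, sketched here for a generic ODE system y' = f(t, y) (the RRBF spatial discretisation itself is not reproduced):

```python
def rk4_step(f, y, t, dt):
    """One explicit fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    # Weighted average of the four slope estimates (global error O(dt^4)).
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

In the method-of-lines setting of the paper, y would be the vector of RRBF coefficients and f the spatially discretised right-hand side of the Allen–Cahn equation.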

Applied mathematics. Quantitative methods
DOAJ Open Access 2023
Analytic solution of a fractional order mathematical model for tumour with polyclonality and cell mutation

A. Omame, F.D. Zaman

It is well established that gliomas are heterogeneous (polyclonal) and that the degree of heterogeneity always rises with grade. It is believed that the more cancerous cells have a greater propensity to mutate, increasing heterogeneity. Consequently, it is anticipated that a tumour will have a variety of cell types. In this work, a fractional diffusion model for tumour growth where two cell populations are assumed, which could have different diffusivity and proliferation rates, is studied and analysed. The coupled system is solved analytically via the zeroth-order finite Hankel transform employing the fractional derivatives with exponential and Mittag-Leffler kernels, respectively, and the obtained solutions are simulated using MATLAB. Important highlights of the simulations include: (i.) using the Caputo fractional derivative, and considering the scenario when the rate of loss of cell population u(r,t) is α=0.5, keeping the other parameter values ν=1.2, β=0.1, it is observed that, near the centre of the tumour where biopsy is to be carried out, the tumour cell concentration u(r,t) dominates v(r,t) when t=0.5, while tumour cell population v(r,t) dominates u(r,t) at time t=2.0; (ii.) with the Atangana–Baleanu derivative, and considering the same scenario, it is observed that, near the centre of the tumour, the cell concentration v(r,t) dominates u(r,t) when t=0.5, while the dominance is much higher at time t=2.0. Thus, it is concluded that the nature of the kernel in the fractional operator can indeed alter the dominance between the tumour cell concentrations.

Applied mathematics. Quantitative methods
DOAJ Open Access 2023
An interval valued fuzzy complex proportional assessment (IVF-COPRAS) method to solve MCDM problem with an application

Mahin Ashouri, Abdollah Hadi-Vencheh, Ali Jamshidi

Purpose: This study aims to tackle the challenging facility location selection problem in Multiple Criteria Decision Making (MCDM) scenarios, explicitly focusing on type-1 fuzzy MCDM issues. The research introduces Interval Valued Fuzzy Numbers (IVFNs) to express ratings, addressing the difficulty in determining precise membership degrees for fuzzy sets.
Methodology: The proposed IVF-COPRAS method, centered on uncertainty risk reduction, is employed to enhance decision-making reliability in IVF decision problems. This methodology is applied to a real-world case involving the selection of a location for municipal wet waste landfill pits in a major Iranian city. Comparative analyses with other methods are conducted to assess the proposed approach.
Findings: The study demonstrates the effectiveness of the IVF-COPRAS method in addressing facility location selection problems within MCDM. By utilizing IVFNs, the method successfully manages uncertainty, leading to more reliable decisions. Application to a practical scenario highlights the method's efficacy, and the comparative analysis provides insights into its performance relative to other methods.
Originality/Value: This research contributes a novel approach with the IVF-COPRAS method for handling facility location selection challenges in MCDM. The reliance on IVFNs offers a unique perspective on uncertainty in decision-making, enhancing decision reliability. The real-world application emphasizes the method's practical significance, providing a valuable contribution to MCDM research and offering a methodological tool for similar decision-making problems across diverse domains.

Management. Industrial management, Applied mathematics. Quantitative methods
arXiv Open Access 2023
The Quantitative Genetics of Human Disease: 1 Foundations

David J. Cutler, Kiana Jodeiry, Andrew J. Bass et al.

In this, the first of an anticipated four-paper series, fundamental results of quantitative genetics are presented from a first-principles approach. While none of these results are in any sense new, they are presented in extended detail to precisely distinguish between definition and assumption, with a further emphasis on distinguishing quantities from their usual approximations. Terminology frequently encountered in the field of human genetic disease studies will be defined in terms of its quantitative genetics form. Methods for estimation of both quantitative genetics and the related human genetics quantities will be demonstrated. While practitioners in the field of human quantitative disease studies may find this work pedantic in detail, the principal target audience for this work is trainees reasonably familiar with population genetics theory, but with less experience in its application to human disease studies. We introduce much of this formalism because in later papers in this series, we demonstrate that common areas of confusion in human disease studies can be resolved by appealing directly to these formal definitions. The second paper in this series will discuss polygenic risk scores. The third paper will concern the question of "missing" heritability and the role interactions may play. The fourth paper will discuss sexually dimorphic disease and the potential role of the X chromosome.

en q-bio.QM
arXiv Open Access 2023
The Elasticity of Quantitative Investment

Carter Davis

What is the demand elasticity of statistical arbitrageurs that invest according to the advice of modern cross-sectional asset pricing models? Thirteen models from the literature exhibit strikingly inelastic demand, in contrast to classical models that rely on statistical arbitrageurs to create elastic market demand for assets. This inelasticity arises from the difficulty of trading against price changes. A quantitative equilibrium model shows that aggregate demand remains inelastic even with these statistical arbitrageurs in the market.

en q-fin.PM, q-fin.MF

Page 11 of 325,124