Anas El Hachimi, Khalide Jbilou, Ahmed Ratnani
Results for "Applied mathematics. Quantitative methods"
Showing 20 of ~6,510,819 results · from CrossRef, Semantic Scholar, arXiv, DOAJ
Amir Kavoosi
This study introduces the Quantitative Geometric Market Structuralist (QGMS) framework, a hybrid analytical methodology integrating geometric pattern recognition with quantitative mathematical modeling to identify terminal zones of large-scale market movements. Unlike conventional econometric or signal-based models, the QGMS framework conceptualizes market dynamics as evolving geometric structures governed by self-organizing principles of price formation. To preserve the proprietary nature of its internal mathematical architecture, the methodology employs a blind-testing validation process in which price, symbol, and temporal identifiers are concealed during analysis. This design ensures objective verification without revealing the underlying algorithmic core. The framework's predictive robustness has been empirically examined across multiple financial crises, including the 2008 global financial collapse, the 2015 EUR/CHF SNB event, the 2016 Brexit referendum, and the 2020 COVID-19 market crash. In each case, the system consistently identified structural endpoints preceding major market reversals. The findings suggest that geometric quantitative market interpretation may offer a new class of predictive tools bridging the gap between mathematical formalism and empirical price behavior. By combining academic testability with intellectual property protection, the QGMS framework establishes a viable foundation for institutional evaluation and further research into nonlinear structural forecasting models.
Mohammad Alaroud
Both linear and nonlinear partial differential equations of fractional order can be solved efficiently using the residual power series method (RPSM). Nevertheless, the process requires the (n − 1)ϱ fractional derivative (FD) of the residual function, and computing the FD of a function can be difficult. In this study, a straightforward and effective analytical technique, the Laplace transform residual power series method (LT-RPSM), is used to provide approximate and exact solutions to nonlinear fractional partial differential equations (NFPDEs) under Caputo fractional differentiation, including the nonlinear Fokker-Planck, nonlinear gas dynamics, and nonlinear Klein-Gordon equations. The computations needed to find the coefficients of the expansion series are modest because the proposed method requires only the concept of a limit at infinity. Three nonlinear fractional physical problems are successfully solved, yielding closed-form series solutions and exact solutions in the ordinary (integer-order) case, together with thorough graphical and numerical comparisons of the findings. The outcomes are compared with existing solutions in the literature, in particular in terms of absolute errors against the Laplace Adomian decomposition method (LADM) under different FD operators, and show strong agreement with several series solution techniques. Consequently, LT-RPSM can be considered a very successful and effective analytical algorithm for dealing with numerous NFPDEs emerging in physics and engineering.
Orazio Pinti, Jeremy M. Budd, Franca Hoffmann et al.
We present a novel probabilistic approach for generating multi-fidelity data while accounting for errors inherent in both low- and high-fidelity data. In this approach a graph Laplacian constructed from the low-fidelity data is used to define a multivariate Gaussian prior density for the coordinates of the true data points. In addition, a few high-fidelity data points are used to construct a conjugate likelihood term. Thereafter, Bayes' rule is applied to derive an explicit expression for the posterior density, which is also multivariate Gaussian. The maximum a posteriori (MAP) estimate of this density is selected as the optimal multi-fidelity estimate. It is shown that the MAP estimate and the covariance of the posterior density can be determined through the solution of linear systems of equations. Two methods, one based on spectral truncation and another on a low-rank approximation, are then developed to solve these equations efficiently. The multi-fidelity approach is tested on a variety of problems in solid and fluid mechanics with data that represents vectors of quantities of interest and discretized spatial fields in one and two dimensions. The results demonstrate that by utilizing a small fraction of high-fidelity data, the multi-fidelity approach can significantly improve the accuracy of a large collection of low-fidelity data points.
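The Gaussian-prior/Gaussian-likelihood construction described in the abstract reduces the MAP estimate to one linear solve. Below is a minimal sketch of that pipeline; the kernel bandwidth, precision weights, and noise level are illustrative assumptions, not the authors' actual choices.

```python
import numpy as np

def graph_laplacian(coords, eps=1.0):
    # Gaussian-kernel weight matrix on the low-fidelity point coordinates,
    # then L = D - W (unnormalized graph Laplacian)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / eps)
    np.fill_diagonal(W, 0.0)
    return np.diag(W.sum(axis=1)) - W

def map_estimate(x_lf, coords, hf_idx, y_hf, tau=0.1, sigma2=1e-2, ridge=1e-4):
    # Prior: x ~ N(x_lf, (tau*L + ridge*I)^-1); likelihood: y_hf = H x + noise.
    # Gaussian prior x Gaussian likelihood => Gaussian posterior whose mean
    # (the MAP estimate) comes from a single linear solve.
    n = len(x_lf)
    P = tau * graph_laplacian(coords) + ridge * np.eye(n)  # prior precision
    H = np.zeros((len(hf_idx), n))
    H[np.arange(len(hf_idx)), hf_idx] = 1.0                # observation operator
    A = P + H.T @ H / sigma2                               # posterior precision
    b = P @ x_lf + H.T @ y_hf / sigma2
    return np.linalg.solve(A, b)                           # MAP estimate
```

For large point clouds, the paper's spectral-truncation and low-rank variants replace the dense solve above with approximations of `A`.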
Hang Yuan, Saizhuo Wang, Jian Guo
Recently, we introduced a new paradigm for alpha mining in quantitative investment, developing a new interactive alpha mining system framework, Alpha-GPT. This system is centered on iterative human-AI interaction based on large language models, introducing a Human-in-the-Loop approach to alpha discovery. In this paper, we present the next-generation Alpha-GPT 2.0, a quantitative investment framework that further encompasses the crucial modeling and analysis phases of quantitative investment. This framework emphasizes iterative, interactive research between humans and AI, embodying a Human-in-the-Loop strategy throughout the entire quantitative investment pipeline. By assimilating the insights of human researchers into the systematic alpha research process, we effectively leverage the Human-in-the-Loop approach, enhancing the efficiency and precision of quantitative investment research.
Daniel Weiskopf
This paper revisits the role of quantitative and qualitative methods in visualization research in the context of advancements in artificial intelligence (AI). The focus is on how we can bridge between the different methods in an integrated process of analyzing user study data. To this end, a process model of (potentially iterated) semantic enrichment and transformation of data is proposed. This joint perspective on data and semantics facilitates the integration of quantitative and qualitative methods. The model is motivated by examples from the author's own prior work, especially in the area of eye-tracking user studies and the coding of data-rich observations. Finally, open issues and research opportunities in the interplay between AI, the human analyst, and qualitative and quantitative methods for visualization research are discussed.
Idoia Berges, Jesús Bermúdez, Arantza Illarramendi
Introduction: This article is part of the Focus Theme of Methods of Information in Medicine on "Managing Interoperability and Complexity in Health Systems". Background: The proliferation of archetypes as a means to represent information in Electronic Health Records has raised the need to bind terminological codes - such as SNOMED CT codes - to their elements in order to identify them univocally. However, the large size of the terminologies makes it difficult to perform this task manually. Objectives: To establish a baseline of results for the aforementioned problem using off-the-shelf string comparison-based techniques, against which results from more complex techniques can be evaluated. Methods: Nine typed comparison methods were evaluated for binding using a set of 487 archetype elements. Their recall was calculated, and Friedman and Nemenyi tests were applied to assess whether any of the methods outperformed the others. Results: Using the qGrams method along with the 'Text' information piece of archetype elements outperforms the other methods at a 90% level of confidence. A recall of 25.26% is obtained if just one SNOMED CT term is retrieved for each archetype element; this recall rises to 50.51% and 75.56% if 10 and 100 terms are retrieved respectively, which represents a reduction of more than 99.99% of the SNOMED CT code set. Conclusions: The baseline has been established following the above-mentioned results. Moreover, although string comparison-based methods do not outperform more sophisticated techniques, they can still provide a reduced set of candidate terms for each archetype element, from which the final term can be chosen later in the more-than-likely manual supervision task.
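As a concrete illustration of the string-comparison baseline, here is a minimal q-gram retrieval sketch. The padding, q = 3, and Jaccard-style overlap are common choices for q-gram matching, not necessarily the exact qGrams variant evaluated in the paper, and the candidate terms are hypothetical.

```python
def qgrams(s, q=3):
    # pad with sentinels so strings shorter than q still yield grams
    s = "#" * (q - 1) + s.lower() + "#" * (q - 1)
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def qgram_similarity(a, b, q=3):
    # Jaccard overlap of the two q-gram sets
    A, B = qgrams(a, q), qgrams(b, q)
    return len(A & B) / len(A | B)

def top_candidates(element_text, terms, k=10, q=3):
    # retrieve the k terminology terms most similar to an archetype element
    return sorted(terms, key=lambda t: -qgram_similarity(element_text, t, q))[:k]
```

Retrieving a short candidate list per element (k = 10 or 100, as in the paper's recall figures) is what shrinks the manual-supervision workload.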
Muhammad Sarwar, Syed Khayyam Shah, Kamaleldin Abodayeh et al.
This manuscript aims to present new results about the generalized F-contraction of Hardy–Rogers-type mappings in a complete vector-valued metric space, and to demonstrate fixed-point theorems for single and pairs of generalized F-contractions of Hardy–Rogers-type mappings. The established results represent a significant development of numerous previously published findings in the existing body of literature. Furthermore, to ensure the practicality and effectiveness of our findings across other fields, we provide an application that demonstrates a unique solution for a semilinear operator system within the Banach space.
Muayyad Mahmood Khalil, Siddiq Ur Rehman, Ali Hasan Ali et al.
This manuscript presents enhanced versions of two methods: the natural transform iterative method (NTIM) and the q-homotopy analysis method (q-HAM). These methods harness concepts from fractional calculus, particularly the Caputo fractional derivative operator, to manage the complexities of fractional-order systems. To validate their accuracy and efficiency, we applied the proposed techniques to fractional partial differential equations (FPDEs) such as the fractional-order KdV-Burgers and fifth-order Sawada–Kotera equations. Our results, which closely match the exact solutions, demonstrate how useful NTIM and q-HAM are for solving difficult FPDEs and for advancing the study of fractional calculus.
Sina Abbasi, Umar Muhammad Modibbo, Hamed Jafari Kolashlou et al.
Over the last several decades, Iran's ecosystem has suffered from the careless use of natural resources. Cities have grown unevenly and without planning norms, and poor project management has been a major issue, particularly in large cities. Moreover, many environmental factors and engineering regulations are not applied to projects. For this reason, an environmental impact assessment (EIA) is required to ascertain a project's environmental impact. The rapid impact assessment matrix (RIAM) is one method of carrying out an EIA; by reducing subjectivity, it brings objectivity and transparency. During the COVID-19 pandemic, a thorough EIA was carried out for a Tehran project using the RIAM. This inquiry used the RIAM technique to quantify the environmental impact of COVID-19: the research examined lockdown procedures and the pandemic to create an EIA indicator, and in a real-world case study in Tehran, Iran, the initiative's impact was evaluated using RIAM during the epidemic. The results demonstrated that COVID-19 had both beneficial and harmful effects, and the RIAM-based EIA effectively informed decision-makers about the pandemic's environmental consequences for people and the environment, as well as how to minimize negative effects. This is the first research to integrate an EIA during a crisis, such as the COVID-19 pandemic, with the RIAM approach.
Mohra Zayed, Shahid Ahmad Wani, Mir Subzar et al.
This study introduces a new approach to the development of generalized 1-parameter, 2-variable Hermite–Frobenius–Euler polynomials, which are characterized by their generating functions, series definitions and summation formulae. Additionally, the research utilizes a factorization method to establish recurrence relations, shift operators and various differential equations, including differential, integro-differential and partial differential equations. The framework elucidates the fundamental properties of these polynomials by utilizing generating functions, series definitions and summation formulae. The results of the study contribute to the understanding of the properties of these polynomials and their potential applications.
María Florencia Acosta, Hugo Aimar, Ivana Gómez et al.
In this note we explore the structure of the diffusion metric of Coifman-Lafon determined by fractional dyadic Laplacians. The main result is that, for each \(t > 0\), the diffusion metric is a function of the dyadic distance, given in \(\mathbb{R}^+\) by \(\delta(x,y) = \inf\{|I|\colon I \text{ is a dyadic interval containing } x \text{ and } y\}\). Even if these functions of \(\delta\) are not equivalent to \(\delta\), the families of balls are the same, to wit, the dyadic intervals.
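The dyadic distance \(\delta\) is straightforward to compute for machine numbers. The sketch below scans dyadic scales from coarse to fine, assuming the half-open convention \([k2^{-j}, (k+1)2^{-j})\) and capping the refinement at a finite level to cope with floating-point inputs; both conventions are our illustrative choices, not taken from the note.

```python
import math

def dyadic_distance(x, y, max_level=52):
    """delta(x, y) = inf{|I| : I a dyadic interval containing x and y}, x, y >= 0.
    Scans levels j from coarse to fine; at level j the dyadic cells are
    [k * 2**-j, (k+1) * 2**-j), so x and y share a cell iff
    floor(x * 2**j) == floor(y * 2**j)."""
    if x == y:
        return 0.0
    j = -64  # coarse enough that any two ordinary float inputs share a cell
    while j <= max_level and math.floor(x * 2.0 ** j) == math.floor(y * 2.0 ** j):
        j += 1
    # j is the first level where the points separate; the infimum is attained
    # one level coarser (capped at max_level for nearly-equal floats)
    return 2.0 ** (-(j - 1))
```

For example, 0.4 and 0.6 straddle 1/2, so no dyadic interval shorter than \([0,1)\) contains both and \(\delta = 1\).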
A.M. Khan, Sanjay Gaur, D.L. Suthar
The present paper proposes an approximation of the Caputo fractional operator using discretization based on quadrature theory to minimize the error function of an Artificial Neural Network (ANN) with a higher convergence rate. The authors verify the suitability of a multilayer feed-forward ANN architecture for obtaining estimated solutions of fractional-order differential equations. The backpropagation algorithm with unsupervised learning is employed to minimize the error function, including optimization of the network parameters such as the synaptic weights and biases. The algorithm uses a truncated power series to replace the unknown function in the fractional differential equations. The novelty of the present work is the quadrature-based discretization of the fractional operator used to construct an error function that yields an appropriate estimated solution of nonlinear fractional differential equations. Four illustrative examples with different orders of nonlinear fractional differential equations are solved to validate the model and to demonstrate the effectiveness and fast convergence of the proposed method.
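The abstract does not spell out its quadrature rule, so as an illustration here is the standard L1 quadrature for the Caputo derivative of order 0 < α < 1 on a uniform grid, the kind of discretization such ANN error functions are typically built from; it is a representative scheme, not necessarily the authors' exact rule.

```python
import math

def caputo_l1(f_vals, dt, alpha):
    """L1 quadrature approximation of the Caputo derivative of order
    0 < alpha < 1 at the final grid point, from samples f_vals taken on a
    uniform grid with spacing dt. The integrand is replaced by its piecewise
    linear interpolant, giving weights b_k = (k+1)^(1-a) - k^(1-a)."""
    n = len(f_vals) - 1
    c = dt ** (-alpha) / math.gamma(2.0 - alpha)
    s = 0.0
    for k in range(n):
        b = (k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha)
        s += b * (f_vals[n - k] - f_vals[n - k - 1])
    return c * s
```

Because the L1 scheme interpolates linearly, it reproduces the Caputo derivative of f(t) = t exactly, which makes a convenient sanity check: D^α t = t^(1−α)/Γ(2−α).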
Xicui Li, Bin Wang
Stefan Loesch
Automated Market Makers (AMMs) are a class of smart contracts on Ethereum and other blockchains that "make markets" autonomously: AMMs stand ready to trade with other market participants that interact with them, at conditions determined by the AMM. In this paper, drawing on the existing and growing body of literature, we review and present the key mathematical and quantitative-finance aspects that underpin their operation, including the interesting relationship between AMMs and derivatives pricing and hedging.
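As one concrete example of the mechanics such a review covers, a constant-product (Uniswap-v2-style) AMM quotes trades from the invariant x·y = k; the sketch below uses the usual input-proportional fee convention and is illustrative, not the paper's own code.

```python
def cpamm_swap(x, y, dx, fee=0.003):
    """Constant-product AMM: amount of token Y paid out when dx of token X
    is deposited into reserves (x, y), with a proportional fee on the input.
    The post-trade reserves satisfy (x + dx_eff) * (y - dy) = x * y."""
    dx_eff = dx * (1.0 - fee)
    return y - (x * y) / (x + dx_eff)

def spot_price(x, y):
    # marginal price of token X in units of Y at the current reserves
    return y / x
```

Note that the realized price y_out/dx is always below the spot price y/x: that gap is the slippage that links AMM positions to option-like payoffs in the derivatives-pricing literature the paper reviews.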
Dimitrios Tzarouchis, Mario Junior Mencagli, Brian Edwards et al.
Performing analog computations with metastructures is an emerging wave-based paradigm for solving mathematical problems. For such devices, one major challenge is reconfigurability, especially without the need for a priori mathematical computations or computationally intensive optimization; moreover, existing equation-solving capabilities apply only to matrices with special spectral (eigenvalue) distributions. Here we report the theory and design of wave-based metastructures using tunable elements capable of solving integral/differential equations in a fully reconfigurable fashion. We consider two architectures: the Miller architecture, which requires the singular-value decomposition, and an alternative, intuitive direct-complex-matrix (DCM) architecture introduced here, which does not require a priori mathematical decomposition. As examples, we demonstrate, using system-level simulation tools, solutions of integral and differential equations. We then expand the matrix-inverting capabilities of both architectures toward evaluating the generalized Moore-Penrose matrix inversion. We thereby provide evidence that metadevices can implement generalized matrix inversions and act as the basis for the gradient descent method for solutions to a wide variety of problems. Finally, a general upper bound on the solution convergence time reveals the rich potential that such metadevices offer for stationary iterative schemes.
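The gradient-descent connection can be made concrete in software: started from zero, gradient descent on ||Ax − b||² converges to the Moore-Penrose solution pinv(A)·b, which sketches the quantity such an iterative metadevice would evaluate. The step size and iteration count below are illustrative choices, not taken from the paper.

```python
import numpy as np

def pinv_solve_gd(A, b, steps=20000):
    """Least-squares solution of A x = b via plain gradient descent on
    ||A x - b||^2 / 2. Started from x = 0, the iterates stay in the row
    space of A, so the limit is the minimum-norm (Moore-Penrose) solution."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2   # below the 2 / sigma_max^2 bound
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x -= lr * (A.T @ (A @ x - b))      # gradient step: A^T (A x - b)
    return x
```

The convergence rate is governed by the spread of A's singular values, which mirrors the paper's point that convergence time depends on the matrix's spectral distribution.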
Pedro Lopez-Merino, Juliette Rouchier
According to innovation diffusion theories, the adoption of a new product is the result of a dynamic process whereby individuals become likelier to adopt as others do. Agent-based modelling has emerged as a useful technique to model and study processes of innovation diffusion within artificial societies, as it makes it easy to programme and simulate the interaction of multiple agents among themselves and with their environment. Despite a large body of literature dealing with the diffusion of innovations, including via agent-based modelling, there has been little to no consideration of two elements that are important features of consumption: the presence of multiple characteristics of goods, and that of price premiums on added characteristics. We propose an agent-based model of the diffusion of such goods and study its emerging properties when compared to standard ones. Our goal is to understand how social interaction affects the consumption of goods that are complex rather than uni-dimensional, and whose prices depend on the number of dimensions (characteristics) present. Testing the model for different parameters shows that as goods become more complex, social interaction becomes an increasingly important explanatory variable for purchases. This opens up interesting avenues of discussion for those seeking to bring together innovation diffusion theories and goods' complexity, and can be linked with a number of issues in the social and sustainability sciences.
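The "likelier to adopt as others do" mechanism at the core of such models can be sketched in a few lines. This toy simulation on a ring network is our own illustrative reduction; the authors' model additionally handles multi-characteristic goods and price premiums, which are omitted here.

```python
import random

def simulate_adoption(n=200, neighbors=8, p0=0.02, social=0.08,
                      steps=50, seed=0):
    """Toy diffusion model: agents sit on a ring; each step, a non-adopter
    adopts with probability p0 + social * (fraction of adopting neighbors).
    Returns the final adoption fraction."""
    rng = random.Random(seed)
    adopted = [False] * n
    for _ in range(steps):
        snapshot = adopted[:]              # synchronous update
        for i in range(n):
            if snapshot[i]:
                continue
            frac = sum(snapshot[(i + d) % n]
                       for d in range(-neighbors // 2, neighbors // 2 + 1)
                       if d != 0) / neighbors
            if rng.random() < p0 + social * frac:
                adopted[i] = True
    return sum(adopted) / n
```

Sweeping `social` against `p0` in such a sketch is the simplest analogue of the paper's finding that social interaction matters more as goods grow complex.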
Yufeng Yang, Ningning Song
In starlight atmospheric refraction navigation, when starlight propagates through a supersonic flow field, the aero-optical effect reduces the accuracy of navigation. In this paper, the aircraft model is established in ICEM, and Fluent is used to simulate the atmospheric density distribution at different altitudes and speeds. The principles of geometric optics are then used to trace the starlight, the angular deviation of starlight transmission is derived, and finally the influence of different speeds and altitudes on starlight atmospheric refraction navigation is analyzed. The results show that the aero-optical effect produced by supersonic vehicles depends on flight altitude and flight speed. Taking flight altitudes of 20 and 30 km as examples, when the flight speed is Mach 2, the angular deviations caused by the aero-optical effect are 1.045″ and 0.699″, respectively, and when the flight speed is Mach 10, the angular deviations are 20.075″ and 4.643″, respectively. Therefore, the aero-optical effect can be ignored at altitudes of 30 km and above, whereas its influence at 20 km must be judged according to the flight speed.
R.K. Gupta, D. Khan
When we design the payoff matrix of a game on the basis of the available information, the information is rarely free from impreciseness, and as a result the payoffs carry a certain amount of ambiguity. In this work, we develop a heuristic technique to solve two-person m × n zero-sum games (m > 2, n > 2) with interval-valued payoffs and interval-valued objectives; the game is formulated by representing the impreciseness of the payoffs with interval numbers. To solve the game, a real-coded genetic algorithm with an interval fitness function, tournament selection, uniform crossover, and uniform mutation is developed. Finally, the proposed technique is demonstrated with a few examples, and sensitivity analyses with respect to the genetic algorithm parameters are presented graphically to study the stability of the algorithm.
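Two ingredients the abstract names, interval fitness comparison and tournament selection, can be sketched as follows. The midpoint-then-width ranking below is one common way to order interval numbers, not necessarily the relation the authors use.

```python
import random

def interval_leq(a, b):
    """Order intervals a = (a1, a2), b = (b1, b2) by midpoint; break ties by
    preferring the narrower (less uncertain) interval as the larger one."""
    ma, mb = (a[0] + a[1]) / 2.0, (b[0] + b[1]) / 2.0
    if ma != mb:
        return ma <= mb
    return (a[1] - a[0]) >= (b[1] - b[0])

def tournament_select(pop, fitness, rng, k=2):
    """Pick k random individuals and return the best under the interval
    order; fitness[i] is the interval-valued fitness of pop[i]."""
    group = rng.sample(range(len(pop)), k)
    best = group[0]
    for i in group[1:]:
        if interval_leq(fitness[best], fitness[i]):
            best = i
    return pop[best]
```

In a full real-coded GA these would be combined with uniform crossover and uniform mutation on the strategy vectors, as the abstract describes.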
Xinyi Zhu, Haoran Li, Zhiliang Zhang et al.
For 1-MHz GaN LLC converters with 1-kV input, the switching speed of eGaN high-electron-mobility transistors (HEMTs) is as fast as 6 ns, which results in dv/dt up to 200 kV/μs. This poses a serious challenge for synchronous rectification (SR). A sensorless, model-based SR driving scheme for high-voltage applications is proposed to optimize efficiency at steady state, and complementary control is applied as an interlock mechanism during transients to ensure safety. A mathematical model determines the turn-on instant and conduction time as functions of switching frequency and load condition, so that the driving signals are adjusted adaptively. The proposed method provides reliable gate-driving signals without any detection circuits and is immune to high-frequency noise. The transient response is analyzed, and the tolerance effects of the resonant components are quantified. Compared to SR drive ICs, this control is fully transparent to design engineers and is convenient to implement in high-voltage, high-frequency applications. A 1-MHz prototype with 1-kV input and 32 V/3 kW output is built, achieving a power density of 103 W/in³ and a peak efficiency of 95.92%, an improvement of 2.0% at full load over the conventional SR driving scheme.
Page 32 of 325541