A Refined Extreme Quantile Estimator for Weibull Tail-distributions
Jonathan El Methni, Stéphane Girard
We address the estimation of extreme quantiles of Weibull tail-distributions. Since such quantiles are asymptotically larger than the sample maximum, their estimation requires extrapolation methods. In the case of Weibull tail-distributions, classical extreme-value estimators are numerically outperformed by estimators dedicated to this set of light-tailed distributions. The latter estimators of extreme quantiles are based on two key quantities: an order statistic to estimate an intermediate quantile and an estimator of the Weibull tail-coefficient used to extrapolate. The common practice is to select the same intermediate sequence for both estimators. We show how an adapted choice of two different intermediate sequences leads to a reduction of the asymptotic bias associated with the resulting refined estimator. This analysis is supported by an asymptotic normality result associated with the refined estimator. A data-driven method is introduced for the practical selection of the intermediate sequences and our approach is compared to three estimators of extreme quantiles dedicated to Weibull tail-distributions on simulated data. An illustration on a real data set of daily wind measures is also provided.
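The extrapolation device the abstract refers to can be sketched as follows: an intermediate order statistic anchors the estimate, and a Girard-type estimator of the Weibull tail-coefficient drives the extrapolation beyond the sample maximum. This is a minimal illustration of the classical single-sequence construction, not the refined two-sequence estimator proposed in the paper; the exact estimator forms used here are assumptions.

```python
import math
import random

def weibull_tail_quantile(sample, k, p):
    """Extreme quantile extrapolation for a Weibull tail-distribution.

    Uses the k upper order statistics: a Girard-type estimator of the
    Weibull tail-coefficient theta, then the log-based extrapolation
    x_p = X_{n-k+1,n} * (log(1/p) / log(n/k))^theta_hat.
    """
    x = sorted(sample)
    n = len(x)
    anchor = x[n - k]  # intermediate order statistic X_{n-k+1,n}
    num = sum(math.log(x[n - i]) - math.log(anchor) for i in range(1, k))
    den = sum(math.log(math.log(n / i)) - math.log(math.log(n / k))
              for i in range(1, k))
    theta_hat = num / den
    xq = anchor * (math.log(1.0 / p) / math.log(n / k)) ** theta_hat
    return theta_hat, xq
```

On standard exponential data (a Weibull tail-distribution with coefficient 1), the estimated coefficient should be near 1 and the extrapolated quantile exceeds the intermediate order statistic.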
Statistics, Probabilities. Mathematical statistics
From brain to motion: harnessing higher-derivative mechanics for neural control
Olivier White, Fabien Buisseret
et al.
Applied mathematics. Quantitative methods, Probabilities. Mathematical statistics
Empirical examination of the Black–Scholes model: evidence from the United States stock market
Monsurat Foluke Salami
Option pricing is crucial in enabling investors to hedge against risks. The Black–Scholes option pricing model is widely used for this purpose. This paper investigates whether the Black–Scholes model is a good indicator of option pricing in the United States stock market. We examine the relevance of the Black–Scholes model to certain stocks using paired sample t-test and Corrado and Miller’s approximation for the implied volatility. Empirical tests are applied to determine the significance of the relationship between the actual market values and the Black–Scholes model values. Paired sample t-tests are applied to 582 call options and 579 put options. The empirical test results show that there is no significant difference between the actual market premium value and the Black–Scholes model premium value for seven out of nine stocks considered for call options, and four out of nine stocks considered for put options. Thus, we conclude that the Black–Scholes option pricing model can be used to price call options but is not suitable for pricing put options in the United States stock market.
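A minimal sketch of the Black–Scholes pricing formula underlying such comparisons (the paper's market data, Corrado–Miller implied-volatility step, and t-tests are not reproduced here):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(S, K, T, r, sigma, kind="call"):
    """Black-Scholes price of a European option.

    S: spot, K: strike, T: time to expiry (years),
    r: risk-free rate, sigma: volatility.
    """
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    if kind == "call":
        return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    return K * math.exp(-r * T) * norm_cdf(-d2) - S * norm_cdf(-d1)
```

Put-call parity, C − P = S − K·e^(−rT), provides a quick internal consistency check of the two branches.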
Applied mathematics. Quantitative methods, Probabilities. Mathematical statistics
No idle flow shop scheduling models for optimization of machine rental costs with processing and separated setup times
Shakuntla Singla, Harshleen Kaur, Deepak Gupta
et al.
Scheduling is one of the many skills required for advancement in today’s modern industry. The flow-shop scheduling problem is a well-known combinatorial optimization challenge. Scheduling issues for flow shops are NP-hard and challenging. The present research investigates a two-stage flow shop scheduling problem with decoupled processing and setup times, where a correlation exists between probabilities, job processing times, and setup times. This study proposes a novel heuristic algorithm that optimally sequences jobs to minimize the makespan and eliminates machine idle time, thereby reducing machine rental costs. The proposed algorithm’s efficacy is demonstrated through several computational examples implemented in MATLAB 2021a. The results are compared with the existing approaches such as those by Johnson, Palmer, NEH, and Nailwal to highlight the proposed algorithm’s superior performance.
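The classical two-machine benchmark that such comparisons build on is Johnson's rule; a minimal sketch follows (the paper's own heuristic, its decoupled setup times, probabilities, and rental-cost objective are not reproduced):

```python
def johnson_sequence(jobs):
    """Johnson's rule for the two-machine flow shop (minimizes makespan).

    jobs: list of (name, t1, t2) with processing times on machines 1 and 2.
    Jobs with t1 <= t2 go first in increasing t1; the rest go last in
    decreasing t2.
    """
    front = sorted((j for j in jobs if j[1] <= j[2]), key=lambda j: j[1])
    back = sorted((j for j in jobs if j[1] > j[2]), key=lambda j: -j[2])
    return front + back

def makespan(seq):
    """Completion time of the last job on machine 2 for a given sequence."""
    end1 = end2 = 0
    for _, t1, t2 in seq:
        end1 += t1                     # machine 1 processes back-to-back
        end2 = max(end2, end1) + t2    # machine 2 waits for machine 1
    return end2
```

On a small instance the rule's sequence should never be worse than the input order.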
Applied mathematics. Quantitative methods, Probabilities. Mathematical statistics
Degree of an edge and Platt Number in signed networks
Diviya K D, Anjaly Kishore
Positive labelled edges play a vital role in network analysis. The degree of an edge in a signed graph is introduced by giving importance to the positive edges incident on the end vertices of that edge. The concept of the Platt number of a graph, which is the sum of the degrees of its edges, is extended to signed graphs based on the degree so defined. Bounds on the degree of an edge and on the Platt number in certain classes of signed graphs are determined. Some characterizations of the Platt number of signed graphs are also established. A model to analyse social networks using degrees of edges and the Platt number is also proposed.
Keywords: signed graph, positive edges, negative edges, networks, information diffusion, degree of an edge, Platt number
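For orientation, the classical (unsigned) Platt number that the paper generalizes can be computed directly from edge degrees, deg(uv) = deg(u) + deg(v) − 2; the signed refinement based on positive incident edges is the paper's contribution and is not reproduced here.

```python
from collections import defaultdict

def platt_number(edges):
    """Classical Platt number: sum over edges uv of deg(u) + deg(v) - 2."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(deg[u] + deg[v] - 2 for u, v in edges)
```

For the path on four vertices the edge degrees are 1, 2, 1 (sum 4), and for a triangle every edge has degree 2 (sum 6).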
Mathematics, Probabilities. Mathematical statistics
Relatively Prime Inverse Domination On Line Graph
C. Jayasekaran, Roshini L
Let G be a non-trivial graph. A subset D of the vertex set V(G) of a graph G is called a dominating set of G if every vertex in V − D is adjacent to a vertex in D. The minimum cardinality of a dominating set is called the domination number and is denoted by γ(G). If V − D contains a dominating set S of G, then S is called an inverse dominating set with respect to D. If every pair of vertices u and v in an inverse dominating set S satisfies (deg u, deg v) = 1, then S is called a relatively prime inverse dominating set. The minimum cardinality of a relatively prime inverse dominating set is called the relatively prime inverse domination number and is denoted by γ⁻¹_rp(G). In this paper we find the relatively prime inverse domination number of some line graphs.
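A brute-force sketch of the definitions, reading (deg u, deg v) = 1 as gcd(deg u, deg v) = 1 and letting D range over minimum dominating sets (an assumption consistent with the usual inverse domination setting); intended for tiny graphs only:

```python
from itertools import combinations
from math import gcd

def is_dominating(adj, D):
    """Every vertex is in D or adjacent to a vertex of D."""
    return all(v in D or any(u in D for u in adj[v]) for v in adj)

def rp_inverse_domination_number(adj):
    """Smallest S disjoint from some minimum dominating set D such that
    S dominates G and every pair u, v in S has gcd(deg u, deg v) = 1."""
    V = list(adj)
    deg = {v: len(adj[v]) for v in V}
    mins = []
    for k in range(1, len(V) + 1):       # find all minimum dominating sets
        mins = [set(D) for D in combinations(V, k)
                if is_dominating(adj, set(D))]
        if mins:
            break
    best = None
    for D in mins:
        rest = [v for v in V if v not in D]
        for k in range(1, len(rest) + 1):
            good = [S for S in combinations(rest, k)
                    if is_dominating(adj, set(S))
                    and all(gcd(deg[u], deg[v]) == 1
                            for u, v in combinations(S, 2))]
            if good:
                if best is None or k < best:
                    best = k
                break
    return best
```

For the path 1–2–3–4, D = {2, 3} leaves S = {1, 4} (both of degree 1, gcd 1), so the number is 2.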
Mathematics, Probabilities. Mathematical statistics
Numerical solution of density-driven groundwater flows using a generalized finite difference method defined by an unweighted least-squares problem
Ricardo Román-Gutiérrez, Carlos Chávez-Negrete, Francisco Domínguez-Mota
et al.
Density-driven groundwater flows are described by nonlinear coupled differential equations. Due to their importance in engineering and earth science, several linearization and semi-linearization schemes for approximating their solution have been proposed. Among the most efficient are combinations of Newtonian iterations for the spatially discretized system, obtained by scalar homotopy methods, fictitious time methods, or a meshless generalized finite difference method, with various implicit methods for the time integration. However, when these methods are used, several parameters must be determined, in some cases even manually. To overcome this problem, this paper presents a novel generalized finite difference scheme combined with an adaptive step-size method, which can be applied to the governing equations of interest on non-rectangular structured and unstructured grids. The proposed method is tested on the Henry and the Elder problems to verify the accuracy and stability of the proposed numerical scheme.
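The unweighted least-squares construction named in the title can be illustrated in one dimension: derivatives at a node are recovered by least-squares fitting a truncated Taylor expansion at scattered neighbouring nodes. A minimal sketch (the paper works on 2D structured and unstructured grids with an adaptive step-size scheme, none of which is reproduced here):

```python
def gfd_derivatives(x0, u0, xs, us):
    """Unweighted least-squares generalized finite differences in 1D.

    Fits u(x) ~ u0 + u' * (x - x0) + 0.5 * u'' * (x - x0)^2 at scattered
    neighbours and returns (u', u'') from the 2x2 normal equations.
    """
    rows = [(x - x0, 0.5 * (x - x0) ** 2) for x in xs]
    rhs = [u - u0 for u in us]
    # normal equations A^T A c = A^T b for c = (u', u'')
    a11 = sum(r[0] * r[0] for r in rows)
    a12 = sum(r[0] * r[1] for r in rows)
    a22 = sum(r[1] * r[1] for r in rows)
    b1 = sum(r[0] * y for r, y in zip(rows, rhs))
    b2 = sum(r[1] * y for r, y in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det
```

Since the stencil is exact for quadratics, u(x) = x² at x0 = 0.3 should return u' = 0.6 and u'' = 2 up to floating-point error.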
Applied mathematics. Quantitative methods, Probabilities. Mathematical statistics
SAMPLING VARIANCE ESTIMATION METHOD AND PRECISION OF SMALL AREA ESTIMATION IN THE EXPONENTIAL SPATIAL STRUCTURE
Yadollah Mehrabi, Amir Kavousi, Mojtaba Soltani-Kermanshahi
Background
In numerous practical applications, data from neighbouring small areas exhibit spatial correlation. More recently, an extension of the Fay–Herriot model with a spatial (exponential) correlation structure has been considered. This spatial area-level model, like the basic area-level model first suggested by Fay III and Herriot, rests on the strong assumption of known sampling variances. Several methods have been suggested for smoothing the sampling variances, but there is no single agreed method for sampling variance estimation, so further study is needed.
Methods
This research examines four techniques for sampling variance estimation: the Direct, Probability Distribution, Bayes and Bootstrap methods. We used households' food expenditure (HFE) data from 2013 and other socio-economic ancillary data to fit the real model, and finally conducted a simulation study based on these data to compare the effects of the four variance estimation methods on the precision of the small area estimates.
Results
The best model on the real data showed that the lowest and highest HFE belonged to the Pishva district (Tehran province), with 26,707 thousand rials (TRs), and Omidiyeh (Khouzestan province), with 101,961 TRs, respectively. In the simulation study, the Probability Distribution and Direct methods had, approximately, the smallest and the largest root average mean square errors (RAMSE), respectively, under all conditions.
Conclusion
The results showed the best fit with the Direct method on the real data and the best precision with the Probability Distribution method in the simulation study.
Biology (General), Probabilities. Mathematical statistics
On Complete, Horizontal and Vertical Lifts From a Manifold With fλ(6,4) Structure to Its Cotangent Bundle
Manisha M. Kankarej, Jai Pratap Singh
Manifolds with an fλ(6,4) structure were defined and studied in the past. Later, the geometry of tangent and cotangent bundles of a differentiable manifold with an fλ(6,4) structure was studied. The aim of the present paper is to study complete, horizontal and vertical lifts from a manifold with an fλ(6,4) structure to its cotangent bundle.
Probabilities. Mathematical statistics, Analysis
An imprecise-probabilistic characterization of frequentist statistical inference
Ryan Martin
Between the two dominant schools of thought in statistics, namely, Bayesian and classical/frequentist, a main difference is that the former is grounded in the mathematically rigorous theory of probability while the latter is not. In this paper, I show that the latter is grounded in a different but equally mathematically rigorous theory of imprecise probability. Specifically, I show that for every suitable testing or confidence procedure with error rate control guarantees, there exists a consonant plausibility function whose derived testing or confidence procedure is no less efficient. Beyond its foundational implications, this characterization has at least two important practical consequences: first, it simplifies the interpretation of p-values and confidence regions, thus creating opportunities for improved education and scientific communication; second, the constructive proof of the main results leads to a strategy for new and improved methods in challenging inference problems.
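A toy instance of the correspondence: for a normal mean, the two-sided p-value, read as a function of the hypothesized value, is a consonant plausibility contour, and thresholding it at α recovers the usual confidence interval. This is a simplified illustration, not the paper's general construction.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def plausibility(theta, xbar, n, sigma=1.0):
    """Consonant plausibility contour for a normal mean: the two-sided
    p-value of testing H0: mu = theta, given xbar from n observations."""
    z = abs(xbar - theta) * math.sqrt(n) / sigma
    return 2.0 * (1.0 - norm_cdf(z))

def in_confidence_region(theta, xbar, n, alpha=0.05, sigma=1.0):
    """Thresholding the plausibility at alpha recovers the usual
    (1 - alpha) confidence interval."""
    return plausibility(theta, xbar, n, sigma) > alpha
```

The plausibility peaks at 1 at the observed mean and decays in both directions, which is the consonance property in this one-dimensional case.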
Education for Sustainable Development in Primary Education Textbooks—An Educational Approach from Statistical and Probabilistic Literacy
Claudia Vásquez, Israel García-Alonso, Margaret Seckel
et al.
Based on the Stochastic Education Approach to Sustainability Education, the statistical and probability tasks for sustainability education in a collection of primary school mathematics textbooks in Chile (6–14 years old) were analyzed. A content analysis was carried out based on four categories: contexts for sustainability, levels of articulation, cognitive demand, and authenticity. The results show that: (1) there is a low presence of contexts for sustainability; (2) the tasks are not articulated to develop any of the Sustainable Development Goals; (3) there is a clear predominance of memorization tasks; (4) the teaching of statistics and probability in textbooks is not aligned with Education for Sustainable Development (ESD). These results provide a roadmap for a new educational approach to the design of statistical and probability tasks that educate for sustainability in Primary Education. This new approach should ensure that, through the progressive development of statistical and probabilistic literacy, students understand the different problems (social, economic and environmental) that we face, as well as the measures that must be adopted to transform and act for a more sustainable world.
Selected Aspects of Fractional Brownian Motion
I. Nourdin
279 citations
en
Mathematics
Environmental Science and Policy
S. Trudgill, K. Richards
197 citations
en
Environmental Science
On the Normality of the Product of Two Operators in Hilbert Space
Benali Abdelkader, Mohammed Meziane, Mohammed Hichem Mortad
In this paper we present results on the maximality of operators that are not necessarily bounded. To that end, we examine results obtained for operators in an extension setting. The normality of the product of normal operators appears to be the key to maximality.
Probabilities. Mathematical statistics, Analysis
Comparative Study between Generalized Maximum Entropy and Bayes Methods to Estimate the Four Parameter Weibull Growth Model
Saifaldin Hashim Kamar, Basim Shlaibah Msallam
The Weibull growth model is an important model, especially for describing growth instability; therefore, in this paper, three methods, namely generalized maximum entropy, Bayes, and maximum a posteriori, for estimating the four-parameter Weibull growth model are presented and compared. To this end, a simulation technique is used to generate the samples and perform the required comparisons, using varying sample sizes (10, 12, 15, 20, 25, and 30) and models with a fixed standard deviation (0.5). The computational results show that the Bayes method gives the best estimates.
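One common parameterization of the four-parameter Weibull growth curve, shown here for orientation only (the paper may use a different form, and the estimation methods themselves are not reproduced):

```python
import math

def weibull_growth(t, a, b, c, d):
    """A common four-parameter Weibull growth curve:
    y(t) = a - b * exp(-c * t**d), with y(0) = a - b and y -> a as t -> inf.
    a: upper asymptote, b: total growth range, c: rate, d: shape."""
    return a - b * math.exp(-c * t ** d)
```

The curve rises monotonically (for positive b, c, d) from a − b at t = 0 towards the asymptote a.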
Probabilities. Mathematical statistics
Characterizing the Ordered AG-Groupoids Through the Properties of Their Different Classes of Ideals
N. Kausar, M. Munir, M. Gulzar
et al.
In this article, we present some important characterizations of ordered non-associative semigroups in relation to their ideals. We first characterize the ordered AG-groupoid through the properties of its ideals, and then characterize two important classes of these AG-groupoids, namely the regular and the intra-regular non-associative AG-groupoids. Our aim is also to encourage research on, and the development of, non-associative algebraic structures by studying a class of non-associative and non-commutative algebraic structures called ordered AG-groupoids.
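The defining identity of an AG-groupoid (also called an LA-semigroup) is the left invertive law (ab)c = (cb)a. A small brute-force check on a standard example, x ∗ y = (y − x) mod n, confirms the law holds while associativity fails, so the structure is genuinely non-associative:

```python
def is_ag_groupoid(elems, op):
    """Check the left invertive law (a*b)*c == (c*b)*a on all triples."""
    return all(op(op(a, b), c) == op(op(c, b), a)
               for a in elems for b in elems for c in elems)

def is_associative(elems, op):
    """Check (a*b)*c == a*(b*c) on all triples."""
    return all(op(op(a, b), c) == op(a, op(b, c))
               for a in elems for b in elems for c in elems)
```

(The ordering relation of the ordered AG-groupoid is a further ingredient of the paper and is not modelled here.)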
Analysis, Analytic mechanics
Statistical aspects of nuclear mass models
Vojtech Kejzlar, Léo Neufcourt, Witold Nazarewicz
et al.
We study the information content of nuclear masses from the perspective of global models of nuclear binding energies. To this end, we employ a number of statistical methods and diagnostic tools, including Bayesian calibration, Bayesian model averaging, chi-square correlation analysis, principal component analysis, and empirical coverage probability. Using a Bayesian framework, we investigate the structure of the 4-parameter Liquid Drop Model by considering discrepant mass domains for calibration. We then use the chi-square correlation framework to analyze the 14-parameter Skyrme energy density functional calibrated using homogeneous and heterogeneous datasets. We show that a quite dramatic parameter reduction can be achieved in both cases. The advantage of Bayesian model averaging for improving uncertainty quantification is demonstrated. The statistical approaches used are pedagogically described; in this context this work can serve as a guide for future applications.
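Bayesian model averaging combines model-wise predictions with weights proportional to the model evidence; a minimal sketch of the resulting mixture mean and variance (the paper's nuclear-physics models and calibration pipeline are not reproduced, and equal prior model probabilities are assumed):

```python
import math

def bma(means, variances, log_evidences):
    """Bayesian model averaging with equal prior model probabilities.

    Weights w_k are proportional to exp(log evidence); the mixture variance
    is sum_k w_k * (var_k + mean_k**2) - mean**2, so disagreement between
    models inflates the averaged uncertainty.
    """
    m = max(log_evidences)  # subtract max for numerical stability
    w = [math.exp(l - m) for l in log_evidences]
    s = sum(w)
    w = [x / s for x in w]
    mean = sum(wk * mk for wk, mk in zip(w, means))
    var = sum(wk * (vk + mk * mk)
              for wk, mk, vk in zip(w, means, variances)) - mean * mean
    return mean, var
```

With two equally supported models predicting 0 and 2 (each with unit variance), the mixture variance exceeds either model's own variance, which is the uncertainty-quantification benefit the abstract mentions.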
Probabilistic Concepts in a Changing Climate: A Snapshot Attractor Picture
102 citations
en
Computer Science
Smoothed Conditional Scale Function Estimation in AR(1)-ARCH(1) Processes
Lema Logamou Seknewna, Peter Mwita Nyamuhanga, Benjamin Kyalo Muema
The estimation of the smoothed conditional scale function for time series with conditional heteroscedastic innovations was carried out by adapting kernel smoothing to the nonparametric QAR-QARCH scheme. The estimation was based on the quantile regression methodology proposed by Koenker and Bassett. The asymptotic properties of the conditional scale function estimator for this type of process were proved, and its consistency was shown.
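The Koenker–Bassett methodology minimizes the check (pinball) loss; a minimal sketch for a plain sample, where minimizing over the observed values yields the empirical quantile (the paper's kernel-smoothed conditional version is not reproduced):

```python
def check_loss(u, tau):
    """Koenker-Bassett check (pinball) function rho_tau(u) = u*(tau - 1{u<0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def sample_quantile(xs, tau):
    """The tau-th quantile of a plain sample: the value c among the
    observations minimizing sum of rho_tau(x - c)."""
    return min(xs, key=lambda c: sum(check_loss(x - c, tau) for x in xs))
```

For tau = 0.5 the minimizer is the median; larger tau values shift the minimizer towards the upper order statistics.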
Probabilities. Mathematical statistics
Statistical analysis of the effect of the current, potential and proposed rules of a game in tennis
G. Szigeti
With the aid of mathematical modelling (the basic tool is the random walk with absorbing barriers) we derive formulas to study the effect of different versions of possible rules. For different rules, the probability of winning a game, the probability of break point occurrence, the expected number of rallies (points) and the expected number of break points in a game are expressed. We check these rules against ATP statistics for the Top-200 men players. In conclusion, we suggest a slight but essential modification of the rules of a tennis game, namely that a second service (in case of a first-service fault) be allowed only at the first three points (rallies). This would partially preserve tradition (the server has an advantage in the modern game) and at the same time reduce the predictability of the game, significantly increasing the excitement for spectators.
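Under the current rules, the random-walk-with-absorbing-barriers calculation gives a closed form for the probability of winning a game from the point-win probability p, with the deuce cycle summed as a geometric series:

```python
def game_win_probability(p):
    """Probability the server wins a game when winning each point i.i.d.
    with probability p.

    Wins to love, 15 and 30 are counted directly; reaching deuce (3-3 in
    points) leads to the absorbing random walk p^2 / (1 - 2*p*q)."""
    q = 1.0 - p
    deuce = 20.0 * p ** 3 * q ** 3 * (p * p / (1.0 - 2.0 * p * q))
    return p ** 4 * (1.0 + 4.0 * q + 10.0 * q * q) + deuce
```

The formula illustrates the amplification the abstract alludes to: a modest per-point edge (p = 0.6) already yields roughly a 74% chance of holding the game.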