Break-and-charge: Leveraging EU regulations to enhance electric truck competitiveness
Fabian Brockmann, Mario Guajardo
The electrification of trucks progresses slowly, with extended charging times a major concern for transportation companies. An aspect often neglected in comparisons of electric versus diesel trucks is that regulations on driver working hours affect both types. In particular, mandatory break times offer an opportunity to charge electric trucks while drivers rest and, therefore, without necessarily adding time to the traditional route duration. To this end, this paper develops a mathematical programming model that synchronizes the drivers' break times with the trucks' charging times. We implement this model using data on real-world truck specifications and charging station infrastructure from Northwest Germany. Our results indicate that under average conditions, the current features of batteries and charging stations are sufficient for electric trucks to perform routes in times very similar to those of combustion engine trucks. We also study how variations in features such as usable battery size or charging rates due to aging or ambient conditions affect route duration. Our results show that in these cases the synchronization of charging and break times is crucial to keeping electric trucks competitive with diesel trucks.
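The synchronization idea can be illustrated with a minimal sketch. All numbers and names below are hypothetical, not taken from the paper; the break length defaults to the EU rule of a 45-minute break (Regulation (EC) No 561/2006), and a single charging stop is assumed.

```python
def route_time(drive_h, energy_need_kwh, battery_kwh, charge_kw,
               break_h=0.75, synchronize=True):
    """Total route time for a toy one-stop schedule.

    If charging is synchronized with the mandatory driver break, only the
    charging time exceeding the break adds to the route duration; otherwise
    break and charging times add up sequentially.
    """
    # energy to recharge en route beyond the initial battery charge
    deficit_kwh = max(0.0, energy_need_kwh - battery_kwh)
    charge_h = deficit_kwh / charge_kw
    if synchronize:
        stop_h = max(break_h, charge_h)   # break and charging overlap
    else:
        stop_h = break_h + charge_h       # break, then charge
    return drive_h + stop_h
```

For example, with an 8-hour route needing 500 kWh, a 400 kWh battery and a 350 kW charger, the synchronized schedule takes exactly the diesel-equivalent 8.75 hours (drive plus mandatory break), because charging finishes within the break.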
Probabilities. Mathematical statistics, Applied mathematics. Quantitative methods
BIRESPONSE SPLINE TRUNCATED NONPARAMETRIC REGRESSION MODELING FOR LONGITUDINAL DATA ON MONTHLY STOCK PRICES OF THREE PRIVATE BANKS IN INDONESIA
Reza Pahlepi, Idhia Sriliana, Winalia Agwil, et al.
This study investigates the application of a truncated spline nonparametric regression model for biresponse analysis of longitudinal data, focusing on modeling monthly stock prices, specifically the opening and closing prices, of three private banks in Indonesia: Bank Mayapada, Bank Mega, and Bank Sinar Mas. The data are secondary data sourced from the website Id.Investing.com and the monthly financial statement publications of the three banks. Longitudinal data, combining cross-sectional and time-series dimensions, are utilized to capture trends and patterns not detectable in traditional cross-sectional analysis. The truncated spline method is selected for its adaptability to nonlinear relationships and abrupt changes in data behavior. The model incorporates three predictor variables (traded stock volume, total assets, and total liabilities) and evaluates their influence on stock prices. Assumptions of longitudinal data are validated using the Ljung-Box autocorrelation test, Bartlett's sphericity test, and Pearson correlation. Results confirm significant within-subject correlations, independence between subjects, and strong interdependence between the response variables. The optimal knot configuration is determined using Generalized Cross Validation (GCV), with up to three knots considered for segmentation. Weighted Least Squares (WLS) is employed for parameter estimation, accounting for within-subject correlations. Model evaluation based on Mean Absolute Percentage Error (MAPE) indicates high accuracy, with all MAPE values below 5%. The highest MAPE is 4.41%, for the closing price of Bank Mayapada, while the lowest is 2.65%, for the opening price of the same bank. The segmentation analysis reveals that traded stock volume and total assets positively influence stock prices, while total liabilities exhibit a predominantly negative impact.
The model is limited to internal financial indicators and does not include external macroeconomic factors such as interest rates or inflation. This study is the first to apply a biresponse truncated spline nonparametric regression approach to analyze stock prices of private banks in Indonesia by simultaneously modeling both opening and closing prices, providing a flexible and effective method for capturing complex patterns in longitudinal financial data.
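The modeling pipeline described above (truncated power basis with knots, least-squares fit, MAPE evaluation) can be sketched as follows. This toy uses ordinary least squares on a single predictor, whereas the paper uses weighted least squares on a biresponse model; all function names and data are illustrative.

```python
def truncated_power_basis(x, knots, degree=1):
    # design row: [1, x, ..., x^d, (x - k1)_+^d, (x - k2)_+^d, ...]
    row = [x ** j for j in range(degree + 1)]
    row += [max(x - k, 0.0) ** degree for k in knots]
    return row

def ols_fit(X, y):
    # solve the normal equations X'X b = X'y by Gaussian elimination
    n, p = len(X), len(X[0])
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]
    g = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    for c in range(p):  # forward elimination with partial pivoting
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        g[c], g[piv] = g[piv], g[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p):
                A[r][k] -= f * A[c][k]
            g[r] -= f * g[c]
    b = [0.0] * p
    for c in reversed(range(p)):  # back substitution
        b[c] = (g[c] - sum(A[c][k] * b[k] for k in range(c + 1, p))) / A[c][c]
    return b

def mape(y, yhat):
    # mean absolute percentage error, in percent
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(y, yhat)) / len(y)
```

Fitting a degree-1 truncated spline with a single knot recovers a piecewise-linear trend exactly when the data follow one, which is the kind of abrupt behavior change the method is chosen for.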
Probabilities. Mathematical statistics
INTEGRATION OF HIERARCHICAL CLUSTER, SELF-ORGANIZING MAPS, AND ENSEMBLE CLUSTER WITH NAÏVE BAYES CLASSIFIER FOR GROUPING CABBAGE PRODUCTION IN INDONESIA
Maulidya Maghfiro, Ni Wayan Surya Wardhani, Atiek Iriany
The purpose of this study is to evaluate and compare different clustering techniques, including hierarchical cluster analysis (using complete linkage, average linkage, and single linkage methods), Self-Organizing Maps (SOM) clustering, and ensemble clustering, within the framework of integrated cluster analysis combined with Naïve Bayes analysis, applied specifically to cabbage production in Indonesia. The data are cabbage production figures for various districts and cities in Indonesia, obtained from the 2023 publications of the Central Statistics Agency (BPS). The variables used are cabbage harvest, cabbage production, elevation, and rainfall. The dataset covers 157 districts/cities in Indonesia. This is a quantitative study employing integrated cluster analysis combined with Naïve Bayes. Cluster analysis is used to obtain classes for each district/city. Different clustering methods, including hierarchical clustering, Self-Organizing Maps (SOM), and ensemble clustering, are compared to determine the best approach for grouping districts based on cabbage production. Naïve Bayes analysis is then used to classify cabbage production in Indonesia and identify the optimal clusters. This comparison aims to find the most effective clustering method for improving grouping accuracy and understanding cabbage production patterns. The best method for classifying cabbage production in Indonesia is the ensemble clustering approach integrated with Naïve Bayes, resulting in three distinct clusters: high, medium, and low production.
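The hierarchical clustering step with the three linkage rules named above can be sketched naively in a few lines. This is illustrative only (an O(n³) implementation on toy 2-D points); the paper's full pipeline also involves SOM, ensemble clustering, and the Naïve Bayes classifier.

```python
import math

def agglomerative(points, k, linkage="complete"):
    """Naive agglomerative clustering of coordinate tuples into k clusters."""
    clusters = [[p] for p in points]

    def cluster_dist(c1, c2):
        ds = [math.dist(a, b) for a in c1 for b in c2]
        if linkage == "single":
            return min(ds)       # nearest pair of points
        if linkage == "complete":
            return max(ds)       # farthest pair of points
        return sum(ds) / len(ds)  # average linkage

    while len(clusters) > k:
        # merge the closest pair of clusters under the chosen linkage
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: cluster_dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i].extend(clusters.pop(j))
    return clusters
```

On well-separated data all three linkage rules agree; the comparison in the study matters precisely where the groups overlap and the rules disagree.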
Probabilities. Mathematical statistics
Improved sensitivity bounds for mediation under unmeasured mediator–outcome confounding
Arvid Sjölander, Ingeborg Waernbaum
It is often of interest to decompose a total effect into an indirect effect, relayed through a particular mediator, and a direct effect. However, these effect components are not identified if there is unmeasured confounding of the mediator and the outcome. We derive nonparametric bounds on the natural direct and indirect effects, and Cornfield inequalities that the unmeasured confounders must satisfy in order to explain away an "observed" effect. We demonstrate, analytically and by simulation, that these bounds and Cornfield inequalities are sharper than those previously proposed in the literature. We illustrate the methods with an application to cholestyramine treatment for coronary heart disease.
Mathematics, Probabilities. Mathematical statistics
Corrigendum: Algebraic and toroidal representation of the genetic code
Rodrigo Rodríguez-Gutiérrez, Francisco Hernandez-Cabrera, Francisco Javier Almaguer-Martínez, et al.
Applied mathematics. Quantitative methods, Probabilities. Mathematical statistics
Superior Eccentric Domination Polynomial
R Tejaskumar, A Mohamed Ismayil
In this paper we introduce the superior eccentric domination polynomial $SED(G, \varphi) = \sum_{l=\gamma_{sed}(G)}^{|V(G)|} |sed(G, l)|\,\varphi^{l}$, where $|sed(G, l)|$ is the number of distinct superior eccentric dominating sets of cardinality $l$ and $\gamma_{sed}(G)$ is the superior eccentric domination number. We determine $SED(G, \varphi)$ for several standard graphs and present the results.
Mathematics, Probabilities. Mathematical statistics
Flipped classroom: experience of a pedagogical model adopted during the health crisis to support work-study teaching
Hommane Boudine, Meriem Bentaleb, Mourad Radi, et al.
In early 2020, new pedagogical practices and approaches were adopted to ensure the continuity of the education system during the health crisis caused by the coronavirus pandemic (COVID-19) and to make students more active in the new learning process.
This study focuses on using information and communication technologies (ICT) in education to support the pedagogical alternation between distance and face-to-face education and to ensure equity and equal opportunity. The objective is to assess the implementation of flipped pedagogy and the obstacles that hinder it, focusing on this new process that lets learners access course material through a digital tool, the Moodle educational platform, which gives each student the opportunity to learn and progress at their own pace without losing the motivation to learn. The research addresses students in the final year of the college cycle at the ALYASSAMIN school in Sidi Slimane. The results confirm that the flipped classroom model ensured stable progress of the education system during the health crisis. The study assesses the current state of practice and suggests generalizing flipped classrooms and integrating them into teaching practice. In conclusion, the results presented should inform adapted teaching that supports work-study education and follows the current digital evolution.
Science, Probabilities. Mathematical statistics
On ωθ˜-µ-Open Sets in Generalized Topological Spaces
Fatimah Al Mahri, Abdo Qahis
In this paper, analogous to [1], we introduce a new class of sets called ωθ˜-µ-open sets in generalized topological spaces, which lies strictly between the class of θ˜µ-open sets and the class of ω-µ-open sets. We prove that the collection of ωθ˜-µ-open sets forms a generalized topology. Finally, several characterizations and properties of this class are given.
Probabilities. Mathematical statistics, Analysis
Probability Theory: Author index
E. Jaynes, G. Bretthorst
518 citations
Mathematics, Computer Science
Central Limit Theorem in View of Subspace Convex-Cyclic Operators
H.M. Hasan, D.F. Ahmed, M.F. Hama, et al.
In this work we define an operator called the subspace convex-cyclic operator. A property of this newly defined operator relates eigenvectors associated with eigenvalues of modulus one to the kernels of the operator. We also illustrate the behavior of the subspace convex-cyclic operator in linear dynamics and connect it with functional analysis. The work is carried out on infinite-dimensional spaces, where linear operators may have dense orbits. The measure-preserving property links the probability space with measurable dynamics and extends the subject to ergodic theory. We also apply Birkhoff's Ergodic Theorem to give a modified version of the subspace convex-cyclic operator. To work on a separable infinite-dimensional Hilbert space, it is important to have a Gaussian invariant measure, which we construct from the eigenvectors associated with eigenvalues of modulus one. One of the main results of this paper is a study of the Central Limit Theorem: we show that, given a Gaussian measure, the Central Limit Theorem holds under certain conditions imposed on the defined operator. Overall, the work is theoretically new and combines three basic concepts (dynamical systems, operator theory, and ergodic theory) under measure theory and statistics.
Analysis, Analytic mechanics
Fuzzy Time Series for Projecting School Enrolment in Malaysia
Nor Hayati Shafii, Rohana Alias, Siti Rohani Shamsuddin, et al.
There are a variety of approaches to the problem of predicting educational enrolment. However, none of them can be used when the historical data are linguistic values. Fuzzy time series is an efficient and effective tool for dealing with such problems. In this paper, the enrolment of pre-primary, primary, secondary, and tertiary schools in Malaysia is forecast using fuzzy time series approaches. A fuzzy time series model is developed using a historical dataset collected from the United Nations Educational, Scientific, and Cultural Organization (UNESCO) for the years 1981 to 2018. A complete procedure is proposed which includes fuzzifying the historical dataset, developing a fuzzy time series model, and calculating and interpreting the outputs. The accuracy of the model is also examined to evaluate how good the developed forecasting model is. It is tested using the mean squared error (MSE), mean absolute percent error (MAPE), and mean absolute deviation (MAD); the lower the error measure, the higher the accuracy of the model. The results show that the fuzzy time series model developed for primary school enrolments is the most accurate, with the lowest error measures: MSE 0.38, MAPE 0.43, and MAD 0.43.
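One common fuzzy time series scheme (a Chen-style first-order model) can be sketched as follows. The abstract does not specify which variant the paper uses, so the interval count, padding rule, and helper names here are assumptions.

```python
def chen_fts(series, n_intervals=7):
    """Build a first-order fuzzy time series forecaster from a numeric series."""
    lo, hi = min(series), max(series)
    pad = 0.1 * (hi - lo)          # pad the universe of discourse slightly
    lo, hi = lo - pad, hi + pad
    w = (hi - lo) / n_intervals
    mids = [lo + (i + 0.5) * w for i in range(n_intervals)]

    def fuzzify(v):
        # index of the interval (fuzzy set) containing v
        return min(int((v - lo) / w), n_intervals - 1)

    # fuzzy logical relationship groups: A_i -> {A_j observed to follow A_i}
    labels = [fuzzify(v) for v in series]
    flrg = {}
    for a, b in zip(labels, labels[1:]):
        flrg.setdefault(a, set()).add(b)

    def forecast(prev_value):
        a = fuzzify(prev_value)
        rhs = flrg.get(a, {a})     # unseen state: forecast its own midpoint
        return sum(mids[j] for j in rhs) / len(rhs)

    return forecast
```

The forecast for the next period is the average of the midpoints of all fuzzy sets that have historically followed the current one, which is how linguistic (interval-valued) data turn into a numeric prediction.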
Probabilities. Mathematical statistics, Technology
Bootstrap Methods in Econometrics
Joel L. Horowitz
The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one's data or a model estimated from the data. Under conditions that hold in a wide variety of econometric applications, the bootstrap provides approximations to distributions of statistics, coverage probabilities of confidence intervals, and rejection probabilities of hypothesis tests that are more accurate than the approximations of first-order asymptotic distribution theory. The reductions in the differences between true and nominal coverage or rejection probabilities can be very large. In addition, the bootstrap provides a way to carry out inference in certain settings where obtaining analytic distributional approximations is difficult or impossible. This article explains the usefulness and limitations of the bootstrap in contexts of interest in econometrics. The presentation is informal and expository. It provides an intuitive understanding of how the bootstrap works. Mathematical details are available in the references that are cited.
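The resampling idea can be illustrated with a percentile bootstrap confidence interval for the mean. This is a generic sketch with illustrative names, not one of the refined econometric bootstrap procedures the article surveys.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic, and read off the empirical alpha/2 and 1-alpha/2 quantiles."""
    rng = random.Random(seed)
    reps = sorted(stat([rng.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The same resampling loop works for any statistic passed as `stat`, which is what makes the bootstrap usable where analytic distributional approximations are hard to obtain.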
67 citations
Mathematics, Economics
The Inverse Xgamma Distribution: Statistical Properties and Different Methods of Estimation
A. Yadav, Sudhansu S. Maiti, M. Saha
The paper proposes a new probability distribution, named the inverse xgamma (IXG) distribution. Different mathematical and statistical properties, viz., reliability characteristics, inverse moments, quantile function, mean inverse residual life, stress-strength reliability, stochastic ordering and order statistics of the proposed distribution have been derived and discussed. Estimation of the parameter of the IXG distribution has been approached by different methods, namely, maximum likelihood estimation, least squares estimation, weighted least squares estimation, Cramér–von Mises estimation and maximum product of spacing estimation (MPSE). A simulation study has been carried out to compare the performance of these estimators in terms of their mean squared errors. Asymptotic confidence intervals of the parameter, in terms of average widths and coverage probabilities, are also obtained using the MPSE of the parameter. Finally, a data set is used to demonstrate the applicability of the IXG distribution in real-life situations.
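As a rough illustration of the maximum likelihood method for a one-parameter model, the following sketch fits θ by golden-section search. The IXG density used here is the form obtained by transforming the xgamma density (a mixture of Exp(θ) and Gamma(3, θ)), which is an assumption and should be checked against the paper before serious use.

```python
import math
import random

def ixg_loglik(theta, xs):
    # assumed IXG log-density: f(x) = th^2/(1+th) * (1 + th/(2x^2)) * e^(-th/x) / x^2
    return sum(2 * math.log(theta) - math.log(1 + theta)
               + math.log(1 + theta / (2 * x * x))
               - theta / x - 2 * math.log(x) for x in xs)

def ixg_mle(xs, lo=1e-3, hi=50.0, tol=1e-8):
    # golden-section search for the maximizer of the log-likelihood in theta
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if ixg_loglik(c, xs) > ixg_loglik(d, xs):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2

def ixg_sample(theta, n, seed=0):
    # X = 1/Y with Y ~ xgamma(theta): draw Y from the mixture of Exp(theta)
    # (weight theta/(1+theta)) and Gamma(3, theta) (weight 1/(1+theta))
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        if rng.random() < theta / (1 + theta):
            y = rng.expovariate(theta)
        else:
            y = sum(rng.expovariate(theta) for _ in range(3))
        out.append(1 / y)
    return out
```

With a few thousand simulated observations, the estimate lands close to the true θ, mirroring the kind of simulation comparison the paper performs across estimation methods.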
Subspace Estimation from Unbalanced and Incomplete Data Matrices: $\ell_{2,\infty}$ Statistical Guarantees
Changxiao Cai, Gen Li, Yuejie Chi
et al.
This paper is concerned with estimating the column space of an unknown low-rank matrix $\boldsymbol{A}^{\star}\in\mathbb{R}^{d_{1}\times d_{2}}$, given noisy and partial observations of its entries. There is no shortage of scenarios where the observations -- while being too noisy to support faithful recovery of the entire matrix -- still convey sufficient information to enable reliable estimation of the column space of interest. This is particularly evident and crucial for the highly unbalanced case where the column dimension $d_{2}$ far exceeds the row dimension $d_{1}$, which is the focal point of the current paper. We investigate an efficient spectral method, which operates upon the sample Gram matrix with diagonal deletion. While this algorithmic idea has been studied before, we establish new statistical guarantees for this method in terms of both $\ell_{2}$ and $\ell_{2,\infty}$ estimation accuracy, which improve upon prior results if $d_{2}$ is substantially larger than $d_{1}$. To illustrate the effectiveness of our findings, we derive matching minimax lower bounds with respect to the noise levels, and develop consequences of our general theory for three applications of practical importance: (1) tensor completion from noisy data, (2) covariance estimation / principal component analysis with missing data, and (3) community recovery in bipartite graphs. Our theory leads to improved performance guarantees for all three cases.
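The algorithmic idea can be sketched in pure Python: form the sample Gram matrix $YY^{\top}$, delete its diagonal (which is inflated by the noise variance), and extract the leading direction. Power iteration stands in here for a full eigendecomposition, and the helper name is hypothetical.

```python
import math
import random

def leading_direction(Y, iters=300, seed=1):
    """Spectral estimate of the top column-space direction of a d1 x d2 matrix Y
    (given as a list of rows), via the diagonal-deleted sample Gram matrix."""
    d1, d2 = len(Y), len(Y[0])
    # Gram matrix Y Y^T with the diagonal zeroed out
    G = [[0.0 if i == j else sum(Y[i][t] * Y[j][t] for t in range(d2))
          for j in range(d1)] for i in range(d1)]
    rng = random.Random(seed)
    v = [rng.gauss(0, 1) for _ in range(d1)]
    for _ in range(iters):  # power iteration for the leading eigenvector
        w = [sum(G[i][j] * v[j] for j in range(d1)) for i in range(d1)]
        n = math.sqrt(sum(x * x for x in w))
        v = [x / n for x in w]
    return v
```

For a rank-one matrix $Y = u a^{\top}$ with $d_2 \gg d_1$, the recovered unit vector aligns closely with $u$ up to sign, even though the diagonal deletion slightly perturbs the spectrum.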
Applicability of the Analytical Solution to N-Person Social Dilemma Games
Ugo Merlone, Daren R. Sandbank, Ferenc Szidarovszky
The purpose of this study is to present an analysis of the applicability of an analytical solution to the N-person social dilemma game. Such a solution was earlier developed for Pavlovian agents in a cellular automaton environment with linear payoff functions and was verified using agent-based simulation. However, no discussion has been offered of the applicability of this result to all Prisoner's Dilemma game scenarios or to other N-person social dilemma games such as Chicken or Stag Hunt. In this paper it is shown that the analytical solution works in all social games where the linear payoff functions are such that each agent's cooperating probability fluctuates around the analytical solution without cooperating or defecting with certainty. The regions of the social games where this holds are explored by varying the payoff function parameters. It is found, by both simulation and a special method, that the analytical solution applies best in Chicken when the payoff parameter S is slightly negative, and that it slowly degrades as S becomes more negative. The analytical solution is only a good estimate for Prisoner's Dilemma games and again becomes worse as S becomes more negative. A sensitivity analysis is performed to determine the impact of different initial cooperating probabilities, learning factors, and neighborhood sizes.
Applied mathematics. Quantitative methods, Probabilities. Mathematical statistics
Statistical Aspects of Wasserstein Distances
Victor M. Panaretos, Yoav Zemel
Wasserstein distances are metrics on probability distributions inspired by the problem of optimal mass transportation. Roughly speaking, they measure the minimal effort required to reconfigure the probability mass of one distribution in order to recover the other distribution. They are ubiquitous in mathematics, with a long history that has seen them catalyse core developments in analysis, optimization, and probability. Beyond their intrinsic mathematical richness, they possess attractive features that make them a versatile tool for the statistician: they can be used to derive weak convergence and convergence of moments, and can be easily bounded; they are well-adapted to quantify a natural notion of perturbation of a probability distribution; and they seamlessly incorporate the geometry of the domain of the distributions in question, thus being useful for contrasting complex objects. Consequently, they frequently appear in the development of statistical theory and inferential methodology, and have recently become an object of inference in themselves. In this review, we provide a snapshot of the main concepts involved in Wasserstein distances and optimal transportation, and a succinct overview of some of their many statistical aspects.
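On the real line, the Wasserstein-1 distance between two equal-size empirical samples reduces to the monotone coupling of sorted values, which gives a one-line estimator (a minimal sketch; the general multivariate problem requires solving an optimal transport program).

```python
def wasserstein1(xs, ys):
    """W1 between two empirical measures with equal sample sizes: on the real
    line the optimal transport plan is the monotone (sorted) coupling."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

A pure shift of the sample by a constant c gives distance exactly c, illustrating how the metric quantifies a natural notion of perturbation of a distribution.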
The properties of central types with respect to enrichment by Jonsson set
A.R. Yeshkeyev
The main results of the article concern a new class of theories, namely existentially prime strongly convex Jonsson theories. This class is quite broad from the algebraic point of view; for example, it includes the class of all Abelian groups and the class of all groups. The article examines the following subjects. The signature of the language under consideration is extended by a new predicate symbol reflecting the presence of a Jonsson set. The concept of a Jonsson set in a Jonsson theory generalizes the concept of the dimension of a linear space. T.G. Mustafin, in due time, introduced the notions of syntactic and semantic similarity and proved their basic properties. In this paper, we obtain analogous results for the theories under consideration in the extended language. In this direction, the main results of the work are the following: the coincidence of P-stability for a prototype and the center of its central type; and the equivalence of the syntactic similarity of existentially prime strongly convex Jonsson (EPSCJ) complete theories and the syntactic similarity of their centers. Many useful facts follow from this, in particular semantic similarity, together with the list of semantic properties preserved under semantic similarity, for example first-order invariant properties such as the Morley rank of the central type.
Analysis, Analytic mechanics
Some Implicit Methods for Solving Harmonic Variational Inequalities
Muhammad Aslam Noor, Khalida Inayat Noor
In this paper, we use the auxiliary principle technique to suggest an implicit method for solving harmonic variational inequalities. It is shown that the convergence of the proposed method needs only pseudo-monotonicity of the operator, which is a weaker condition than monotonicity.
Probabilities. Mathematical statistics, Analysis
Book Reviews
E. Stadlober, Wilfried Grossmann, Franz Konecny
N. BALAKRISHNAN, V.B. MELAS, S. ERMAKOV (Editors) Advances in Stochastic Simulation Methods. Statistics for Industry and Technology. Boston: Birkhäuser 2000, XXVI+ 386 S., ISBN 0-8176-4107-6.
Yadolah DODGE, Jana JUREČKOVÁ: Adaptive Regression. New York: Springer Verlag, 2000, xii+177 S., ISBN 0-387-98965-X.
R. REBOLLEDO (ed.): Stochastic Analysis and Mathematical Physics. ANESTOC ’98, Proceedings of the Third International Workshop. Boston: Birkhäuser, 2000. 166 S., ISBN 0-8176-4185-8.
L. DECREUSEFOND, J. GJERDE, B. ØKSENDAL, A.S. ÜSTÜNEL (eds.): Stochastic Analysis and Related Topics VI. Proceedings of the Sixth Oslo-Silivri Workshop, Geilo 1996. Boston: Birkhäuser, 1998. 408 S., ISBN 0-8176-4018-5.
Probabilities. Mathematical statistics, Statistics
A Multiepoch Regression Model used in Geodesy
Lubomír Kubáček, Ludmila Kubáčková
An investigation of the deformations of large buildings (bridges, dams, etc.) requires replicated measurements in special types of geodetic networks. These are characterized by two groups of points: one group is formed by points with stable positions, and the other by points located on the building that characterize its deformations. A statistical analysis of the measurement results is done after each epoch of measurement and also after several epochs. It is of practical importance to develop an estimation algorithm which enables us to use the partial results obtained after each epoch for the results after several epochs.
Probabilities. Mathematical statistics, Statistics