This study aims to compare the performance of three optimization methods—Adam, Nadam, and RMSProp—in forecasting monthly economic indicators of Indonesia, namely the Consumer Price Index (CPI), Inflation, and Gross Domestic Product (GDP), using a hybrid Vector Autoregressive–Long Short-Term Memory (VAR–LSTM) model. The analysis begins with Vector Autoregression (VAR), where VAR(4) is selected as the best model based on the lowest Akaike Information Criterion (AIC) value of 1.075. Significant parameters from the VAR model are then used as input variables for the LSTM to enhance forecasting accuracy. The experimental results show that all three optimization methods generate similar prediction patterns, with forecasted values closely tracking the actual data. Nevertheless, the best optimizer differs across variables: Nadam performs best for CPI with a Root Mean Square Error (RMSE) of 0.4996, Adam yields the best performance for Inflation with an RMSE of 0.676, and RMSProp performs best for GDP with an RMSE of 1.288. Despite these variations, the overall forecasting performance of the three methods is comparable. These findings indicate that the VAR–LSTM approach can effectively capture the dynamic patterns of multiple economic variables and that the choice of optimization method should be aligned with the specific characteristics of the data, considering both accuracy and computational efficiency.
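The lag-selection step described above, choosing VAR(4) by the minimum AIC, can be sketched with a plain least-squares VAR fit. The synthetic bivariate data, lag range, and AIC form below are illustrative assumptions only, not the study's actual Indonesian series or its exact information-criterion convention:

```python
import numpy as np

def var_aic(data, p):
    """Fit a VAR(p) by least squares and return a Gaussian AIC:
    ln|Sigma| + 2*k/T_eff, where k is the total number of
    estimated coefficients and Sigma the residual covariance."""
    T, m = data.shape
    Y = data[p:]                                  # (T-p, m) targets
    # Lagged regressors plus an intercept column.
    X = np.hstack([data[p - i:T - i] for i in range(1, p + 1)])
    X = np.hstack([np.ones((T - p, 1)), X])
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)     # coefficient matrix
    resid = Y - X @ B
    sigma = resid.T @ resid / (T - p)             # residual covariance
    k = B.size                                    # total parameters
    _, logdet = np.linalg.slogdet(sigma)
    return logdet + 2 * k / (T - p)

rng = np.random.default_rng(0)
# Synthetic bivariate VAR(1)-style data for illustration only.
T, m = 300, 2
data = np.zeros((T, m))
A = np.array([[0.5, 0.1], [0.2, 0.4]])
for t in range(1, T):
    data[t] = A @ data[t - 1] + rng.normal(size=m)

aics = {p: var_aic(data, p) for p in range(1, 7)}
best_p = min(aics, key=aics.get)
print(best_p, round(aics[best_p], 3))
```

The selected lag order would then fix which significant lagged terms feed the LSTM stage, as in the abstract's pipeline.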
Merve N. Kursav, Scott D. Pauls, Petra Bonfert-Taylor
et al.
DIFUSE is a National Science Foundation-funded project at Dartmouth College that develops flexible and reusable data science modules. To disseminate our work, we organized a workshop in which faculty participants explored the modules and the design process. We identify the factors that led participants to join the workshop, their goals for incorporating data science into their courses, and the workshop's impact on their practice. We report and interpret quantitative and qualitative outcomes from participant surveys and interviews, finding that the workshop was very successful in increasing participants' resources and experience levels and in promoting changes of practice. Further, participants who engaged in continued collaboration, adapting or creating modules for their own courses, reaped deeper changes in practice.

Probabilities. Mathematical statistics, Special aspects of education
Probability distributions are mathematical functions that describe the likelihood of different outcomes in a random process; the type of data determines the appropriate probability distribution and the estimates of its scale and shape parameters. In this paper, an experimental study is presented that compares several estimation methods for the parameters of the Frechet distribution, one of the most important probability distributions in the field of failure-time analysis. Three estimation methods were adopted: Maximum Likelihood, Moments, and Bayesian estimation. The comparison was carried out through simulation, with sample sizes n = 15, 25, 50, 75, 100 and four assumed values for each of the shape parameter (λ = 1.1, 1.5, 2, 2.5) and the scale parameter (θ = 1.4, 1.8, 2.3, 3). The appropriate estimation method was determined using the mean square error criterion. The experimental results showed the superiority of the Bayes method, followed by the Maximum Likelihood method.
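One leg of such a simulation, drawing Frechet samples by inverse transform and recovering the parameters by maximum likelihood, can be sketched as follows. This is an illustrative fragment, not the paper's full design: it uses one (λ, θ) pair from the study's grid, a larger sample size, and SciPy's `invweibull`, which is SciPy's name for the Frechet distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
lam, theta = 2.0, 1.4          # shape λ and scale θ from the study's grid
u = rng.uniform(size=5000)
# Inverse-transform sampling from the Frechet CDF F(x) = exp(-(x/θ)^(-λ)).
x = theta * (-np.log(u)) ** (-1.0 / lam)

# scipy.stats.invweibull is the Frechet distribution; fix loc at 0.
lam_hat, _, theta_hat = stats.invweibull.fit(x, floc=0)
print(round(lam_hat, 2), round(theta_hat, 2))
```

Repeating this over many replications and averaging the squared errors (λ̂ - λ)² and (θ̂ - θ)² gives the MSE criterion used in the paper's comparison.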
A recent article in this journal presented an approximation procedure for obtaining probabilities from logistic regression when using IBM SPSS Statistics. The present note provides a more direct approach with the same software.
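Independently of the software, the exact probability can always be recovered from fitted logistic-regression coefficients via the inverse-logit function. The coefficients below are hypothetical, chosen only to illustrate the formula:

```python
import math

def logistic_probability(intercept, coefs, x):
    """Predicted probability from fitted logistic-regression coefficients:
    p = 1 / (1 + exp(-(b0 + b1*x1 + ... + bk*xk)))."""
    eta = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical coefficients for illustration: b0 = -1.5, b1 = 0.8, x = 2.
print(round(logistic_probability(-1.5, [0.8], [2.0]), 4))  # eta = 0.1 -> 0.525
```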
Teh Faradilla Abdul Rahman, Raudzatul Fathiyah Mohd Said, Alya Geogiana Buja
et al.
The coronavirus disease 2019 (COVID-19) that has plagued the world since 2019 has raised several issues and challenges in the mental health services field. The World Health Organisation (WHO) recommended implementing remote mental health services such as telehealth to reach out to patients. One such telehealth service is text-messaging therapy. Despite the challenges of treating depression via text messaging, the fact that text messages for depression therapy are built with differing content makes this a captivating subject for study. Nonetheless, studies of the topics included in mobile depression therapy are scarce, particularly from the short-text perspective. Fortunately, a machine learning technique known as topic modelling (TM) can be used to extract topics from a set of documents without manually reading individual documents; it is very useful for discovering the topics contained in short texts. This study aims to determine the topics in text messages sent by mental health practitioners for depression therapy. Three topic modelling techniques, i.e., the Biterm Topic Model (BTM), the Word Network Topic Model (WNTM), and Latent Feature Dirichlet Multinomial Mixture (LFDMM), were evaluated on 258 text messages of depression therapy. The performance of the TM techniques was evaluated using classification accuracy, clustering, and coherence scores. The findings indicate that the set of text messages comprises five topics. BTM performed better than the other techniques in classification accuracy and clustering in some cases based on the performance measures, while no significant difference was found in the coherence scores among the three topic modelling techniques.
The opioid epidemic is an ongoing public health crisis. In North Carolina, overdose deaths due to illicit opioid overdose have sharply increased over the last 5–7 years. Buprenorphine is a U.S. Food and Drug Administration approved medication for treatment of opioid use disorder and is obtained by prescription. Prior to January 2023, providers had to obtain a waiver and were limited in the number of patients that they could prescribe buprenorphine. Thus, identifying counties where increasing buprenorphine would yield the greatest overall reduction in overdose death can help policymakers target certain geographical regions to inform an effective public health response. We propose a Bayesian spatio-temporal model that relates yearly, county-level changes in illicit opioid overdose death rates to changes in buprenorphine prescriptions. We use our model to forecast the statewide count and rate of illicit opioid overdose deaths in future years, and we use nonlinear constrained optimization to identify the optimal buprenorphine increase in each county under a set of constraints on available resources. Our model estimates a negative relationship between death rate and increasing buprenorphine after accounting for other covariates, and our identified optimal single-year allocation strategy is estimated to reduce opioid overdose deaths by over 5%. Supplementary materials for this article are available online.
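The flavor of the allocation step can be illustrated with a deliberately simplified stand-in. The paper uses nonlinear constrained optimization over a Bayesian spatio-temporal model; the sketch below instead assumes a *linear* per-county effect, for which the budget-constrained optimum is a simple greedy fill. All numbers and names here are hypothetical:

```python
def allocate_buprenorphine(effects, caps, budget):
    """Greedy allocation for a simplified linear version of the problem:
    maximize sum(effects[i] * x[i]) subject to sum(x) <= budget and
    0 <= x[i] <= caps[i]. (The paper's actual objective is nonlinear;
    this is only an illustrative stand-in.)"""
    order = sorted(range(len(effects)), key=lambda i: effects[i], reverse=True)
    x = [0.0] * len(effects)
    remaining = budget
    for i in order:
        give = min(caps[i], remaining)  # fund the highest-effect county first
        x[i] = give
        remaining -= give
        if remaining <= 0:
            break
    return x

# Hypothetical per-county effect sizes, per-county caps, and total budget.
print(allocate_buprenorphine([0.3, 0.9, 0.5], [100, 100, 100], 150))
```

With a genuinely nonlinear objective, as in the paper, this greedy rule is no longer optimal and a constrained solver is needed instead.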
Political institutions and public administration (General), Probabilities. Mathematical statistics
This study aims at quantifying the impact of the budget on the performance of Formula 1 teams. Until recently, the budgets of Formula 1 teams varied widely, giving a competitive advantage to those that consistently had greater funds than others. Understanding how dominant the effect of the budget is on a team's performance will provide significant findings, since a cost cap that aims to balance the financial field has recently been introduced. Prediction models identifying the teams that performed better on a relatively lower budget will provide insights into which of them may thrive in the future.
Sri Endang Saleh, Debyyansa Pakaya, Irsan K. Hasan
et al.
The Composite Stock Price Index (CSPI) is a valuable number for assessing the performance of the stocks listed on the stock exchange; by looking at the CSPI, investors can determine their investment strategy. However, the rise and fall of the CSPI depend on a country's macroeconomic conditions: if the economy weakens, company performance deteriorates and investor confidence decreases. Analysing the relationship between the CSPI and macroeconomic factors, namely inflation, interest rates, and the rupiah exchange rate, shows how much these factors influence increases or decreases in the index. In this study, dependency analysis was carried out with the copula approach, using Kendall's tau for parameter estimation and Maximum Likelihood Estimation (MLE) to choose the best copula model to explain the relationship between the CSPI and these macroeconomic factors. The results show that the best copula explaining the dependency structure between the CSPI and inflation and between the CSPI and interest rates is the Gumbel copula, with parameters θ̂ = 1.264 and θ̂ = 1.174 respectively, while the best copula explaining the dependency between the CSPI and the exchange rate is the Student-t copula with parameter θ̂ = −0.6037.
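For the Gumbel copula, the Kendall's-tau parameter estimate mentioned above follows from the standard relation τ = 1 − 1/θ, so θ̂ = 1/(1 − τ̂). The sketch below applies this to a simulated positively dependent pair; the data are illustrative, not the study's CSPI series:

```python
import numpy as np
from scipy import stats

def gumbel_theta_from_tau(tau):
    """For the Gumbel copula, Kendall's tau satisfies tau = 1 - 1/theta,
    so the inversion estimator is theta = 1 / (1 - tau), valid for tau in [0, 1)."""
    return 1.0 / (1.0 - tau)

rng = np.random.default_rng(1)
# Illustrative positively dependent pair (not the study's data).
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)
tau, _ = stats.kendalltau(x, y)
print(round(gumbel_theta_from_tau(tau), 3))
```

For example, the reported θ̂ = 1.264 for CSPI and inflation corresponds to an empirical Kendall's tau of about 0.209.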
Concise and convenient bounds are obtained for the probability mass and cumulative distribution functions associated with the first success run of length k in a sequence of n Bernoulli trials. Results are compared to an approximation obtained by the Stein–Chen method as well as to bounds obtained from statistical reliability theory. These approximation formulas are used to obtain precise estimates of the expectation value associated with the occurrence of at least one success run of length k within N concurrent sequences of Bernoulli trials.
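The exact quantity that such bounds approximate, the probability of at least one success run of length k in n Bernoulli(p) trials, can be computed by a small Markov-chain recursion, which is useful for checking any bound numerically. A minimal sketch:

```python
def prob_success_run(n, k, p):
    """Probability of at least one run of k consecutive successes in n
    Bernoulli(p) trials, via a Markov chain over the current run length.
    States 0..k-1 track the run so far; reaching length k is absorbing."""
    state = [0.0] * k
    state[0] = 1.0
    absorbed = 0.0
    for _ in range(n):
        new = [0.0] * k
        for run, mass in enumerate(state):
            if mass == 0.0:
                continue
            new[0] += mass * (1 - p)       # a failure resets the run
            if run + 1 == k:
                absorbed += mass * p       # run completed
            else:
                new[run + 1] += mass * p   # run extends by one
        state = new
    return absorbed

print(prob_success_run(3, 1, 0.5))  # 1 - 0.5**3 = 0.875
```

For N concurrent sequences, the probability that at least one sequence contains such a run is 1 − (1 − prob_success_run(n, k, p))**N.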
Scheduling academic staff timetables is crucial and necessary to avoid redundancy and clash of class between teacher and student timetables. A good timetable allows students and teachers to manage their time and support a good and healthy lifestyle. However, with the scheduling, academic staff timetable may use many procedures to get efficient results. Therefore, this paper provides a gap of study for existing work on Optimization Timetable to support Work-Life Balance (WLB) regarding their market commercial and research purposes. The methodology of this study was conducted using a Systematic Literature Review (SLR). Result: two findings investigate 1) relevant optimization timetable scheduling used and 2) the method for timetable optimization to support WLB. The strengths and weaknesses of the features and utilities behind each study are also presented to provide a further understanding of the gaps and weaknesses of each body of research. We conclude that these studies are still insufficient and require further evaluation and improvement.
Jan Beirlant , Gaonyalelwe Maribe , Philippe Naveau
et al.
Bias reduction in tail estimation has mainly been performed in the case of Pareto-type models; see for instance Drees (1996), Peng (1998), Feuerverger and Hall (1999), Beirlant et al. (1999, 2002), Gomes and Martins (2002) and Caeiro et al. (2005, 2009). In that context, Beirlant et al. (2009) and Papastathopoulos and Tawn (2013) constructed distributional models that are based on second-order rates of convergence for distributions of peaks over thresholds (POT). Such an approach also allows one to connect the tail and the bulk of the distribution. Bias reduction for all max-domains of attraction, i.e. without restricting to the Pareto-type case, has received much less attention up to now. Here we extend the second-order refined POT approach begun in Beirlant et al. (2009), providing a bias reduction technique for the classical generalized Pareto (GP) approximation for POTs. We consider parametric and nonparametric modelling of the second-order component.
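The classical first-order GP approximation that the paper refines can be sketched as follows: take excesses over a high threshold and fit a generalized Pareto distribution to them. The heavy-tailed sample and threshold choice below are illustrative assumptions; the paper's bias-reduced second-order correction is not implemented here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Heavy-tailed sample for illustration (Pareto-type, tail index 1/3).
sample = rng.pareto(3.0, size=20000)

threshold = np.quantile(sample, 0.95)
excesses = sample[sample > threshold] - threshold   # peaks over threshold

# Classical (first-order) GP fit to the POT excesses.
shape, _, scale = stats.genpareto.fit(excesses, floc=0)
print(round(shape, 2), round(scale, 3))
```

The estimated GP shape parameter approximates the extreme value index; the second-order refinement in the paper targets the bias this plain fit incurs at moderate thresholds.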
Sevia Indah Purnama, Irmayatul Hikmah, Mas Aly Afandi
et al.
Fever is one of the symptoms of a person with COVID-19. Body temperature must be checked before entering crowded areas such as schools, offices, shops, and hospitals; it is a mandatory protocol that must be followed. One of the tools that can be used to check body temperature is a thermal camera. Thermal cameras have the disadvantage of a high temperature-reading error, because the thermal cameras used have low resolution. This study aims to reduce the temperature-reading error of a thermal camera using the linear regression method. The linear regression method is able to reduce the error rate of temperature readings by 5.27% at a 36 °C reading. Reductions in reading error of 5.27% at 37 °C and 6.44% at 38 °C were also observed. Based on these results, this study shows that linear regression can be applied to thermal cameras and decreases the error rate of their temperature readings.
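A linear-regression calibration of this kind amounts to fitting a line from raw camera readings to reference temperatures and applying it to new readings. The paired readings below are hypothetical, constructed only to show the mechanics, not the study's measurements:

```python
import numpy as np

# Hypothetical paired readings: low-resolution thermal camera vs. a
# reference thermometer (deg C). Values are illustrative only.
camera    = np.array([34.2, 34.8, 35.4, 36.0, 36.6, 37.2])
reference = np.array([36.0, 36.5, 37.0, 37.5, 38.0, 38.5])

# Fit reference = a * camera + b by least squares.
a, b = np.polyfit(camera, reference, 1)

def calibrate(reading):
    """Map a raw thermal-camera reading to a corrected temperature."""
    return a * reading + b

print(round(calibrate(36.0), 2))  # corrected value for a raw 36.0 reading
```

The percentage error reductions reported in the abstract would then be measured by comparing calibrated and uncalibrated readings against the reference at each temperature.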
Mathematical models have been widely used to understand complex phenomena. Generally, a model takes the form of a system of differential equations. However, when the model becomes complex, analytical solutions are not easily obtained and hence numerical approaches are used. Numerical schemes such as the Euler, Runge-Kutta, and finite difference schemes are commonly applied. There are also alternative numerical methods for solving systems of differential equations, such as the nonstandard finite difference scheme (NSFDS), the Adomian decomposition method (ADM), the variational iteration method (VIM), and the differential transformation method (DTM). In this paper, we apply the differential transformation method (DTM) to solve a system of differential equations. The DTM is a semi-analytical numerical technique that provides an iterative procedure for obtaining the power series of the solution in terms of the initial value parameters. We present a mathematical model of HIV with antiviral treatment and construct a numerical scheme based on the DTM for solving the model. The results are compared to those of the Runge-Kutta method. We find good agreement between the DTM and the Runge-Kutta method for smaller time steps, but the DTM fails for large time steps.
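The DTM idea, turning a differential equation into a recurrence for the Taylor coefficients of the solution, is easiest to see on a scalar linear equation rather than the HIV system itself. For y' = a·y with y(0) = y0, the transform gives (k+1)·Y[k+1] = a·Y[k], and truncating the resulting power series reproduces the behavior noted in the abstract: accurate for small t, failing for large t:

```python
import math

def dtm_linear(a, y0, t, terms=20):
    """Differential transformation method for y' = a*y, y(0) = y0.
    The transform yields the recurrence (k+1) * Y[k+1] = a * Y[k],
    i.e. Y[k] = y0 * a**k / k!, and y(t) is recovered as the
    truncated power series sum of Y[k] * t**k."""
    Y = [y0]
    for k in range(terms - 1):
        Y.append(a * Y[k] / (k + 1))
    return sum(c * t ** k for k, c in enumerate(Y))

# With enough terms this matches the exact solution y0 * exp(a*t);
# with few terms the truncation error becomes visible.
print(round(dtm_linear(1.0, 1.0, 0.5, terms=20), 6))  # ~ exp(0.5) = 1.648721
print(round(dtm_linear(1.0, 1.0, 0.5, terms=3), 6))   # 1 + 0.5 + 0.125 = 1.625
```

For a system such as the HIV model, the same recurrence construction is applied componentwise, with products of state variables handled by discrete convolution of their coefficient sequences.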
Recently, different distributions have been generalized using the T-R{Y} framework, but the possibility of using the Dagum distribution has not been assessed. The T-R{Y} framework combines three distributions, one serving as the baseline, so that the strengths of the component distributions combine to produce a greater effect in the newly generated distribution. The generated distributions have more parameters but offer high flexibility in handling bimodality in datasets, and their hazard function is a weighted hazard function of the baseline distribution. This paper therefore generalizes the Dagum distribution using the quantile function of the Lomax distribution. A member of the T-Dagum class of distributions called the exponentiated-exponential-Dagum{Lomax} (EEDL) distribution is proposed. The distribution will be useful in survival analysis and reliability studies. Different characterizations of the distribution are derived, such as the asymptotes, stochastic ordering, stress-strength analysis, moments, Shannon entropy, and quantile function. Simulated and real data are used, and the proposed distribution compares favourably with existing distributions in the literature.
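The Lomax quantile function that serves as the link in this construction has a simple closed form, assuming the common parameterization with CDF F(x) = 1 − (1 + x/λ)^(−α) (the paper's own parameterization may differ). A minimal sketch, also usable for inverse-transform sampling:

```python
import numpy as np

def lomax_quantile(u, alpha, lam):
    """Quantile function of the Lomax distribution with CDF
    F(x) = 1 - (1 + x/lam)**(-alpha) for x >= 0:
    Q(u) = lam * ((1 - u)**(-1/alpha) - 1)."""
    u = np.asarray(u, dtype=float)
    return lam * ((1.0 - u) ** (-1.0 / alpha) - 1.0)

rng = np.random.default_rng(3)
# Inverse-transform sampling: Q(U) with U ~ Uniform(0, 1) is Lomax.
sample = lomax_quantile(rng.uniform(size=5), alpha=2.0, lam=1.0)
print(lomax_quantile(0.75, alpha=2.0, lam=1.0))  # 0.25**(-0.5) - 1 = 1.0
```

In the T-R{Y} construction, this quantile function is composed with the Dagum CDF to transfer the transformer distribution onto the Dagum baseline.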
We propose a new robust test to detect changes in the autocovariance function of a time series. The test is based on empirical autocovariances of a robust transformation of the original time series. Because of the transformation, we do not require any finite moments of the original time series, making the test especially suitable for heavy tailed time series. We furthermore propose a lag weighting scheme, which puts emphasis on changes of the autocovariance at smaller lags. Our approach is compared to existing ones in some simulations.
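The core device, computing empirical autocovariances of a bounded transformation so that no moments of the original series are needed, can be illustrated with a sign transformation around the median. This particular transformation is an illustrative choice, not necessarily the one used in the paper:

```python
import numpy as np

def robust_autocov(x, max_lag):
    """Empirical autocovariances of a robustly transformed series:
    here s_t = sign(x_t - median(x)), which is bounded and therefore
    requires no finite moments of the original series."""
    s = np.sign(x - np.median(x))
    n = len(s)
    sbar = s.mean()
    return np.array([np.mean((s[:n - h] - sbar) * (s[h:] - sbar))
                     for h in range(max_lag + 1)])

rng = np.random.default_rng(5)
# Heavy-tailed series: the standard Cauchy has no finite moments.
x = rng.standard_cauchy(1000)
gamma = robust_autocov(x, 3)
print(np.round(gamma, 3))
```

A change-point test would then compare such autocovariances computed on different segments of the series; emphasizing small lags corresponds to the paper's lag weighting scheme.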
In this paper, Petrović's inequality is generalized for h-convex functions, when h is a supermultiplicative function. It is noted that the case for h-convex functions does not lead to the particular cases for P-functions, Godunova-Levin functions, s-Godunova-Levin functions and s-convex functions, due to the conditions imposed on h. To cover the case when h is submultiplicative, Petrović's inequality is generalized for h-concave functions.
In this paper, we consider classes of harmonic convex functions and give their special characterizations. Furthermore, we consider Hermite-Hadamard type inequalities related to these classes to give some non-numeric estimates of well-known definite integrals.
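For reference, a commonly cited Hermite-Hadamard type inequality for harmonically convex functions, stated here from the standard literature under the assumption that f is harmonically convex on [a, b] with 0 < a < b (the paper's own variants may differ), reads:

```latex
% Hermite-Hadamard inequality for a harmonically convex f on [a,b], 0 < a < b:
f\!\left(\frac{2ab}{a+b}\right)
  \;\le\; \frac{ab}{b-a}\int_{a}^{b} \frac{f(x)}{x^{2}}\,dx
  \;\le\; \frac{f(a)+f(b)}{2}
```

Bounding the middle integral between the two outer expressions is what yields the non-numeric estimates of definite integrals mentioned in the abstract.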
A. G. Libardoni, C. E. Forest
et al.
Historical time series of surface temperature and ocean heat content changes are commonly used metrics to diagnose climate change and estimate properties of the climate system. We show that recent trends, namely the slowing of surface temperature rise at the beginning of the 21st century and the acceleration of heat stored in the deep ocean, have a substantial impact on these estimates. Using the Massachusetts Institute of Technology Earth System Model (MESM), we vary three model parameters that influence the behavior of the climate system: effective climate sensitivity (ECS), the effective ocean diffusivity of heat anomalies by all mixing processes (Kv), and the net anthropogenic aerosol forcing scaling factor. Each model run is compared to observed changes in decadal mean surface temperature anomalies and the trend in global mean ocean heat content change to derive a joint probability distribution function for the model parameters. Marginal distributions for individual parameters are found by integrating over the other two parameters. To investigate how the inclusion of recent temperature changes affects our estimates, we systematically include additional data by choosing periods that end in 1990, 2000, and 2010. We find that estimates of ECS increase in response to rising global surface temperatures when data beyond 1990 are included, but due to the slowdown of surface temperature rise in the early 21st century, estimates when using data up to 2000 are greater than when data up to 2010 are used. We also show that estimates of Kv increase in response to the acceleration of heat stored in the ocean as data beyond 1990 are included. Further, we highlight how including spatial patterns of surface temperature change modifies the estimates. We show that including latitudinal structure in the climate change signal impacts properties with spatial dependence, namely the aerosol forcing pattern, more than properties defined for the global mean, climate sensitivity, and ocean diffusivity.
Every semester, a new batch of final-year students needs to find a topic and a supervisor to complete their final-year project requirement. The problem with the current approach is that it is based on first come, first served, so the pairing between students and supervisors is not optimal: some students may not get their preferred topic or supervisor. It is also time-consuming for both students and supervisors. The researcher is motivated to solve this long-overdue problem by applying the stable marriage model introduced by Gale and Shapley, hence the name Gale-Shapley algorithm. To determine the functionality of this approach, a system prototype was constructed and a random dataset was used. As a result, 60% of the students get their first-choice topic while the remaining students get their second or third choice. This is a remarkable outcome considering the time and effort saved compared to the current process. Therefore, the stable marriage model is applicable to solving student-topic pairing.