Psychological flexibility (PF) is a prominent construct in modern clinical psychological research, best known for its central role in Acceptance and Commitment Therapy (ACT) and its empirical associations with various health outcomes. However, the PF literature relies largely on different self-report measures that appear to be based on varying definitions and conceptualisations of PF. To investigate this further, the current study jointly examined the latent structure of four prominent PF measures: the CompACT, PPFI, Psy-flex and MPFI. The lower-order and higher-order measurement structure was evaluated through a series of exploratory factor analyses (EFA) and confirmatory factor analyses (CFA), conducted on item-level data for each measure collected from student and Prolific samples (n = 1210). The hierarchical structure was examined using CFA to test whether the four PF measures assess a construct that aligns with existing conceptualisations of PF. Two theoretical structures were tested: (1) lower-order PF domains load onto a single superordinate global PF construct, and (2) lower-order PF domains load onto one of two superordinate PF constructs (trait-level or state-level PF). Results did not support either theoretical model. Instead, both EFA and CFA revealed that the CompACT, PPFI, Psy-flex and MPFI are best explained by a measurement structure with nine lower-order domains. These findings suggest that, collectively, these measures may assess a broader construct than original conceptualisations of PF. This points to a lack of coherence across common PF measures, likely stemming from conceptual ambiguity, and echoes existing concerns regarding PF and the credibility of the current research base. Further research is required to clarify what construct existing measures are assessing. In the meantime, researchers and clinicians are advised to exercise caution in their selection and interpretation of PF measures.
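For readers unfamiliar with the workflow, a minimal sketch of the EFA step is shown below. The data file and item columns are hypothetical stand-ins for pooled item-level responses to the four measures; the authors' actual analysis pipeline may differ.

```python
# Minimal EFA sketch using the factor_analyzer package.
# "pf_items.csv" and its columns are hypothetical placeholders.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("pf_items.csv")  # one column per questionnaire item

# Oblique rotation, since PF domains are expected to correlate.
efa = FactorAnalyzer(n_factors=9, rotation="oblimin")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))          # item loadings on the nine factors
print(efa.get_factor_variance())  # variance explained per factor
```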
Faridah Abu Bakar, Noor Kamalia Abd Hamed, Mohd Khairul Ahmad
Titanium dioxide (TiO₂) nanocatalyst has received significant attention due to its superior photo-induced electron transfer properties, particularly in the metastable anatase phase, which underpins its application in advanced oxidation processes (AOPs). However, anatase TiO₂ crystals are dominated by the thermodynamically stable {101} facet, representing over 94% of the surface, whereas the highly reactive {001} facet diminishes rapidly under equilibrium growth, limiting photocatalytic efficiency. To address this limitation, this study evaluates the morphological and structural evolution of TiO₂ nanocatalysts synthesized via thermal decomposition of peroxotitanic acid in the presence of ammonium hexafluorophosphate (NH₄PF₆) and a mixed ammonium/tetrabutylammonium hexafluorophosphate system (NH₄PF₆/NBu₄PF₆). Field emission scanning electron microscopy (FE-SEM) revealed that fluorine incorporation effectively promoted anisotropic growth, producing rice grain-like nanocrystals with improved dispersion. X-ray diffraction (XRD) analysis demonstrated enhanced anatase phase stability in the co-doped NH₄PF₆/NBu₄PF₆–TiO₂ sample (85.81%) compared with NH₄PF₆–TiO₂ (59.68%) and undoped Peroxo–TiO₂ (57.12%), while Raman spectroscopy confirmed increased crystallinity and coherent lattice vibrations. Surface facet analysis indicated that {001} facet exposure was slightly higher in NH₄PF₆–TiO₂ (6.54%) than in the co-doped system (5.65%), reflecting the effect of dual-cation fluorination on crystal growth. Overall, the dual-cation strategy effectively suppresses anatase-to-rutile transformation, stabilizes the anatase phase, and regulates facet development, yielding TiO₂ nanocatalysts with improved structural integrity, controlled morphology, and tailored high-energy surfaces. These engineered materials present considerable potential for enhanced photocatalytic performance in sustainable energy conversion and environmental remediation applications.
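The abstract does not state how the phase fractions were computed, but a standard route from XRD intensities is the Spurr–Myers relation, given here as a reference point rather than the paper's confirmed method:

$$ W_A = \frac{1}{1 + 1.26\,(I_R/I_A)} $$

where $I_A$ and $I_R$ are the intensities of the anatase (101) and rutile (110) reflections. As a worked check, an anatase fraction of 85.81% corresponds to an intensity ratio $I_R/I_A \approx 0.13$.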
We describe some features of the A100 memory architecture. In particular, we give a technique to reverse-engineer some hardware layout information. Using this information, we show how to avoid TLB issues to obtain full-speed random HBM access to the entire memory, as long as we constrain any particular thread to a reduced access window of less than 64GB.
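As a purely illustrative sketch (not the paper's exact scheme), the access-window idea reduces to simple index arithmetic: each thread draws its random offsets inside its own sub-64 GB window, and the windows together tile the whole memory. All sizes below are made up.

```python
# Illustrative index arithmetic only: confine each thread's random
# accesses to its own window below the 64 GB limit, with the windows
# jointly covering all of HBM.
import random

TOTAL_BYTES = 80 * 2**30    # e.g. an 80 GB A100
WINDOW_BYTES = 40 * 2**30   # any window size below the 64 GB limit
N_WINDOWS = TOTAL_BYTES // WINDOW_BYTES

def random_address(thread_id: int) -> int:
    """Random byte address confined to this thread's window."""
    window = thread_id % N_WINDOWS
    return window * WINDOW_BYTES + random.randrange(WINDOW_BYTES)
```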
Metrological traceability of capacitance above 10 kHz is becoming increasingly important for the calibration of LCR meters. For this reason, NIM (National Institute of Metrology, China), NPLI (National Physical Laboratory, India) and NIMT (National Institute of Metrology, Thailand) have carried out research on capacitance metrology from 10 kHz to 10 MHz. This text is as it appears in Appendix B of the BIPM key comparison database (https://www.bipm.org/kcdb/); the main text of this paper is available there as the Final Report. The final report has been peer-reviewed and approved for publication by the CCEM, according to the provisions of the CIPM Mutual Recognition Arrangement (CIPM MRA).
This paper applies a lower- and upper-solution method to investigate the asymptotic behaviour of conservative reaction-diffusion systems associated with Markovian process algebra models. In particular, for a case study we prove that the solution converges uniformly to its constant equilibrium as time tends to infinity, and we illustrate this with experimental results.
Finding the similarity between two workload behaviors is helpful in (1) creating proxy workloads and (2) characterizing an unknown workload's behavior by matching it against known workloads. In this article, we propose a method to measure the similarity between two workloads using machine learning-based analysis of the performance telemetry data collected from their execution runs. We also demonstrate the accuracy of the technique by measuring the similarity between a variety of known benchmark workloads.
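A minimal sketch of one way to score such similarity is given below (cosine similarity over telemetry feature vectors); the counters and values are hypothetical, and the article's ML pipeline may well differ. In practice each counter would first be normalized across a corpus of known workloads, since otherwise large-magnitude features such as bandwidth dominate the score.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# One feature vector per workload, aggregated over its execution runs.
# Hypothetical counters: IPC, L3 miss rate, branch miss rate, GB/s.
workload_a = np.array([[1.8, 0.02, 0.004, 120.0]])
workload_b = np.array([[1.7, 0.03, 0.005, 115.0]])

score = cosine_similarity(workload_a, workload_b)[0, 0]
print(f"similarity: {score:.3f}")  # closer to 1.0 = more similar
```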
This self-contained discussion relates the long-run average holding cost per unit time to the long-run average response time per customer in a $G/G/1$ queue, with no assumption made on the order of service. The only restriction imposed is that the system be ergodic. The result is obtained using standard queueing theory. Its practical relevance is discussed in the context of simulation output analysis, as well as through an application to formulating a Markov Decision Process that minimises the long-run average response time per customer.
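A sketch of the standard reasoning behind such a relation: under ergodicity, Little's law $L = \lambda W$ holds regardless of the service order, so with a holding cost $h$ per customer per unit time,

$$ \text{average holding cost per unit time} \;=\; h\,L \;=\; h\,\lambda\,W, $$

i.e. the two long-run averages differ only by the factor $h\lambda$, where $\lambda$ is the arrival rate and $W$ the average response time per customer.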
The models studied in the steady state involve two queues which are served either by a single server whose speed depends on the number of jobs present, or by several parallel servers whose number may be controlled dynamically. Job service times have a two-phase Coxian distribution, and the second phase is given lower priority than the first. The trade-offs between holding costs and energy consumption costs are examined by means of suitable cost functions. Two different two-dimensional Markov processes are solved exactly. The solutions are used in several numerical experiments, and some counter-intuitive results are observed.
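For reference, a two-phase Coxian service time is the standard construction: service begins with an exponential phase of rate $\mu_1$; on completing it, the job finishes with probability $1-p$ or, with probability $p$, continues into a second exponential phase of rate $\mu_2$, giving

$$ E[S] \;=\; \frac{1}{\mu_1} + \frac{p}{\mu_2}. $$

Deferring second phases behind first phases is what creates the priority structure described above.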
This article is primarily meant to present an early case study on using MLIR, a new compiler intermediate representation infrastructure, for high-performance code generation. The aspects of MLIR covered include, in particular, memrefs, the affine dialect, and the polyhedral utilities and pass infrastructure surrounding them. The article also aims to show the role compiler infrastructure could play in generating code that is competitive with highly tuned, manually developed libraries, albeit in a more modular, reusable, and automatable way.
This article is a review of analytical performance modeling for computer systems. It discusses the motivation for this area of research, examines key issues, introduces some ideas, illustrates how it is applied, and points out a role that it can play in developing Computer Science.
Insufficient performance of optimization approaches for fitting mathematical models is still a major bottleneck in systems biology. In this manuscript, the reasons and methodological challenges are summarized, as well as their impact on benchmark studies. Important aspects for strengthening the evidence provided by benchmark analyses are discussed. Based on general guidelines for benchmarking in computational biology, a collection of tailored guidelines is presented for performing informative and unbiased benchmarking of optimization-based fitting approaches. Comprehensive benchmark studies based on these recommendations are urgently required to establish a robust and reliable methodology for the systems biology community.
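As a hypothetical sketch of the kind of benchmark these guidelines call for, the following compares two fitting approaches on one toy problem with an identical objective, shared random starts, and both runtime and final cost recorded; a real benchmark would span many models and repetitions.

```python
# Toy, hypothetical benchmark: two optimizers, same objective,
# same data, multi-start for the local method, cost and time logged.
import time
import numpy as np
from scipy.optimize import least_squares, differential_evolution

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
true = np.array([1.5, 0.3])
data = true[0] * np.exp(-true[1] * t) + 0.05 * rng.standard_normal(t.size)

def residuals(p):
    return p[0] * np.exp(-p[1] * t) - data

def cost(p):
    return 0.5 * np.sum(residuals(p) ** 2)

results = []
for start in rng.uniform(0.01, 5.0, size=(20, 2)):   # multi-start local
    t0 = time.perf_counter()
    fit = least_squares(residuals, start)
    results.append(("least_squares", cost(fit.x), time.perf_counter() - t0))

t0 = time.perf_counter()                              # one global run
fit = differential_evolution(cost, bounds=[(0.01, 5.0)] * 2, seed=0)
results.append(("diff_evolution", fit.fun, time.perf_counter() - t0))

for name, c, dt in results:
    print(f"{name:15s} cost={c:.4f} time={dt:.3f}s")
```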
This paper proposes the $M^X/G/1$ queueing model to represent arrivals of segmented packets when message segmentation occurs. This queueing model enables us to derive a closed-form expression for the mean response time, given the payload size, the message size distribution and the message arrival rate. Numerical results show that the mean response time becomes more convex in the payload size as the message arrival rate increases, in a scenario where Web objects are delivered over a physical link.
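The abstract does not reproduce the expression, but the standard $M^X/G/1$ decomposition gives the flavor of such a closed form (the paper's exact formula may differ). With message (batch) arrival rate $\lambda$, segments per message $B$, segment service time $S$ and $\rho = \lambda E[B]E[S] < 1$:

$$ E[W] \;=\; \frac{\lambda\bigl(E[B]\,E[S^2] + E[B(B-1)]\,E[S]^2\bigr)}{2(1-\rho)} \;+\; \frac{E[B(B-1)]}{2\,E[B]}\,E[S], \qquad E[T] \;=\; E[W] + E[S]. $$

The first term is the delay due to work already in the system; the second is the wait behind earlier segments of the same message. Convexity in the payload size arises because a larger payload reduces $B$ but lengthens $S$.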
The need for Linux system administrators to do performance management has returned with a vengeance. Why? The cloud. Resource consumption in the cloud is all about pay-as-you-go. This article shows you how performance models can find the most cost-effective deployment of an application on Amazon's cloud.
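A hedged sketch of the kind of model the article advocates: size a service with an M/M/m (Erlang-C) queueing model and pick the smallest instance count that meets a response-time target, then read off its hourly cost. The arrival rate, per-instance service rate, price and SLA below are made up.

```python
from math import factorial

def erlang_c_wait(lam, mu, m):
    """Mean queueing delay in an M/M/m queue (requires lam < m*mu)."""
    a = lam / mu                       # offered load in Erlangs
    rho = a / m
    p_wait = (a**m / (factorial(m) * (1 - rho))) / (
        sum(a**k / factorial(k) for k in range(m))
        + a**m / (factorial(m) * (1 - rho)))
    return p_wait / (m * mu - lam)

lam, mu = 180.0, 25.0     # req/s arriving; req/s served per instance
price, sla = 0.10, 0.030  # $/instance-hour; 30 ms mean-wait target

m = 8                     # smallest m keeping the queue stable (lam < m*mu)
while erlang_c_wait(lam, mu, m) > sla:
    m += 1
print(f"{m} instances at ${m * price:.2f}/hour meet the target")
```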
The notion of computer capacity was proposed in 2012, and this quantity has been estimated for computers of different kinds. In this paper we show that, when designing new processors, manufacturers change the parameters that affect the computer capacity. This allows us to predict the values of parameters of future processors. As the main example we use Intel processors, owing to the public availability of detailed descriptions of all their technical characteristics.
Building on the 1977 pioneering work of R. Fagin, we give a closed-form expression for the approximate Miss Rate (MR) of LRU caches under a power-law popularity assumption. The asymptotic behavior of this expression is already known when the power-law parameter is above 1; we extend it to any value of the parameter. In addition, we provide a new analysis of the conditions (relative cache size, popularity parameter) under which the ratio of the LRU MR to the Static MR is at its worst case.
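As a numeric companion, the sketch below evaluates the closely related Che/Fagin-style approximation rather than the paper's closed form: solve for the characteristic time $T$ in $\sum_i (1 - e^{-p_i T}) = C$, then $MR = \sum_i p_i\,e^{-p_i T}$. Catalog size, cache size and Zipf parameter are made up.

```python
# Numerical fixed-point version of the Fagin/Che approximation for
# LRU under Zipf popularity p_i proportional to i^(-alpha).
import numpy as np
from scipy.optimize import brentq

N, C, alpha = 100_000, 1_000, 0.8          # catalog, cache size, Zipf
p = 1.0 / np.arange(1, N + 1) ** alpha
p /= p.sum()

# Characteristic time: expected cache occupancy equals cache size C.
T = brentq(lambda t: np.sum(1 - np.exp(-p * t)) - C, 1e-9, 1e12)
miss_rate = float(np.sum(p * np.exp(-p * T)))
print(f"LRU miss rate ~ {miss_rate:.4f}")
```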
Marcos Portnoi, Rafael Gonçalves Bezerra de Araújo
This paper describes NS (Network Simulator), a computer network simulation tool. We offer an overview of NS, analyze its characteristics and functions, and finally present in detail the steps for preparing a simulation of a simple model in NS.