Results for "Computer software"

Showing 20 of ~8,151,776 results · from CrossRef, DOAJ, Semantic Scholar

S2 Open Access 2024
Quantum computing with Qiskit

Ali Javadi-Abhari, Matthew Treinish, Kevin Krsulich et al.

We describe Qiskit, a software development kit for quantum information science. We discuss the key design decisions that have shaped its development, and examine the software architecture and its core components. We demonstrate an end-to-end workflow for solving a problem in condensed matter physics on a quantum computer that serves to highlight some of Qiskit's capabilities, for example the representation and optimization of circuits at various abstraction levels, its scalability and retargetability to new gates, and the use of quantum-classical computations via dynamic circuits. Lastly, we discuss some of the ecosystem of tools and plugins that extend Qiskit for various tasks, and the future ahead.

736 citations · en · Physics, Computer Science
S2 Open Access 1999
A scaled difference chi-square test statistic for moment structure analysis

A. Satorra, P. Bentler

A family of scaling corrections aimed at improving the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say M0, implies on a less restricted one, M1. If T0 and T1 denote the goodness-of-fit test statistics associated with M0 and M1, respectively, then typically the difference Td = T0 − T1 is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models M0 and M1. As in the case of the goodness-of-fit test, it is of interest to scale the statistic Td in order to improve its chi-square approximation in realistic, that is, nonasymptotic and nonnormal, applications. In a recent paper, Satorra (2000) shows that the difference between two SB scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models M0 and M1. A Monte Carlo study is provided to illustrate the performance of the competing statistics.

5088 citations · en · Mathematics, Computer Science
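The "easy way" the abstract describes reduces to a few arithmetic steps: recover each model's scaling correction from its unscaled and scaled statistics, pool the corrections by degrees of freedom, and rescale the raw difference. A minimal sketch under the widely used Satorra-Bentler (2001) convention (function and variable names are illustrative):

```python
def scaled_diff_chi2(T0, T1, T0s, T1s, d0, d1):
    """Scaled difference chi-square for nested models M0 (restricted) and M1.

    T0, T1   -- unscaled chi-square statistics of M0 and M1
    T0s, T1s -- their Satorra-Bentler scaled counterparts
    d0, d1   -- degrees of freedom, with d0 > d1
    """
    c0 = T0 / T0s                              # scaling correction of M0
    c1 = T1 / T1s                              # scaling correction of M1
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)       # correction for the difference
    return (T0 - T1) / cd                      # scaled difference statistic

# e.g. T0=100, T0s=80, d0=10 vs. T1=50, T1s=45, d1=5
td_scaled = scaled_diff_chi2(100.0, 50.0, 80.0, 45.0, 10, 5)
```

The point of the pooled correction `cd` is precisely what the abstract stresses: simply subtracting the two scaled statistics (`T0s - T1s`) does not give a valid chi-square difference test.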
DOAJ Open Access 2025
GFEA: Leader Election Algorithm for Choosing a Group Decision Support System Facilitator

Sabir Mohammedi Taieb, Mohamed Adnane Laredj

Group decision support systems (GDSSs) are computer-assisted collaborative work software that facilitate group meetings asynchronously and from different locations. Even so, collaborative work in a GDSS demands coordination provided by a single controlling entity known as the GDSS facilitator. However, the problem of electing a GDSS facilitator has not been treated thoroughly in the literature and is often neglected, even though the large number of responsibilities assigned to the facilitator makes this role crucial to the effectiveness of the group meeting. Thus, the authors focused on finding an appropriate approach for electing the facilitator. The similarities between the problems of electing a GDSS facilitator and a distributed system leader led the authors to consider applying a distributed election algorithm to the task. Nonetheless, current algorithms consider only computer criteria and lack a formal weighting method. Consequently, we propose a new distributed election algorithm called GFEA (GDSS Facilitator Election Algorithm), designed to choose a facilitator within a GDSS. The algorithm selects a facilitator among a set of decision-makers based on multiple election criteria weighted using an objective weighting method called MEREC. A backup leader is reserved to replace the leader if it fails, a new tie-breaking mechanism is proposed, and initiator failure is handled. By adopting distributed system leader election principles, GFEA provides a robust solution for a decisive GDSS challenge.

Science, Technology
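The core scoring step can be illustrated compactly. This is not the paper's distributed GFEA protocol (message passing, initiator-failure handling, and MEREC weight derivation are omitted); it only sketches the idea of ranking candidates by weighted election criteria and reserving the runner-up as backup leader, with all names and the deterministic tie-break being assumptions for illustration:

```python
def elect_facilitator(candidates, weights):
    """Rank decision-makers by a weighted sum of election criteria.

    candidates -- {name: [criterion scores]} for each decision-maker
    weights    -- one weight per criterion (e.g. derived by MEREC), summing to 1
    Returns (leader, backup); the backup replaces the leader if it fails.
    """
    scored = sorted(
        ((sum(w * s for w, s in zip(weights, scores)), name)
         for name, scores in candidates.items()),
        reverse=True)  # ties broken by name -- a stand-in for the paper's mechanism
    leader, backup = scored[0][1], scored[1][1]
    return leader, backup

leader, backup = elect_facilitator(
    {"dm1": [1.0, 0.0], "dm2": [0.0, 1.0], "dm3": [0.5, 0.5]},
    [0.7, 0.3])
```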
DOAJ Open Access 2025
How to Foster Inclusive Practice in Research Methods Teaching

Meredith Wilkinson, Zeynep Barlas, Jonathan Farnell et al.

This paper outlines a practical approach to foster inclusive practice in research methods teaching. To achieve this, we follow the principles of Universal Design for Learning for our framework. We present four ways in which educators can make university research methods teaching more inclusive. These are: (1) make use of multiple modes of teaching, (2) integrate research methods into the curriculum rather than as stand-alone modules, (3) make assessments flexible and innovative, and (4) promote accessibility and effective use of digital tools and computer software. We draw upon our own experiences as research methods tutors and previous research to elucidate our points. This paper provides practical guidance for those teaching research methods to create more inclusive and equitable learning environments. Please note that our experience comes from teaching Psychology, which is what we base our case study on, but our discussion can be applied to a range of subject areas.

DOAJ Open Access 2025
A Deep Backtracking Bare‐Bones Particle Swarm Optimisation Algorithm for High‐Dimensional Nonlinear Functions

Jia Guo, Guoyuan Zhou, Ke Yan et al.

ABSTRACT The challenge of optimising multimodal functions within high‐dimensional domains constitutes a notable difficulty in evolutionary computation research. Addressing this issue, this study introduces the Deep Backtracking Bare‐Bones Particle Swarm Optimisation (DBPSO) algorithm, an innovative approach built upon the integration of the Deep Memory Storage Mechanism (DMSM) and the Dynamic Memory Activation Strategy (DMAS). The DMSM enhances the memory retention for the globally optimal particle, promoting interaction between standard particles and their historically optimal counterparts. In parallel, DMAS assures the updated position of the globally optimal particle is appropriately aligned with the deep memory repository. The efficacy of DBPSO was rigorously assessed through a series of simulations employing the CEC2017 benchmark suite. A comparative analysis juxtaposed DBPSO's performance against five contemporary evolutionary algorithms across two experimental conditions: Dimension‐50 and Dimension‐100. In the 50D trials, DBPSO attained an average ranking of 2.03, whereas in the 100D scenarios, it improved to an average ranking of 1.9. Further examination utilising the CEC2019 benchmark functions revealed DBPSO's robustness, securing four first‐place finishes, three second‐place standings, and three third‐place positions, culminating in an unmatched average ranking of 1.9 across all algorithms. These empirical results corroborate DBPSO's proficiency in delivering precise solutions for complex, high‐dimensional optimisation challenges.

Computational linguistics. Natural language processing, Computer software
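For readers unfamiliar with the baseline DBPSO builds on, the canonical bare-bones PSO (Kennedy, 2003) samples each new position from a Gaussian centred on the midpoint of a particle's personal best and the global best. The sketch below shows only that baseline; the paper's DMSM and DMAS memory mechanisms are not reproduced, and all parameter values are illustrative:

```python
import random

def bare_bones_pso(f, dim, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimise f over [lo, hi]^dim with canonical bare-bones PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pval = [f(p) for p in pos]                  # personal best values
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]          # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                mu = 0.5 * (pbest[i][d] + gbest[d])   # mean: midpoint
                sigma = abs(pbest[i][d] - gbest[d])   # std: separation
                pos[i][d] = rng.gauss(mu, sigma)      # parameter-free sampling
            v = f(pos[i])
            if v < pval[i]:
                pval[i], pbest[i] = v, pos[i][:]
                if v < gval:
                    gval, gbest = v, pos[i][:]
    return gbest, gval

# e.g. minimise the 5-dimensional sphere function
best, val = bare_bones_pso(lambda x: sum(t * t for t in x), dim=5)
```

The absence of velocity and acceleration coefficients is what makes the variant "bare-bones"; the shrinking `sigma` as particles agree drives convergence, which is also why the paper adds memory mechanisms to escape stagnation in multimodal landscapes.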
DOAJ Open Access 2025
Towards fully automatized [177Lu]Lu-PSMA personalized dosimetry based on 360° CZT whole-body SPECT/CT: a proof-of-concept

Arnaud Dieudonné, Aya Terro, Arthur Dumouchel et al.

Abstract Background The advent of 360° CZT gamma-cameras makes it possible to conduct personalised dosimetry studies from whole-body SPECT/CT data. We aimed to demonstrate a proof-of-concept of an automated personalized dosimetry pipeline for [177Lu]Lu-PSMA organ dosimetry, called SimpleDose, and to compare it with other dosimetry approaches. Methods Organ segmentation is based on a nnU-Net framework trained to segment 23 organs and structures over the whole body. Energy deposition is modelled with the collapsed-cone-superposition (CCS) method, taking into account non-uniform activity and density distributions. Ten patients with metastatic castration-resistant prostate cancer treated with [177Lu]Lu-PSMA-617 were included. All SPECT/CT acquisitions were performed on a VERITON-CT 200 (Spectrum Dynamics®, Caesarea, Israel) from head to mid-thigh with 5 min per bed. The absorbed-dose rates were computed with SimpleDose and compared with the organ-level MIRD approach and the local-deposition method (LDM) for bone marrow, kidneys, liver, lungs, pancreas, salivary glands and spleen. Finally, an example of multi-time-point and single-time-point dosimetry is given. Results The median (IQR) calculation time with SimpleDose (SD), covering segmentation, computation of dose rates and descriptive statistics, was 161 (23) seconds at a resolution of 2.46 × 2.46 × 2.46 mm3 (Intel Xeon 20 × 3.70 GHz CPU computer). The median (IQR) differences between SD and MIRD and LDM were, respectively, 1.8 (61)% and −16 (76)% in bone marrow, 2.4 (1.5)% and −93.1 (0.4)% in kidneys, 2.9 (3.4)% and −9.2 (3.0)% in liver, 21 (13)% and 13 (13)% in lungs, 11 (3.3)% and −11 (3.0)% in pancreas, 1.1 (12)% and 3.8 (8.4)% in salivary glands, and 4.0 (4.3)% and −10.0 (4.5)% in spleen. For the clinical example, multi-time-point dosimetry with 4 time-points took 14 min, while the single-time-point approach took 3.5 min from the day 1 dataset and 3.3 min from day 3.
Conclusion The SimpleDose platform demonstrated its capability to compute organ absorbed-dose rates in a simple and fast manner, with results close to the standard MIRD approach for soft-tissue organs. SimpleDose is freely available for demonstration purposes as a Software as a Service (SaaS) at https://oncometer3d.com .

Medical physics. Medical radiology. Nuclear medicine
DOAJ Open Access 2024
Event-Based Moving Target Defense in Cloud Computing With VM Migration: A Performance Modeling Approach

Lucas Santos, Carlos Brito, Iure Fe et al.

The domain of information security is undergoing significant evolution to address the increasingly complex challenges aimed at bolstering system resilience against attacks. The Moving Target Defense (MTD) methodology, which entails altering the system’s configuration—for instance, by relocating virtual machines (VM) or modifying IP addresses—serves to dynamically modify vulnerable components of a system. This strategy effectively disorients potential attackers, complicating their efforts to comprehend or anticipate the system’s configuration. Moreover, MTD can be proactively utilized by, for example, relocating VMs from a network segment that has been compromised. Consequently, MTD emerges as a viable approach for mitigating security concerns, particularly within cloud computing frameworks. A critical facet of MTD involves the system migration across different hardware, presenting logistical and strategic challenges that necessitate a thorough evaluation of factors such as operational downtime and the impact on system performance. Analytical models, particularly those based on stochastic Petri nets (SPN), offer a methodological advantage in strategizing MTD implementations by facilitating the assessment of potential outcomes in a non-live environment. This paper proposes an advanced model that extends prior research through the integration of an event-based MTD mechanism, which encompasses both the probability of intrusion detection and the ability to discern potential threats. Through the application of diverse migration initiation policies, this study aims to identify more efficacious strategies under specific conditions. The findings indicate that reliance on event-detection policies is advantageous when the system possesses a detection accuracy exceeding 50%, underscoring the critical role of precise intrusion detection in the efficacy of MTD strategies.

Electrical engineering. Electronics. Nuclear engineering
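The paper's headline finding, that event-triggered migration pays off once detection accuracy exceeds about 50%, can be illustrated with a toy Monte-Carlo simulation. This is not the paper's stochastic Petri net model; the attack-step count, time budget, and reset-on-migration behaviour below are all simplifying assumptions:

```python
import random

def compromise_rate(detect_prob, attack_steps=5, trials=2000, seed=1):
    """Fraction of trials in which an attacker compromises the system.

    The attacker needs `attack_steps` consecutive undetected steps; at each
    step the IDS fires with probability `detect_prob`, triggering a VM
    migration that wipes the attacker's foothold (progress resets to 0).
    """
    rng = random.Random(seed)
    budget = 50                       # time steps per trial
    wins = 0
    for _ in range(trials):
        progress = 0
        for _ in range(budget):
            if rng.random() < detect_prob:
                progress = 0          # migration: attacker starts over
            else:
                progress += 1
                if progress >= attack_steps:
                    wins += 1
                    break
    return wins / trials
```

Running this with a low versus high detection probability reproduces the qualitative trend: accurate detection makes event-based migration dramatically more effective at preventing compromise.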
DOAJ Open Access 2024
Multi-Scale Deepfake Detection Method with Fusion of Spatial Features

Yiwen ZHANG, Manchun CAI, Yonghao CHEN, Yi ZHU, Lifeng YAO

With the rapid advancement in deep learning, deepfake technology has gained significant momentum as a form of image manipulation based on generative models. The proliferation of deepfake videos and images has a detrimental sociopolitical impact, highlighting the increasing significance of deepfake detection techniques. Existing deepfake detection methods based on Convolutional Neural Networks (CNN) and Vision Transformers (ViT) commonly suffer from challenges such as large sizes of model parameters, slow training speeds, susceptibility to overfitting, and limited robustness against video compression and noise. To address these challenges, a multi-scale deepfake detection method that integrates spatial features is proposed herein. Firstly, an Automatic White Balance (AWB) algorithm is employed to adjust the contrast of input images, thereby enhancing robustness of the model. Subsequently, Multi-scale ViT (MViT) and CNN are separately utilized to extract the multi-scale global and local features, respectively, of the input images. These global and local features are then fused together using an improved sparse cross-attention mechanism to enhance the recognition performance of the model. Finally, the fused features are classified using a Multi-Layer Perceptron (MLP). According to the experimental results, the proposed model achieves frame-level Area Under the Curve (AUC) scores of 0.986, 0.984, and 0.988 on the Deepfakes, FaceSwap, and Celeb-DF (v2) datasets, respectively, demonstrating strong robustness in cross-compression experiments. Additionally, comparative experiments before and after specific model improvements have validated the gains provided by each module in terms of detection results.

Computer engineering. Computer hardware, Computer software
DOAJ Open Access 2024
Multi-label Patent Classification Based on Text and Historical Data

XU Xuejie, WANG Baohui

Patent classification, which is used to assign multiple International Patent Classification (IPC) codes to a given patent, is a very important task in the field of patent data mining. In recent years, many studies on this task have focused on mining patent text to predict the first- or second-level IPC codes. In real scenarios, a patent often has multiple IPC codes, making this a multi-label classification task. Apart from the text, each patent has a corresponding assignee, and the assignee's historical patent application behavior may exhibit a certain business tendency. A preference representation of this behavior can effectively improve the precision of patent classification. However, previous methods fail to make full use of patent historical data. A model is proposed for automatic patent classification. Its main processing is as follows: first, initialize the patent text representation with the BERT pretrained language model, then use a Text-CNN model to capture local features and take its output as the final patent text representation; second, use Bi-LSTM to learn the preference representation by aggregating historical patent texts and labels through dual channels; finally, fuse the text and the assignee's sequential preferences for prediction. Experiments on a real data set and comparisons with different baselines show that the proposed patent classification algorithm based on patent text and historical data yields a great improvement in precision.

Computer software, Technology (General)
DOAJ Open Access 2023
Three-Dimensional Analysis of Upper and Lower Arches Using Digital Technology: Measurement of the Index of Bolton and Correspondence between Arch Shapes and Orthodontic Arches

Marco Pasini, Elisabetta Carli, Federico Giambastiani et al.

Introduction: Thanks to the great development of digital technology, via CAD (computer-aided design) and CAM (computer-aided manufacturing) systems, digital models can be used as an aid in orthodontic planning and decision-making processes, as numerous studies in the literature support the validity of digital model measurements of anterior teeth and the total coefficient of the Bolton analysis. The aim of the present study is to compare the average length of the actual upper and lower arches with that of a hypothetical nickel–titanium wire and to confirm the reliability and accuracy of digitally taken measurements of the anterior and total Bolton coefficients. In this retrospective study, dental casts of 138 Caucasian adolescent patients were scanned with an extraoral scanner, and Ortho3Shape software was adopted for the following dental cast measurements: actual and ideal lengths of the lower arches and the anterior and total Bolton coefficients. We found that the mean value of the anterior Bolton coefficients was compatible with those of previous studies, confirming the reliability of digital measurements. Therefore, digital CAD/CAM models may be a viable alternative to plaster models, as they facilitate model preservation and recovery. Future studies would do better to use intraoral scanners (IOSs) to ensure greater accuracy, since they require only one step and yield better results for patients.

DOAJ Open Access 2022
Acquisition repeatability of MRI radiomics features in the head and neck: a dual-3D-sequence multi-scan study

Cindy Xue, Jing Yuan, Yihang Zhou et al.

Abstract Radiomics has increasingly been investigated as a potential biomarker in quantitative imaging to facilitate personalized diagnosis and treatment of head and neck cancer (HNC), a group of malignancies associated with high heterogeneity. However, the feature reliability of radiomics is a major obstacle to its broad validity and generality in application to the highly heterogeneous head and neck (HN) tissues. In particular, feature repeatability of radiomics in magnetic resonance imaging (MRI) acquisition, which is considered a crucial confounding factor of radiomics feature reliability, is still sparsely investigated. This study prospectively investigated the acquisition repeatability of 93 MRI radiomics features in ten HN tissues of 15 healthy volunteers, aiming for potential magnetic resonance-guided radiotherapy (MRgRT) treatment of HNC. Each subject underwent four MRI acquisitions with MRgRT treatment position and immobilization using two pulse sequences of 3D T1-weighed turbo spin-echo and 3D T2-weighed turbo spin-echo on a 1.5 T MRI simulator. The repeatability of radiomics feature acquisition was evaluated in terms of the intraclass correlation coefficient (ICC), whereas within-subject acquisition variability was evaluated in terms of the coefficient of variation (CV). The results showed that MRI radiomics features exhibited heterogeneous acquisition variability and uncertainty dependent on feature types, tissues, and pulse sequences. Only a small fraction of features showed excellent acquisition repeatability (ICC > 0.9) and low within-subject variability. Multiple MRI scans improved the accuracy and confidence of the identification of reliable features concerning MRI acquisition compared to simple test-retest repeated scans. This study contributes to the literature on the reliability of radiomics features with respect to MRI acquisition and the selection of reliable radiomics features for use in modeling in future HNC MRgRT applications.

Drawing. Design. Illustration, Computer applications to medicine. Medical informatics
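The repeatability metrics named in the abstract are standard. The within-subject coefficient of variation, in particular, is a one-line computation; the sketch below shows it for a single feature over repeated scans (the values are made up for illustration, and the study's ICC analysis, which additionally partitions between-subject variance, is not reproduced):

```python
import statistics

def within_subject_cv(scans):
    """Within-subject CV (%) of one radiomics feature across repeated
    acquisitions of the same subject: 100 * sample SD / mean."""
    return 100.0 * statistics.stdev(scans) / statistics.mean(scans)

# a feature value for one tissue over four repeated MRI acquisitions
cv = within_subject_cv([10.2, 10.0, 9.8, 10.4])
```

A feature with low CV across the four repeated acquisitions and high ICC would count as "reliable" in the sense used by the study.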
DOAJ Open Access 2022
A Novel Multiple Access Scheme for 6G Assisted Massive Machine Type Communication

Ashu Taneja, Adi Alhudhaif, Shtwai Alsubai et al.

The diverse Internet-of-Things (IoT) applications involve massive machine type communication (mMTC) with a large number of communicating nodes. The energy and resource overhead owing to shorter battery lives and limited network resources are the main challenges of mMTC in IoT. To support this massive random access and to overcome these challenges, future wireless networks are envisioned with collision resolution capabilities, reduced latency and ultra-high reliability. This paper presents a novel scheme for 6G-assisted mMTC with collision resolution capabilities and reduced latency. A cell-free network model is proposed in which the communication of mMTC devices is assisted through access point (AP) cooperation. The performance of the proposed network is evaluated in terms of the achieved signal-to-noise ratio (SNR) and the accuracy of node detection for different node locations, fading parameters and cell areas. With an increase in cell area and shadow fading, the SNR achieved by active nodes decreases. Further, an algorithm is proposed that forms AP clusters for serving the communicating nodes. The network's success at node detection is determined for different cluster sizes with different activation probabilities. Finally, the proposed algorithm is compared with two other schemes, namely a random clustering scheme and a nearest-neighbour clustering scheme. The proposed approach achieves the best performance in the detection of active communicating nodes, with a 9.09% improvement compared to the random scheme and 1.1% compared to the nearest-neighbour scheme.

Electrical engineering. Electronics. Nuclear engineering
DOAJ Open Access 2022
Encryption Algorithm of Video Images Combining Hyper-Chaotic System and Logistic Mapping

WEI Chengjing, LI Guodong

Using a traditional single-image encryption algorithm for video is time-consuming and inefficient. To improve the efficiency of video image encryption, an algorithm combining frame-by-frame single-frame encryption with multi-frame combined encryption is proposed, based on the Cellular Neural Network (CNN) hyper-chaotic system and Logistic chaos mapping. From the video frame, SHA-256 generates the initial value of the Logistic map, and a chaotic sequence is obtained through Logistic mapping iteration. The generated chaotic sequence diffuses the video frame by frame. The video frames are then combined into a matrix in binary form, and the initial value generated by the hash function from the combined matrix is substituted into the CNN hyper-chaotic system. The resulting chaotic sequence scrambles the combined matrix, so the diffusion and scrambling of every pixel of all video frames is completed in one step, shortening the encryption time. Finally, the combined matrix is re-decomposed into single-frame images to obtain the final encrypted video. Experiments show that using a high-dimensional hyper-chaotic system gives the algorithm higher security, effectively shortens the time spent encrypting video images, and resists statistical, differential, and brute-force attacks.

Computer engineering. Computer hardware, Computer software
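The per-frame diffusion step rests on a simple primitive: iterate the logistic map to produce a keystream and XOR it into the frame. The sketch below shows only that primitive, not the paper's full scheme (the SHA-256 key derivation and the CNN hyper-chaotic scrambling stage are omitted, and the parameter values are illustrative):

```python
def logistic_keystream(x0, r, n):
    """n keystream bytes from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)   # quantise chaotic state to a byte
    return out

def diffuse(frame, x0=0.654321, r=3.99):
    """XOR-diffuse one video frame (as bytes) with a logistic-map keystream."""
    ks = logistic_keystream(x0, r, len(frame))
    return bytes(b ^ k for b, k in zip(frame, ks))

# XOR diffusion is an involution: applying it twice recovers the frame
frame = bytes(range(16))
assert diffuse(diffuse(frame)) == frame
```

Because XOR with the same keystream is its own inverse, decryption of this stage is simply a second application with the same `x0` and `r`; in the paper those values come from the SHA-256 hash of the frame data.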
DOAJ Open Access 2020
Nonsingular Fast Terminal Adaptive Neuro-sliding Mode Control for Spacecraft Formation Flying Systems

Xiaohan Lin, Xiaoping Shi, Shilun Li et al.

In this paper, a nonsingular fast terminal adaptive neuro-sliding mode control for spacecraft formation flying systems is investigated. First, a super-twisting disturbance observer is employed to estimate external disturbances in the system. Second, a fast nonsingular terminal sliding mode control law is proposed to guarantee that the tracking errors of the spacecraft formation converge to zero in finite time. Third, for the unknown parts of the spacecraft formation flying dynamics, an adaptive neuro-sliding mode control law is proposed to compensate for them. The proposed sliding mode control laws not only achieve the formation but also alleviate the chattering effect. Finally, simulations are used to demonstrate the effectiveness of the proposed control laws.

Electronic computers. Computer science
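The sliding-mode principle underlying the paper can be shown on a textbook example: a first-order sliding mode regulator for a double integrator x'' = u + d(t) with a bounded disturbance. This is only the classical linear-surface case; the paper's nonsingular fast *terminal* sliding surface, super-twisting observer, and neuro-adaptive terms are not reproduced, and the gains below are illustrative:

```python
import math

def simulate_smc(lam=1.0, k=2.0, dt=0.01, t_end=10.0):
    """Regulate x -> 0 for x'' = u + d(t), |d| <= 0.5, via sliding mode.

    Sliding surface s = x' + lam*x; control u = -lam*x' - k*sign(s) gives
    s' = -k*sign(s) + d, so s reaches 0 in finite time whenever k > |d|,
    after which x decays along s = 0. The discontinuous sign term is the
    source of the chattering the paper's design seeks to alleviate.
    """
    x, xd, t = 1.0, 0.0, 0.0
    while t < t_end:
        d = 0.5 * math.sin(t)                          # bounded disturbance
        s = xd + lam * x                               # sliding variable
        u = -lam * xd - k * math.copysign(1.0, s)      # reach and hold s = 0
        xdd = u + d
        x, xd, t = x + xd * dt, xd + xdd * dt, t + dt  # explicit Euler step
    return x
```

Despite the persistent disturbance, the state is driven to a small neighbourhood of zero whose size is set by the time step and switching gain, which illustrates both the robustness and the chattering trade-off of sign-function switching.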

Page 14 of 407,589