Valerie Barr, C. Stephenson
Results for "Computer Science"
Showing 20 of ~22,561,520 results · from CrossRef, DOAJ, Semantic Scholar, arXiv
J. Leeuwen
M. Weiser
A. Newell, H. Simon
Allison Master, S. Cheryan, A. Meltzoff
Donghui Wang, Yanchun Liang, Dong Xu et al.
Abstract As computer science and information technology are making broad and deep impacts on our daily lives, more and more papers are being submitted to computer science journals and conferences. To help authors decide where to submit their manuscripts, we present the Content-based Journals & Conferences Recommender System for computer science, together with its web service at http://www.keaml.cn/prs/. This system recommends suitable journals or conferences in priority order based on the abstract of a manuscript. To keep pace with the fast development of computer science and technology, a web crawler continuously updates the training set and the learning model. To achieve interactive online response, we propose an efficient hybrid model based on chi-square feature selection and softmax regression. Our test results show that the system achieves an accuracy of 61.37% and suggests the best journals or conferences in about 5 s on average.
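The chi-square-plus-softmax pipeline described above can be sketched with scikit-learn, where chi-square selection keeps the most venue-discriminative terms and multinomial logistic regression plays the role of softmax regression. The abstracts, venue names, and the k value below are invented for illustration; the deployed system trains on a continuously crawled corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus: manuscript abstracts labelled with a target venue.
# Abstracts and venue names are invented for illustration only.
abstracts = [
    "deep neural network image classification benchmark",
    "convolutional network object detection training",
    "encryption protocol key exchange security proof",
    "authentication scheme attack resistance analysis",
]
venues = ["CV-Conf", "CV-Conf", "Sec-Journal", "Sec-Journal"]

# Chi-square selection keeps the terms most associated with a venue;
# multinomial logistic regression is softmax regression.
recommender = make_pipeline(
    CountVectorizer(),
    SelectKBest(chi2, k=6),
    LogisticRegression(),
)
recommender.fit(abstracts, venues)

# Rank candidate venues for a new abstract by predicted probability.
probs = recommender.predict_proba(["deep network training for image data"])[0]
ranking = sorted(zip(recommender.classes_, probs), key=lambda vp: -vp[1])
```

Sorting classes by predicted probability yields the priority-ordered recommendation list the abstract describes.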
Yuan Wang, Norazmawati Md Sani, Jing Cai et al.
Background As aging populations continue to grow, smart home technologies—such as smart locks—have become increasingly essential to support older adults’ independent living. Long-term use remains a challenge, however, with most studies focusing on initial adoption rather than sustained engagement. Methods In this study, we examined the key factors related to older adults’ continuance intention toward smart locks, applying a socio-technical framework that integrated the Expectation-Confirmation Model of Information Systems (ECM-IS), the Task-Technology Fit (TTF) model, and external variables, including privacy and security, trust, and habit. We analyzed survey data from 422 Chinese participants aged 55 and older using Partial Least Squares Structural Equation Modeling (PLS-SEM) and Importance-Performance Matrix Analysis (IPMA). Results The model explained 71.6% of the variance in continuance intention (R² = 0.716) and showed strong predictive relevance (Q² = 0.623). Trust and perceived usefulness were positively related to continuance intention, followed by satisfaction. Task-technology fit and confirmation were significantly associated with perceived usefulness and satisfaction. Habit and privacy and security were not significant with respect to continuance intention. Conclusions These findings provide theoretical and practical insight for designing age-inclusive, trust-enhancing smart locks that better support older adults’ needs in post-adoption contexts.
T. V. Soumya, M. K. Sabu
Abstract The sequential three-way decision accepts additional information at each level and makes more accurate definite decisions with less uncertainty. This process can also be extended to two-way classification at a finer-grained information level. However, both the decision process cost and the decision result cost of the model must be considered for optimal performance. The proposed model adopts a game-theoretic approach to handle the trade-off between the decision process cost and the decision result cost, and thereby balances the number of levels of the model. Time complexity, information level, and feature importance contribute to the process cost, while evaluation metrics represent the result cost. The model starts with reliable initial results by using the most significant features at the first level and follows an objective-function-based method to determine threshold pairs at each level, which avoids relying on domain experts. Furthermore, if the process cost outweighs the result cost, the number of levels is adjusted accordingly. On the experimental datasets, instances are classified at each level using the optimal threshold pairs, so the trisection is obtained with the highest precision/recall value. The results show that the proposed model outperforms existing models in terms of precision, recall, and time complexity with balanced decision costs. In summary, the proposed model is cost-efficient, interpretable, termination-aware, and result-oriented, ensuring effective and practical decision-making.
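The trisection at each level can be illustrated with a minimal sketch: an instance whose estimated positive-class probability clears the upper threshold is accepted, one below the lower threshold is rejected, and anything in between is deferred to the next, finer-grained level. The fixed thresholds below are placeholders; the paper derives them per level from an objective function rather than fixing them by hand.

```python
def three_way_decide(p, alpha=0.7, beta=0.3):
    """Trisect instances by positive-class probability p.

    p >= alpha: definite acceptance (positive region).
    p <= beta:  definite rejection (negative region).
    otherwise:  deferral to the next level (boundary region),
    where additional features refine the estimate.
    """
    if p >= alpha:
        return "accept"
    if p <= beta:
        return "reject"
    return "defer"

# Deferred instances form the boundary region passed to the next level.
probs = [0.95, 0.10, 0.55, 0.72, 0.30]
decisions = [three_way_decide(p) for p in probs]
```

Only the deferred instances incur further process cost, which is the quantity the game-theoretic trade-off balances against result cost.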
Zaid Ameen Abduljabbar, Vincent Omollo Nyangaresi, Ahmed Ali Ahmed et al.
Abstract Vehicular Ad-Hoc Networks (VANETs) have facilitated the massive exchange of real-time traffic and weather conditions, which has helped prevent collisions and reduce accidents and road congestion. This can effectively enhance driving safety and efficiency in technology-driven transportation systems. However, transmitting massive and sensitive information across public wireless communication channels exposes the data to a myriad of privacy and security threats. Although past research has developed many VANET security preservation schemes, several of them are inefficient or susceptible to attacks. This work introduces an approach that leverages reverse fuzzy extraction, bilinear pairing, and Physically Unclonable Functions (PUFs) to design an efficient and anonymity-preserving authentication scheme. We conduct an elaborate formal security analysis to demonstrate that the derived session key is secure. The semantic security analyses also demonstrate its resilience against typical VANET attacks such as impersonation, denial of service, and de-synchronization. Moreover, our approach incurs the lowest computational overheads at relatively low communication costs. Specifically, our protocol attains a 66.696% reduction in computation costs and a 70% increase in supported security functionalities.
Zhao-Song Li, Chao Liu, Xiao-Wei Li et al.
Abstract As a frontier technology, holography has important research value in fields such as bio-micrographic imaging, light field modulation, and data storage. However, real-time acquisition of 3D scenes and high-fidelity reconstruction have not yet seen a breakthrough, which has seriously hindered the development of holography. Here, a novel holographic camera is proposed to solve these inherent problems completely. The proposed holographic camera consists of an acquisition end and a calculation end. At the acquisition end, specially configured liquid materials and a liquid lens structure driven by a voice-coil motor are used to produce a liquid camera that can capture the focus stack of a real 3D scene within 15 ms. At the calculation end, a newly structured focus stack network (FS-Net) is designed for hologram calculation. After training the FS-Net with the focus stack renderer and a learnable Zernike phase, it enables hologram calculation within 13 ms. As the first device to achieve real-time incoherent acquisition and high-fidelity holographic reconstruction of a real 3D scene, our holographic camera breaks through the technical bottlenecks of difficult real 3D scene acquisition, low-quality holographic reconstruction, and incorrect defocus blur. The experimental results demonstrate the effectiveness of our holographic camera in acquiring focal plane information and calculating holograms of real 3D scenes. The proposed holographic camera opens up a new way for the application of holography in fields such as 3D display, light field modulation, and 3D measurement.
Muhammad Irwan Yanwari, Shogo Okamoto
Traditional tactile sensors primarily measure macroscopic surface features but do not directly estimate how humans perceive such surface roughness. Sensors that mimic human tactile processing could bridge this gap. This study proposes a method for predicting macroscopic roughness perception based on a sensing principle that closely resembles human tactile information processing. Humans are believed to assess macroscopic roughness based on the spatial distribution of subcutaneous deformation and resultant neural activities when touching a textured surface. To replicate this spatial-coding mechanism, we captured distributed contact information using a camera through a flexible, transparent material with fingerprint-like surface structures, simulating finger skin. Images were recorded under varying contact forces ranging from 1 N to 3 N. The spatial frequency components in the range of 0.1–1.0 mm⁻¹ were extracted from these contact images, and a linear combination of these components was used to approximate human roughness perception recorded via the magnitude estimation method. The results indicate that for roughness specimens with rectangular or circular protrusions of surface wavelengths between 2 and 5 mm, the estimated roughness values achieved an average error comparable to the standard deviation of participants’ roughness ratings. These findings demonstrate the potential of macroscopic roughness estimation based on human-like tactile information processing and highlight the viability of vision-based sensing in replicating human roughness perception.
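The band-limited spectral features the study describes can be sketched in one dimension: take the amplitude spectrum of a contact profile and keep only the 0.1–1.0 mm⁻¹ components. The synthetic profile, sampling interval, and uniform weights below are assumptions; the study fits the linear weights to human magnitude-estimation ratings of real contact images.

```python
import numpy as np

# Synthetic 1-D contact profile sampled every 0.1 mm over 50 mm,
# mixing a 2.5 mm-wavelength texture (0.4 cycles/mm) with fine noise.
dx = 0.1                                   # sampling interval in mm
x = np.arange(0, 50, dx)                   # 500 samples
rng = np.random.default_rng(1)
profile = np.sin(2 * np.pi * x / 2.5) + 0.2 * rng.normal(size=x.size)

# Amplitude spectrum in cycles per mm, restricted to the band used
# in the study (0.1-1.0 mm^-1).
freqs = np.fft.rfftfreq(profile.size, d=dx)
amps = np.abs(np.fft.rfft(profile)) / profile.size
band = (freqs >= 0.1) & (freqs <= 1.0)

# Roughness score as a linear combination of band components; uniform
# weights here are a placeholder for weights fitted to human ratings.
roughness = float(amps[band].sum())
```

The dominant band component sits at the texture's spatial frequency, which is the kind of information the linear model maps onto perceived roughness.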
Ifiok Udoidiok, Fuhao Li, Jielun Zhang
Machine learning (ML) has become a cornerstone of critical applications, but its vulnerability to data poisoning attacks threatens system reliability and trustworthiness. Prior studies have begun to investigate the impact of data poisoning and proposed various defense or evaluation methods; however, most efforts remain limited to quantifying performance degradation, with little systematic comparison of internal behaviors across model architectures under attack and insufficient attention to interpretability for revealing model vulnerabilities. To tackle this issue, we build a reproducible evaluation pipeline and emphasize the importance of integrating robustness with interpretability in the design of secure and trustworthy ML systems. Specifically, we propose a unified poisoning evaluation framework that systematically compares traditional ML models, deep neural networks, and large language models under three representative attack strategies (label flipping, random corruption, and adversarial insertion) at escalating severity levels of 30%, 50%, and 75%, and integrates LIME-based explanations to trace the evolution of model reasoning. Experimental results demonstrate that traditional models collapse rapidly under label noise, whereas Bayesian LSTM hybrids and large language models maintain stronger resilience. Further interpretability analysis uncovers attribution failure patterns, such as over-reliance on neutral tokens or misinterpretation of adversarial cues, providing insights beyond accuracy metrics.
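A label-flipping attack, the first of the three strategies listed, can be reproduced in a few lines: corrupt a fraction of the training labels, retrain, and compare held-out accuracy. The synthetic data, logistic-regression victim, and severity levels below are illustrative stand-ins for the paper's evaluation pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10,
                           class_sep=2.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def flip_labels(labels, rate):
    """Label-flipping poisoning: invert a random fraction of labels."""
    poisoned = labels.copy()
    idx = rng.choice(len(poisoned), size=int(rate * len(poisoned)),
                     replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

# Held-out accuracy of the victim model at each poisoning severity.
acc = {rate: LogisticRegression().fit(X_tr, flip_labels(y_tr, rate))
                                 .score(X_te, y_te)
       for rate in (0.0, 0.3, 0.5)}
```

Clean test labels are kept untouched so the accuracy drop isolates the effect of the poisoned training set, which is the degradation curve a severity sweep like 30/50/75% traces out.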
Mauve Science Collaboration, Marcel Agueros, Don Dixon et al.
Mauve is a low-cost small satellite developed and operated by Blue Skies Space Ltd. The payload features a 13 cm telescope connected by a fibre that feeds a UV-Vis spectrometer. The detector covers the 200-700 nm range in a single shot, obtaining low-resolution spectra at R~20-65. Mauve launched on 28 November 2025, reaching a 510 km low-Earth Sun-synchronous orbit. The satellite will enable UV and visible observations of a variety of stellar objects in our Galaxy, filling gaps in space-based ultraviolet data. The researchers who have already joined the mission have defined the science themes, observational strategy, and targets that Mauve will observe in its first year of operations. To date, 10 science themes have been developed by the Mauve science collaboration for year 1, with observational strategies that include both long-duration monitoring and short-cadence snapshots. Here, we describe these themes and the science that Mauve will undertake in its first year of operations.
L. Sax, Kathleen J. Lehman, J. Jacobs et al.
Rory Ward, Dan Bigioi, Shubhajit Basak et al.
While current research predominantly focuses on image-based colorization, the domain of video-based colorization remains relatively unexplored. Many existing video colorization techniques operate frame-by-frame, often overlooking the critical aspect of temporal coherence between successive frames. This approach can result in inconsistencies across frames, leading to undesirable effects like flickering or abrupt color transitions between frames. To address these challenges, we combine the generative capabilities of a fine-tuned latent diffusion model with an autoregressive conditioning mechanism to ensure temporal consistency in automatic speaker video colorization. We demonstrate strong improvements on established quality metrics compared to existing methods, namely, PSNR, SSIM, FID, FVD, NIQE and BRISQUE. Specifically, we achieve an 18% improvement in performance when FVD is employed as the evaluation metric. Furthermore, we performed a subjective study, where users preferred LatentColorization to the existing state-of-the-art DeOldify 80% of the time. Our dataset combines conventional datasets and videos from television/movies. A short demonstration of our results can be seen in some example videos available at https://youtu.be/vDbzsZdFuxM.
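PSNR, the first metric in the list above, is straightforward to compute directly; a minimal sketch (the tiny flat frames are invented for illustration):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB: higher means the colorized
    frame is closer to the reference frame."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Two reconstructions of a flat grey reference frame: one off by a
# single grey level everywhere, one off by thirty.
ref = np.full((4, 4), 100, dtype=np.uint8)
close = ref + 1
far = ref + 30
```

Full-reference metrics like PSNR and SSIM compare each colorized frame against ground truth, while FVD additionally penalizes the temporal inconsistencies that frame-by-frame methods introduce.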
Javier Osca, Jiri Vala
The Stochastic Optical Quantum Circuit Simulator (SOQCS) is a C++ and Python library that offers a framework to define, simulate, and study quantum linear optical circuits in the presence of various imperfections typically encountered in experiments. Quantum circuits can be defined from basic components, including emitters, linear optical elements, delays, and detectors. The imperfections include, for example, partial distinguishability of photons, lossy propagation media, unbalanced beamsplitters, and non-ideal emitters and detectors. SOQCS also provides various simulator cores and tools to analyze the output, and the configuration of detectors includes postselection. SOQCS is developed using a modular approach in which different modules are applied in an automated, easy-to-use manner; this modular design also allows the capabilities of SOQCS to be extended in the future.
Jehad Al Dallal, Bader Alkhazi
The cohesion of an object-oriented class refers to the relatedness of its methods and attributes. Constructors, destructors, and access methods are special types of methods featuring unique characteristics that can artificially affect class cohesion quantification. Methods within a class can also directly or transitively invoke each other, representing another cohesion aspect not considered by most existing cohesion measures. The impact of considering special methods (SPs) and transitive relations (TRs) in cohesion measurement on the abilities of the measures to predict inheritance reusability has yet to be investigated. In this paper, we empirically explored this effect. We applied a statistical technique to test the significance of the cohesion value changes across seven scenarios of ignoring or considering SPs and TRs. In addition, we applied a machine learning-based technique to build inheritance reusability prediction models using each of the considered measures and scenarios, evaluated the classification performance of the prediction models, and statistically compared the inheritance reusability prediction results. The results show that for most of the considered measures, ignoring or considering SPs and TRs changed the cohesion values and the corresponding predictions significantly. Based on the study findings, when building inheritance reusability prediction models, software engineers are advised to 1) combine cohesion with other quality factors; 2) exclude the TRs from cohesion quantification; and 3) decide whether to consider or ignore SPs in cohesion quantification based on the selected measure(s) to be used in the prediction model, as this decision differs from one measure to another.
Calum McHale, Susanne Cruickshank, Tamara Brown et al.
Abstract Objectives To determine the feasibility and acceptability of implementing the Mini-AFTERc intervention. Design Non-randomised cluster-controlled pilot trial. Setting Four NHS out-patient breast cancer centres in Scotland. Participants Ninety-two women who had successfully completed primary treatment for breast cancer were screened for moderate levels of fear of cancer recurrence (FCR). Forty-five were eligible (17 intervention and 28 control) and 34 completed 3-month follow-up (15 intervention and 21 control). Intervention Mini-AFTERc, a single brief (30 min) structured telephone discussion with a specialist breast cancer nurse (SBCN) trained to target the antecedents of FCR. Outcomes Feasibility and acceptability of Mini-AFTERc and the study design were assessed via recruitment, consent, retention rates, patient outcomes (measured at baseline, 2, 4, and 12 weeks), and post-study interviews with participants and SBCNs, which were guided by Normalisation Process Theory. Results Mini-AFTERc was acceptable to patients and SBCNs. SBCNs believed the implementation of Mini-AFTERc to be feasible and an extension of discussions that already happen routinely. SBCNs believed, however, that delivery at the scale required would be challenging given current competing demands on their time. Recruitment was impacted by variability in the follow-up practices of cancer centres and the COVID-19 lockdown. Consent and follow-up procedures worked well, and retention rates were high. Conclusions The study provided invaluable information about the potential challenges and solutions for testing the Mini-AFTERc intervention more widely, where limiting high FCR levels is an important goal following recovery from primary breast cancer treatment. Trial registration ClinicalTrials.gov, NCT0376382. Registered on 4 December 2018.
Yao He, Jing Yang, Shaobo Li et al.
Abstract Catastrophic forgetting in neural networks is a common problem in which neural networks lose information from previous tasks after training on new tasks. Although adopting a regularization method that preferentially retains the parameters important to the previous task helps avoid catastrophic forgetting, existing regularization methods cause the gradient to be near zero because the loss is at a local minimum. To solve this problem, we propose a new continuous learning method with Bayesian parameter updating and weight memory (CL-BPUWM). First, a parameter updating method based on the Bayes criterion is proposed to allow the neural network to gradually acquire new knowledge. The diagonal of the Fisher information matrix is then introduced to significantly reduce computation and increase parameter updating efficiency. Second, we calculate the importance weight by observing how changes in each network parameter affect the model's prediction output. During parameter updating, the Fisher information matrix and the sensitivity of the network are used as quadratic penalty terms in the loss function. Finally, we apply dropout regularization to reduce model overfitting during training and to improve model generalizability. CL-BPUWM performs very well in continuous learning for classification tasks on the CIFAR-100, CIFAR-10, and MNIST datasets. On CIFAR-100, it is 0.8%, 1.03%, and 0.75% higher than the best-performing regularization method (EWC) across three task partitions. On CIFAR-10, it is 2.25% higher than the regularization method (EWC) and 0.7% higher than the replay-based method (GR). It is 0.66% higher than the regularization method (EWC) on MNIST. When CL-BPUWM was combined with the brain-inspired replay model on the CIFAR-100 and CIFAR-10 datasets, classification accuracy was 2.35% and 5.38% higher than that of the baseline method, BI-R + SI.
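The quadratic Fisher-weighted penalty at the heart of such regularization methods fits in a few lines. This is a generic EWC-style sketch with made-up numbers, not the paper's exact CL-BPUWM update:

```python
import numpy as np

def fisher_penalty(theta, theta_star, fisher_diag, lam=1.0):
    """Quadratic penalty added to the new-task loss: parameters with
    large diagonal Fisher entries (important to the old task) are
    anchored near their old values theta_star."""
    return 0.5 * lam * float(np.sum(fisher_diag * (theta - theta_star) ** 2))

theta_star = np.array([1.0, -2.0, 0.5])          # parameters after the old task
fisher = np.array([10.0, 0.1, 0.1])              # per-parameter importance
moves_unimportant = np.array([1.0, 0.0, 0.0])    # shifts only low-Fisher params
moves_important = np.array([2.0, -2.0, 0.5])     # shifts the high-Fisher param
```

Moving an unimportant parameter is cheap while moving an important one is expensive, which is how the penalty steers new-task learning away from weights the old task depends on.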
Ashagrie Sharew Iyassu, Haile Mekonnen Fenta, Zelalem G. Dessie et al.
Abstract Background In causal analyses, a third factor may distort the relationship between the exposure and outcome variables under study, yielding spurious results. In this case, the treatment and control groups that do and do not receive the exposure differ in other essential variables, called confounders. Method Place of birth was used as the exposure variable, and age-specific childhood vaccination status was used as the outcome variable. Three confounder selection approaches were considered: all pre-treatment covariates, outcome-cause covariates, and common-cause covariates. Multiple logistic regression was used to estimate the propensity score for inverse probability of treatment weighting (IPTW) confounder adjustment. The proportional odds model was used to estimate the causal effect of place of birth on age-specific childhood vaccination. To validate the results obtained from the observed data, we used a plasmode simulation, resampling 1000 samples from the actual data 500 times. Result The outcome-cause and common-cause confounder identification techniques gave comparable treatment effect results in the plasmode data. However, outcome-cause identification, which includes common causes and predictors of the outcome, gave relatively better treatment effect results. The treatment effect estimated with the IPTW confounder adjustment method was better than that from the regression adjustment method. The effect of place of birth on the log odds of the cumulative probability of age-specific childhood vaccination was 0.36, with an odds ratio of 1.43 for higher-level vaccination status. Conclusion It is essential to use plasmode simulation data to validate the reproducibility of the proposed methods on observed data, and important to use outcome-cause covariates to adjust for their confounding effect on the outcome. Using inverse probability of treatment weighting gives unbiased treatment effect results compared to the regression method of confounder adjustment. Institutional delivery increases the likelihood of childhood vaccination at the recommended schedule.
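The IPTW adjustment the authors favour can be sketched on simulated confounded data: fit a propensity model for the exposure, weight each subject by the inverse probability of the exposure actually received, and compare weighted outcome means. The data-generating process, variable names, and true effect of 2.0 below are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
conf = rng.normal(size=n)                 # confounder, e.g. household wealth
p_treat = 1 / (1 + np.exp(-conf))         # confounder raises exposure odds
treat = rng.binomial(1, p_treat)          # exposure, e.g. institutional birth
outcome = 2.0 * treat + 1.5 * conf + rng.normal(size=n)  # true effect = 2.0

# Naive comparison is biased because the confounder drives both
# the exposure and the outcome.
naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# Propensity scores from a logistic model, then inverse-probability weights:
# 1/ps for the exposed, 1/(1-ps) for the unexposed.
model = LogisticRegression().fit(conf.reshape(-1, 1), treat)
ps = model.predict_proba(conf.reshape(-1, 1))[:, 1]
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))
iptw = (np.average(outcome[treat == 1], weights=w[treat == 1])
        - np.average(outcome[treat == 0], weights=w[treat == 0]))
```

With the confounder balanced by weighting, the IPTW estimate recovers the simulated effect far more closely than the naive difference in means.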
Page 2 of 1,128,076