Deborah R. Compeau, C. Higgins, S. Huff
Results for "Information technology"
Showing 20 of ~25,956,493 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
Bree McEwan, Clarice Wu, Harris Yang et al.
As communication scholars become increasingly interested in studying virtual reality (VR) as a communication channel, it will be important to establish useful measures of perceptual variables in virtual environments. One such variable is physical fidelity: the degree to which virtual environments replicate or resemble places in the physical world. In computer science and other fields interested in VR, this variable is often measured as reaction time within the system. However, for social scientific VR scholars, it can be important to understand how much the user perceives the environment to have physical fidelity. In the existing literature, when physical fidelity is measured as a perceptual variable, it is often conflated with measures of immersion or spatial presence. This paper presents a confirmatory factor analysis approach to establishing a well-fitting scale of perceptual physical fidelity across three separate samples, as well as delineating the conceptual and operational differences between physical fidelity, immersion, and spatial presence.
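A minimal sketch of how a confirmatory factor analysis of the kind described might be run in Python using the semopy package; the item names, the one-factor structure, and the data file are placeholders, not the authors' actual scale or samples.

```python
# Illustrative one-factor CFA sketch (semopy). Item names pf1..pf4, the single-factor
# structure, and the CSV file are hypothetical stand-ins, not the published scale.
import pandas as pd
import semopy

# Hypothetical survey responses: one column per scale item, one row per participant.
data = pd.read_csv("sample1_responses.csv")  # columns: pf1, pf2, pf3, pf4

# Model syntax: a latent "physical_fidelity" factor measured by four items.
desc = "physical_fidelity =~ pf1 + pf2 + pf3 + pf4"

model = semopy.Model(desc)
model.fit(data)

print(model.inspect())           # factor loadings and error variances
print(semopy.calc_stats(model))  # fit indices (e.g., CFI, RMSEA) to judge model fit
```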
Evangelia Karasmanaki, Garyfallos Arabatzis, Georgios Tsantopoulos
By investing in renewable energy sources (RES), citizens can participate actively in the energy transition. The problem, however, is that citizen investment decisions are highly complex, while most strategies for capital mobilization rely on generic incentives or broad campaigns. To provide a new approach to mobilizing citizen capital, this study considers perceived barriers, since it is important to address the aspects that disincline citizens from investing, and preferred information sources, because attitudes are shaped and actions are empowered or disempowered through these channels. Drawing on a representative sample of Greek citizens, we used k-means clustering to segment citizens. The first cluster was deterred from investing by loan conditions, highlighting the need for banks to offer better loan terms, while the second cluster was inhibited by a wide array of technical, economic, and systemic concerns, requiring different stakeholders to address the underlying barriers. The third cluster was inhibited by barriers related to the technology of renewables and the availability of experts for installing and maintaining the systems, indicating the need to address these issues as well. Results also showed that several information sources can have a negative effect, suggesting that policy intervention is needed to enhance the accuracy of information.
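A small sketch of segmenting survey respondents by perceived barriers with k-means, as the abstract describes; the column names, preprocessing, and choice of three clusters are illustrative placeholders, not the study's actual variables.

```python
# Illustrative k-means segmentation of survey respondents by perceived barriers.
# Column names and k=3 are placeholders mirroring the abstract, not the study's items.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical data: one row per respondent, Likert-type barrier ratings.
df = pd.read_csv("citizen_survey.csv")
barrier_items = ["loan_terms", "technical_risk", "cost", "expert_availability"]

X = StandardScaler().fit_transform(df[barrier_items])
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

df["cluster"] = kmeans.labels_
# Mean barrier rating per cluster helps interpret what inhibits each segment.
print(df.groupby("cluster")[barrier_items].mean())
```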
Siyuan Liu, Yingchao Fan, Qi Hu et al.
Hyperspectral images (HSI) carry more spectral information than conventional images, which helps to distinguish targets in a complex scene more accurately. However, HSI typically has a low spatial resolution, which limits its application scenarios. To achieve high-resolution HSI, we propose a spectral and spatial multiscale coupling fusion model (SSMSFuse) for hyperspectral and multispectral image (MSI) fusion. SSMSFuse couples the spatial information of the MSI and the spectral information of the HSI at multiple scales by means of a two-branch network structure, thus obtaining fused images with high spatial and spectral resolution. SSMSFuse consists of two branches, namely the spatial embedding network (Spa-Net) and the spectral embedding network (Spe-Net). Spa-Net is constructed using a multiscale convolutional neural network to better mine multilevel spatial features from the MSI. Spe-Net is constructed using self-attention, which can model the long-distance spectral dependencies of the HSI to better extract its spectral information. Finally, to achieve interactive coupling of the dual-branch information, we design a spatial–spectral guidance fusion block that fuses features at different scales to avoid loss of spatial and spectral details. Experiments are carried out on four public datasets, and the results show that the proposed method effectively improves the objective indicators of the fusion results; on the CAVE dataset, for example, the peak signal-to-noise ratio improves by 1.36% and the root mean square error improves by 9.72%, and satisfactory subjective results are also obtained.
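A highly simplified PyTorch sketch of the two-branch idea (a multiscale CNN over the MSI, a self-attention branch over projected HSI features, then a fusion stage); every layer choice, size, and the fusion block here are placeholders, not the published SSMSFuse architecture.

```python
# Schematic two-branch spatial/spectral fusion sketch in the spirit of the abstract.
# All layers, channel counts, and the fusion block are illustrative placeholders.
import torch
import torch.nn as nn

class SpaNet(nn.Module):
    """Multiscale convolutions over the high-resolution MSI (stand-in for Spa-Net)."""
    def __init__(self, in_ch, feat=64):
        super().__init__()
        self.b3 = nn.Conv2d(in_ch, feat, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, feat, 5, padding=2)
    def forward(self, msi):
        return torch.relu(self.b3(msi)) + torch.relu(self.b5(msi))

class SpeNet(nn.Module):
    """Self-attention over spectrally projected HSI features (stand-in for Spe-Net)."""
    def __init__(self, bands, feat=64):
        super().__init__()
        self.proj = nn.Conv2d(bands, feat, 1)
        self.attn = nn.MultiheadAttention(embed_dim=feat, num_heads=4, batch_first=True)
    def forward(self, hsi_up):
        x = self.proj(hsi_up)                    # B, feat, H, W
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)       # B, H*W, feat
        out, _ = self.attn(seq, seq, seq)
        return out.transpose(1, 2).reshape(b, c, h, w)

class SSMSFuseSketch(nn.Module):
    def __init__(self, msi_ch=4, hsi_bands=31, feat=64):
        super().__init__()
        self.spa, self.spe = SpaNet(msi_ch, feat), SpeNet(hsi_bands, feat)
        # Stand-in for the spatial–spectral guidance fusion block.
        self.fuse = nn.Conv2d(2 * feat, hsi_bands, 3, padding=1)
    def forward(self, msi, hsi_up):
        return self.fuse(torch.cat([self.spa(msi), self.spe(hsi_up)], dim=1))
```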
Raúl Gutiérrez, Víctor Rampérez, Horacio Paggi et al.
The information fusion field has recently been attracting a lot of interest within the scientific community, as it provides, through the combination of different sources of heterogeneous information, a fuller and/or more precise understanding of the real world than can be gained by considering those sources separately. One of the fundamental aims of computer systems, and especially decision support systems, is to ensure that the quality of the information they process is high. There are many different approaches for this purpose, and information fusion is currently one of the most promising. It is particularly useful under circumstances where quality might be compromised, for example, either intrinsically due to imperfect information (vagueness, uncertainty) or because of limited resources (energy, time). In response to this goal, a wide range of research has been undertaken over recent years. To date, the literature reviews in this field have focused on problem-specific issues and have been circumscribed to certain system types. Therefore, there is no holistic and systematic knowledge of the state of the art to help establish the steps to be taken in the future. In particular, several aspects have not been addressed: what impact different information fusion methods have on information quality; how information quality is characterised, measured, and evaluated in different application domains depending on the problem data type; and whether fusion is designed as a flexible process capable of adapting to changing system circumstances and their intrinsically limited resources. This paper reviews the literature on the use of information fusion techniques specifically to improve information quality, analysing the above issues in order to identify a series of challenges and research directions.
Jerry N. Luftman, P. Lewis, Scott H. Oldach
FAN Wei, LI Haibo, ZHANG Zhujun
Issues of limited scene adaptability, inadequate evidence preservation, and low efficiency in traditional digital forensics were addressed by analyzing the feasibility of incorporating decentralized, tamper-resistant blockchain technology into digital forensic practice. First, a phased forensic process was proposed based on a hierarchical architecture for blockchain forensic technology, examining the advances blockchain brings to each stage of evidence acquisition, preservation, and presentation. Next, limitations in existing research were analyzed, and a digital forensic framework with comprehensive blockchain involvement was designed by exploiting the distributed advantages of blockchain. This framework integrated evidence information into the on-chain data structure and introduced a complementary graph analysis algorithm to standardize evidence collection across various scenarios. An off-chain distributed database was employed to achieve scalable, efficient storage, while smart contract templates enhanced the reusability of contracts for similar forensic transactions. Finally, potential future directions for the application of blockchain technology in forensic science were explored.
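A minimal sketch of the core tamper-evidence idea behind on-chain/off-chain evidence handling: keep the full evidence object in off-chain storage and commit only its hash plus chain-of-custody metadata on-chain. The record fields, file names, and storage URI below are hypothetical, not the framework's actual on-chain data structure.

```python
# Minimal sketch: fingerprint evidence, keep the bulk data off-chain, and build a
# compact, hash-linked record suitable for on-chain commitment. All fields,
# file names, and URIs are hypothetical placeholders.
import hashlib
import json
import time

def evidence_fingerprint(raw_bytes: bytes) -> str:
    """SHA-256 digest that any party can recompute to verify integrity."""
    return hashlib.sha256(raw_bytes).hexdigest()

def make_onchain_record(case_id: str, collector: str, raw_bytes: bytes,
                        offchain_uri: str, prev_record_hash: str) -> dict:
    record = {
        "case_id": case_id,
        "collector": collector,
        "collected_at": int(time.time()),
        "evidence_hash": evidence_fingerprint(raw_bytes),
        "offchain_uri": offchain_uri,          # where the full evidence lives
        "prev_record_hash": prev_record_hash,  # links records into a custody chain
    }
    # Hash of the record itself, used as the link for the next record.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record

# Example: fingerprint a disk image and prepare the compact record for submission
# to a blockchain (the submission itself is not shown here).
with open("disk_image.dd", "rb") as f:
    rec = make_onchain_record("case-042", "analyst-7", f.read(),
                              "ipfs://example-cid", prev_record_hash="0" * 64)
```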
Chao Chen, Jintao Liang, Taohua Ren et al.
Owing to rapid urbanization combined with global climate change, dramatic land-use change has occurred in coastal watersheds, which, in turn, drives the evolution of landscape patterns and threatens the valuable but fragile ecosystem. The coastal zone is characterized by severe cloud cover, frequent changes in land type, and a fragmented landscape, so it is challenging to carry out accurate landscape pattern analysis. To address this problem, this study employed the Google Earth Engine cloud platform, Landsat time series, and landscape metrics from the Fragstats model to develop a comprehensive framework that integrates landscape pattern metrics and spatial analysis methods, considering both the type level and the landscape level. The Hangzhou Bay region was selected for land-use classification and landscape pattern analysis. The results indicate that, over nearly four decades, with the continuous expansion of urban areas, the urbanization process has accelerated and construction land has expanded by 6.93 times. Analysis of the evolution of landscape patterns shows that landscape fragmentation in Hangzhou Bay heightened and patch shapes became more irregular as urbanization intensified. The Shannon's diversity index continuously increased from 1.14 to 1.51, while the contagion index consistently decreased from 59.83% to 42.21%, suggesting an increase in land-use diversity, reduced aggregation, and extension tendencies between land patches, along with a decrease in the proportion of highly connected patches within the landscape. This study is anticipated to provide robust evidence for the rational planning of future development directions and the deployment of landscape ecological spatial services.
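A small worked sketch of the Shannon's diversity index (SHDI) referenced in the abstract, computed from land-cover class proportions as SHDI = -Σ p_i ln(p_i); the proportions used below are made up for illustration, not the Hangzhou Bay results.

```python
# Worked sketch of Shannon's diversity index (SHDI): -sum_i p_i * ln(p_i),
# where p_i is the areal proportion of land-cover class i. The proportions
# below are made-up illustration values, not the study's measurements.
import math

def shannon_diversity(proportions):
    return -sum(p * math.log(p) for p in proportions if p > 0)

# Hypothetical class proportions (summing to 1): cropland, water, construction, forest.
p = [0.45, 0.20, 0.25, 0.10]
print(round(shannon_diversity(p), 2))  # higher values indicate a more diverse, evener landscape
```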
Fatikh Inayahtur Rahma, Siti Yumnah, Rokim Rokim
This research aims to describe two things: (1) the contents of the RPP (lesson plan); and (2) the teachers' technological pedagogical content knowledge (TPACK). The research uses triangulation of methods and sources to ensure the accuracy of the information. The findings show that the teacher at SDI Wahid Hasyim already knows TPACK and knows how to use technology, even though the use was initially imperfect and needed improvement to meet students' learning needs. The research also shows that educators can effectively integrate technology, various techniques, and various learning methodologies when creating their lesson plans. Despite the initial shortcomings, the TPACK content of the RPP is excellent.
Shangen Zhang, Hongyan Cui, Yong Li et al.
This study comprehensively investigates the effectiveness of repetitive transcranial direct current stimulation (tDCS)-based neuromodulation in augmenting steady-state visual evoked potential (SSVEP) brain-computer interfaces (BCIs), and explores pertinent electroencephalography (EEG) biomarkers for assessing brain states and evaluating tDCS efficacy. EEG data were gathered across three distinct task modes (eyes open, eyes closed, and SSVEP stimulation) and two neuromodulation patterns (sham-tDCS and anodal-tDCS). Brain arousal and brain functional connectivity were measured by extracting features of fractal EEG and information flow gain, respectively. Anodal-tDCS led to diminished offsets and enhanced information flow gains, indicating improvements in both brain arousal and the brain's information transmission capacity. Additionally, anodal-tDCS markedly enhanced SSVEP-BCI performance, as evidenced by increased amplitudes and accuracies, whereas sham-tDCS exhibited lesser efficacy. This study offers valuable insights into the application of neuromodulation methods for improving BCI performance and validates two electrophysiological markers for the multifaceted characterization of brain states.
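A minimal numpy sketch of one of the simpler quantities behind the reported "increased amplitudes": extracting SSVEP amplitude at the stimulation frequency from a single EEG channel via FFT. The sampling rate, stimulation frequency, and signal are placeholders; the study's fractal and information-flow-gain features are not reproduced here.

```python
# Minimal sketch: SSVEP amplitude at the stimulation frequency via FFT.
# Sampling rate, stimulation frequency, and the synthetic signal are placeholders.
import numpy as np

fs = 250.0          # Hz, assumed sampling rate
stim_freq = 12.0    # Hz, assumed SSVEP stimulation frequency

# Placeholder EEG segment: 4 s of noise plus a weak 12 Hz component.
t = np.arange(0, 4, 1 / fs)
eeg = 0.5 * np.sin(2 * np.pi * stim_freq * t) + np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
amp_at_stim = spectrum[np.argmin(np.abs(freqs - stim_freq))]
print(f"SSVEP amplitude at {stim_freq} Hz: {amp_at_stim:.2f} (arbitrary units)")
```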
Asbjørn O. Orvedal, Hsuan-Yin Lin, Eirik Rosnes
We consider the problem of weakly-private information retrieval (WPIR) when data is encoded by a maximum distance separable code and stored across multiple servers. In WPIR, a user wishes to retrieve a piece of data from a set of servers without leaking too much information about which piece of data she is interested in. We study and provide the first WPIR protocols for this scenario and present results on their optimal trade-off between download rate and information leakage using the maximal leakage privacy metric.
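A small sketch of the maximal leakage privacy metric named in the abstract, assuming the standard definition ML(X → Y) = log2 Σ_y max_x P(Y=y | X=x), where X is the user's desired file index and Y is a server's observation; the conditional probability matrix below is a made-up toy example, not an actual WPIR protocol's query distribution.

```python
# Sketch of the maximal-leakage metric: log2( sum_y max_x P(Y=y | X=x) ).
# The conditional probability matrix is a toy illustration, not a real protocol.
import numpy as np

# p_y_given_x[x, y]: probability the server observes query symbol y when the user wants file x.
p_y_given_x = np.array([
    [0.7, 0.2, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
])

max_leakage_bits = np.log2(p_y_given_x.max(axis=0).sum())
print(f"maximal leakage: {max_leakage_bits:.3f} bits")  # 0 bits would mean perfect privacy
```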
Xiangyu Wu, Hailiang Zhang, Yang Yang et al.
In this paper, we present our champion solution to the Global Artificial Intelligence Technology Innovation Competition Track 1: Medical Imaging Diagnosis Report Generation. We select CPT-BASE as our base model for the text generation task. During the pre-training stage, we remove the masked language modeling task of CPT-BASE, reconstruct the vocabulary, and adopt a span-masking strategy with a gradually increasing masking ratio to perform the denoising auto-encoder pre-training task. In the fine-tuning stage, we design iterative retrieval augmentation and noise-aware similarity bucket prompt strategies. The retrieval augmentation constructs a mini knowledge base that enriches the input information of the model, while the similarity bucket further perceives the noise within the mini knowledge base, guiding the model to generate higher-quality diagnostic reports based on the similarity prompts. Surprisingly, our single model achieves a score of 2.321 on leaderboard A, and the multi-model fusion scores are 2.362 and 2.320 on the A and B leaderboards respectively, securing first place in the rankings.
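A small sketch of span-masking corruption for a denoising pre-training objective, in the spirit of the abstract's description; the mask token, span length, and the schedule of masking ratios are placeholders, not CPT-BASE's actual configuration.

```python
# Illustrative span masking for denoising pre-training. Mask token, span length,
# and the masking-ratio schedule are placeholders, not the solution's actual setup.
import random

def span_mask(tokens, mask_ratio, mask_token="[MASK]", max_span=3):
    """Replace random contiguous spans until roughly `mask_ratio` of tokens are masked."""
    tokens = list(tokens)
    target = int(len(tokens) * mask_ratio)
    masked = 0
    while masked < target:
        span = random.randint(1, max_span)
        start = random.randrange(0, max(1, len(tokens) - span))
        for i in range(start, start + span):
            if tokens[i] != mask_token:
                tokens[i] = mask_token
                masked += 1
    return tokens

sentence = "the chest x ray shows mild cardiomegaly with clear lung fields".split()
# "Gradually increasing the masking ratio" across pre-training stages, as described.
for ratio in (0.15, 0.30, 0.45):
    print(ratio, " ".join(span_mask(sentence, ratio)))
```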
Leif Azzopardi, Vishwa Vinay
This paper introduces the concept of accessibility from the field of transportation planning and adopts it within the context of Information Retrieval (IR). An analogy is drawn between the fields, which motivates the development of document accessibility measures for IR systems. Considering the accessibility of documents within a collection, given an IR system, provides a different perspective on the analysis and evaluation of such systems, which could be used to inform the design, tuning, and management of current and future IR systems.
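A small sketch of a cumulative-style document accessibility measure in the spirit of the abstract: a document is counted as more accessible the more queries retrieve it within the top c ranks. The `search` function and the query sample are hypothetical stand-ins for a real IR system and query log, not the paper's exact measures.

```python
# Sketch of a cumulative accessibility measure: per document, count how many queries
# rank it within the top c results. The `search` function and queries are hypothetical.
from collections import Counter

def accessibility(doc_ids, queries, search, c=10):
    """Count, per document, how many queries rank it in the top c results."""
    scores = Counter({d: 0 for d in doc_ids})
    for q in queries:
        for doc in search(q)[:c]:      # `search(q)` returns a ranked list of doc ids
            scores[doc] += 1
    return scores

# Toy stand-in system: every query "retrieves" the same three documents.
docs = ["d1", "d2", "d3", "d4"]
toy_search = lambda q: ["d1", "d2", "d3"]
print(accessibility(docs, ["q1", "q2"], toy_search, c=2))  # d4 is inaccessible here
```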
Jakub Pokrywka
Page 19 of 1,297,825