Results for "Computer software"
Showing 20 of ~8,151,746 results · from arXiv, CrossRef, DOAJ, Semantic Scholar
Manolis I. A. Lourakis, Antonis A. Argyros
Alessio Ceroni, K. Maass, H. Geyer et al.
John P. Chin, Virginia A. Diehl, K. Norman
Alistair Cockburn, Jim Highsmith
M. Genesereth, Steven P. Ketchpel
Sudhir Kumar, K. Tamura, M. Nei
L. Lindbom, Pontus Pihlgren, N. Jonsson
W. Wells, A. Colchester, S. Delp
John Millar Carroll
S. H. Kan
Svilen Kanev, Juan Pablo Darago, K. Hazelwood et al.
David A. Patterson, J. Hennessy
L. Akselrud, Y. Grin
BU Yunyang, QI Binting, BU Fanliang
In social media, people's comments usually describe a certain sentiment region in the corresponding image, and there is correspondence information between image and text. Most previous multimodal sentiment analysis methods explore the interactions between image and text from only a single perspective, capturing the correspondence between image regions and text words, which leads to suboptimal results. In addition, data on social media is strongly personal and subjective, and the sentiment in the data is multidimensional and complex, which gives rise to data with weak image-text sentiment consistency. To address these two problems, a multimodal sentiment analysis model with interactive fusion of two perspectives under cross-modal inconsistency perception is proposed. On the one hand, cross-modal interaction of image and text features from both global and local perspectives provides more comprehensive and accurate sentiment analysis, improving the performance and applicability of the model. On the other hand, inconsistency scores of the image-text features are calculated to represent the degree of cross-modal inconsistency and are used to dynamically regulate the weights of the unimodal and multimodal representations in the final sentiment features, thus improving the robustness of the model. Extensive experiments are conducted on two public datasets, MVSA-Single and MVSA-Multiple, and the results demonstrate the validity and superiority of the proposed multimodal sentiment analysis model compared to existing baseline models, with F1 scores increasing by 0.59 percentage points and 0.39 percentage points, respectively.
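The abstract does not give the model's implementation, so the following is only a minimal PyTorch-style sketch of how an inconsistency score could dynamically gate unimodal versus multimodal representations; all module names, dimensions, and the specific gating scheme are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class InconsistencyGatedFusion(nn.Module):
    """Illustrative fusion head: a learned inconsistency score between image
    and text features scales how much the multimodal representation
    contributes relative to the unimodal ones."""

    def __init__(self, dim: int, num_classes: int = 3):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                    nn.Linear(dim, 1), nn.Sigmoid())
        self.classifier = nn.Linear(3 * dim, num_classes)

    def forward(self, img_feat, txt_feat, mm_feat):
        # Inconsistency score in [0, 1]: high means image and text disagree.
        s = self.scorer(torch.cat([img_feat, txt_feat], dim=-1))
        # Down-weight the multimodal representation when inconsistency is
        # high, and lean on the unimodal representations instead.
        fused = torch.cat([(1 - s) * mm_feat,
                           s * img_feat,
                           s * txt_feat], dim=-1)
        return self.classifier(fused)

In this sketch the gate is a single scalar per sample; the paper's actual weighting may operate per dimension or per modality.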
Mohamed Sami Rakha, Andriy Miranskyy, Daniel Alencar da Costa
Software defect prediction (SDP) is crucial for delivering high-quality software products. Recent research has indicated that prediction performance improvements in SDP are achievable by applying hyperparameter tuning to a particular SDP scenario. However, the positive impact of the hyperparameter tuning step may differ based on the targeted SDP scenario. Comparing the impact of hyperparameter tuning across SDP scenarios is necessary to provide comprehensive insights and enhance the robustness, generalizability, and, eventually, the practicality of SDP modeling for quality assurance. Therefore, in this study, we contrast the impact of hyperparameter tuning across two pivotal and consecutive SDP scenarios: (1) Inner Version Defect Prediction (IVDP) and (2) Cross Version Defect Prediction (CVDP). The main distinctions between the two scenarios lie in the scope of defect prediction and the selected evaluation setups. This study's experiments use common evaluation setups, 28 machine learning (ML) algorithms, 53 post-release software datasets, two tuning algorithms, and five optimization metrics. We apply statistical analyses to compare the differences in SDP performance impact by investigating the overall impact, the impact on individual ML algorithms, and variations across software dataset sizes. The results indicate that the SDP gains within the IVDP scenario are significantly larger than those within the CVDP scenario. The results also reveal that performance gains asserted for up to 24 out of 28 ML algorithms may not hold across multiple SDP scenarios. Furthermore, we found that small software datasets are more susceptible to larger differences in performance impacts. Overall, the study findings recommend that software engineering researchers and practitioners consider the effect of the selected SDP scenario when expecting performance gains from hyperparameter tuning.
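As a rough illustration of the comparison being made, here is a minimal scikit-learn sketch that measures the tuning gain under an inner-version split versus a cross-version split; the synthetic data, the single classifier, and the tiny search space are placeholder assumptions, not the study's actual setup of 28 algorithms and 53 datasets.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_v1, y_v1 = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)  # release N (synthetic)
X_v2, y_v2 = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)  # release N+1 (synthetic)

def tuning_gain(X_train, y_train, X_test, y_test):
    # Baseline: default hyperparameters.
    default = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    # Tuned: a small randomized search as a stand-in for the tuning step.
    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=0),
        {"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]},
        n_iter=5, scoring="roc_auc", random_state=0).fit(X_train, y_train)
    auc = lambda m: roc_auc_score(y_test, m.predict_proba(X_test)[:, 1])
    return auc(search.best_estimator_) - auc(default)

# IVDP: random split within one release; CVDP: train on v1, test on v2.
Xtr, Xte, ytr, yte = train_test_split(X_v1, y_v1, random_state=0)
print("IVDP tuning gain:", tuning_gain(Xtr, ytr, Xte, yte))
print("CVDP tuning gain:", tuning_gain(X_v1, y_v1, X_v2, y_v2))

On real defect data the study's finding would show the first number exceeding the second; on this random synthetic data both gains hover near zero.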
Filipe R. Cogo, Gustavo A. Oliva, Ahmed E. Hassan
The rapid advancement of AI-assisted software engineering has brought transformative potential to the field of software engineering, but existing tools and paradigms remain limited by cognitive overload, inefficient tool integration, and the narrow capabilities of AI copilots. In response, we propose Compiler.next, a novel search-based compiler designed to enable the seamless evolution of AI-native software systems as part of the emerging Software Engineering 3.0 era. Unlike traditional static compilers, Compiler.next takes human-written intents and automatically generates working software by searching for an optimal solution. This process involves dynamic optimization of cognitive architectures and their constituents (e.g., prompts, foundation model configurations, and system parameters) while finding the optimal trade-off between several objectives, such as accuracy, cost, and latency. This paper outlines the architecture of Compiler.next and positions it as a cornerstone in democratizing software development by lowering the technical barrier for non-experts, enabling scalable, adaptable, and reliable AI-powered software. We present a roadmap to address the core challenges in intent compilation, including developing quality programming constructs, effective search heuristics, reproducibility, and interoperability between compilers. Our vision lays the groundwork for fully automated, search-driven software development, fostering faster innovation and more efficient AI-driven systems.
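Compiler.next's actual search operators are not described in this abstract; the following toy Python sketch illustrates only the general idea of a search loop over candidate cognitive-architecture configurations scored on an accuracy/cost/latency trade-off. The candidate space, the evaluate() scorer, and the weights are all hypothetical stand-ins.

import itertools, random

PROMPTS = ["Answer concisely: {intent}", "Think step by step, then answer: {intent}"]
MODELS = {"small": {"cost": 1, "latency": 0.2}, "large": {"cost": 10, "latency": 1.5}}

def evaluate(prompt, model):
    """Hypothetical scorer: a real system would run a benchmark suite here."""
    random.seed(hash((prompt, model)) % 2**32)
    accuracy = random.uniform(0.6, 0.95) + (0.03 if model == "large" else 0.0)
    return accuracy, MODELS[model]["cost"], MODELS[model]["latency"]

def scalarize(acc, cost, latency, w=(1.0, 0.02, 0.1)):
    # Weighted trade-off: reward accuracy, penalize cost and latency.
    return w[0] * acc - w[1] * cost - w[2] * latency

best = max(itertools.product(PROMPTS, MODELS),
           key=lambda c: scalarize(*evaluate(*c)))
print("selected configuration:", best)

A production intent compiler would replace the exhaustive product with a search heuristic (evolutionary, Bayesian, or otherwise) and a Pareto front rather than a single weighted score.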
I Putu Agus Eka Darma Udayana, Made Sudarma, I Ketut Gede Darma Putra et al.
Electroencephalogram (EEG) is a non-invasive technology that is widely used to record the electrical activity of the brain. However, the EEG signal is often contaminated by noise, including ocular artefacts and muscle activity, which can interfere with accurate analysis and interpretation. This research aims to improve the quality of EEG signals related to concentration by comparing the effectiveness of two denoising methods, namely Independent Component Analysis (ICA) and Principal Component Analysis (PCA). Using commercial EEG headsets, this study recorded Alpha, Beta, Delta, and Theta signals from 20 participants while they performed tasks that required concentration. The effectiveness of each denoising technique is evaluated by examining changes in standard deviation and by calculating the Percentage Residual Difference (PRD) of the EEG signal before and after denoising. The results show that ICA provides better denoising performance than PCA, as reflected by a significant reduction in standard deviation and a lower PRD value. These results indicate that the ICA method can effectively reduce noise while preserving important information from the original signal.
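The paper's exact pipeline is not given here; the sketch below assumes the common PRD definition, PRD = 100 * sqrt(sum((x - x_denoised)^2) / sum(x^2)), and uses a high-kurtosis heuristic to pick the artifact component, which is an assumption rather than the authors' selection rule.

import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

def prd(original, denoised):
    # Percentage Residual Difference between the raw and denoised signals.
    return 100 * np.sqrt(np.sum((original - denoised) ** 2) / np.sum(original ** 2))

def ica_denoise(eeg):
    # eeg: array of shape (n_samples, n_channels)
    ica = FastICA(n_components=eeg.shape[1], random_state=0)
    sources = ica.fit_transform(eeg)
    # Heuristic: ocular artifacts tend to be strongly non-Gaussian, so drop
    # the component with the highest kurtosis (an illustrative assumption).
    worst = np.argmax(kurtosis(sources, axis=0))
    sources[:, worst] = 0.0
    return ica.inverse_transform(sources)

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 512)
clean = np.column_stack([np.sin(2 * np.pi * 10 * t + p) for p in (0, 1, 2, 3)])
noisy = clean + 0.3 * rng.normal(size=clean.shape)
print("PRD after ICA:", prd(noisy, ica_denoise(noisy)))

A PCA variant would swap FastICA for sklearn.decomposition.PCA and drop low-variance components instead, which is what makes the two methods directly comparable under the same PRD metric.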
Tijs Karman, Niccolò Bigagli, Weijun Yuan et al.
We develop double microwave shielding, which has recently enabled evaporative cooling to the first Bose-Einstein condensate of polar molecules [Bigagli et al., Nature 631, 289 (2024)]. Two microwave fields of different frequency and polarization are employed to effectively shield polar molecules from inelastic collisions and three-body recombination. Here, we describe in detail the theory of double microwave shielding. We demonstrate that double microwave shielding effectively suppresses two- and three-body losses. Simultaneously, dipolar interactions and the scattering length can be flexibly tuned, enabling comprehensive control over interactions in ultracold gases of polar molecules. We show that this approach works universally for a wide range of molecules. This opens the door to studying many-body physics with strongly interacting dipolar quantum matter.
Abdullah Addas, Muhammad Nasir Khan et al.
Introduction: The regional disparity in higher education access can only be addressed through strategies for sustainable development and diversification of the economy, as envisioned in Saudi Vision 2030. Currently, 70% of universities are concentrated in the Central and Eastern regions, leaving the Northern and Southern parts of the country with limited opportunities. Methods: The study created a framework with sensors and generative adversarial networks (GANs) that optimizes the distribution of medical universities, supporting equity in access to education and balanced regional development. The research applies an artificial intelligence (AI)-driven framework that combines sensor data with GAN-based models to perform real-time geographic and demographic data analyses for the placement of higher education institutions throughout Saudi Arabia. This framework analyzes multisensory data by examining the impacts of strategic university placement on regional economies, social mobility, and the environment. Scenario modeling was used to simulate potential outcomes of changes in university distribution. Results: The findings indicated that areas with a higher density of universities experience up to 20% more job opportunities and higher GDP growth of up to 15%. The GAN-based simulations reveal that redistributing educational institutions to underrepresented regions could decrease environmental impacts by about 30% and enhance access. More specifically, strategic placement in underserved areas is associated with a reduction of approximately 10% in unemployment. Discussion: The research underscores the need to include AI and sensor technology in developing educational infrastructure. The proposed framework can be used for continuous monitoring and dynamic adaptation of university placement strategies to align them with evolving economic and environmental objectives. The study demonstrates the transformative potential of AI-enabled solutions for furthering equal access to education and sustainable regional development throughout Saudi Arabia.
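The abstract does not detail the scenario-modeling step; as a purely hypothetical illustration, the following sketch scores candidate placement scenarios on the competing objectives the study names (jobs, GDP, environmental impact, access). All numbers and weights are placeholders, not values from the study.

# Hypothetical scenario scores; a real pipeline would derive these from
# sensor data and GAN-based simulations rather than hard-coded estimates.
scenarios = {
    "status_quo":      {"jobs": 0.10, "gdp": 0.08, "env_impact": 0.9, "access": 0.5},
    "northern_campus": {"jobs": 0.18, "gdp": 0.13, "env_impact": 0.6, "access": 0.8},
    "southern_campus": {"jobs": 0.16, "gdp": 0.12, "env_impact": 0.7, "access": 0.7},
}

def score(s, w_jobs=1.0, w_gdp=1.0, w_env=0.5, w_access=1.0):
    # Reward economic and access gains; penalize environmental impact.
    return (w_jobs * s["jobs"] + w_gdp * s["gdp"]
            + w_access * s["access"] - w_env * s["env_impact"])

best = max(scenarios, key=lambda name: score(scenarios[name]))
print("preferred scenario:", best)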
Page 12 of 407,588