Results for "Modern"
Showing 20 of ~4308059 results · from CrossRef, arXiv, DOAJ, Semantic Scholar
F. Halzen, Alan D. Martin
D. Wrong
E. Elton, M. Gruber
A. Giddens
E. Elton
C. Esveld
J. B. Harborne, Kosasih Padmawinata, Iwang Soediro
Yueguo Gu
C. Warwick
A. Yariv
M. Arend, B. Westermann, N. Risch
T. Stolarski
Jane Bennett
K. Sivaramakrishnan
K. Roelants, D. Gower, M. Wilkinson et al.
Bauchet Pierre
M. Morris
B. Turney
Rahul Singh, Yousuf Sultan, Tajmilur Rahman et al.
Technology is advancing at an unprecedented pace. With the advent of cutting-edge technologies, keeping up with rapid changes is becoming increasingly challenging. In addition, growing dependence on cloud technologies has imposed enormous pressure on modern web browsers, pushing them to adopt new technologies faster and making them more susceptible to defects (bugs). Although many studies have explored browser bugs, a comparative study of modern browsers that generalizes bug categories and their nature was still lacking. To fill this gap, we undertook an empirical investigation aimed at gaining insights into the prevalent bugs in Google Chromium and Mozilla Firefox as representatives of modern web browsers. We used GPT-4o to identify defect (bug) categories and analyze the clusters of the most commonly occurring bugs in the two prominent web browsers. Additionally, we compared our LLM-based bug categorization with a traditional NLP-based approach using TF-IDF and K-Means clustering. We found that although Google Chromium and Firefox have evolved in parallel since roughly the same period (2006-2008), Firefox suffers from a higher number of bugs and has far more defect-prone components than Chromium. This exploratory study offers developers valuable insights into browser bugs and defect-prone components, enabling them to craft web browsers and web applications with enhanced resilience and fewer errors.
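As context for the TF-IDF and K-Means baseline mentioned in the abstract, a minimal sketch of such a clustering pipeline is shown below (using scikit-learn, with invented bug summaries; this is an illustrative assumption, not the study's actual code or data):

```python
# Illustrative sketch (not the study's pipeline): cluster bug-report summaries
# with TF-IDF features and K-Means, as in the traditional NLP baseline mentioned
# in the abstract. The example reports below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

bug_reports = [
    "Tab crashes when loading large WebGL canvas",
    "Memory leak in service worker after repeated fetch calls",
    "CSS grid layout renders incorrectly after window resize",
    "JavaScript promise rejection not surfaced in devtools console",
    "GPU process hangs on video decode of certain codecs",
    "Flexbox items overflow container when zoom level changes",
]

# Turn free-text reports into TF-IDF vectors (unigrams and bigrams).
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
X = vectorizer.fit_transform(bug_reports)

# Cluster into k groups; in practice k would be chosen via a silhouette score or similar.
k = 3
kmeans = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)

# Print the top terms per cluster as a rough "bug category" label.
terms = vectorizer.get_feature_names_out()
for i, center in enumerate(kmeans.cluster_centers_):
    top = [terms[j] for j in center.argsort()[::-1][:3]]
    print(f"cluster {i}: {', '.join(top)}")
```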
Jing Lei
Modern data analysis and statistical learning are marked by complex data structures and black-box algorithms. Data complexity stems from technologies such as imaging, remote sensing, wearable devices, and genomic sequencing. At the same time, black-box models, especially deep neural networks, have achieved impressive results. This combination raises new challenges for uncertainty quantification and statistical inference, which we refer to as "black-box inference." Black-box inference is difficult due to the lack of traditional modeling assumptions and the opaque behavior of modern estimators. These factors make it hard to characterize the distribution of estimation errors. A popular solution is post-hoc randomization, which, under mild assumptions such as exchangeability, can yield valid uncertainty quantification. Such methods range from classical techniques like permutation tests, the jackknife, and the bootstrap to more recent innovations like conformal inference. These approaches typically require little knowledge of data distributions or the internal workings of estimators. Many rely on the idea that estimators behave similarly under small perturbations of the data -- a concept formalized as stability. Over time, stability has become a key principle in data science, influencing research on generalization error, privacy, and adaptive inference. This article investigates cross-validation (CV) -- a widely used resampling method -- through the lens of stability. We first review recent theoretical results on CV for estimating generalization error and model selection under stability assumptions. We then examine uncertainty quantification for CV-based risk estimates. Together, these insights yield new theory and tools, which we apply to topics including model selection, selective inference, and conformal prediction.
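As a concrete illustration of the exchangeability-based methods the abstract surveys, here is a minimal split-conformal prediction sketch (synthetic data and an arbitrary regressor chosen for illustration; this is not the article's own example):

```python
# Illustrative sketch: split-conformal prediction intervals, one of the
# exchangeability-based uncertainty quantification methods mentioned above.
# The data and model below are synthetic assumptions for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 1000
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(n)

# Split into a proper training set and a calibration set.
X_train, X_cal = X[:600], X[600:]
y_train, y_cal = y[:600], y[600:]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Absolute-residual conformity scores on the calibration set, and the
# conformal quantile at miscoverage level alpha.
alpha = 0.1
scores = np.abs(y_cal - model.predict(X_cal))
level = np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores)
q = np.quantile(scores, min(level, 1.0))

# Prediction interval for a new point: [f(x) - q, f(x) + q],
# valid on average under exchangeability of calibration and test points.
x_new = np.array([[1.0]])
pred = model.predict(x_new)[0]
print(f"90% conformal interval: [{pred - q:.3f}, {pred + q:.3f}]")
```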
Page 19 of 215403