J. Hartmanis, R. Stearns
Results for "Computer Science"
Showing 20 of ~22,628,874 results · from DOAJ, arXiv, Semantic Scholar, CrossRef
C. Samson, B. Espiau, M. L. Borgne
Lucas S. Lopes, Ricardo L. de Queiroz
The performance of neural image coders is heavily dependent on their architecture and, hence, on the selection of hyperparameters. Such performance, for a given architecture, is often ascertained by trial, that is, after training and inference, so that many trials may be conducted to select the hyperparameters. We propose a multi-objective hyperparameter optimization (MOHPO) method for neural image compression based on rate-distortion-complexity (RDC) analysis, which drastically reduces the number of networks to try (train and test), thereby saving resources. We validate it on well-established benchmark problems and demonstrate its use with popular autoencoders, measuring their complexities in terms of the number of parameters and floating-point operations. Our method, which we refer to as the greedy lower convex hull (GLCH), aims to track the lower convex hull of a cloud of hyperparameter possibilities. We compare our method with other well-established state-of-the-art MOHPO methods in terms of log-hypervolume difference as a function of the number of trained networks. The results indicate that the proposed method is highly competitive, particularly with fewer trained networks, which is a critical scenario in practice. Furthermore, it is deterministic, that is, it remains consistent across different runs.
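A minimal sketch of the lower-convex-hull idea underlying GLCH, assuming 2-D points of complexity versus distortion for already-trained networks; this is not the authors' published algorithm, only the geometric core: trials off the lower convex hull are dominated on the rate-distortion-complexity trade-off and need not be kept.

```python
# Sketch only: keep the trials that lie on the lower convex hull of (complexity, distortion).
def lower_convex_hull(points):
    """Return the points on the lower convex hull, sorted by x (Andrew's monotone chain)."""
    pts = sorted(set(points))
    hull = []
    for p in pts:
        # Pop the last hull point while it makes a non-convex turn with its predecessor and p.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            cross = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
            if cross <= 0:  # clockwise or collinear: hull[-1] is not on the lower hull
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Hypothetical trials: x = number of parameters, y = validation distortion.
trials = [(1.2e6, 0.041), (0.8e6, 0.055), (2.5e6, 0.039), (0.8e6, 0.070), (4.0e6, 0.038)]
print(lower_convex_hull(trials))
```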
S. Billinge, I. Levin
Y. Gil, E. Deelman, Mark Ellisman et al.
Workflows have emerged as a paradigm for representing and managing complex distributed computations and are used to accelerate the pace of scientific progress. A recent National Science Foundation workshop brought together domain, computer, and social scientists to discuss requirements of future scientific applications and the challenges they present to current workflow technologies.
S. Cheryan, J. Siy, Marissa Vichayapai et al.
Lin Zheng, Jinlong Li, Zhanbo Zhu et al.
In recent years, with the popularization of online education, real-time monitoring of learning engagement has become a key challenge for scholars. Existing studies mainly rely on questionnaires and physiological signal detection, which have limitations such as high subjectivity, poor real-time performance, and expensive equipment. Previous research has shown that head pose is closely related to cognitive state. However, current estimation models require substantial computational resources, making real-time deployment on mobile devices challenging. In this study, we validate the significant correlation between head pose and learning engagement based on the DAiSEE dataset (8,925 video clips) and propose a lightweight head pose estimation method. The proposed LightNet uses an improved feature extraction module (MG-Net) and an attention-based multi-scale fusion model (AMF). Experiments conducted on the 300W-LP and BIWI benchmark datasets demonstrate that, compared with existing state-of-the-art methods, LightNet substantially reduces model complexity by decreasing the number of parameters to just 0.45 × 10^6, a reduction in model size of over 90%. Despite this significant compression, LightNet maintains a high level of accuracy, with the mean absolute error (MAE) increasing by only 0.15°, indicating a minimal loss in prediction precision. Moreover, the model achieves a notable improvement in processing speed, with an increase of more than 50% relative to baseline approaches. This combination of a lightweight architecture, competitive accuracy, and accelerated inference speed underscores LightNet's effectiveness and its potential suitability for real-time applications. This study not only expands the application of head pose in education but also provides a feasible solution for real-time engagement monitoring on resource-constrained devices.
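The abstract does not give the internals of MG-Net or AMF; the PyTorch sketch below only illustrates the general pattern of attention-weighted fusion of multi-scale features feeding a three-angle (yaw, pitch, roll) regressor, with all module names and sizes chosen purely for illustration.

```python
# Illustrative sketch of attention-based multi-scale fusion for head-pose regression.
import torch
import torch.nn as nn

class AttentionFusionHead(nn.Module):
    def __init__(self, channels=(16, 32, 64), dim=64):
        super().__init__()
        # Project each scale's pooled features to a common dimension.
        self.proj = nn.ModuleList([nn.Linear(c, dim) for c in channels])
        self.score = nn.Linear(dim, 1)   # per-scale attention logit
        self.head = nn.Linear(dim, 3)    # yaw, pitch, roll

    def forward(self, feats):            # feats: list of (B, C_i, H_i, W_i) maps
        pooled = [f.mean(dim=(2, 3)) for f in feats]                              # global average pool
        tokens = torch.stack([p(x) for p, x in zip(self.proj, pooled)], dim=1)    # (B, S, dim)
        weights = torch.softmax(self.score(tokens), dim=1)                        # (B, S, 1)
        fused = (weights * tokens).sum(dim=1)                                     # attention-weighted sum
        return self.head(fused)

# Dummy multi-scale feature maps standing in for a lightweight backbone's outputs.
feats = [torch.randn(2, 16, 32, 32), torch.randn(2, 32, 16, 16), torch.randn(2, 64, 8, 8)]
print(AttentionFusionHead()(feats).shape)   # torch.Size([2, 3])
```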
Mehmet Hadi Suzer, Ferit Kiray, Emrah Ramazanoglu et al.
Sustainable nitrogen (N) management in arable crops requires the real-time assessment of crop growth and N uptake, particularly in water-limited environments. In the present study, we conducted two large-scale field experiments with rainfed and irrigated wheat in South-East Turkey to evaluate the effectiveness of drone- and satellite-based spectral indices, in combination with neural network models, for estimating biomass and nitrogen uptake. Four N fertilizer rates in the irrigated fields (N0: 0, N6: 60, N12: 120, and N16: 160 kg N ha⁻¹) and five N rates in the rainfed fields (N0: 0, N2: 20, N4: 40, N5: 50, and N6: 60 kg N ha⁻¹) were tested. The highest fresh biomass was 57.7 ± 1.1 and 15.9 ± 1.0 t ha⁻¹ for the irrigated and rainfed treatments, respectively, with a 2.5-fold higher grain yield in irrigated (8.2 ± 1.2 t ha⁻¹) compared to rainfed (2.9 ± 0.9 t ha⁻¹) wheat. Drone-based spectral indices, especially those based on the red-edge region (CL_red-edge), correlated strongly with biomass (R² > 0.9 in irrigated wheat) but failed to explain crop N concentration throughout the vegetation period. This limitation was attributed to the nitrogen dilution effect, where increasing biomass during crop growth leads to a decline in nitrogen concentration, complicating its accurate estimation via remote sensing. To address this, we employed a two-layer feed-forward neural network model and used SPAD and plant height values as supplementary input parameters to enhance estimations based on vegetation indices. This approach substantially improved the predictions of N uptake (R² up to 0.95), while even a simplified model version using only NDVI and plant height achieved significant performance (R² = 0.84). Overall, our results showed that spectral indices are reliable predictors of biomass but insufficient for estimating nitrogen concentration or uptake. Integrating indices with complementary crop traits in nonlinear models provides acceptable estimates of N uptake, supporting more precise fertilizer management and sustainable wheat production under water-limited conditions.
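As a rough illustration of the modelling approach (not the authors' trained network), the sketch below fits a small feed-forward regressor mapping NDVI and plant height to N uptake; the data are synthetic placeholders.

```python
# Sketch of a small feed-forward regressor for N uptake from NDVI and plant height.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
ndvi = rng.uniform(0.2, 0.9, 300)
height_cm = rng.uniform(20, 100, 300)
# Synthetic N uptake (kg N/ha), loosely increasing with both predictors, plus noise.
n_uptake = 40 * ndvi + 1.2 * height_cm + rng.normal(0, 8, 300)

X = np.column_stack([ndvi, height_cm])
X_tr, X_te, y_tr, y_te = train_test_split(X, n_uptake, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", r2_score(y_te, model.predict(X_te)))
```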
Opeyemi Bamigbade, Mark Scanlon, John Sheppard
Embeddings remain the best way to represent image features, but they do not always capture all latent information. This remains a problem in representation learning, and computer vision descriptors struggle with precision and accuracy. Enriching image embeddings with other features is necessary for tasks like image geolocation, especially for indoor scenes where descriptive cues can be less distinctive. This work proposes a model architecture that integrates an image's N dominant colours and colour histogram vectors in different colour spaces with image embeddings, from both deep metric learning and classification perspectives. The results indicate that integrating colour features improves the image embedding, surpassing the performance of using the embedding alone. In addition, the classification approach yields higher accuracy than deep metric learning methods. Interestingly, different saturation points were observed for colour-improved embedding features across models and colour spaces. These findings have implications for the design of more robust image geolocation systems, particularly in indoor environments.
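A minimal sketch of the fusion idea, assuming a generic deep embedding vector and handcrafted colour features (per-channel histogram plus k-means dominant colours); the feature sizes and colour space are illustrative, not the paper's exact design.

```python
# Sketch: concatenate a deep embedding with colour histogram and dominant-colour features.
import numpy as np
from sklearn.cluster import KMeans

def colour_features(image_rgb, n_dominant=3, bins=8):
    """image_rgb: (H, W, 3) float array in [0, 1]."""
    pixels = image_rgb.reshape(-1, 3)
    # Per-channel histogram, normalised to sum to 1.
    hist = np.concatenate([np.histogram(pixels[:, c], bins=bins, range=(0, 1))[0]
                           for c in range(3)]).astype(float)
    hist /= hist.sum()
    # N dominant colours as k-means cluster centres in RGB.
    centres = KMeans(n_clusters=n_dominant, n_init=4, random_state=0).fit(pixels).cluster_centers_
    return np.concatenate([hist, centres.ravel()])

def fused_descriptor(embedding, image_rgb):
    """Concatenate the deep embedding with the handcrafted colour features."""
    return np.concatenate([embedding, colour_features(image_rgb)])

# Illustrative usage with a random image and a dummy 128-d embedding.
img = np.random.default_rng(0).random((64, 64, 3))
emb = np.random.default_rng(1).random(128)
print(fused_descriptor(emb, img).shape)   # (161,) = 128 embedding + 24 histogram + 9 colours
```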
Muhammad Yusuf Halim, Ahmad Luthfi
Unmanned Aerial Vehicles (UAV) have become vital tools in industrial sectors such as coal mining for site inspections and operational monitoring. However, unauthorized UAV flights present security risks that necessitate forensic investigation. This study examines a forensic case involving a DJI Mini 3 UAV suspected of crossing company boundaries. Using the Conceptual Digital Forensics Model for the Drone Forensic Field, both static and dynamic forensic acquisition methods were applied. Static acquisition recovered 53 photographs, 11 videos, 11 audio files, 10 deleted photos, 4 deleted videos, and 3 unidentified log files. Dynamic acquisition yielded 64 media files including 63 photographs (.JPG and .jpg) with 10 deleted, 14 videos (.MP4, .MOV, .SWF) with 6 deleted, 11 audio files, 4 plain text files, 31 deleted files, 3 EXIF metadata records containing GPS coordinates, and 3 unidentified log files. The GPS data from EXIF metadata was visualized in Google Earth to map flight paths and confirm boundary violations. These findings demonstrate that dynamic acquisition retrieves a more comprehensive artifact set than static acquisition. This study highlights the importance of UAV digital forensics in supporting security investigations and ensuring compliance with industrial UAV policies.
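The study's tooling is not described beyond the forensic model it follows; the sketch below (assuming Pillow is available) only illustrates the general step of reading GPS coordinates from photo EXIF metadata and exporting them as KML for Google Earth, with file names and KML layout as assumptions.

```python
# Sketch: extract EXIF GPS coordinates and write Google Earth-readable KML placemarks.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def exif_gps(path):
    """Return (lat, lon) in decimal degrees, or None if the image has no GPS EXIF data."""
    gps_ifd = Image.open(path).getexif().get_ifd(0x8825)   # 0x8825 = GPS IFD tag
    gps = {GPSTAGS.get(k, k): v for k, v in gps_ifd.items()}
    if "GPSLatitude" not in gps or "GPSLongitude" not in gps:
        return None
    def to_deg(dms, ref):
        deg = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
        return -deg if ref in ("S", "W") else deg
    return (to_deg(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N")),
            to_deg(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E")))

def write_kml(coords, out_path="flight_path.kml"):
    """coords: iterable of (lat, lon) pairs recovered from the UAV's photos."""
    placemarks = "\n".join(
        f"<Placemark><Point><coordinates>{lon},{lat},0</coordinates></Point></Placemark>"
        for lat, lon in coords)
    with open(out_path, "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
                f"{placemarks}\n</Document></kml>\n")
```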
Dongni Liao, Jialin Wang
This paper studies discontinuous quasilinear sub-elliptic systems associated with Hörmander's vector fields under controllable and natural growth conditions. By a new $\mathcal{A}$-harmonic approximation reformulation for bilinear forms $\mathcal{A} \in \operatorname{Bil}(\mathbb{R}^{kN}, \mathbb{R}^{kN})$, we obtain optimal partial Hölder continuity with exact exponents for weak solutions with vanishing mean oscillation coefficients.
Xiaohua Wu, Xiaohui Tao, Wenjie Wu et al.
Social surveys in computational social science are carefully designed around elaborate domain theories so that they can reflect an interviewee's deeper thoughts without concealing their true feelings. Because the candidate questionnaire options depend heavily on the interviewee's previous answers, social survey analysis is complex and demands considerable time and expertise. The ability of large language models (LLMs) to perform complex reasoning is greatly enhanced by prompting techniques such as chain-of-thought (CoT), but it remains confined to left-to-right decision-making processes or a limited set of paths during inference, so LLMs can fall short on problems that require exploration and searching under uncertainty. In response, a novel large language model prompting method, called Random Forest of Thoughts (RFoT), is proposed to support uncertainty-aware reasoning in computational social science. RFoT lets LLMs perform deliberate decision-making by generating a diverse thought space and randomly selecting sub-thoughts to build a forest of thoughts, extending exploration and improving overall performance by drawing on a broader space of candidate responses. The method is applied to computational social science analysis on two datasets covering a spectrum of social survey analysis problems. Our experiments show that RFoT significantly enhances language models' abilities on two novel social survey analysis problems requiring non-trivial reasoning.
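A minimal sketch of the general RFoT pattern as described in the abstract (not the authors' implementation), with `llm` standing in for any prompt-to-text completion function.

```python
# Sketch: grow several randomly-conditioned reasoning chains and vote over their answers.
import random
from collections import Counter

def random_forest_of_thoughts(llm, question, n_thoughts=8, n_trees=5, thoughts_per_tree=3, seed=0):
    rng = random.Random(seed)
    # 1) Build a diverse thought space for the question.
    pool = [llm(f"Give one distinct consideration for answering: {question} (idea #{i + 1})")
            for i in range(n_thoughts)]
    answers = []
    for _ in range(n_trees):
        # 2) Randomly select a subset of thoughts to condition this tree's reasoning.
        subset = rng.sample(pool, k=min(thoughts_per_tree, len(pool)))
        context = "\n".join(f"- {t}" for t in subset)
        answers.append(llm(f"Considering only these points:\n{context}\n"
                           f"Answer the question: {question}"))
    # 3) Aggregate across the forest (here: simple majority vote over answer strings).
    return Counter(answers).most_common(1)[0][0]
```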
Noa Izsak
In the aftermath of COVID-19, many universities implemented supplementary "reinforcement" roles to support students in demanding courses. Although the name for such roles may differ between institutions, the underlying idea of providing structured supplementary support is common. However, these roles were often poorly defined, lacking structured materials, pedagogical oversight, and integration with the core teaching team. This paper reports on the redesign of reinforcement sessions in a challenging undergraduate course on formal methods and computational models, using a large language model (LLM) as a reflective planning tool. The LLM was prompted to simulate the perspective of a second-year student, enabling the identification of conceptual bottlenecks, gaps in intuition, and likely reasoning breakdowns before classroom delivery. These insights informed a structured, repeatable session format combining targeted review, collaborative examples, independent student work, and guided walkthroughs. Conducted over a single semester, the intervention received positive student feedback, indicating increased confidence, reduced anxiety, and improved clarity, particularly in abstract topics such as the pumping lemma and formal language expressive power comparisons. The findings suggest that reflective, instructor-facing use of LLMs can enhance pedagogical design in theoretically dense domains and may be adaptable to other cognitively demanding computer science courses.
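The paper does not publish its prompts; the template below is only an illustrative example of the instructor-facing, student-perspective prompting it describes, with the topic parameter and wording as assumptions.

```python
# Illustrative prompt template: ask an LLM to role-play a second-year student so that likely
# conceptual bottlenecks surface before the reinforcement session is planned.
PLANNING_PROMPT = """You are a second-year computer science student preparing for a session on {topic}
(for example, the pumping lemma). List the points where you expect to get confused, the intuitions
you feel you are missing, and the mistakes you are most likely to make, in order of severity."""

def plan_session(llm, topic):
    """`llm` is any completion function (prompt -> text)."""
    return llm(PLANNING_PROMPT.format(topic=topic))
```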
Emir Catir, Robin Claesson, Rodothea Myrsini Tsoupidi
Large Language Models (LLMs), such as GitHub Copilot and ChatGPT, have become popular among programming students. Students use LLMs to assist them in programming courses, including for generating source code. Previous work has evaluated the ability of LLMs to solve introductory-course programming assignments, showing that LLMs are highly effective at generating code for introductory Computer Science (CS) courses. However, there is a gap in research on evaluating LLMs' ability to generate code that solves advanced programming assignments. In this work, we evaluate the ability of four LLM tools to solve programming assignments from advanced CS courses in three popular programming languages: Java, Python, and C. We manually select 12 problems: three from introductory courses as a baseline and nine programming assignments from second- and third-year CS courses. To evaluate the LLM-generated code, we generate a test suite of 1000 test cases per problem and analyze the program output. Our evaluation shows that although LLMs are highly effective at generating source code for introductory programming courses, solving advanced programming assignments is more challenging. Nonetheless, in many cases, LLMs identify the base problem and provide partial solutions that may be useful to CS students. Furthermore, our results may provide useful guidance for teachers of advanced programming courses on how to design programming assignments.
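A minimal sketch of the kind of grading harness the abstract describes (the study's own test generators and pass criteria are not public): run candidate and reference programs on the same generated inputs and compare their standard output.

```python
# Sketch: score an LLM-generated solution against a reference solution over many test inputs.
import subprocess

def run(cmd, stdin_text, timeout=5):
    """Run a program command (e.g. ["python3", "solution.py"]) on one input, return its stdout."""
    out = subprocess.run(cmd, input=stdin_text, capture_output=True, text=True, timeout=timeout)
    return out.stdout.strip()

def score(candidate_cmd, reference_cmd, test_inputs):
    passed = 0
    for case in test_inputs:
        try:
            if run(candidate_cmd, case) == run(reference_cmd, case):
                passed += 1
        except subprocess.TimeoutExpired:
            pass   # a timed-out case counts as failed
    return passed / len(test_inputs)

# Hypothetical usage with 1000 generated inputs:
# print(score(["python3", "llm_solution.py"], ["python3", "reference.py"], generated_inputs))
```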
V. Venkatraman Krishnan, L. Shao, V. Balakrishnan et al.
Binary (and trinary) radio pulsars are natural laboratories in space for understanding gravity in the strong-field regime, with many unique and precise tests carried out so far, including the most precise tests of the strong equivalence principle and of the radiative properties of gravity. The Square Kilometre Array (SKA) telescope, with its high sensitivity in the Southern Hemisphere, will vastly improve the timing precision of recycled pulsars, allowing for a deeper search for potential deviations from general relativity (GR) in currently known systems. A Galactic census of pulsars will, in addition, yield the discovery of dozens of relativistic pulsar systems, potentially including pulsar-black hole binaries, which can be used to test the cosmic censorship hypothesis and the "no-hair" theorem. Aspects of gravitation to be explored include tests of strong equivalence principles, gravitational dipole radiation, extra field components of gravitation, gravitomagnetism, and spacetime symmetries. In this chapter, we describe the kinds of gravity tests possible with binary pulsars and outline the features and abilities that the SKA must possess to best contribute to this science.
Miao-Kun Sun
N. Amenta, Sunghee Choi, Tamal K. Dey et al.
Page 38 of 1,131,444