Hongqun Yang, Zhenghe Xu, M. Fan et al.
Results for "Ocean engineering"
Showing 20 of ~9447139 results · from arXiv, DOAJ, CrossRef, Semantic Scholar
G. Amy, N. Ghaffour, Zhenyu Li et al.
R. Katz, M. Parlange, P. Naveau
Ankita Rajaram Naik, Apurva Swarnakar, Kartik Mittal
Over the past few decades, underwater image enhancement has attracted increasing research effort due to its significance in underwater robotics and ocean engineering. Research has evolved from physics-based solutions to very deep CNNs and GANs. However, these state-of-the-art algorithms are computationally expensive and memory intensive, which hinders their deployment on portable devices for underwater exploration tasks. Moreover, these models are trained on either synthetic or limited real-world datasets, making them less practical in real-world scenarios. In this paper, we propose a shallow neural network architecture, Shallow-UWnet, which maintains performance while having fewer parameters than state-of-the-art models. We also demonstrate the generalization of our model by benchmarking its performance on a combination of synthetic and real-world datasets.
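The abstract does not give Shallow-UWnet's actual layers; as a rough illustration of the design idea only (a couple of small convolutions plus a skip connection from the raw input, keeping the parameter count low), a minimal NumPy sketch with made-up random weights:

```python
import numpy as np

def conv2d(x, w, b):
    """Naive 'same'-padded k x k convolution: (H, W, C_in) -> (H, W, C_out)."""
    H, W, _ = x.shape
    k, _, _, c_out = w.shape
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    out = np.empty((H, W, c_out))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k, :]
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def shallow_enhance(img, rng):
    """One conv block with a residual skip from the input -- the kind of
    lightweight structure a shallow enhancement net relies on (illustrative,
    not the paper's architecture; weights here are random)."""
    w1 = rng.normal(0, 0.1, (3, 3, 3, 8)); b1 = np.zeros(8)
    w2 = rng.normal(0, 0.1, (3, 3, 8, 3)); b2 = np.zeros(3)
    h = np.maximum(conv2d(img, w1, b1), 0)          # ReLU
    return np.clip(conv2d(h, w2, b2) + img, 0, 1)   # skip connection + clamp

rng = np.random.default_rng(0)
img = rng.random((16, 16, 3))       # stand-in for an underwater RGB patch
out = shallow_enhance(img, rng)
print(out.shape)  # (16, 16, 3)
```

The skip connection lets the tiny network learn only a correction to the raw image, which is one common way shallow models stay competitive with deeper ones.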
J. Carton, G. Chepurin, Xianhe Cao et al.
Lei Zhang
The quantum threat to cybersecurity has accelerated the standardization of Post-Quantum Cryptography (PQC). Migrating legacy software to these quantum-safe algorithms is not a simple library swap, but a new software engineering challenge: existing vulnerability detection, refactoring, and testing tools are not designed for PQC's probabilistic behavior, side-channel sensitivity, and complex performance trade-offs. To address these challenges, this paper outlines a vision for a new class of tools and introduces the Automated Quantum-safe Adaptation (AQuA) framework, with a three-pillar agenda for PQC-aware detection, semantic refactoring, and hybrid verification, thereby motivating Quantum-Safe Software Engineering (QSSE) as a distinct research direction.
Alexander Korn, Lea Zaruchas, Chetan Arora et al.
Large Language Models, particularly decoder-only generative models such as GPT, are increasingly used to automate Software Engineering tasks. These models are primarily guided through natural language prompts, making prompt engineering a critical factor in system performance and behavior. Despite their growing role in SE research, prompt-related decisions are rarely documented in a systematic or transparent manner, hindering reproducibility and comparability across studies. To address this gap, we conducted a two-phase empirical study. First, we analyzed nearly 300 papers published at the top-3 SE conferences since 2022 to assess how prompt design, testing, and optimization are currently reported. Second, we surveyed 105 program committee members from these conferences to capture their expectations for prompt reporting in LLM-driven research. Based on the findings, we derived a structured guideline that distinguishes essential, desirable, and exceptional reporting elements. Our results reveal significant misalignment between current practices and reviewer expectations, particularly regarding version disclosure, prompt justification, and threats to validity. We present our guideline as a step toward improving transparency, reproducibility, and methodological rigor in LLM-based SE research.
Jakub Talaga, Pawel Netzel, Dominika Cywicka
Forest tree species diversity plays a critical role in maintaining ecosystem resilience and function. However, large-scale assessments remain challenging due to the limitations of field-based and supervised remote sensing methods, which require costly training data and species-level labeling. In this study, we propose an unsupervised approach to estimating tree species diversity based solely on satellite imagery (Sentinel-2 or Landsat-8) acquired during the 2019 growing season. The method integrates vegetation indices (GNDVI, EVI, NDMI), self-organizing maps, and spectral clustering to derive the evenness index without the need for species classification. Validation against field data from over 10 000 hexagonal grid cells (10 square kilometers each) across Poland shows strong agreement, with Pearson’s <italic>r</italic> = 0.87 (Sentinel-2, <inline-formula><tex-math notation="LaTeX">$R^{2}$</tex-math></inline-formula> = 0.75) and <italic>r</italic> = 0.81 (Landsat-8, <inline-formula><tex-math notation="LaTeX">$R^{2}$</tex-math></inline-formula> = 0.66). Because the approach does not require ground-based training data, it can be directly integrated into operational forest monitoring frameworks, including national forest inventory programs. This scalable, label-free method enables the repeatable monitoring of tree species diversity at national and continental scales.
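The abstract does not state which evenness formula is used; Pielou's evenness over cluster proportions is one standard choice, and it shows how a label-free diversity index can be computed directly from unsupervised cluster assignments (the labels below are illustrative):

```python
import math
from collections import Counter

def pielou_evenness(labels):
    """Pielou's evenness J = H' / ln(S): Shannon entropy of the label
    proportions normalized by its maximum. Clusters stand in for species;
    J is in [0, 1], with 1 meaning perfectly even proportions."""
    counts = Counter(labels)
    n = sum(counts.values())
    s = len(counts)
    if s < 2:
        return 0.0  # a single cluster carries no diversity signal
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(s)

# Four equally sized spectral clusters within a grid cell -> maximal evenness.
print(round(pielou_evenness(["a"] * 25 + ["b"] * 25 + ["c"] * 25 + ["d"] * 25), 3))  # 1.0
# One dominant cluster -> low evenness.
print(round(pielou_evenness(["a"] * 97 + ["b", "c", "d"]), 3))
```

Because the index depends only on cluster proportions, no species labels are ever needed, which is what makes the pipeline training-data-free.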
Marc Bruni, Fabio Gabrielli, Mohammad Ghafari et al.
Prompt engineering reduces reasoning mistakes in Large Language Models (LLMs). However, its effectiveness in mitigating vulnerabilities in LLM-generated code remains underexplored. To address this gap, we implemented a benchmark to automatically assess the impact of various prompt engineering strategies on code security. Our benchmark leverages two peer-reviewed prompt datasets and employs static scanners to evaluate code security at scale. We tested multiple prompt engineering techniques on GPT-3.5-turbo, GPT-4o, and GPT-4o-mini. Our results show that for GPT-4o and GPT-4o-mini, a security-focused prompt prefix can reduce the occurrence of security vulnerabilities by up to 56%. Additionally, all tested models demonstrated the ability to detect and repair between 41.9% and 68.7% of vulnerabilities in previously generated code when using iterative prompting techniques. Finally, we introduce a "prompt agent" that demonstrates how the most effective techniques can be applied in real-world development workflows.
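The benchmark's exact prefix is not quoted in the abstract; a minimal sketch of how a security-focused prefix might be prepended to a code-generation prompt (the wording below is hypothetical):

```python
# Hypothetical prefix wording -- the benchmark's actual prefix is not quoted here.
SECURITY_PREFIX = (
    "You are a security-conscious developer. Avoid injection flaws, "
    "unsafe deserialization, and hard-coded secrets in the code you write.\n\n"
)

def secure_prompt(task: str) -> str:
    """Prepend the security-focused prefix to a code-generation task."""
    return SECURITY_PREFIX + task

print(secure_prompt("Write a Python function that stores a user password."))
```

A prefix like this is a zero-cost intervention: it changes no tooling, only the prompt, which is why its reported effect on vulnerability rates is notable.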
Shavindra Wickramathilaka, John Grundy, Kashumi Madampe et al.
The use of diverse mobile applications among senior users is becoming increasingly widespread. However, many of these apps contain accessibility problems that result in negative user experiences for seniors. A key reason is that software practitioners often lack the time or resources to address the broad spectrum of age-related accessibility and personalisation needs. As current developer tools and practices encourage one-size-fits-all interfaces with limited potential to address the diversity of senior needs, there is a growing demand for approaches that support the systematic creation of adaptive, accessible app experiences. To this end, we present AdaptForge, a novel model-driven engineering (MDE) approach that enables advanced design-time adaptations of mobile application interfaces and behaviours tailored to the accessibility needs of senior users. AdaptForge uses two domain-specific languages (DSLs) to address age-related accessibility needs. The first DSL defines users' context-of-use parameters, while the second defines conditional accessibility scenarios and corresponding UI adaptation rules. These rules are interpreted by an MDE workflow to transform an app's original source code into personalised instances. We also report evaluations with professional software developers and senior end-users, demonstrating the feasibility and practical utility of AdaptForge.
Qiaolin Qin, Ronnie de Souza Santos, Rodrigo Spinola
Context. The rise of generative AI (GenAI) tools like ChatGPT and GitHub Copilot has transformed how software is learned and written. In software engineering (SE) education, these tools offer new opportunities for support, but also raise concerns about over-reliance, ethical use, and impacts on learning. Objective. This study investigates how undergraduate SE students use GenAI tools, focusing on the benefits, challenges, ethical concerns, and instructional expectations that shape their experiences. Method. We conducted a survey with 130 undergraduate students from two universities. The survey combined structured Likert-scale items and open-ended questions to investigate five dimensions: usage context, perceived benefits, challenges, and ethical and instructional perceptions. Results. Students most often use GenAI for incremental learning and advanced implementation, reporting benefits such as brainstorming support and confidence-building. At the same time, they face challenges including unclear rationales and difficulty adapting outputs. Students highlight ethical concerns around fairness and misconduct, and call for clearer instructional guidance. Conclusion. GenAI is reshaping SE education in nuanced ways. Our findings underscore the need for scaffolding, ethical policies, and adaptive instructional strategies to ensure that GenAI supports equitable and effective learning.
Mauro Marcelino, Marcos Alves, Bianca Trinkenreich et al.
[Context] An evidence briefing is a concise and objective transfer medium that can present the main findings of a study to software engineers in the industry. Although practitioners and researchers have deemed Evidence Briefings useful, their production requires manual labor, which may be a significant challenge to their broad adoption. [Goal] The goal of this registered report is to describe an experimental protocol for evaluating LLM-generated evidence briefings for secondary studies in terms of content fidelity, ease of understanding, and usefulness, as perceived by researchers and practitioners, compared to human-made briefings. [Method] We developed an RAG-based LLM tool to generate evidence briefings. We used the tool to automatically generate two evidence briefings that had been manually generated in previous research efforts. We designed a controlled experiment to evaluate how the LLM-generated briefings compare to the human-made ones regarding perceived content fidelity, ease of understanding, and usefulness. [Results] To be reported after the experimental trials. [Conclusion] Depending on the experiment results.
Max Ofsa, Taylan G. Topcu
Systems engineering (SE) is evolving with the availability of generative artificial intelligence (AI) and the demand for a systems-of-systems perspective, formalized under the purview of mission engineering (ME) in the US Department of Defense. Formulating ME problems is challenging because they are open-ended exercises that involve translating ill-defined problems into well-defined ones amenable to engineering development. It remains to be seen to what extent AI could assist problem formulation objectives. To that end, this paper explores the quality and consistency of multi-purpose Large Language Models (LLMs) in supporting ME problem formulation tasks, specifically focusing on stakeholder identification. We identify a relevant reference problem, a NASA space mission design challenge, and document ChatGPT-3.5's ability to perform stakeholder identification tasks. We execute multiple parallel attempts and qualitatively evaluate LLM outputs, focusing on both their quality and variability. Our findings portray a nuanced picture. We find that the LLM performs well in identifying human-focused stakeholders but poorly in recognizing external systems and environmental factors, despite explicit efforts to account for these. Additionally, LLMs struggle to preserve the desired level of abstraction and tend to produce solution-specific outputs that are inappropriate for problem formulation. More importantly, we document great variability among parallel threads, highlighting that LLM outputs should be used with caution, ideally by adopting a stochastic view of their abilities. Overall, our findings suggest that, while ChatGPT could reduce some expert workload, its lack of consistency and domain understanding may limit its reliability for problem formulation tasks.
Jiongqi Lin, Wuyin Weng, Linfan Shi et al.
The ever-increasing global demand for shrimp has spurred the growth of the shrimp farming and processing industries. Byproducts derived from shrimp processing, including shrimp heads, viscera, and shells, are underutilized and pose potential environmental pollution risks. Shrimp and its byproducts contain a wide range of components, including proteins, lipids, chitin, carotenoids, and minerals. Therefore, utilizing shrimp and its byproducts holds significant economic and environmental importance, with applications in the food, pharmaceutical, and other industries. Shrimp processing technologies, including thermal and non-thermal techniques, are reviewed. In addition, the applications of shrimp and its byproducts are summarized, covering their use in food and nutritional supplements, the development of active edible films, animal feed additives, and environmental and biotechnological applications. The barriers to and prospects of utilizing shrimp processing byproducts are also discussed. The extracted active ingredients possess various biological activities, such as antioxidant, antimicrobial, antihypertensive, and anti-inflammatory properties, and can serve as natural and safe food or feed additives or as important ingredients for functional foods and feeds owing to their unique functional and nutritional characteristics. More importantly, the bioactive compounds in shrimp byproducts offer new approaches for the development of food additives and nutritional supplements. Looking ahead, the development and utilization of shrimp byproducts will move in environmentally friendly directions, such as energy conversion, bioremediation technologies, and the manufacturing of bioplastics. Moreover, integration with artificial intelligence technologies is expected to open broad prospects for development.
Youbo Nan, Xiutong Wang, Hui Xu et al.
Triboelectric nanogenerators (TENGs) are an emerging wave energy harvesting technology with excellent potential. However, due to issues with sealing, anchoring, and difficult large-area deployment, TENGs still cannot achieve large-scale wave energy capture. Here, a submerged and completely open solid–liquid TENG (SOSL-TENG) is developed for ocean wave energy harvesting. The SOSL-TENG adapts to various water environments and, owing to its simple structure, is easy to deploy into marine engineering facilities already in service. Importantly, this not only solves the current difficulty of constructing TENG networks but also effectively exploits high-quality wave energy resources. The working mechanism and output performance of the SOSL-TENG are systematically investigated. Under optimal triggering conditions, the transferred charge (Qtr) and short-circuit current (Isc) of the SOSL-TENG are 2.58 μC and 85.9 μA, respectively. A wave tank experiment fully demonstrates the superiority of the SOSL-TENG network in large-scale collection and conversion of wave energy. Owing to its excellent output performance, the TENG can harvest wave energy to power various commercial electronic devices such as LED beads, hygrothermographs, and warning lights. Importantly, the SOSL-TENG network realizes self-powered operation of electrochemical systems, pointing a direction toward clean energy in industrial systems. This work provides a prospective strategy for large-scale deployment of TENG applications, especially for harvesting wave energy in spray splash zones or at the water surface.
Xiaopeng Xi, Xiaosheng Si, Yichun Niu et al.
Timely prognostics of remaining useful life (RUL) are increasingly critical for engineering systems, especially as long-life components face complex and evolving degradation risks. Nevertheless, conventional degradation models are frequently inadequate in capturing the memory effects and latent global state dependencies inherent in practical degradation processes. These limitations hinder the generalizability of existing methods. To overcome these challenges, this paper proposes a class of nonlinear degradation models that explicitly incorporate generalized spatiotemporal dependencies and memory effects among multiple similar components. The models are formulated using continuous stochastic differential equations and discretized via two numerical schemes to enable efficient parameter estimation through maximum likelihood (ML) methods. Subsequently, RUL predictions are derived using Monte Carlo simulation, with point estimates extracted from the resulting frequency histograms. The proposed method is validated through a numerical example and a blast furnace case study.
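The paper's spatiotemporal, memory-aware model is not reproduced here; as a generic sketch of the final prediction step only, a Monte Carlo simulation of an Euler-discretized degradation SDE whose first-passage times to a failure threshold form the RUL distribution (the drift function and all parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_rul(x0, threshold, drift, sigma, dt=0.1, n_paths=2000, t_max=200.0):
    """Monte Carlo first-passage times of the Euler-discretized SDE
    dX = drift(X) dt + sigma dW, starting from the current state x0.
    Each path's RUL is the first time it crosses the failure threshold."""
    n_steps = int(t_max / dt)
    x = np.full(n_paths, x0, dtype=float)
    alive = np.ones(n_paths, dtype=bool)      # paths that have not failed yet
    rul = np.full(n_paths, t_max)             # censored at t_max if never crossed
    for k in range(n_steps):
        x[alive] += drift(x[alive]) * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        crossed = alive & (x >= threshold)
        rul[crossed] = (k + 1) * dt
        alive &= ~crossed
    return rul

# Mildly nonlinear drift: degradation accelerates as the state grows.
rul = simulate_rul(x0=0.0, threshold=10.0, drift=lambda x: 0.5 + 0.02 * x, sigma=0.3)
print(round(float(np.median(rul)), 1))  # point estimate from the simulated RUL distribution
```

In the paper the point estimate is read off the frequency histogram of these simulated passage times; the median above plays the same role in this toy setting.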
Qishen Lv, Rui Yang, Chengmin Zhang et al.
Fusing infrared images with visible images yields more abundant and accurate information. However, existing infrared and visible image fusion methods often lack attention to the semantic information and global context information in the original images. To address these issues, we propose a novel deep learning framework for infrared and visible image fusion, named the Semantic Segmentation Driven Infrared and Visible Image Fusion Framework (SSDFusion). Within the fusion framework, the Local Global Feature Extraction Fusion Module is employed, complemented by the decoder. Furthermore, under the guidance of semantic segmentation, SSDFusion achieves a better understanding of complex scene region information, enhancing fusion task performance. Finally, an adaptive loss function is implemented throughout SSDFusion to fine-tune the balance between the semantic segmentation task and the image fusion task by adjusting their proportional contributions. This approach aids in more accurately preserving the semantic information in the image, thereby enhancing the performance of the fusion framework. We conducted comparative experiments on the MSRS dataset with existing advanced fusion methods. The experimental results show that SSDFusion performs best in both qualitative and quantitative metrics. Analysis of the public datasets indicates that our algorithm can improve the entropy (EN), spatial frequency (SF), standard deviation (SD), mutual information (MI), visual information fidelity (VIF), and edge-based similarity measure (Q<inline-formula> <tex-math notation="LaTeX">${}_{\text {AB/F}}$ </tex-math></inline-formula>) metrics by about 15.33%, 91.55%, 17.09%, 93.39%, 66.94%, and 122.56%, respectively. The ablation study further demonstrates that the local global feature fusion module, the adaptive fusion loss function, and the integration of semantic segmentation and image fusion have significant effects on improving the model performance.
SSDFusion also exhibits excellent performance in terms of computational efficiency and parameter count. Furthermore, we have also verified the good generalization ability of SSDFusion on the RoadScene and M3FD datasets.
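Of the metrics listed above, EN (image entropy) is the simplest to make concrete: it is the Shannon entropy of the 8-bit grey-level histogram, so a richer fused image scores higher. A minimal sketch with synthetic test images:

```python
import numpy as np

def image_entropy(img_u8):
    """EN metric: Shannon entropy (in bits) of the 8-bit grey-level histogram."""
    hist = np.bincount(img_u8.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins: 0 * log(0) := 0
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
flat = np.full((64, 64), 128, dtype=np.uint8)            # constant image: no information
noisy = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # near-uniform grey levels
print(round(image_entropy(flat), 2))   # 0.0
print(round(image_entropy(noisy), 2))  # close to the 8.0 maximum for 256 levels
```

The other reported metrics (SF, SD, MI, VIF, Q_AB/F) follow the same pattern of comparing the fused image's information content against the source pair, but their formulas are more involved.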
Page 12 of 472357