In the rapidly advancing field of geological oceanography, the Pacific Ocean and its marginal basins—encompassing key regions like the Sea of Japan, Bohai Bay Basin, South China Sea, and western Pacific seamounts—serve as a critical arena for unlocking Earth’s marine geological mysteries and resource potential [...]
This paper explores the legal framework surrounding the classification of “other persons” aboard vessels under the Ship Safety Act, particularly in comparison to international maritime conventions such as SOLAS. The term “other persons” has been a source of ambiguity and safety concerns, especially following several maritime accidents, including the 2024 collision near Yeoseodo. In Korea, truck drivers and other non-crew individuals have been permitted to board vessels as “other persons,” often exceeding permissible limits, raising significant safety and regulatory issues. This research examines the inconsistencies between Korea’s Ship Safety Act and international standards, noting that other major maritime nations impose stricter limits and clearer definitions on non-passenger personnel. Recommendations include aligning domestic laws with international conventions by redefining “other persons” and enforcing a stricter cap on non-crew passengers to enhance safety. The paper also addresses the need for categorizing individuals boarding vessels into clearer groups – crew, passengers, industrial personnel, and specialized personnel – to ensure legal clarity and improve compliance with global maritime safety standards. Through a comparative legal analysis, the paper advocates for the adoption of international norms in Korea’s maritime regulations.
To enable early identification of failure risks in ship systems and equipment, a dynamic cloud center of gravity model is developed for real-time, system-level health assessment. First, the Functional Analysis System Technique (FAST) is applied to decompose the operational functions and dependencies of the intelligent machinery room system, enabling the structured establishment of a hierarchical evaluation index system. Comprehensive weights are derived by combining fuzzy set (FS) theory with the entropy weight method, integrating expert-defined functional boundaries with measurable parameters critical to system performance. Then, an improved cloud center of gravity method, based on the Gaussian cloud model and a sliding time window, is used to compute the system's health value adaptively. The dynamic health model enables continuous online assessment and tracks the evolution of the system over time. Finally, the proposed model is applied to the Fuel Oil Supply System (FOSS). Comparison with system performance outputs and disassembly inspection results demonstrates that the proposed method maps health values that more accurately reflect the true changes in the system's health status.
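As a rough illustration of the kind of sliding-window health calculation described above, the sketch below computes a weighted health value from recent normalized indicator samples; the window length, indicator count, and weights are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a sliding-window, weighted "cloud center of gravity" style
# health value, assuming indicator readings normalized to [0, 1] and fixed
# combined fuzzy-set/entropy weights (all numbers here are illustrative).
import numpy as np

def health_value(window, weights):
    """window: (T, n) array of recent normalized indicator samples;
    weights: (n,) combined weights summing to 1."""
    ex = window.mean(axis=0)               # expectation per indicator (cloud centre)
    ideal = np.ones_like(ex)               # ideal state: every indicator at 1.0
    # Weighted deviation of the current centre of gravity from the ideal centre.
    deviation = np.sum(weights * np.abs(ideal - ex))
    return 1.0 - deviation                 # health value in [0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = 0.9 + 0.05 * rng.standard_normal((30, 4))   # 30-sample window, 4 indicators
    w = np.array([0.3, 0.3, 0.2, 0.2])
    print(round(health_value(np.clip(samples, 0.0, 1.0), w), 3))
```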
Nguyen Thi Huyen Trang, Taiga Mitsuyuki, Yoshiaki Hirakawa
et al.
This study investigates the seakeeping performance of a wind power generation ship (WPG ship). This type of vessel uses rigid sails for propulsion and submerged turbines mounted on either two or four booms to generate energy. The research includes both tank tests and simulations using Ansys AQWA, validated against the new strip method (NSM). The vessel used in this study is the KCS container ship. Overall, the power generation equipment increases the ship's stability and reduces roll but has almost no impact on pitch. The findings show that the 4-boom configuration offers better stability and seakeeping than the 2-boom configuration. Ship speed has a significant impact on the response amplitude operators (RAOs), especially in roll and pitch, both for the bare hull and for the hull with power generation equipment. At slightly increased speeds the roll RAO tends to decrease, but at higher speeds it increases again. Wind notably raises the roll RAO peak, reducing stability, while its effect on pitch is minimal. The KCS model maintains operational capability in winds up to Beaufort scale 11.
Edge caching is an emerging technology that equips edge nodes with caching units, allowing users to fetch contents of interest that have been pre-cached at the edge. The key to pre-caching is maximizing the cache hit percentage for cached content without compromising users' privacy. In this letter, we propose a federated learning (FL) assisted edge caching scheme based on a lightweight denoising diffusion probabilistic model (LDPM). Our simulation results verify that the proposed scheme achieves a higher cache hit percentage than existing FL-based methods and baseline methods.
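To make the caching objective concrete, here is a minimal sketch of the edge-node side: given per-content popularity scores (which in the paper would come from the federated-learned LDPM), the node caches the top items and the cache hit percentage is measured against actual requests. The scores, contents, and requests below are illustrative placeholders, not the paper's setup.

```python
# Minimal sketch: cache the top-C contents by predicted popularity, then
# measure the cache hit percentage against observed requests.
def cache_hit_percentage(scores, requests, capacity):
    cached = {c for c, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:capacity]}
    hits = sum(1 for r in requests if r in cached)
    return 100.0 * hits / len(requests)

scores = {"a": 0.9, "b": 0.7, "c": 0.2, "d": 0.1}     # assumed popularity predictions
requests = ["a", "a", "b", "c", "d", "a"]             # assumed request trace
print(cache_hit_percentage(scores, requests, capacity=2))   # ~66.7 (hits on a and b)
```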
Browser agents enable autonomous web interaction but face critical reliability and security challenges in production. This paper presents findings from building and operating a production browser agent. The analysis examines where current approaches fail and what prevents safe autonomous operation. The fundamental insight is that model capability does not limit agent performance; architectural decisions determine success or failure. Security analysis of real-world incidents reveals that prompt injection attacks make general-purpose autonomous operation fundamentally unsafe. The paper argues against developing general browsing intelligence in favor of specialized tools with programmatic constraints, where safety boundaries are enforced through code instead of large language model (LLM) reasoning. Through hybrid context management combining accessibility tree snapshots with selective vision, comprehensive browser tooling matching human interaction capabilities, and intelligent prompt engineering, the agent achieved approximately 85% success rate on the WebGames benchmark across 53 diverse challenges (compared to approximately 50% reported for prior browser agents and 95.7% human baseline).
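The idea of enforcing safety boundaries through code rather than LLM reasoning can be illustrated with a minimal sketch: a navigation tool that checks an origin allowlist before acting, so a prompt-injected instruction cannot widen the agent's reach. The function name and policy below are hypothetical, not the production agent's API.

```python
# Minimal sketch of a programmatic safety constraint around a browser tool:
# the check lives in code, so model output alone cannot bypass it.
from urllib.parse import urlparse

ALLOWED_ORIGINS = {"example.com", "docs.example.com"}   # assumed task-specific policy

def navigate(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_ORIGINS:
        # A prompt-injected instruction ("now go to evil.com") fails here,
        # regardless of how persuasive the injected text is to the LLM.
        raise PermissionError(f"navigation to {host!r} is outside the allowed origins")
    return f"navigated to {url}"

print(navigate("https://docs.example.com/page"))
```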
Yining Hong, Christopher S. Timperley, Christian Kästner
Machine learning (ML) components are increasingly integrated into software products, yet their complexity and inherent uncertainty often lead to unintended and hazardous consequences, both for individuals and society at large. Despite these risks, practitioners seldom adopt proactive approaches to anticipate and mitigate hazards before they occur. Traditional safety engineering approaches, such as Failure Mode and Effects Analysis (FMEA) and System Theoretic Process Analysis (STPA), offer systematic frameworks for early risk identification but are rarely adopted. This position paper advocates for integrating hazard analysis into the development of any ML-powered software product and calls for greater support to make this process accessible to developers. By using large language models (LLMs) to partially automate a modified STPA process with human oversight at critical steps, we expect to address two key challenges: the heavy dependency on highly experienced safety engineering experts, and the time-consuming, labor-intensive nature of traditional hazard analysis, which often impedes its integration into real-world development workflows. We illustrate our approach with a running example, demonstrating that many seemingly unanticipated issues can, in fact, be anticipated.
Hashini Gunatilake, John Grundy, Rashina Hoda
et al.
Empathy, defined as the ability to understand and share others' perspectives and emotions, is essential in software engineering (SE), where developers often collaborate with diverse stakeholders. It is also considered a vital competency in many professional fields such as medicine, healthcare, nursing, animal science, education, marketing, and project management. Despite its importance, empathy remains under-researched in SE. To further explore this, we conducted a socio-technical grounded theory (STGT) study through in-depth semi-structured interviews with 22 software developers and stakeholders. Our study explored the role of empathy in SE and how SE activities and processes can be improved by considering empathy. Through applying the systematic steps of STGT data analysis and theory development, we developed a theory that explains the role of empathy in SE. Our theory details the contexts in which empathy arises, the conditions that shape it, and the causes and consequences of its presence and absence. We also identified contingencies for enhancing empathy or overcoming barriers to its expression. Our findings provide practical implications for SE practitioners and researchers, offering a deeper understanding of how to effectively integrate empathy into SE processes.
Ocean salinity, one of the fundamental seawater parameters, plays an important role in oceanographic research. This work proposes an optical salinometer based on a Michelson interferometer (MI) suitable for in situ measurement in deep-sea environments; it features real-time calibration and multichannel multiplexing using the frequency modulated continuous wave (FMCW) technique. The symmetrical sapphire structure designed to withstand deep-sea pressure not only achieves automatic temperature compensation but also counteracts pressure-induced changes in optical path length. A model formula for optical salinity demodulation is derived by nonlinear least-squares fitting. In vertical profile testing, the optical salinometer demonstrated remarkable tracking performance, achieving an error of less than 0.001 psu. During a three-month long-term test at a depth of 4000 m, the sensor maintained a stable salinity demodulation error within ±0.002 psu. Its high stability and resolution give this optical salinometer broad prospects in ocean observation.
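For readers unfamiliar with the demodulation step, the sketch below shows a generic nonlinear least-squares fit of a salinity model to calibration data using SciPy; the quadratic model form and the synthetic data are assumptions for illustration, not the paper's actual model formula.

```python
# Minimal sketch of fitting a salinity-demodulation model by nonlinear least
# squares, assuming salinity is recovered from a measured optical quantity x
# (e.g. an interference phase or optical path length).
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    return a * x**2 + b * x + c          # assumed low-order demodulation model

x = np.linspace(0.0, 1.0, 50)                          # normalized optical measurement
true_salinity = 0.8 * x**2 + 4.0 * x + 30.0            # synthetic calibration data (psu)
noisy = true_salinity + np.random.default_rng(1).normal(0.0, 0.001, x.size)

params, _ = curve_fit(model, x, noisy)
print("fitted coefficients:", np.round(params, 3))
print("max residual (psu):", np.max(np.abs(model(x, *params) - noisy)))
```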
As AI systems grow increasingly specialized and complex, managing hardware heterogeneity becomes a pressing challenge. How can we efficiently coordinate and synchronize heterogeneous hardware resources to achieve high utilization? How can we minimize the friction of transitioning between diverse computation phases, reducing costly stalls from initialization, pipeline setup, or drain? Our insight is that a network abstraction at the ISA level naturally unifies heterogeneous resource orchestration and phase transitions. This paper presents the Reconfigurable Stream Network Architecture (RSN), a novel ISA abstraction designed for the DNN domain. RSN models the datapath as a circuit-switched network with stateful functional units as nodes and data streaming on the edges. Programming a computation corresponds to triggering a path. Software is explicitly exposed to the compute and communication latency of each functional unit, enabling precise control over data movement for optimizations such as compute-communication overlap and layer fusion. As nodes in a network naturally differ, the RSN abstraction can efficiently virtualize heterogeneous hardware resources by separating control from the data plane, requiring only low instruction-level intervention. We build a proof-of-concept design, RSN-XNN, on the VCK190, a heterogeneous platform with FPGA fabric and AI engines. Compared to the state-of-the-art solution on this platform, it reduces latency by 6.1x and improves throughput by 2.4x-3.2x. Compared to the T4 GPU with the same FP32 performance, it matches its latency while using only 18% of the memory bandwidth. Compared to the A100 GPU at the same 7 nm process node, it achieves 2.1x higher energy efficiency in FP32.
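The stream-network abstraction can be pictured with a small sketch: functional units are nodes with compute latencies, edges carry streams with communication latencies, and issuing a computation triggers a path through the network. The node names and latencies below are illustrative, not RSN-XNN's actual datapath.

```python
# Minimal sketch of the stream-network mental model: trigger a path and
# estimate pipeline-fill latency plus steady-state streaming time.
NODE_LATENCY = {"load": 4, "matmul": 12, "vector_add": 2, "store": 3}   # cycles (assumed)
EDGE_LATENCY = {("load", "matmul"): 1, ("matmul", "vector_add"): 1, ("vector_add", "store"): 1}

def trigger_path(path, num_elements):
    """Latency to fill the path once, then stream the remaining elements."""
    fill = sum(NODE_LATENCY[n] for n in path) + sum(EDGE_LATENCY[e] for e in zip(path, path[1:]))
    bottleneck = max(NODE_LATENCY[n] for n in path)      # slowest node limits throughput
    return fill + bottleneck * (num_elements - 1)

print(trigger_path(["load", "matmul", "vector_add", "store"], num_elements=256))
```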
Óscar Pedreira, Félix García, Mario Piattini
et al.
Gamification has been applied in software engineering to improve quality and results by increasing people's motivation and engagement. A systematic mapping has identified research gaps in the field, one of them being the difficulty of creating an integrated gamified environment comprising all the tools of an organization, since most existing gamified tools are custom developments or prototypes. In this paper, we propose a gamification software architecture that allows us to transform the work environment of a software organization into an integrated gamified environment, i.e., the organization can keep its tools, and the rewards users obtain for their actions in different tools accumulate centrally. We developed a gamification engine based on our proposal and carried out a case study applying it in a real software development company. The case study shows that the gamification engine allowed the company to create a gamified workplace by integrating custom-developed tools and off-the-shelf tools such as Redmine, TestLink, and JUnit with the gamification engine. Two main advantages can be highlighted: (i) our solution allows the organization to keep its current tools, and (ii) the rewards for actions in any tool accumulate in a centralized gamified environment.
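A minimal sketch of the central idea, rewards from heterogeneous tools accumulating in one place, might look as follows; the event names, point values, and class interface are illustrative assumptions, not the engine's actual API.

```python
# Minimal sketch of a centralized gamification engine: tool adapters report
# events, and rewards accumulate per user regardless of which tool they came from.
from collections import defaultdict

POINTS = {"issue_closed": 10, "test_case_passed": 2, "build_fixed": 15}   # assumed reward rules

class GamificationEngine:
    def __init__(self):
        self.scores = defaultdict(int)
        self.events = []                      # audit trail of (user, tool, event)

    def report_event(self, user: str, tool: str, event: str) -> int:
        """Called by a small adapter in each tool; rewards accumulate per user."""
        self.events.append((user, tool, event))
        self.scores[user] += POINTS.get(event, 0)
        return self.scores[user]

engine = GamificationEngine()
engine.report_event("alice", "Redmine", "issue_closed")
engine.report_event("alice", "JUnit", "test_case_passed")
print(engine.scores["alice"])   # 12: rewards from both tools mount up centrally
```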
DNA sequence alignment is an important workload in computational genomics. Reference-guided DNA assembly involves aligning many read sequences against candidate locations in a long reference genome. To reduce the computational load of this alignment, candidate locations can be pre-filtered using simpler alignment algorithms like edit distance. Prior work has explored accelerating filtering on simulated compute-in-DRAM, due to the massive parallelism of compute-in-memory architectures. In this paper, we present work-in-progress on accelerating filtering using a commercial compute-in-SRAM accelerator. We leverage the recently released Gemini accelerator platform from GSI Technology, which is, to our knowledge, the first commercial-scale compute-in-SRAM system. We accelerate Myers' bit-parallel edit distance algorithm, achieving an average speedup of 14.1x over single-core CPU performance. Individual query/candidate alignments achieve speedups of up to 24.1x. These early results suggest this novel architecture is well-suited to accelerating the filtering step of sequence-to-sequence DNA alignment.
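For reference, the filtering kernel named above, Myers' bit-parallel edit distance algorithm, can be sketched in plain Python as follows; arbitrary-precision integers stand in for the fixed-width bit-vectors used on the accelerator, and the CPU/accelerator mapping from the paper is not reproduced here.

```python
# Myers' bit-parallel algorithm for the Levenshtein edit distance between a
# query (pattern) and a candidate text, in the global-distance formulation.
def myers_edit_distance(query: str, text: str) -> int:
    m = len(query)
    if m == 0:
        return len(text)
    mask = (1 << m) - 1
    high_bit = 1 << (m - 1)
    # Per-character match bit-vectors: bit i set where query[i] == ch.
    peq = {}
    for i, ch in enumerate(query):
        peq[ch] = peq.get(ch, 0) | (1 << i)
    pv, mv, score = mask, 0, m               # positive/negative vertical deltas
    for ch in text:
        eq = peq.get(ch, 0)
        xv = eq | mv
        xh = ((((eq & pv) + pv) & mask) ^ pv) | eq
        ph = mv | (~(xh | pv) & mask)        # positive horizontal delta
        mh = pv & xh                         # negative horizontal delta
        if ph & high_bit:
            score += 1
        elif mh & high_bit:
            score -= 1
        ph = ((ph << 1) | 1) & mask          # carry in +1 at the top boundary
        mh = (mh << 1) & mask
        pv = mh | (~(xv | ph) & mask)
        mv = ph & xv
    return score

print(myers_edit_distance("GATTACA", "GATCACA"))   # 1 substitution -> distance 1
```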
Rodrigo Huerta, Mojtaba Abaie Shoushtary, Antonio González
GPU architectures have become popular for executing general-purpose programs. Their many-core design supports a large number of concurrently running threads to hide the latency of dependent instructions. In modern GPU architectures, each SM/core is typically composed of several sub-cores, each with its own independent pipeline. Simulators are a key tool for investigating novel concepts in computer architecture; they must be performance-accurate and model the target hardware properly in order to expose its real bottlenecks. This paper presents a broad analysis of different parts of Accel-sim, a popular GPGPU simulator, along with several improvements to its model. First, we focus on the front-end and develop a more realistic model. Then, we analyze how the result bus works and develop a more realistic model of it. Next, we describe the current memory pipeline model and propose a model for a more cost-effective design. Finally, we discuss other areas where the simulator can be improved.