This paper reviews research in ocean engineering over the last 50+ years with the aims of (I) understanding the technological challenges and evolution of the field, (II) investigating whether ocean engineering studies meet present global demands, (III) exploring new scientific/engineering tools that may suggest pragmatic solutions to problems, and (IV) identifying research and management gaps and the way forward. Six major research divisions are identified, namely (I) Ocean Hydrodynamics, (II) Risk Assessment and Safety, (III) Ocean Climate and Geophysics: Data and Models, (IV) Control and Automation in the Ocean, (V) Structural Engineering and Manufacturing for the Ocean, and (VI) Ocean Renewable Energy. As far as practically possible, research sub-divisions of the field are also identified. It is highlighted that research topics dealing with ocean renewable energy, control and path tracking of ships, and computational modelling of wave-induced motions are growing. Updating and forecasting energy resources, developing computational methods for wave generation, and introducing novel methods for the optimised control of energy converters are highlighted as potential research opportunities. Ongoing studies follow the global need for environmentally friendly renewable energies, though engineering-based studies often tend to overlook the longer-term potential influence of climate change. Development and exploitation of computational engineering methods with a focus on continuum mechanics problems remain relevant. Notwithstanding this, machine learning methods are attracting the attention of researchers. Analysis of COVID-19 transmission onboard ships is rarely conducted, and 3D printing-based studies still need more attention from researchers.
LIN Yongfeng, CHENG Zhen, LIU Wenmei, ZOU Zehua, LIU Hong, LIU Guangming, LIU Qingmei
In this study, the physicochemical properties of polysaccharides from Houttuynia cordata Thunb. fermented with Lactiplantibacillus plantarum HM6008 (FHCTP) were determined, and their antiallergic activity was evaluated using rat basophilic leukemia (RBL)-2H3 cells. The results showed that fermentation increased the ratio of mannose to sulfate in FHCTP. Compared with H. cordata Thunb. polysaccharides (HCTP), the particle size of FHCTP decreased by 26.67%, and its stability in aqueous solution increased. The inhibition rate of FHCTP on the degranulation of RBL-2H3 cells was significantly higher than that of HCTP ((82.79 ± 5.19)% versus (53.75 ± 1.95)%). After FHCTP intervention, the expression of fragment crystallizable epsilon receptor I (FcεRI) was significantly down-regulated, and the average fluorescence intensity decreased from 2458.00 ± 7.50 to 1495.00 ± 28.50. Both FHCTP and HCTP effectively inhibited the isomerization of cytoskeletal proteins and the increase of intracellular calcium ion concentration. In addition, in the mouse passive cutaneous anaphylaxis assay, FHCTP showed a more significant inhibitory effect on dye extravasation in mouse ears, indicating stronger antiallergic activity. In conclusion, FHCTP has a better stabilizing effect on mast cells and effectively alleviates mast cell-mediated passive cutaneous anaphylaxis in mice. The results of this research are expected to promote the development and application of antiallergic products from edible and medicinal materials.
A comprehensive review is conducted on the application of Lagrangian mesh-free methods for simulating flows in various types of porous media, ranging from fixed structures like coastal breakwaters to deformable and transportable media. Deformable porous media refer to soil structures that may deform under the influence of currents and waves, while transportable media involve processes such as sediment transport and scour around hydraulic, coastal, and ocean structures. This review addresses problem dimensionality, governing equations, domain discretization schemes, interaction mechanisms, and applications. The literature analysis reveals that while various numerical techniques have been employed to model the complex interaction between fluid and solid phases, not all methods are physically or mathematically justifiable. However, some approaches have significantly advanced the modeling process over the past two decades. Based on these findings, a modeling framework is proposed to guide the construction of mesh-free models for simulating flow interactions with natural or engineered porous structures. It highlights two effective approaches: (i) Three-dimensional (3D) pore-scale microscopic modeling of flow through large-sized solid particles using coupled smoothed particle hydrodynamics (SPH) and discrete element method (DEM), and (ii) two-dimensional (2D) macroscopic modeling of flow in small-sized porous media using the mixture theory and SPH. The framework highlights the mixture-theory-based methods as particularly effective for large-scale simulations and the advanced SPH-DEM coupling techniques that enable precise simulations of complex fluid–solid interactions. The framework serves as a guide for researchers developing mesh-free numerical models to simulate fluid flows in porous media for hydraulic, coastal, and ocean engineering applications.
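The pore-scale SPH side of the proposed framework rests on kernel-weighted summations over neighbouring particles. A minimal sketch, assuming a standard 2D cubic-spline kernel and a brute-force density summation (illustrative, not the code of any reviewed solver):

```python
# Minimal 2D SPH density-summation sketch (illustrative; not the code of
# any reviewed solver). Assumes the standard cubic-spline kernel with
# support radius 2h and a brute-force O(n^2) neighbour search.
import numpy as np

def cubic_spline_w(r, h):
    """Cubic-spline smoothing kernel W(r, h) in 2D, normalized to unit integral."""
    sigma = 10.0 / (7.0 * np.pi * h ** 2)  # 2D normalization constant
    q = r / h
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0  # outside the kernel support

def density_summation(positions, masses, h):
    """SPH density at each particle: rho_i = sum_j m_j W(|r_i - r_j|, h)."""
    n = len(positions)
    rho = np.zeros(n)
    for i in range(n):
        for j in range(n):
            r = np.linalg.norm(positions[i] - positions[j])
            rho[i] += masses[j] * cubic_spline_w(r, h)
    return rho
```

In production SPH-DEM codes the pairwise loop is replaced by a cell-list or tree neighbour search; the kernel itself is unchanged.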
Towing operations are widely applied in fields such as maritime accident rescue, assisting large vessels entering and exiting ports, and transporting large ocean platforms. Tugboats and towed objects form a complex multi-body system connected by flexible cables, and during operations they are subjected to complex marine environmental loads. Current research focuses on numerical simulations and water-tank model tests of the motion response of towed objects and cables under environmental loads. Little research, however, links the mechanical response and structural strength of towing components to the load conditions of towing operations. Taking cables as an example, most studies focus on the mechanical properties of cables themselves without considering the impact of towing conditions. After reviewing the literature, this paper summarizes the shortcomings of existing research and points out several potential research directions in the field of towing: the mechanical response of cables during the initial stage of towing, experiments on towing by multiple tugboats, research on composite fiber cables using experimental and finite element simulation methods, and structural optimization of components related to towing operations.
Accurate estimation of tuna catch is crucial for effective pelagic fishery management and resource conservation. However, existing manual counting methods suffer from issues such as low accuracy and poor timeliness, highlighting the urgent need for an efficient and automated solution. This paper proposes an automatic tuna counting method based on the YOLOv8n-DMTNet target detection algorithm combined with the improved ByteTrack tracking algorithm. The method uses YOLOv8n as the base model, enhanced with detail-enhanced convolution and a multi-scale feature fusion pyramid network, which significantly improves detection accuracy in complex marine environments. Additionally, a dynamic, task-aligned detection head is introduced to optimize the synergy between classification and localization tasks. To further improve counting accuracy, the ByteTrack algorithm is employed for target tracking, and a region-specific counting method is designed to prevent double counting and omission due to occlusion and motion irregularities. Experimental results show that the improved YOLOv8n-DMTNet model achieves a 9.2% increase in mAP@0.5 and a 6.4% increase in mAP@0.5:0.95 compared to YOLOv8n in the tuna detection task, while reducing the number of parameters by 42.3% and computational complexity by 33.3%. The counting accuracy reaches 93.5%, and the method demonstrates superior performance in terms of accuracy, robustness, and computational resource efficiency, making it well-suited for resource-constrained fishing vessel environments. This approach provides reliable technical support for automated catch counting in pelagic fisheries.
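The region-specific counting idea can be illustrated independently of the detector: each tracked ID is counted once, the first time its box centre enters the counting zone, so re-detections after occlusion do not double count. A minimal sketch under assumed track and region formats (not the paper's implementation):

```python
# Hedged sketch of region-based counting over tracker output: count each
# track ID at most once, on the first frame its centre lies inside the
# counting zone. The track/region data formats are illustrative assumptions.

def count_crossings(tracks_per_frame, region):
    """tracks_per_frame: list of {track_id: (cx, cy)} dicts, one per frame.
    region: (x0, y0, x1, y1) counting zone. Returns the unique count."""
    x0, y0, x1, y1 = region
    counted = set()
    for frame in tracks_per_frame:
        for tid, (cx, cy) in frame.items():
            if tid not in counted and x0 <= cx <= x1 and y0 <= cy <= y1:
                counted.add(tid)  # count this fish once, never again
    return len(counted)
```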
Tatsuya Kaneko, Hidetaka Houtani, Ryota Wada, et al.
Short-term phase-resolved ocean wave field prediction is desired for safe and efficient offshore operation. The dynamics of ocean waves are influenced by ambient currents, nonlinearity, and finite depth, which are difficult to characterize analytically. In this paper, we propose applying video prediction methods, training the model on in-situ wave characteristics. The problem is set as predicting the waves 5 s ahead over a square area within the predictable zone, using surface elevation time-series data observed over a ∼200 m × ∼200 m range as input. The proposed model was evaluated on publicly available observational ocean wave field data and on synthetic wave data, and compared to the 2D-FFT method based on linear wave theory. The video prediction showed higher accuracy than 2D-FFT on the ocean wave field data. The key to successful wave prediction with observational data may be that the complex wave propagation properties of the ocean were learned through in-situ training: no explicit modelling of the physics affecting wave propagation is required.
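For reference, the 2D-FFT baseline advances each spectral component of the wave field with the linear dispersion relation ω = √(gk·tanh(kd)). A minimal sketch, with an assumed +x propagation convention and illustrative grid settings (not the authors' implementation):

```python
# Linear-wave-theory propagation of a 2D surface-elevation field via FFT.
# A hedged sketch of the 2D-FFT baseline: each Fourier component's phase is
# advanced with omega = sqrt(g k tanh(k d)); waves are assumed to travel in
# the +x direction (the sign(kx) factor keeps the output real).
import numpy as np

def propagate_linear(eta0, dx, depth, dt, g=9.81):
    """Advance surface elevation eta0 (2D array, uniform spacing dx) by dt seconds."""
    ny, nx = eta0.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    omega = np.sqrt(g * k * np.tanh(k * depth))  # linear dispersion relation
    spec = np.fft.fft2(eta0)
    spec = spec * np.exp(-1j * np.sign(KX) * omega * dt)  # phase advance
    return np.real(np.fft.ifft2(spec))
```

Nonlinearity, ambient currents, and directional spreading beyond ±x are exactly what this baseline cannot capture, which is the gap the video prediction model targets.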
Large language models (LLMs) have rapidly gained popularity and are being embedded into professional applications due to their capabilities in generating human-like content. However, unquestioned reliance on their outputs and recommendations can be problematic as LLMs can reinforce societal biases and stereotypes. This study investigates how LLMs, specifically OpenAI's GPT-4 and Microsoft Copilot, can reinforce gender and racial stereotypes within the software engineering (SE) profession through both textual and graphical outputs. We used each LLM to generate 300 profiles, consisting of 100 gender-based and 50 gender-neutral profiles, for a recruitment scenario in SE roles. Recommendations were generated for each profile and evaluated against the job requirements for four distinct SE positions. Each LLM was asked to select the top 5 candidates and subsequently the best candidate for each role. Each LLM was also asked to generate images for the top 5 candidates, providing a dataset for analysing potential biases in both text-based selections and visual representations. Our analysis reveals that both models preferred male and Caucasian profiles, particularly for senior roles, and favoured images featuring traits such as lighter skin tones, slimmer body types, and younger appearances. These findings highlight how underlying societal biases influence the outputs of LLMs, contributing to narrow, exclusionary stereotypes that can further limit diversity and perpetuate inequities in the SE field. As LLMs are increasingly adopted within SE research and professional practices, awareness of these biases is crucial to prevent the reinforcement of discriminatory norms and to ensure that AI tools are leveraged to promote an inclusive and equitable engineering culture rather than hinder it.
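Audits of this kind typically reduce to per-group selection rates. A hedged sketch of one common summary metric, the disparate-impact ratio; the data format and the four-fifths threshold are illustrative conventions, not the study's protocol:

```python
# Per-group selection rates and the disparate-impact ratio (the
# "four-fifths rule" convention). Data format is an illustrative
# assumption, not the study's evaluation pipeline.

def selection_rates(selections):
    """selections: list of (group, was_selected) pairs.
    Returns {group: fraction of that group's profiles selected}."""
    totals, hits = {}, {}
    for group, picked in selections:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (1 if picked else 0)
    return {g: hits[g] / totals[g] for g in totals}

def disparate_impact(rates, protected, reference):
    """Ratio of protected-group to reference-group selection rate;
    values below 0.8 are conventionally flagged as adverse impact."""
    return rates[protected] / rates[reference]
```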
Patricia Schöntag, David Nakath, Judith Fischer, et al.
The development and evaluation of machine vision in underwater environments remains challenging, often relying on trial-and-error-based testing tailored to specific applications. This is partly due to the lack of controlled, ground-truthed testing environments that account for the optical challenges, such as color distortion from spectrally variant light attenuation, reduced contrast and blur from backscatter and volume scattering, and dynamic light patterns from natural or artificial illumination. Additionally, the appearance of ocean water in images varies significantly across regions, depths, and seasons. However, most machine vision evaluations are conducted under specific optical water types and imaging conditions, and therefore often lack generalizability. Exhaustive testing across diverse open-water scenarios is technically impractical. To address this, we introduce the Optical Ocean Recipes, a framework for creating realistic datasets under controlled underwater conditions. Unlike synthetic or open-water data, these recipes, using calibrated color and scattering additives, enable repeatable and controlled testing of the impact of water composition on image appearance. Hence, this provides a unique framework for analyzing machine vision in realistic, yet controlled underwater scenarios. The controlled environment enables the creation of ground-truth data for a range of vision tasks, including water parameter estimation, image restoration, segmentation, visual SLAM, and underwater image synthesis. We provide a demonstration dataset generated using the Optical Ocean Recipes and briefly demonstrate the use of our system for two underwater vision tasks. The dataset and evaluation code will be made available.
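The water-composition effects such recipes control can be summarized by the standard underwater image-formation model: a direct signal attenuated per channel plus backscatter that saturates with range. A minimal sketch with illustrative, uncalibrated coefficients (not the recipe values from the paper):

```python
# Standard underwater image-formation sketch: per-channel attenuation of
# the direct signal plus range-saturating backscatter (veiling light).
# The coefficients used in any real test must be calibrated; everything
# here is illustrative.
import numpy as np

def underwater_appearance(J, z, beta, B_inf):
    """J: clean image (H, W, 3) in [0, 1]; z: range map (H, W) in metres;
    beta: per-channel attenuation coefficients (3,); B_inf: veiling light (3,)."""
    t = np.exp(-z[..., None] * beta[None, None, :])  # per-channel transmission
    return J * t + B_inf * (1.0 - t)                 # direct signal + backscatter
```

At zero range the clean image is returned; at large range every pixel converges to the veiling-light colour, which is why distant underwater scenes wash out to a uniform blue-green.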
Combining the strengths of Lagrangian and Eulerian descriptions, the coupled Lagrangian–Eulerian methods play an increasingly important role in various subjects. This work reviews their development and application in ocean engineering. Initially, we briefly outline the advantages and disadvantages of the Lagrangian and Eulerian descriptions and the main characteristics of the coupled Lagrangian–Eulerian approach. Then, following the developmental trajectory of these methods, the fundamental formulations and the frameworks of various approaches, including the arbitrary Lagrangian–Eulerian finite element method, the particle-in-cell method, the material point method, and the recently developed Lagrangian–Eulerian stabilized collocation method, are reviewed in detail. In addition, the article reviews the research progress of these methods with applications in ocean hydrodynamics, focusing on free surface flows, numerical wave generation, wave overturning and breaking, interactions between waves and coastal structures, fluid–rigid body interactions, fluid–elastic body interactions, multiphase flow problems, and visualization of ocean flows. Furthermore, the latest research advancements in the numerical stability, accuracy, efficiency, and consistency of the coupled Lagrangian–Eulerian particle methods are reviewed; these advancements enable efficient and highly accurate simulation of complicated multiphysics problems in ocean and coastal engineering. Building on these works, the current challenges and future directions of the hybrid Lagrangian–Eulerian particle methods are summarized.
Nonlinear evolution equations are unavoidable for precisely modelling and understanding nonlinear wave phenomena. The study of nonlinear waves enriches our comprehension of natural phenomena and supports technological advancements across various disciplines. In this work, we propose a new expansion method for finding travelling wave solutions of nonlinear evolution equations, named the FμF+G− expansion method. We applied the proposed technique to construct exact travelling wave solutions to two well-known nonlinear equations arising in ocean engineering: the extended (2 + 1)-dimensional Boussinesq equation and the (3 + 1)-dimensional generalized shallow water wave equation. Propagation of the obtained travelling wave solutions is illustrated by surface plots and two-dimensional graphs plotted for suitable parametric values. We observed soliton, kink, breather, lump, and periodic wave structures. The results show the efficiency and reliability of the proposed method.
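The common first step of expansion methods of this kind is the travelling-wave reduction, sketched generically below; the specific FμF+G− ansatz itself is defined in the paper and is not reproduced here.

```latex
% Generic travelling-wave reduction (a sketch of the standard first step).
u(x, y, t) = U(\xi), \qquad \xi = x + y - c\,t,
% so partial derivatives collapse to ordinary ones:
\frac{\partial u}{\partial t} = -c\,U'(\xi), \qquad
\frac{\partial u}{\partial x} = \frac{\partial u}{\partial y} = U'(\xi), \qquad
\frac{\partial^2 u}{\partial x^2} = U''(\xi).
```

Substituting this ansatz turns the PDE into a nonlinear ODE in U(ξ), whose solutions are then sought as a finite series in the chosen expansion functions, with the series coefficients fixed by balancing and solving the resulting algebraic system.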
Video synthetic aperture radar (video SAR) has been successfully applied in many fields and the registration of video SAR images has been proven to be a crucial step in their preprocessing. However, video SAR images exhibit more severe image differences because of the unique imaging mechanism and the immature imaging methods. This results in existing registration methods failing to achieve satisfactory registration outcomes for video SAR images. The convolutional neural network (CNN) can contribute to improving registration performance. Nevertheless, CNN-based registration methods must be driven by a large amount of labeled data, which is impractical for video SAR images. Therefore, to tackle these problems, we propose an unsupervised end-to-end deep registration method for video SAR images. First, an end-to-end deep registration model (DRM) is proposed to improve the registration performance for video SAR images. In the proposed DRM, the offset field is utilized to indirectly calculate the registration parameters, and we construct a CNN, MUnet, to regress the offset field accurately. We also develop a differentiable H-transform and a differentiable spatial transformation to implement the mapping from end to end while allowing DRM to backpropagate the losses during the training phase. Meanwhile, we borrow intensity-based methods to further optimize the registration results. Furthermore, we propose an unsupervised deep training strategy that uses generated pseudo-data with pseudo-labels to train the proposed DRM in the absence of large amounts of labeled data. Experimental results on multiple datasets demonstrate the effectiveness of the proposed registration method.
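The geometric core of the H-transform and spatial-transformation steps is applying a planar homography to pixel coordinates. A plain NumPy sketch of that mapping (not the differentiable implementation described in the paper):

```python
# Applying a 3x3 planar homography H to pixel coordinates: lift to
# homogeneous coordinates, multiply, and divide out the scale. A plain
# NumPy sketch of the geometry behind the differentiable H-transform.
import numpy as np

def warp_points(H, pts):
    """pts: (N, 2) pixel coordinates; returns their images under the 3x3 matrix H."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = homog @ H.T                              # apply H to each point
    return mapped[:, :2] / mapped[:, 2:3]             # divide out the scale
```

In the registration setting, the warped coordinates feed a (differentiable) sampler that resamples one image onto the other's grid so an intensity loss can be backpropagated.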
Due to the limitations of small targets in remote sensing images, such as background noise and poor information, the results of commonly used detection algorithms on small targets are often unsatisfactory. To improve detection accuracy, we develop an improved algorithm based on YOLOv8, called LAR-YOLOv8. First, in the feature extraction network, the local module is enhanced by using a dual-branch architecture attention mechanism, while the vision transformer block is used to maximize the representation of the feature map. Second, an attention-guided bidirectional feature pyramid network is designed to generate more discriminative information by efficiently extracting features from the shallow network through a dynamic sparse attention mechanism, and adding top–down paths to guide the subsequent network modules for feature fusion. Finally, the RIOU loss function is proposed to avoid the failure of the loss function and improve the shape consistency between the predicted and ground-truth boxes. Experimental results on the NWPU VHR-10, RSOD, and CARPK datasets verify that LAR-YOLOv8 achieves satisfactory results in terms of mAP (small), mAP, model parameters, and FPS, proving that our modifications to the original YOLOv8 model are effective.
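RIOU, like other box-regression losses, builds on the plain intersection-over-union overlap term. A minimal sketch of that base quantity (the RIOU-specific modifications from the paper are not reproduced here):

```python
# Plain IoU between two axis-aligned boxes, the overlap term that
# IoU-family regression losses build on. Boxes are (x1, y1, x2, y2).

def iou(box_a, box_b):
    """Returns intersection-over-union in [0, 1]."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)  # zero if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The "failure" the paper addresses is the degenerate case this exposes: for fully disjoint boxes IoU is identically zero, so a pure IoU loss gives no gradient, which is why extended variants add extra geometric terms.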
The maritime sector is increasingly integrating Information and Communication Technology (ICT) and Artificial Intelligence (AI) technologies to enhance safety, environmental protection, and operational efficiency. With the introduction of the MASS Code by the International Maritime Organization (IMO), which regulates Maritime Autonomous Surface Ships (MASS), ensuring the safety of AI-integrated systems on these vessels has become critical. To achieve safe navigation, it is essential to identify potential risks during the system planning stage and design systems that can effectively address these risks. This paper proposes RA4MAIS (Risk Assessment for Maritime Artificial Intelligence Safety), a risk identification method specifically useful for developing AI-integrated maritime systems. RA4MAIS employs a systematic approach to uncover potential risks by considering internal system failures, human interactions, environmental conditions, AI-specific characteristics, and data quality issues. The method provides structured guidance to identify unknown risk situations and supports the development of safety requirements that guide system design and implementation. A case study on an Electronic Chart Display and Information System (ECDIS) with an AI-integrated collision avoidance function demonstrates the applicability of RA4MAIS, highlighting its effectiveness in identifying specific risks related to AI performance and reliability. The proposed method offers a foundational step towards enhancing the safety of software systems, contributing to the safe operation of autonomous ships.
With the rapid development of deep learning, researchers are actively exploring its applications in the field of industrial anomaly detection. Deep learning methods differ significantly from traditional mathematical modeling approaches, eliminating the need for intricate mathematical derivations and offering greater flexibility. Deep learning technologies have demonstrated outstanding performance in anomaly detection problems and gained widespread recognition. However, when dealing with multivariate data anomaly detection problems, deep learning faces challenges such as large-scale data annotation and handling relationships between complex data variables. To address these challenges, this study proposes an innovative and lightweight deep learning model—the Attention-Based Deep Convolutional Autoencoding Prediction Network (AT-DCAEP). The model consists of a characterization network based on convolutional autoencoders and a prediction network based on attention mechanisms. The AT-DCAEP exhibits excellent performance in multivariate time series data anomaly detection without the need for pre-labeling large-scale datasets, making it an efficient unsupervised anomaly detection method. We extensively tested the performance of AT-DCAEP on six publicly available datasets, and the results show that compared to current state-of-the-art methods, AT-DCAEP demonstrates superior performance, achieving the optimal balance between anomaly detection performance and computational cost.
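Independent of the network architecture, unsupervised detectors of this kind share a simple decision rule: score each time step by its reconstruction or prediction error and flag scores above a threshold calibrated on anomaly-free data. A sketch of that rule, with the 99th-percentile threshold as an assumption rather than the paper's setting:

```python
# Reconstruction-error anomaly scoring for multivariate time series:
# per-step MSE against the model's output, thresholded at a quantile of
# errors observed on anomaly-free data. The quantile (99%) is an
# illustrative choice, not AT-DCAEP's calibration.
import numpy as np

def anomaly_flags(x, x_hat, train_errors, q=99.0):
    """x, x_hat: (T, D) observed and reconstructed series.
    train_errors: 1D error scores from anomaly-free data (sets the threshold).
    Returns (boolean flags per step, raw error scores)."""
    err = np.mean((x - x_hat) ** 2, axis=1)       # per-step MSE across variables
    threshold = np.percentile(train_errors, q)    # calibrated on normal data
    return err > threshold, err
```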
Cyprien Alexandre, Rodolphe Devillers, David Mouillot, et al.
Detecting and tracking ships remotely is now required in a wide range of contexts, from military security to illegal immigration control, as well as the management of fisheries and marine protected areas. Among the available methods, radar remote sensing is increasingly used due to its advantages of being rarely affected by cloud cover and allowing image acquisition during both day and night. The growing availability over the past decade of free synthetic aperture radar (SAR) data, such as Sentinel-1 images, enabled the widespread use of C-band images for ship detection. There is, however, a broad range of SAR data processing methods proposed in the literature, challenging the selection of the most appropriate one for a given application. Here, we conducted a systematic review of the literature on ship detection methods using C-band SAR data from 2015 to 2022. The review shows a partition between traditional and deep learning (DL) methods. Earlier methods were mainly based on constant false alarm rate or polarimetry, which require limited computing resources but critically depend on ships' physical environment. Those approaches are gradually being replaced by DL, due to the growth of computing capacities, the wide availability of SAR images, and the publication of DL training datasets. However, access to these computing capacities may not be easy for all users, which could become a major obstacle to their development. While both methods have the same objective, they differ both technically and in their approaches to the problem. Traditional methods mainly focus on ship size in spatial units (meters), whereas DL methods are mainly based on the number of ship pixels, regardless of image resolution. These latter methods can result in a lack of information on ship size and, therefore, a lack of knowledge that could be useful to specific applications, such as fisheries and protected area management.
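The traditional family the review describes can be illustrated with a one-dimensional cell-averaging CFAR detector: a cell is declared a target when its power exceeds a multiple of the local clutter estimate taken from surrounding training cells. Window sizes and the scale factor below are illustrative, not values from the reviewed papers:

```python
# 1D cell-averaging CFAR sketch: each cell is compared against a scaled
# mean of its training cells, skipping guard cells around the cell under
# test. Window sizes and scale factor are illustrative only.
import numpy as np

def ca_cfar(power, guard=2, train=8, scale=5.0):
    """power: 1D array of received power. Returns boolean detection flags
    (edge cells without a full window are left unflagged)."""
    n = len(power)
    flags = np.zeros(n, dtype=bool)
    half = guard + train
    for i in range(half, n - half):
        left = power[i - half:i - guard]            # training cells, left side
        right = power[i + guard + 1:i + half + 1]   # training cells, right side
        noise = np.mean(np.concatenate([left, right]))
        flags[i] = power[i] > scale * noise         # adaptive threshold
    return flags
```

The dependence on the local clutter estimate is exactly the "critically depend on ships' physical environment" limitation noted above: rough seas or nearby coastline inflate the noise estimate and mask small vessels.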
Nurezayana Zainal, Mohanavali Sithambranathan, Umar Farooq Khattak, et al.
Because of its versatility and ability to work with difficult materials, Electrical Discharge Machining (EDM) has become an essential tool in many different industries, capable of producing precise shapes and intricate details. EDM has transformed fabrication processes in a variety of industries, including aerospace and electronics, medical implants and surgical instruments, and the shaping of small components. Its capacity to machine undercuts and deep cavities with little material removal makes it ideal for producing complex geometries that would be challenging or impossible to accomplish with conventional machining techniques. Several attempts have been made to solve the optimization problem involved in the EDM process. This paper emphasizes optimizing the EDM process using three metaheuristic algorithms: Glowworm Swarm Optimization (GSO), Grey Wolf Optimizer (GWO), and Whale Optimization Algorithm (WOA). The results showed that the GWO algorithm outperformed the GSO and WOA algorithms in solving the EDM optimization problem and achieved the minimum surface roughness value of 1.7593 µm.
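For orientation, the Grey Wolf Optimizer in its standard form can be sketched on a generic objective: the three best wolves (alpha, beta, delta) lead, every wolf moves toward their positions, and the exploration parameter a decays from 2 to 0. Population size, iteration count, and bounds below are illustrative, not the study's EDM settings:

```python
# Compact Grey Wolf Optimizer sketch (standard formulation): wolves are
# pulled toward the three current best solutions, with the coefficient
# a decaying from 2 to 0 to shift from exploration to exploitation.
# Settings are illustrative, not the EDM study's configuration.
import numpy as np

def gwo(objective, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fitness = np.array([objective(w) for w in wolves])
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]   # three best wolves (copies)
        a = 2.0 - 2.0 * t / n_iter               # decays linearly 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a               # |A| > 1: explore, < 1: exploit
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                new += leader - A * D
            wolves[i] = np.clip(new / 3.0, lo, hi)
    fitness = np.array([objective(w) for w in wolves])
    best = wolves[np.argmin(fitness)]
    return best, float(np.min(fitness))
```

In the EDM setting the objective would map process parameters (e.g. pulse current, pulse-on time) to measured or modelled surface roughness; the sphere function used in the test below is only a stand-in.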