Fatma H.A. Mustafa, Seham A.H. Hassan, Ola A. Ashary
et al.
The shrimp aquaculture industry faces challenges that hinder production, including pathogens and infections from disease-causing bacteria. The present study aimed to examine the impact of dietary chitosan-stabilized selenium nanoparticles (CSSeNPs) on the growth performance, gut microbiota, and immune-associated genes of the shrimp L. vannamei. The experiments were conducted for 12 weeks in triplicate 200 L fiberglass circular tanks. A total of 300 shrimp post-larvae (PL), weighing an average of 0.14 ± 0.001 g, were used. Four experimental groups were assigned to 12 tanks, each containing 25 PL. Each group was fed one of the following diets: a commercial diet without CSSeNPs (control; T1), or the same diet supplemented with 25 mg/kg (T2), 35 mg/kg (T3), or 45 mg/kg (T4) CSSeNPs. The specific growth rate (SGR) and feed conversion ratio (FCR) improved (P ≤ 0.05) in the T2 and T3 groups compared to the control group. The relative expression of all examined genes, LPS/β-glucan binding protein, glutathione peroxidase, and superoxide dismutase (L-GBP, GPX, and SOD), increased (P < 0.05) in T3. CSSeNPs demonstrated antibacterial efficacy against Gram-positive and Gram-negative fish pathogens. Enterococcus faecalis displayed the highest susceptibility to inhibition, whereas Escherichia coli exhibited the lowest. Moreover, a quantitative assessment of the prebiotic-like effect of CSSeNPs on the shrimp gut showed increased lactic acid bacteria. Also, no Vibrio sp., Salmonella sp., Shigella sp., or Pseudomonas sp. were detected in the T3 group fed 35 mg/kg CSSeNPs. This work provides a new, safe, eco-friendly additive for the shrimp diet that improves growth, immunity, and disease control.
Noise interference and multipath effects in complex marine environments severely constrain the performance of hydroacoustic positioning systems. Traditional millisecond-scale signal processing methods are widely used in existing research, but they struggle to meet the centimeter-level positioning accuracy required in marine engineering. To address this problem, this study proposes a hydroacoustic positioning method based on a short-baseline system with cooperative reception of multi-channel signals. The method adopts ultra-short pulse signals with microsecond pulse widths and significantly improves the system signal-to-noise ratio and anti-interference capability through multi-channel signal alignment and coherent superposition. In addition, a joint energy-gradient and phase-detection algorithm is designed, which resolves the instability of the traditional cross-correlation algorithm in detecting ultra-short pulses by identifying signal stability intervals and estimating phase accurately. Simulation shows that an 8-hydrophone × 4-channel configuration achieves a 36.06% signal-to-noise gain under harsh environmental conditions (−10 dB), and the joint energy-gradient and phase-detection algorithm performs about 19.1% better overall than the traditional method. Marine tests further validate the engineering practicability of the method: an average SNR gain of 2.27 dB is achieved for multi-channel signal reception, and the TDOA estimation stability of the new algorithm is up to 32.0% higher than that of the conventional method, highlighting the significant advantages of the proposed method in complex marine environments.
The results show that the proposed method can effectively mitigate the noise interference and multipath effects in complex marine environments, significantly improve the accuracy and stability of hydroacoustic positioning, and provide reliable technical support for centimeter-level accuracy applications in marine engineering.
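The SNR benefit of coherent multi-channel superposition can be sketched numerically. The following toy example (our illustration, not the paper's code; the pulse shape, sampling rate, and noise level are assumptions) stacks 32 pre-aligned noisy copies of a microsecond-scale pulse; for uncorrelated noise the gain approaches 10·log10(N) dB.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed pulse shape, sampling rate and noise level (illustration only).
fs = 1_000_000                      # 1 MHz sampling
t = np.arange(200) / fs             # 200 us record
# microsecond-scale pulse: 50 kHz carrier under a Gaussian envelope
pulse = np.sin(2 * np.pi * 50_000 * t) * np.exp(-((t - 1e-4) ** 2) / (2 * (2e-5) ** 2))

def snr_db(clean, noisy):
    """SNR of `noisy` against the known clean signal, in dB."""
    noise = noisy - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

n_channels = 32                     # e.g. 8 hydrophones x 4 channels
channels = np.stack([pulse + 0.5 * rng.standard_normal(pulse.size)
                     for _ in range(n_channels)])

single_snr = snr_db(pulse, channels[0])
stacked = channels.mean(axis=0)     # coherent average of pre-aligned channels
stacked_snr = snr_db(pulse, stacked)
gain = stacked_snr - single_snr     # ~ 10*log10(32) ~ 15 dB for uncorrelated noise
```

In practice the channels must first be time-aligned (the paper's alignment step); here they are generated pre-aligned so only the superposition effect is shown.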
Innovative marine talent is the cornerstone of developing a leading maritime nation, and cultivating high-quality talent in marine science has become an urgent priority for higher education. Within this process, pedagogy remains a critical determinant of educational quality. Based on the theory of “micro-thinking”, this paper re-evaluates the fundamental concepts of teaching. From this perspective, innovation in higher education should originate from the core components of education: educators and students, teaching materials, and instructional processes. The aim of this paper is to further promote course reform in environmental oceanography and foster educational innovations that help cultivate highly qualified, well-trained marine science professionals for society.
Multimodal contrastive learning is a methodology for linking different data modalities; the canonical example is linking image and text data. The methodology is typically framed as the identification of a set of encoders, one for each modality, that align representations within a common latent space. In this work, we focus on the bimodal setting and interpret contrastive learning as the optimization of (parameterized) encoders that define conditional probability distributions, for each modality conditioned on the other, consistent with the available data. This provides a framework for multimodal algorithms such as crossmodal retrieval, which identifies the mode of one of these conditional distributions, and crossmodal classification, which is similar to retrieval but includes a fine-tuning step to make it task-specific. The framework we adopt also gives rise to crossmodal generative models. This probabilistic perspective suggests two natural generalizations of contrastive learning: the introduction of novel probabilistic loss functions, and the use of alternative metrics for measuring alignment in the common latent space. We study these generalizations of the classical approach in the multivariate Gaussian setting. In this context we view latent space identification as a low-rank matrix approximation problem. This allows us to characterize the capabilities of loss functions and alignment metrics to approximate natural statistics, such as conditional means and covariances; doing so yields novel variants of contrastive learning algorithms for mode-seeking and generative tasks. The framework we introduce is also studied through numerical experiments on multivariate Gaussians, the labeled MNIST dataset, and a data assimilation application arising in oceanography.
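The classical bimodal contrastive objective that this probabilistic view generalizes can be sketched as a symmetric cross-entropy over pairwise similarities (a standard CLIP-style loss; the function names, dimensions, and temperature below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax_xent(logits, labels):
    """Row-wise cross-entropy with integer labels (numerically stable)."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def clip_loss(u, v, temperature=0.1):
    """Symmetric contrastive loss over cosine-similarity logits."""
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    logits = u @ v.T / temperature      # pairwise alignment scores
    labels = np.arange(len(u))          # i-th u pairs with i-th v
    return 0.5 * (softmax_xent(logits, labels) + softmax_xent(logits.T, labels))

n, d = 32, 16
u = rng.standard_normal((n, d))         # encoded batch from modality 1
loss_aligned = clip_loss(u, u + 0.01 * rng.standard_normal((n, d)))
loss_random = clip_loss(u, rng.standard_normal((n, d)))
# well-aligned pairs incur a much smaller loss than random pairings
```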
The equilibrium between hydrated and hydrolysed forms of CO2 in water is central to a multitude of processes in geology, oceanography and biology. The chemistry of the carbonate system is well understood in bulk solution; however, processes such as mineral weathering and biomineralisation frequently occur in nano-confined spaces where carbonate chemistry is less well explored. In confined systems, the speciation equilibria are expected to shift due to surface reactivity, electric fields and reduced configurational entropy. In this discussion paper we provide measurements of the interaction force between negatively charged aluminosilicate (mica) sheets across aqueous carbonate/bicarbonate solutions confined to nanoscale films in equilibrium with a reservoir of the solution. By fitting the measurements to a Poisson-Boltzmann equation modified to account for charge regulation at the bounding walls, we discuss features of bicarbonate speciation in confinement. We find that (i) the presence of bicarbonate in the bulk reservoir causes a repulsive excess pressure in the slit compared to pH-neutral salt solutions at the same concentration, arising from a higher (negative) effective charge on the mica surfaces; (ii) the electrostatic screening length is lower for solutions of Na2CO3 compared to NaHCO3 at the same bulk concentration, due to a shift in the speciation equilibria with pH and in accordance with Debye-Hückel theory; (iii) hydration forces are observed at distances below 2 nm with features of size 0.1 nm and 0.3 nm; this was reproducible across the various bicarbonate electrolytes studied, and contrasts with hydration forces of uniform step size measured in pH-neutral electrolytes.
Rebecca Gjini, Matthias Morzfeld, Oliver R. A. Dunbar
et al.
Ensemble Kalman methods were initially developed to solve nonlinear data assimilation problems in oceanography, but are now popular in applications far beyond their original use cases. Of particular interest is climate model calibration. As hybrid physics and machine-learning models evolve, the number of parameters and complexity of parameterizations in climate models will continue to grow. Thus, robust calibration of these parameters plays an increasingly important role. We focus on learning climate model parameters from minimizing the misfit between modeled and observed climate statistics in an idealized setting. Ensemble Kalman methods are a natural choice for this problem because they are derivative-free, scalable to high dimensions, and robust to noise caused by statistical observations. Given the many variants of ensemble methods proposed, an important question is: Which ensemble Kalman method should be used for climate model calibration? To answer this question, we perform systematic numerical experiments to explore the relative computational efficiencies of several ensemble Kalman methods. The numerical experiments involve statistical observations of Lorenz-type models of increasing complexity, frequently used to represent simplified atmospheric systems, and some feature neural network parameterizations. For each test problem, several ensemble Kalman methods and a derivative-based method "race" to reach a specified accuracy, and we measure the computational cost required to achieve the desired accuracy. We investigate how prior information and the parameter or data dimensions play a role in choosing the ensemble method variant. The derivative-based method consistently fails to complete the race because it does not adaptively handle the noisy loss landscape.
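A minimal derivative-free ensemble Kalman inversion step can be sketched on a linear toy problem (our illustration under assumed dimensions and noise; the paper's methods and Lorenz-type test problems are more complex):

```python
import numpy as np

rng = np.random.default_rng(2)

d_theta, d_y, n_ens = 3, 10, 100
G = rng.standard_normal((d_y, d_theta))     # linear forward model (toy stand-in)
theta_true = np.array([1.0, -2.0, 0.5])
noise_std = 0.1
y_obs = G @ theta_true + noise_std * rng.standard_normal(d_y)

ens = 2.0 * rng.standard_normal((n_ens, d_theta))   # prior ensemble
for _ in range(20):                                 # EKI iterations
    preds = ens @ G.T                               # forward map, no derivatives needed
    dtheta = ens - ens.mean(axis=0)
    dpred = preds - preds.mean(axis=0)
    C_tp = dtheta.T @ dpred / (n_ens - 1)           # parameter-prediction cross-covariance
    C_pp = dpred.T @ dpred / (n_ens - 1) + noise_std ** 2 * np.eye(d_y)
    K = C_tp @ np.linalg.inv(C_pp)                  # Kalman gain
    perturbed = y_obs + noise_std * rng.standard_normal((n_ens, d_y))
    ens = ens + (perturbed - preds) @ K.T           # stochastic ensemble update
theta_hat = ens.mean(axis=0)                        # approaches the least-squares fit
```

Only ensemble statistics enter the update, which is why these methods remain applicable when the forward model is a full climate simulation with statistical (noisy) observations.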
Daria Botvynko, Pierre Haslée, Lucile Gaultier
et al.
We present an end-to-end deep learning framework for short-term forecasting of global sea surface dynamics based on sparse satellite altimetry data. Building on two state-of-the-art architectures, U-Net and 4DVarNet, originally developed for image segmentation and spatiotemporal interpolation respectively, we adapt the models to forecast the sea level anomaly and sea surface currents over a 7-day horizon using sequences of sparse nadir altimeter observations. The model is trained on data from the GLORYS12 operational ocean reanalysis, with synthetic nadir sampling patterns applied to simulate realistic observational coverage. The forecasting task is formulated as a sequence-to-sequence mapping, with the input comprising partial sea level anomaly (SLA) snapshots and the target being the corresponding future full-field SLA maps. We evaluate model performance using (i) normalized root mean squared error (nRMSE), (ii) averaged effective resolution, and (iii) the percentage of correctly predicted velocity magnitudes and angles, and we benchmark results against the operational Mercator Ocean forecast product. Results show that end-to-end neural forecasts outperform the baseline across all lead times, with particularly notable improvements in high-variability regions. Our framework is developed within the OceanBench benchmarking initiative, promoting reproducibility and standardized evaluation in ocean machine learning. These results demonstrate the feasibility and potential of end-to-end neural forecasting models for operational oceanography, even in data-sparse conditions.
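Metric (i), normalized RMSE, can be sketched as follows (one common normalization, by the RMS of the reference field; the paper's exact definition may differ):

```python
import numpy as np

def nrmse(truth, forecast):
    """RMSE of the forecast normalized by the RMS of the truth."""
    truth = np.asarray(truth, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.sqrt(np.mean((forecast - truth) ** 2)) / np.sqrt(np.mean(truth ** 2))

sla = np.sin(np.linspace(0.0, 6.28, 100))   # stand-in for an SLA anomaly field
perfect = nrmse(sla, sla)                   # 0.0 for a perfect forecast
trivial = nrmse(sla, np.zeros_like(sla))    # 1.0 for an all-zero forecast
```

Under this normalization, scores below 1 indicate skill relative to a trivial zero-anomaly forecast.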
Reliable long-term forecasting of Earth system dynamics is fundamentally limited by instabilities in current artificial intelligence (AI) models during extended autoregressive simulations. These failures often originate from inherent spectral bias, leading to inadequate representation of critical high-frequency, small-scale processes and subsequent uncontrolled error amplification. Inspired by the nested grids in numerical models used to resolve small scales, we present TritonCast. At the core of its design is a dedicated latent dynamical core, which ensures the long-term stability of the macro-evolution at a coarse scale. An outer structure then fuses this stable trend with fine-grained local details. This design effectively mitigates the spectral bias caused by cross-scale interactions. In atmospheric science, it achieves state-of-the-art accuracy on the WeatherBench 2 benchmark while demonstrating exceptional long-term stability: executing year-long autoregressive global forecasts and completing multi-year climate simulations that span the entire available $2500$-day test period without drift. In oceanography, it extends skillful eddy forecasts to $120$ days and exhibits unprecedented zero-shot cross-resolution generalization. Ablation studies reveal that this performance stems from the synergistic interplay of the architecture's core components. TritonCast thus offers a promising pathway towards a new generation of trustworthy, AI-driven simulations. This significant advance has the potential to accelerate discovery in climate and Earth system science, enabling more reliable long-term forecasting and deeper insights into complex geophysical dynamics.
Jeffrey J. Early, Gerardo Hernández-Dueñas, Leslie M. Smith
et al.
A challenge in physical oceanography is quantifying the energy content of waves and balanced flows and the fluxes that connect these reservoirs with their sources and sinks. Methodological limitations have prevented decompositions for realistic flows with non-hydrostatic motions and variable stratification. We present a framework that separates the flow into wave and geostrophic components using the principle that waves have no Eulerian available potential vorticity signature. Starting from new expressions for available energy and potential vorticity conservation, we construct a basis of wave and geostrophic modes, complete and orthogonal with respect to quadratic approximations of the conserved quantities. Using the resulting non-hydrostatic projection operators, the nonlinear equations of motion are expressed as coupled wave and geostrophic equations, quantifying cascade and transfer fluxes of wave and geostrophic energy. We apply the method to non-hydrostatic mid-ocean simulations with geostrophic mean-flow, near-inertial, and tidal forcing. From these experiments, we construct source-sink-reservoir diagrams for exact and quadratic fluxes, quantifying the fluxes between geostrophic and wave components. Because the cascade fluxes obey total energy conservation, we construct energy flow diagrams within the wave and geostrophic reservoirs and diagnose nonlocal transfers. The simulations show a geostrophic inverse cascade, a forward wave cascade, and a direct transfer of geostrophic to wave energy, with no indication of a forward geostrophic cascade. The mean-flow-only simulation shows weak spontaneous wave emission during spin-up, which diminishes to zero. Finally, we evaluate the decomposition by comparing linearized and fully conserved available potential vorticity, finding that errors become significant at scales below 15 km.
<p>Oceanic bromoform (CHBr<span class="inline-formula"><sub>3</sub></span>) is an important precursor of atmospheric bromine. Although highly relevant for the future halogen burden and ozone layer in the stratosphere, global CHBr<span class="inline-formula"><sub>3</sub></span> production in the ocean and its emissions are still poorly constrained in observations and are mostly neglected in climate models. Here, we newly implement marine CHBr<span class="inline-formula"><sub>3</sub></span> in the second version of the state-of-the-art Norwegian Earth System Model (NorESM2) with fully coupled interactions of ocean, sea ice, and atmosphere. Our results are validated using oceanic and atmospheric observations from the HalOcAt (Halocarbons in the Ocean and Atmosphere) database. The simulated mean oceanic concentrations (6.61 <span class="inline-formula">±</span> 3.43 pmol L<span class="inline-formula"><sup>−1</sup>)</span> are in good agreement with observations from open-ocean regions (5.02 <span class="inline-formula">±</span> 4.50 pmol L<span class="inline-formula"><sup>−1</sup>)</span>, while the mean atmospheric mixing ratios (0.76 <span class="inline-formula">±</span> 0.39 ppt) are lower than observed but within the range of uncertainty (1.45 <span class="inline-formula">±</span> 1.11 ppt). The NorESM2 ocean emissions of CHBr<span class="inline-formula"><sub>3</sub></span> (214 Gg yr<span class="inline-formula"><sup>−1</sup>)</span> are within the range of or higher than previously published estimates from bottom-up approaches but lower than estimates from top-down approaches. Annual mean fluxes are mostly positive (sea-to-air fluxes); driven by oceanic concentrations, sea surface temperature, and wind speed; and dependent on season and location. During winter, model results imply that some oceanic regions in high latitudes act as sinks of atmospheric CHBr<span class="inline-formula"><sub>3</sub></span> due to their elevated atmospheric mixing ratios. 
We further demonstrate that key drivers for oceanic and atmospheric CHBr<span class="inline-formula"><sub>3</sub></span> variability are spatially heterogeneous. In the tropical West Pacific, which is a hot spot for oceanic bromine delivery to the stratosphere, wind speed is the main driver for CHBr<span class="inline-formula"><sub>3</sub></span> fluxes on an annual basis. In the North Atlantic, as well as in the Southern Ocean region, atmospheric and oceanic CHBr<span class="inline-formula"><sub>3</sub></span> variabilities interact during most of the seasons except for the winter months, when sea surface temperature is the main driver. Our study provides an improved process-based understanding of the biogeochemical cycling of CHBr<span class="inline-formula"><sub>3</sub></span> and more reliable natural emission estimates, especially on seasonal and spatial scales, compared to previously published model estimates.</p>
Recent advancements in quantum computing suggest the potential to revolutionize computational algorithms across various scientific domains, including oceanography and atmospheric science. The field is still relatively young, and quantum computation is so different from classical computation that suitable frameworks to represent oceanic and atmospheric dynamics are yet to be explored. Quantum annealing, one of the major paradigms, focuses on combinatorial optimization tasks. In this paper, we solve the classical Stommel problem by quantum annealing (QA) and simulated annealing (SA), a classical counterpart of quantum annealing. We cast the linear partial differential equation into an optimization problem by the least-squares method and discretize the cost function in two ways: finite difference and truncated basis expansion. In either case, SA successfully reproduces the expected solution when appropriate parameters are chosen, demonstrating that annealing has potential for this class of problems. In contrast, QA using the D-Wave quantum annealing machine fails to obtain good solutions in some cases owing to hardware limitations; in particular, the highly limited connectivity graph of the machine restricts the size of solvable problems, at least with currently available algorithms. Either expanding the connectivity graph or improving graph-embedding algorithms will probably be necessary before quantum annealing machines can be used for oceanic and atmospheric dynamics problems. While this finding emphasizes the need for hardware improvements and better graph embedding for practical applications of quantum annealers, the simulated annealing results suggest the approach's potential to address practical geophysical dynamics problems. As quantum computation continues to evolve, addressing these challenges may lead to transformative advancements in ocean and atmosphere modeling.
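The least-squares casting described above can be illustrated on a tiny linear system: encode each unknown in fixed-point binary and minimize the residual norm with simulated annealing (a toy sketch of the idea, not the Stommel problem or the paper's discretization; the matrix, encoding, and annealing schedule are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy target: solve A x = b by minimizing the least-squares cost |Ax - b|^2
# over fixed-point binary encodings of x (values in [0, 1)).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
n_bits = 6
scale = 2.0 ** -np.arange(1, n_bits + 1)    # bit weights 1/2, 1/4, ..., 1/64

def decode(bits):
    return bits.reshape(2, n_bits) @ scale

def cost(bits):
    r = A @ decode(bits) - b
    return r @ r

bits = rng.integers(0, 2, 2 * n_bits)       # random initial binary string
best, best_cost = bits.copy(), cost(bits)
T = 2.0                                     # initial temperature
for _ in range(10_000):
    cand = bits.copy()
    cand[rng.integers(len(cand))] ^= 1      # flip one bit (spin)
    dc = cost(cand) - cost(bits)
    if dc < 0 or rng.random() < np.exp(-dc / T):
        bits = cand
        if cost(bits) < best_cost:
            best, best_cost = bits.copy(), cost(bits)
    T *= 0.9995                             # geometric cooling
x_sa = decode(best)                         # exact solution is (1/11, 7/11)
```

On annealing hardware the same cost function would be expanded into a QUBO over the bit variables and embedded onto the machine's connectivity graph, which is exactly where the hardware limitations discussed above arise.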
Andres Molares-Ulloa, Elisabet Rocruz, Daniel Rivero
et al.
Diarrhetic Shellfish Poisoning (DSP) is a global health threat arising from shellfish contaminated with toxins produced by dinoflagellates. The condition, with its widespread incidence, high morbidity rate, and persistent shellfish toxicity, poses risks to public health and the shellfish industry. High-biomass proliferations of toxin-producing algae, such as those causing DSP, are known as Harmful Algal Blooms (HABs). Monitoring and forecasting systems are crucial for mitigating the impact of HABs. Predicting harmful algal blooms is a time-series problem with a strong historical seasonal component; however, recent anomalies due to changes in meteorological and oceanographic events have been observed. Stream Learning stands out as one of the most promising approaches for addressing time-series problems with concept drift. However, its efficacy in predicting HABs remains unproven and needs to be tested against Batch Learning. Historical data availability is a critical point in developing predictive systems. In oceanography, data collection can have constraints and limitations, which has led to exploring new tools to obtain more exhaustive time series. In this study, a machine learning workflow for predicting the number of cells of the toxic dinoflagellate Dinophysis acuminata was developed with several key advancements. Seven machine learning algorithms were compared within two learning paradigms. Notably, output data from CROCO, an ocean hydrodynamic model, was employed as the primary dataset, mitigating the limitation of time-continuous historical data. This study highlights the value of model interpretability, a fair model-comparison methodology, and the incorporation of Stream Learning models. The model DoME, with an average R² of 0.77 for the 3-day-ahead prediction, emerged as the most effective and interpretable predictor, outperforming the other algorithms.
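The prequential ("test-then-train") protocol that underlies Stream Learning evaluation can be sketched with a simple online learner (the synthetic data and plain SGD linear model are our assumptions for illustration, standing in for the algorithms compared in the study):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stream: linear signal plus noise (illustrative stand-in for the
# D. acuminata cell-count series and its covariates).
n, d = 2000, 4
X = rng.standard_normal((n, d))
w_true = np.array([1.0, -1.0, 0.5, 0.0])
y = X @ w_true + 0.1 * rng.standard_normal(n)

w = np.zeros(d)                     # online linear model
lr = 0.05
sq_err = []
for x_t, y_t in zip(X, y):
    pred = w @ x_t                  # 1) test on the incoming sample...
    sq_err.append((pred - y_t) ** 2)
    w += lr * (y_t - pred) * x_t    # 2) ...then train on it
early = float(np.mean(sq_err[:200]))   # error while still learning
late = float(np.mean(sq_err[-200:]))   # error after adaptation
```

Because every sample is scored before the model sees it, the prequential error curve also reveals when concept drift degrades a model that a single batch evaluation would miss.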
Lorenzo Iafolla, Emiliano Fiorenza, Massimo Chiappini
et al.
Sea wave monitoring is key to many applications in oceanography, such as the validation of weather and wave models. Conventional in situ solutions are based on moored buoys, whose measurements are often recognized as a standard. However, being exposed to a harsh environment, buoys are not fully reliable, need frequent maintenance, and their datasets feature many gaps. To overcome these limitations, we propose a system comprising a buoy, a micro-seismic measuring station, and a machine learning algorithm. The working principle is based on measuring the micro-seismic signals generated by sea waves: the machine learning algorithm is trained to reconstruct the missing buoy data from the micro-seismic data. As the micro-seismic station can be installed indoors, it assures high reliability, while the machine learning algorithm provides accurate reconstruction of the missing buoy data. In this work, we present the methods to process the data, develop and train the machine learning algorithm, and assess the reconstruction accuracy. As a case study, we used experimental data collected in 2014 from the Northern Tyrrhenian Sea, demonstrating that the data reconstruction can be performed for both significant wave height and wave period. The proposed approach was inspired by Data Science, whose methods were the foundation for the new solutions presented in this work. For example, estimating the period of the sea waves, often not discussed in previous works, was relatively simple with machine learning. In conclusion, the experimental results demonstrate that the new system can overcome the reliability issues of the buoy while keeping the same accuracy.
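The reconstruction idea (regressing buoy quantities on micro-seismic features) can be sketched on synthetic data; the feature meanings, the linear relation, and the ridge penalty below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins: 5 micro-seismic features (e.g. band-limited power
# levels) assumed linearly related to significant wave height, plus noise.
n, d = 500, 5
X = rng.standard_normal((n, d))
w_true = np.array([0.8, 0.3, 0.0, -0.5, 0.1])
hs = X @ w_true + 0.05 * rng.standard_normal(n)

X_train, X_test = X[:400], X[400:]
y_train, y_test = hs[:400], hs[400:]

lam = 1e-2                                  # ridge penalty (assumed)
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d), X_train.T @ y_train)
pred = X_test @ w                           # reconstructed wave height on held-out data
r2 = 1.0 - np.sum((pred - y_test) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
```

In the real system the regression target comes from the buoy during periods when it works, so the model can fill gaps whenever the buoy record drops out.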
Because of rudimentary reporting methods and a general lack of documentation, the creation of a severe weather database for the Philippines has been a difficult yet relevant target for climatological purposes and historical interest. Previous online severe weather documentation, i.e., of tornadoes, waterspouts, and hail events, has often been sparse, inconsistent, inactive, or is now completely decommissioned. Several countries and continents support severe weather information through either government-sponsored or independent organizations. Project SWAP is a collaborative exercise with clear data attribution, open avenues for augmentation, and a common data model for storing severe weather information, which will assist in maintaining and updating this online archive for the Philippines. This paper presents the methods used to create the SWAP database, provides a broader climatological analysis of spatio-temporal patterns in severe weather occurrence within the Philippine context, and outlines potential use cases for the data. We also highlight the project's current limitations, as with any other existing and far larger database, and emphasize the need to understand these events and their mesoscale environments, in line with current severe weather climatologies across the globe.
Shalok Bharti, Sudhir Kumar, Inderjeet Singh
et al.
Friction stir welding (FSW) has been recognized as a revolutionary welding process for marine applications, effectively tackling the distinctive problems posed by maritime settings. This review paper offers a comprehensive examination of the current advancements in FSW design, specifically within the marine industry. This paper provides an overview of the essential principles of FSW and its design, emphasizing its comparative advantages over conventional welding techniques. The literature review reveals successful implementations in shipbuilding and offshore construction, highlighting design factors and notable enhancements in joint strength, corrosion resistance, and fatigue performance. This study examines the progress made in FSW equipment and procedures, with a specific focus on their application in naval construction. Additionally, it investigates the factors to be considered when selecting materials and ensuring their compatibility in this context. The analysis of microstructural and mechanical features of FSW joints is conducted, with a particular focus on examining the impact of welding settings. The study additionally explores techniques for mitigating corrosion and safeguarding surfaces in marine environments. The study also provides a forward-looking perspective by proposing potential areas of future research and highlighting the issues that may arise in the field of FSW for maritime engineering. The significance of incorporating environmental and economic considerations in the implementation of FSW for extensive marine projects is emphasized.
Many coastal bridges have been destroyed or damaged by tsunami waves. Some studies have been conducted to investigate wave impact on bridge decks, but there is little concerning the effect of bridge superelevation. A three-dimensional (3D) dam break wave model based on OpenFOAM was developed to study tsunami-like wave impacts on bridge decks with superelevation. The Reynolds-averaged Navier–Stokes equations and the k-ɛ turbulence model were used. The numerical model was satisfactorily checked against Stoker’s analytical solution and the published hydrodynamic experiment. The validated model was employed to carry out parametric analyses to investigate the effects of upstream and downstream water depths and the bridge deck’s superelevation. The results show that the tsunami force is proportional to the relative wave height. The dam break wave impact on the bridge deck can be identified as two distinct scenarios according to whether the wave height is higher than the bridge deck top. The trend of the tsunami force is also different in different scenarios. The superelevation will significantly influence the tsunami forces acting on the box girder, with some exceptions.
Ocean modeling is a powerful tool for simulating the physical, chemical, and biological processes of the ocean, which is the foundation for marine science research and operational oceanography. Modern numerical ocean modeling mainly consists of governing equations and numerical algorithms. Nonlinear instability, computational expense, low reusability, and high coupling costs have gradually become the main bottlenecks for the further development of numerical ocean modeling. Recently, artificial intelligence-based modeling in scientific computing has shown revolutionary potential for digital twins and scientific simulations, but these bottlenecks of numerical ocean modeling have remained unsolved. Here, we present AI-GOMS, a large AI-driven global ocean modeling system, for accurate and efficient global ocean daily prediction. AI-GOMS consists of a backbone model with the Fourier-based Masked Autoencoder structure for basic ocean variable prediction and lightweight fine-tuning models incorporating regional downscaling, wave decoding, and biochemistry coupling modules. AI-GOMS has achieved the best performance in 30 days of prediction for the global ocean basic variables with 15 depth layers at 1/4° spatial resolution. Beyond its good performance in statistical metrics, AI-GOMS realizes the simulation of mesoscale eddies in the Kuroshio region at 1/12° spatial resolution and ocean stratification in the tropical Pacific Ocean. AI-GOMS provides a new backbone-downstream paradigm for Earth system modeling, which makes the system transferable, scalable and reusable.
Christopher M. O’Reilly, Stephan T. Grilli, Christian F. Janßen
et al.
We report on the development and validation of a 3D hybrid Lattice Boltzmann Model (LBM), with Large Eddy Simulation (LES), to simulate the interactions of incompressible turbulent flows with ocean structures. The LBM is based on a perturbation method, in which the velocity and pressure are expressed as the sum of an inviscid flow and a viscous perturbation. The far- to near-field flow is assumed to be inviscid and represented by potential flow theory, which can be efficiently modeled with a Boundary Element Method (BEM). The near-field perturbation flow around structures is modeled by the Navier–Stokes (NS) equations, based on a Lattice Boltzmann Method (LBM) with a Large Eddy Simulation (LES) of the turbulence. In the paper, we present the hybrid model formulation, in which a modified LBM collision operator is introduced to simulate the viscous perturbation flow, resulting in a novel <i>perturbation</i> LBM (pLBM) approach. The pLBM is then extended for the simulation of turbulence using the LES and a wall model to represent the viscous/turbulent sub-layer near solid boundaries. The hybrid model is first validated by simulating turbulent flows over a flat plate, for moderate to large Reynolds number values, Re <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mo>∈</mo><mo>[</mo><mn>3.7</mn><mo>×</mo><msup><mn>10</mn><mn>4</mn></msup><mo>;</mo><mn>1.2</mn><mo>×</mo><msup><mn>10</mn><mn>6</mn></msup><mo>]</mo></mrow></semantics></math></inline-formula>; the plate friction coefficient and near-field turbulence properties computed with the model are found to agree well with both experiments and direct NS simulations. 
We then simulate the flow past a NACA-0012 foil using a regular LBM-LES and the new hybrid pLBM-LES models with the wall model, for Re = <inline-formula><math xmlns="http://www.w3.org/1998/Math/MathML" display="inline"><semantics><mrow><mn>1.44</mn><mo>×</mo><msup><mn>10</mn><mn>6</mn></msup></mrow></semantics></math></inline-formula>. A good agreement is found for the computed lift and drag forces, and pressure distribution on the foil, with experiments and results of other numerical methods. Results obtained with the pLBM model are either nearly identical or slightly improved, relative to those of the standard LBM, but are obtained in a significantly smaller computational domain and hence at a much reduced computational cost, thus demonstrating the benefits of the new hybrid approach.
Analysis of acknowledgments is particularly interesting because acknowledgments may give information not only about funding: they can also reveal hidden contributions to authorship, researchers' collaboration patterns, the context in which research was conducted, and specific aspects of academic work. The focus of the present research is the analysis of a large sample of acknowledgement texts indexed in the Web of Science (WoS) Core Collection. Records of type 'article' and 'review' from four scientific domains, namely social sciences, economics, oceanography, and computer science, published from 2014 to 2019 in English-language scientific journals were considered. Six types of acknowledged entities, i.e., funding agency, grant number, individuals, university, corporation, and miscellaneous, were extracted from the acknowledgement texts using a Named Entity Recognition (NER) tagger and subsequently examined. A general analysis of the acknowledgement texts showed that the indexing of funding information in WoS is incomplete. The analysis of the automatically extracted entities revealed differences and distinct patterns in the distribution of acknowledged entity types across scientific domains. A strong association was found between acknowledged entity and scientific domain, and between acknowledged entity and entity type. Only a negligible correlation was found between the number of citations and the number of acknowledged entities. Generally, the number of words in an acknowledgement text correlates positively with the number of acknowledged funding organizations, universities, individuals, and miscellaneous entities. At the same time, acknowledgement texts with a larger number of sentences acknowledge more individuals and miscellaneous entities.
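As a toy illustration of one of the six entity types, a simple regular expression already catches many grant-number patterns in acknowledgement text (the study itself used a trained NER tagger; the example text and the grant identifiers in it are invented for this sketch):

```python
import re

# Hypothetical acknowledgement text; the grant identifiers are invented.
ack = ("This work was supported by the National Science Foundation "
       "(grant No. OCE-1829856) and by DFG grant GRK 2088.")

# Catches "grant(s) [No./number] <AGENCY-CODE><digits>" style identifiers.
grant_pat = re.compile(
    r"\bgrants?\s*(?:No\.|number)?\s*([A-Z]{2,4}[- ]?\d{4,7})", re.IGNORECASE)
grants = grant_pat.findall(ack)
```

A trained NER model generalizes far beyond such fixed patterns (e.g. to individuals and organizations), which is why the study relies on one, but regex baselines like this are a common sanity check for the grant-number category.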