M. Dale Stokes, David R. Nadeau, James J. Leichter
An ongoing challenge in pelagic oceanography and limnology is to quantify and understand the distribution of suspended particles and particle aggregates with sufficient temporal and spatial fidelity to resolve their dynamics. These particles include biotic (mesoplankton, organic fragments, fecal pellets, etc.) and abiotic (dusts, precipitates, sediments and flocs, anthropogenic materials, etc.) matter and their aggregates (i.e., marine snow), which form a large part of the total particulate matter > 200 μm in size in the ocean. The transport of organic material from surface waters to the deep-sea floor is of particular interest, as it is recognized as a key factor controlling the global carbon cycle and hence a critical process influencing the sequestration of carbon dioxide from the atmosphere. Here we describe the development of an oceanographic instrument, the Pelagic Laser Tomographer (PLT), that uses high-resolution optical technology, coupled with post-processing analysis, to scan the water column and detect and quantify 3D distributions of small particles. Existing optical instruments typically trade sampling volume for spatial resolution or require large, complex platforms. The PLT addresses this gap by combining high-resolution laser-sheet imaging with large effective sampling volumes in a compact, deployable system. The PLT can generate spatial distributions of small particles (~100 µm and larger) across large water volumes (order 100–1000 m<sup>3</sup>) during a typical deployment, and allows measurement of particle patchiness at spatial scales down to less than 1 mm.
The instrument’s small size (6 kg), high resolution (~100 µm in each 3000 cm<sup>2</sup> tomographic image slice), and analysis software provide a tool for pelagic studies that have typically been limited by high cost, data storage, resolution, and mechanical constraints, all usually necessitating bulky instrumentation and infrequent deployment, typically requiring a large research vessel.
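The kind of post-processing the PLT depends on — detecting and counting discrete particles in each tomographic image slice — can be illustrated with a minimal sketch. This is not the instrument's actual pipeline, only a generic approach: threshold the slice and count 4-connected bright blobs via flood fill.

```python
def count_particles(image, threshold):
    """Count connected bright blobs (candidate particles) in a 2D image,
    using intensity thresholding and 4-connected flood fill."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if image[i][j] > threshold and not seen[i][j]:
                count += 1            # found a new, unvisited blob
                stack = [(i, j)]
                seen[i][j] = True
                while stack:
                    a, b = stack.pop()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < rows and 0 <= nb < cols
                                and image[na][nb] > threshold
                                and not seen[na][nb]):
                            seen[na][nb] = True
                            stack.append((na, nb))
    return count
```

A production pipeline would additionally calibrate pixel size to the ~100 µm resolution and filter blobs by area, but the core detection step is this simple.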
Abstract Near-inertial currents can be generated by abrupt shifts in wind. Some of these currents project onto low-mode near-inertial waves (NIWs), which can travel thousands of kilometers. Here, a reduced-physics model [the coupled-mode shallow water (CSW) model] is proposed with the goal of simulating global low-mode NIW generation and propagation. In this study, CSW performance is analyzed in an idealized setting based on the Ocean Storms Experiment (D’Asaro et al.). We show that NIW generation is analogous to internal-tide generation, except waves are excited by inertial pumping (convergences and divergences in inertial currents) rather than by barotropic flow over sloping topography. A theoretical solution for internal-tide generation (Llewellyn Smith and Young) predicts CSW NIW generation on an f plane. Numerical simulations on f and β planes confirm that NIWs are only generated when inertial pumping occurs along the inertia–gravity dispersion curve, as predicted by theory. Therefore, the frequency bandwidth of inertial pumping (due to the β effect or mesoscale vorticity) limits the generation of NIWs at very short wavelengths, even if inertial pumping occurs at small scales. We also show that weak damping associated with linearized bottom drag (or unresolved processes and imperfect numerical methods) can significantly alter the fraction of wind work that is radiated as NIWs.
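The generation criterion above — waves are excited only where inertial pumping projects onto the inertia–gravity dispersion curve ω² = f² + c_n²κ² — can be sketched numerically. The Coriolis frequency and mode-1 wave speed below are illustrative assumptions, not values from the study:

```python
import math

def niw_dispersion_omega(kappa, f, c_n):
    """Inertia-gravity dispersion relation: omega^2 = f^2 + c_n^2 * kappa^2."""
    return math.sqrt(f**2 + (c_n * kappa)**2)

def resonant_wavenumber(omega, f, c_n):
    """Horizontal wavenumber at which free mode-n waves match frequency omega.
    Only super-inertial frequencies (omega >= f) lie on the dispersion curve."""
    if omega < f:
        return None  # sub-inertial forcing cannot radiate free waves
    return math.sqrt(omega**2 - f**2) / c_n

f = 1.1e-4        # Coriolis frequency at mid-latitude (rad/s) -- assumed
c1 = 2.5          # mode-1 gravity-wave speed (m/s) -- assumed
omega = 1.05 * f  # forcing slightly blue-shifted above f (e.g. by beta)
kappa = resonant_wavenumber(omega, f, c1)
wavelength_km = 2 * math.pi / kappa / 1000
```

This makes the bandwidth point concrete: pumping exactly at f resonates only at κ = 0 (infinite wavelength), so a finite frequency bandwidth above f is needed before short-wavelength NIWs can be generated.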
This article examines how artistic practices respond to the emergence of a “negative community” after a disaster, where people are bound together by displacement, abandonment, and infrastructural control rather than choice or solidarity. Drawing on fieldwork in coastal Japan following the 2011 earthquake, tsunami, and nuclear accident, this article reflects on how art can resist the reduction of catastrophe to either spectacle or state‐managed recovery. Through practices of observation, witnessing, and collective engagement, art creates vital spaces of proximity, care, and dissent. In doing so, it unsettles imposed forms of community and opens possibilities for imagining a new social life beyond the structures of ruin and control.
High-resolution and high-precision marine gravity reference maps are core prerequisites for the practical application of gravity-assisted inertial navigation algorithms, and their accuracy directly determines the performance of the navigation system. In view of the problems with current geographic rectangular grid gravity reference maps, such as severe polar deformation, poor adjacency consistency, and low positioning accuracy at high latitudes, this study introduces a hexagonal grid system to construct a gravity reference map and systematically analyzes its compatibility and accuracy in navigation applications. A multi-resolution hexagonal grid scheme with a 7-aperture structure is further proposed to meet the characterization requirements of gravity fields of different complexities. Experimental verification shows that the accuracy of the gravity-assisted inertial navigation algorithm improved by 0.4%, while that of gravity sequence matching improved by 50%. The proposed hybrid-resolution grid achieves a maximum gravity data compression rate of 68% while preserving navigation accuracy, satisfying the computational efficiency and accuracy requirements of gravity-assisted inertial navigation.
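Hexagonal grids avoid much of the pole-ward distortion of geographic rectangles in part because every cell has six equidistant neighbors. The paper's 7-aperture hierarchical scheme is more involved, but the basic lookup step — snapping a planar position to its nearest hexagonal cell — can be sketched with standard fractional cube rounding. The pointy-top axial coordinate convention here is an assumption, not necessarily the paper's:

```python
import math

def cube_round(q, r):
    """Round fractional axial coordinates to the nearest hex cell,
    keeping the cube-coordinate invariant q + r + s = 0."""
    s = -q - r
    rq, rr, rs = round(q), round(r), round(s)
    dq, dr, ds = abs(rq - q), abs(rr - r), abs(rs - s)
    if dq > dr and dq > ds:
        rq = -rr - rs
    elif dr > ds:
        rr = -rq - rs
    return (rq, rr)

def point_to_hex(x, y, size):
    """Snap a planar point to the nearest hexagonal cell
    (pointy-top orientation, axial coordinates, circumradius `size`)."""
    q = (math.sqrt(3) / 3 * x - y / 3) / size
    r = (2 / 3 * y) / size
    return cube_round(q, r)
```

Gravity values would then be stored and matched per axial index (q, r) rather than per latitude–longitude cell, which keeps neighbor distances uniform regardless of latitude.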
Andre Massahiro Shimaoka, Renato Cordeiro Ferreira, Alfredo Goldman
This study explores the integration of eXtreme Programming (XP) and the Cross-Industry Standard Process for Data Mining (CRISP-DM) in agile Data Science projects. We conducted a case study at the e-commerce company Elo7 to answer the research question: How can the agility of the XP method be integrated with CRISP-DM in Data Science projects? Data was collected through interviews and questionnaires with a Data Science team consisting of data scientists, ML engineers, and data product managers. The results show that 86% of the team frequently or always applies CRISP-DM, while 71% adopt XP practices in their projects. Furthermore, the study demonstrates that it is possible to combine CRISP-DM with XP in Data Science projects, providing a structured and collaborative approach. Finally, the study generated improvement recommendations for the company.
Human whole-brain functional connectivity networks have been shown to exhibit both local/quasilocal (e.g., a set of functional sub-circuits induced by node or edge attributes) and non-local (e.g., higher-order functional coordination patterns) properties. Nonetheless, the non-local properties of topological strata induced by local/quasilocal functional sub-circuits have yet to be addressed. To that end, we propose a homological formalism that enables the quantification of higher-order characteristics of human brain functional sub-circuits. Our results indicate that each homological order unravels distinct, complementary properties of human brain functional sub-circuits. Notably, the H1 homological distance between rest and the motor task was observed at both the whole-brain and sub-circuit consolidated levels, suggesting a self-similarity property of human brain functional connectivity unraveled by a homological kernel. Furthermore, at the whole-brain level, rest–task differentiation was most prominent at different homological orders for different tasks: (i) the emotion task (H0), (ii) the motor task (H1), and (iii) the working memory task (H2). At the functional sub-circuit level, the rest–task functional dichotomy of the default mode network is most prominent at the first and second homological scaffolds. At the same scale, we found that the limbic network plays a significant role in homological reconfiguration across both the task and subject domains, which paves the way for subsequent investigations into the complex neurophysiological role of this network. From a wider perspective, our formalism can be applied, beyond brain connectomics, to study the non-localized coordination patterns of localized structures stretching across complex network fibers.
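At the lowest homological order, H0 simply counts connected components. A minimal illustration of that idea (real analyses of H1 and H2 require a persistent-homology library; this is not the authors' pipeline): threshold a toy correlation matrix and count components with union-find.

```python
def betti0(corr, threshold):
    """0th Betti number (number of connected components) of the graph
    obtained by keeping edges whose correlation exceeds `threshold`."""
    n = len(corr)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if corr[i][j] > threshold:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return len({find(i) for i in range(n)})

# Toy 4-node "connectivity matrix": two strongly coupled pairs of regions.
corr = [
    [1.0, 0.9, 0.1, 0.1],
    [0.9, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.8],
    [0.1, 0.1, 0.8, 1.0],
]
```

Sweeping the threshold and tracking when components merge is exactly the H0 persistence computation; higher orders track loops (H1) and voids (H2) in the same filtration.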
Nima Anari, Kuikui Liu, Shayan Oveis Gharan
et al.
We give a self-contained proof of the strongest version of Mason’s conjecture, namely that for any matroid the sequence of the number of independent sets of given sizes is ultra log-concave. To do this, we introduce a class of polynomials, called completely log-concave polynomials, whose bivariate restrictions have ultra log-concave coefficients. At the heart of our proof we show that for any matroid, the homogenization of the generating polynomial of its independent sets is completely log-concave.
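The statement can be verified by brute force on a small example. For the graphic matroid of the complete graph K4 (independent sets = forests of edges), the counts by size are 1, 6, 15, 16, and ultra log-concavity — log-concavity of the normalized sequence a_k / C(n, k) — holds. A self-contained sketch:

```python
from itertools import combinations
from math import comb

def count_forests(n_vertices, edges):
    """Independent-set counts of a graphic matroid by size: the number of
    k-edge subsets that are forests, found by brute force with union-find."""
    counts = [0] * (len(edges) + 1)
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            parent = list(range(n_vertices))

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            acyclic = True
            for u, v in subset:
                ru, rv = find(u), find(v)
                if ru == rv:          # this edge would close a cycle
                    acyclic = False
                    break
                parent[ru] = rv
            counts[k] += acyclic
    while counts and counts[-1] == 0:
        counts.pop()                  # trim sizes above the matroid rank
    return counts

def is_ultra_log_concave(a, n):
    """Check (a_k/C(n,k))^2 >= (a_{k-1}/C(n,k-1)) * (a_{k+1}/C(n,k+1))
    for all interior k, using exact integer arithmetic."""
    return all(
        a[k] ** 2 * comb(n, k - 1) * comb(n, k + 1)
        >= a[k - 1] * a[k + 1] * comb(n, k) ** 2
        for k in range(1, len(a) - 1)
    )

# K4: 4 vertices, 6 edges, matroid rank 3.
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
counts = count_forests(4, k4_edges)
```

Here a_3 = 16 is the familiar count of spanning trees of K4, and the k = 1 inequality holds with equality (540 = 540), showing the bound is tight.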
Objective: In response to the urgent need for vulnerability analysis methods for ship targets, a vulnerability analysis method for supply ship targets under anti-ship missile strikes is proposed. Methods: Taking a typical supply ship target as the research object, anti-ship missiles are selected as the strike weapon, and the structure-activity relationship of the target is analyzed. The dynamic response of a supply ship under the internal explosion load of an anti-ship missile is obtained through numerical simulation. The damage modes and damage criteria of key components under this load are studied, and the functional damage degree of the supply ship is obtained. The target vulnerability of the supply ship is analyzed, and the attack method that causes the maximum damage to its navigation function is determined. Results: A vulnerability analysis workflow is formed which includes analysis of target structure-activity relationships, damage modes of key components, damage criteria, and degree of target functional damage. Based on the vulnerability distribution information, strikes should be aimed at locations where non-redundant components are concentrated, achieving maximum-range damage. Conclusion: The findings of this study can provide technical support for future research on ship target vulnerability.
Against the background of future modern maritime combat, a multi-agent deep reinforcement learning scheme is proposed to complete the cooperative round-up task in swarm game confrontations of unmanned surface vehicles (USVs). First, based on different combat modes and application scenarios, a multi-agent deep deterministic policy gradient algorithm with distributed execution was selected, and its principle was introduced. Second, specific combat scenario platforms were simulated, and multi-agent network models, reward function mechanisms, and training strategies were designed. The experimental results show that the proposed method can effectively solve the cooperative round-up decision-making problem faced by USVs confronting an adversary, and that it is efficient across different combat scenarios. This work provides a theoretical basis and reference for future research on intelligent decision-making of USVs in complex combat scenarios.
The CMS collaboration, A. Hayrapetyan, A. Tumasyan
et al.
Abstract Measurements of inclusive and normalized differential cross sections of the associated production of top quark-antiquark and bottom quark-antiquark pairs, $\mathrm{t\bar{t}b\bar{b}}$, are presented. The results are based on data from proton-proton collisions collected by the CMS detector at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 138 fb$^{-1}$. The cross sections are measured in the lepton+jets decay channel of the top quark pair, using events containing exactly one isolated electron or muon and at least five jets. Measurements are made in four fiducial phase space regions, targeting different aspects of the $\mathrm{t\bar{t}b\bar{b}}$ process. Distributions are unfolded to the particle level through maximum likelihood fits, and compared with predictions from several event generators. The inclusive cross section measurements of this process in the fiducial phase space regions are the most precise to date. In most cases, the measured inclusive cross sections exceed the predictions with the chosen generator settings. The only exception is when using a particular choice of dynamic renormalization scale, $\mu_\mathrm{R} = \frac{1}{2}\prod_{i=\mathrm{t},\bar{\mathrm{t}},\mathrm{b},\bar{\mathrm{b}}} m_{\mathrm{T},i}^{1/4}$, where $m_{\mathrm{T},i}^2 = m_i^2 + p_{\mathrm{T},i}^2$ are the squared transverse masses of the top and bottom quarks. The differential cross sections show varying degrees of compatibility with the theoretical predictions, and none of the tested generators with the chosen settings simultaneously describes all the measured distributions.
Nishat Raihan, Mohammed Latif Siddiq, Joanna C. S. Santos
et al.
Large language models (LLMs) are becoming increasingly capable across a wide range of natural language processing (NLP) tasks, such as text generation and understanding. Recently, these models have extended their capabilities to coding tasks, bridging the gap between natural languages (NL) and programming languages (PL). Foundational models such as the Generative Pre-trained Transformer (GPT) and LLaMA series have set strong baseline performances in various NL and PL tasks. Additionally, several models have been fine-tuned specifically for code generation, showing significant improvements in code-related applications. Both foundational and fine-tuned models are increasingly used in education, helping students write, debug, and understand code. We present a comprehensive systematic literature review examining the impact of LLMs in computer science and computer engineering education. We analyze their effectiveness in enhancing the learning experience, supporting personalized education, and aiding educators in curriculum development. We address five research questions to uncover insights into how LLMs contribute to educational outcomes, identify challenges, and suggest directions for future research.
C.V. Raman (1888–1970) was a creative scientist, an enthusiastic teacher, and a science celebrity in India. In all these roles, he communicated science effectively. In this essay, I ask how and why he communicated science. I take a few examples from his research writings and show his ability to explain science lucidly. By looking into his thoughts on teaching and those of his students, I explore Raman the teacher. Finally, I discuss a few aspects of his methods of communicating science to the public. I emphasize his exposition and reveal a dichotomy.
The current understanding of wind-generated wave climate from buoy-based measurements is mainly focused on a limited number of locations and has not been updated to include measurements from the past decade. This study quantifies wave climate variability and change during the historical period of 1980–2020 through a comprehensive analysis of wave height measurements at 43 buoys off the U.S. Pacific, Atlantic, and Gulf of Mexico Coasts. Variabilities and trends in the annual and monthly mean and 95th percentile significant wave heights (<i>SWH</i>) and the number of extreme wave events are quantified for the cold and warm seasons. We calculate the <i>SWH</i> long-term and decadal trends using ordinary least squares regression, and temporal variability using the coefficient of variation. Independent extreme wave events are identified using a method based on the peaks-over-threshold approach and the autocorrelation function, which accounts for the geographical variation in the timespan between independent extreme events. Results show that the warm season's interannual variabilities in monthly and annual <i>SWH</i> are smaller in the Pacific than in the Atlantic and Gulf, with the largest variabilities observed at buoys in the Gulf and the lower latitudes of the Atlantic. Strong, significant, alternating decadal trends in <i>SWH</i> are found in the Pacific and Atlantic regions. Buoys in the Atlantic and Gulf regions have experienced higher numbers of extreme wave events (anomalies) compared to the Pacific region. In general, the long-term trend in the number of extreme events during the cold season is positive at buoys located at higher latitudes but negative at lower latitudes.
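The declustering idea — exceedances close together in time belong to one storm, so only sufficiently separated peaks count as independent events — can be sketched with simple runs declustering. The fixed `min_gap` here stands in for the autocorrelation-derived, geographically varying timespan the authors use:

```python
def extreme_events(swh, threshold, min_gap):
    """Indices where independent extreme wave events begin: exceedances of
    `threshold` separated from the previous exceedance by more than
    `min_gap` samples start a new event (runs declustering)."""
    events = []
    last = None  # time index of the most recent exceedance
    for t, h in enumerate(swh):
        if h > threshold:
            if last is None or t - last > min_gap:
                events.append(t)  # sufficiently separated: new event
            last = t
    return events
```

In practice the threshold would be the local 95th percentile of SWH, and `min_gap` the lag at which the series' autocorrelation decays, so that storms at different buoys are declustered consistently.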
Marshal Renuka Kunte, Kunal Chatterjee, D Basannar
Introduction: Pelvic stress fractures have been commonly reported among women military trainees worldwide. They have been reported as the most common stress fracture in women in Indian military training academies. With the recent increase in avenues for women to join the armed forces and other paramilitary forces, there is a need for medical personnel to be familiar with the epidemiology of pelvic stress fractures unique to women. This article presents a clinical and epidemiological profile of these fractures and suggests approaches for their prevention. Methods: Fifty-one cases of pelvic stress fractures were observed among a cohort of 608 women trainees in the military training establishments of the Indian Army, Navy, and Air Force. Data on clinical presentation, diagnosis, and management were collected for those women trainees who developed radiologically confirmed pelvic stress fractures. Information was also collected to identify possible risk factors. Measures which can be implemented during training have been suggested to reduce the risk of these fractures. Results: All women trainees with pelvic stress fractures presented with groin pain and difficulty in running and drill, usually in the 6th–9th week of training. X-rays showed involvement of the inferior pubic ramus. All of them made a good recovery with conservative management. A large number of training-related factors and certain individual anatomical and physiological factors appear to contribute specifically to these fractures. Approaches to minimize the risk of pelvic stress fractures have been recommended, covering changes to training and the management of individual factors. Conclusion: Specific measures for the prevention of pelvic stress fractures need to be instituted, addressing the risk factors for Indian women undergoing military training, along with general measures.
To investigate the driving characteristics of a multi-axle special vehicle under the limit condition of missing tires, a five-axle special vehicle dynamics simulation model, including vehicle parameters, power transmission and braking systems, axle and suspension systems, steering system, and tire system, was established in the vehicle dynamics software TruckSim. Focusing on the effect of missing tires, the tire parameters of the simulation model were calibrated against six-component tire tests, and a 0–80–0 km/h straight-line acceleration and braking ride comfort simulation test and a double-lane-change handling stability simulation test were conducted to study ride comfort and stability with tires missing at different positions. Based on the deviation of the vehicle's center of mass, the maximum number of missing tires at different driving speeds was analyzed, and tire layout methods as well as the degree of influence of each axle's tires on the vehicle at different driving speeds were proposed. The results show that the multi-axle special vehicle can still be driven under the limit condition of missing tires, and that missing tires at different positions have little effect on driving speed. The influence of each axle's tires on the driving of this type of vehicle, ranked in order of importance, is: the first axle, the fifth axle, the third axle, the second axle, and the fourth axle. When the vehicle travels at 50 km/h, 30 km/h, and 20 km/h, the maximum numbers of missing tires are 1, 2, and 3, respectively. This study can provide theoretical support for assessing the driving safety of multi-axle special vehicles.
Objectives: Aiming at the bottleneck of insufficient electric endurance in unattended vehicles, the configuration design, motion performance, and energy capture efficiency of a manta ray bionic unmanned underwater vehicle (UUV) are analyzed. Methods: The configuration of a multi-module bionic long-endurance manta ray UUV is proposed, and its motion and energy capture mechanism are derived under the principle of wave energy capture by a floating hydraulic cylinder. Next, based on multi-module floating body theory and three-dimensional potential theory, hydrodynamic calculations are carried out, revealing the motion response and wave energy capture behavior of the multi-module manta ray UUV under different wave directions and different connection stiffness and damping. Results: The wave energy capture efficiency of the multi-module manta ray UUV in waves is studied in combination with the optimal stiffness and damping of the hydraulic cylinder. Conclusions: The wave energy capture characteristics of a multi-module manta ray UUV can be analyzed through its motion equations and energy capture formula.
Inequality prevails in science. Individual inequality means that most perish quickly and only a few are successful, while gender inequality implies that there are differences in achievements for women and men. Using large-scale bibliographic data and following a computational approach, we study the evolution of individual and gender inequality for cohorts from 1970 to 2000 in the whole field of computer science as it grows and becomes a team-based science. We find that individual inequality in productivity (publications) increases over a scholar's career but is historically invariant, while individual inequality in impact (citations), albeit larger, is stable across cohorts and careers. Gender inequality prevails regarding productivity, but there is no evidence for differences in impact. The Matthew Effect is shown to accumulate advantages to early achievements and to become stronger over the decades, indicating the rise of a "publish or perish" imperative. Only some authors manage to reap the benefits that publishing in teams promises. The Matthew Effect then amplifies initial differences and propagates the gender gap. Women continue to fall behind because they continue to be at a higher risk of dropping out for reasons that have nothing to do with early-career achievements or social support. Our findings suggest that mentoring programs for women to improve their social-networking skills can help to reduce gender inequality.
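Inequality in productivity or impact across a cohort is commonly summarized with a Gini coefficient; whether the authors use this exact measure is not stated here, so the following is a generic sketch of how such cohort-level inequality can be quantified:

```python
def gini(values):
    """Gini coefficient of a non-negative sequence:
    0 means perfect equality; values approach 1 as one individual
    accumulates nearly everything."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with i = 1..n
    # over the ascending-sorted values.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n
```

Computing this per cohort and per career year, for publication counts and citation counts separately, gives exactly the kind of individual-inequality trajectories the study compares across the 1970–2000 cohorts.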
Chase Yakaboski, Gregory Hyde, Clement Nyanhongo
et al.
AI for Science (AI4Science), particularly in the form of self-driving labs, has the potential to sideline human involvement and hinder scientific discovery within the broader community. While prior research has focused on ensuring the responsible deployment of AI applications, enhancing security, and ensuring interpretability, we also propose that promoting openness in AI4Science discoveries should be carefully considered. In this paper, we introduce the concept of AI for Open Science (AI4OS) as a multi-agent extension of AI4Science with the core principle of maximizing open knowledge translation throughout the scientific enterprise rather than within a single organizational unit. We use the established principles of Knowledge Discovery and Data Mining (KDD) to formalize a language around AI4OS. We then discuss three principal stages of knowledge translation embedded in AI4Science systems and detail specific points where openness can be applied to yield an AI4OS alternative. Lastly, we formulate a theoretical metric to assess AI4OS, with a supporting ethical argument highlighting its importance. Our goal is that by drawing attention to AI4OS we can ensure the natural consequence of AI4Science (e.g., self-driving labs) benefits not only its developers but society as a whole.
Art can be a powerful tool in science engagement efforts to help facilitate learning and public discourse around space and space exploration. The Art of Planetary Science is an annual exhibition combining science and art which aims to help people to connect more meaningfully to science outside of traditional education models. Works solicited from scientists and from the public explore the beauty of the universe, as well as communicate and abstract scientific concepts from an artistic framework. These events offer the public a unique perspective on science and an opportunity to participate in dialogue around how and why we explore space. As an extension of the exhibition, a series of workshops for artists and educators focuses on techniques in creating science-driven art and how it can be used as a tool for scientific inquiry. We will discuss our success with these efforts and the important role that art can play in shaping the evolving narrative of humanity's relationship to space.