This paper proposes a human-centered conceptual model integrating lean and Industry 4.0, derived from a literature review and validated through a case study at an advanced automotive first-tier supplier. Addressing a significant gap in existing research on lean Industry 4.0 implementations, the study provides both theoretical insights and practical findings. It emphasizes the importance of a human-centered approach and identifies key enablers and barriers. The implementation process in the case study is considered at group level and model-site level, through operational, social, and technological perspectives, in a five-phase multi-method approach. The study shows what an effective human-centered lean Industry 4.0 implementation looks like and how advanced lean tools can be digitized, highlighting 26 positive and 10 negative aspects of the case and their causal relations. It also shows how, given appropriate internal and external technological know-how and people skills, a successful implementation can benefit both the organization and its employees, based on a conceptual model that serves as a first step toward lean Industry 5.0.
DaemonSec is an early-stage startup exploring machine learning (ML)-based security for Linux daemons, a critical yet often overlooked attack surface. Daemon security remains underexplored, and conventional defenses struggle against adaptive threats and zero-day exploits. To assess the perspectives of IT professionals on ML-driven daemon protection, a systematic interview study was conducted with 22 professionals from industry and academia, using semi-structured interviews. The study evaluates adoption, feasibility, and trust in ML-based security solutions. While participants recognized the potential of ML for real-time anomaly detection, findings reveal skepticism toward full automation, limited security awareness among non-security roles, and concerns about patching delays creating attack windows. This paper presents the methods, key findings, and implications for advancing ML-driven daemon security in industry.
Carunia Mulya Firdausy, Fadhlan Zuhdi, Khoiru Rizqy Rambe
et al.
The serious consequences of greenhouse gas emissions destabilizing the world's climate have led the European Union (EU) to introduce the Carbon Border Adjustment Mechanism (CBAM) regulation. This regulation will be fully implemented by the EU in 2026 for its trading partner countries for six products, namely fertilizers, cement, iron and steel, aluminium, electricity, and hydrogen. Taking Indonesia as one of the EU's trading partners, this study aims: (1) to estimate the values and competitiveness of the Indonesian export products to the EU affected by CBAM, including the dynamic changes in their competitiveness, and (2) to analyze perceptions of the readiness of government and business actors to face the implementation of the CBAM. Applying the Revealed Comparative Advantage (RCA) and Revealed Symmetric Comparative Advantage (RSCA) indexes and the Export Product Dynamic (EPD) method to data published by the International Trade Centre (ITC) and the Indonesia Iron and Steel Industry Association, the results indicate that the values and shares of Indonesia's export products to the European Union under CBAM are very low. Fertilizer and cement products are in the retreat position, which indicates negative growth in the country's export share of a product accompanied by a decrease in total exports, resulting in an uncompetitive and stagnant position. Aluminium and iron and steel products, by contrast, are in the lost-opportunity position, indicating that the global export markets for these products are very open and offer export opportunities.
Furthermore, the results of the PESTEL (Political, Economic, Social, Technological, Environmental, and Legal) factor analysis, based on interviews and focus group discussions with resource persons, emphasize the importance of Indonesia having policies in place to face CBAM, including a policy to increase its carbon price, economic incentives, measures anticipating the social impact of CBAM on unemployment, and the fostering of research and innovation in technologies with a low carbon footprint.
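The RCA and RSCA indexes mentioned above follow standard definitions; the sketch below illustrates how they are typically computed. All trade values here are hypothetical, for demonstration only, and are not figures from the study.

```python
# Illustrative computation of the Revealed Comparative Advantage (RCA) and
# Revealed Symmetric Comparative Advantage (RSCA) indexes.
# All trade values below are hypothetical.

def rca(country_product_exports, country_total_exports,
        world_product_exports, world_total_exports):
    """RCA = (X_ij / X_i) / (X_wj / X_w); a value > 1 indicates a
    revealed comparative advantage in product j."""
    return ((country_product_exports / country_total_exports) /
            (world_product_exports / world_total_exports))

def rsca(rca_value):
    """RSCA = (RCA - 1) / (RCA + 1), bounded in (-1, 1);
    positive values indicate comparative advantage."""
    return (rca_value - 1) / (rca_value + 1)

# Hypothetical figures: a product accounting for 0.5% of the country's
# exports but 2% of world exports implies a comparative disadvantage.
r = rca(0.5, 100.0, 2.0, 100.0)
print(round(r, 3))          # 0.25
print(round(rsca(r), 3))    # -0.6
```

The symmetric RSCA form is often preferred for trend analysis because, unlike RCA, it is bounded and treats advantage and disadvantage symmetrically around zero.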
Oxy-fuel combustion technology is a critical pathway for carbon capture in the cement industry. However, the high-concentration CO<sub>2</sub> atmosphere significantly alters multiphysics coupling in the calciner and systematic studies on its comprehensive effects remain limited. To address this, a Computational Particle Fluid Dynamics (CPFD) model using the MP-PIC method was implemented using the commercial software Barracuda Virtual Reactor 22.1.2 to simulate an industrial-scale oxy-fuel cement calciner and validated against industrial data. Under oxy-fuel combustion with 50% oxygen concentration in the tertiary air, simulations showed a 38.4% increase in the solid–gas mass ratio compared to conventional air combustion, resulting in a corresponding 37.7% increase in total pressure drop. Flow resistance was concentrated primarily in the constriction structures. Local temperatures exceeded 1200 °C in high-oxygen regions. The study reveals a competition between the inhibitory effect of high CO<sub>2</sub> partial pressure on limestone decomposition and the promoting effect of elevated overall temperature. Although the CO<sub>2</sub>-rich atmosphere thermodynamically suppresses calcination, the higher operating temperature under oxy-fuel combustion effectively compensates, achieving a raw meal decomposition rate of 92.7%, which meets kiln feed requirements. This research elucidates the complex coupling mechanisms among flow, temperature, and reactions in a full-scale oxy-fuel calciner, providing valuable insights for technology design and optimization.
Artificial intelligence (AI) has seen fast-paced development in industry and academia. However, striking recent advances by industry have stunned the field, inviting a fresh perspective on the role of academic research in this progress. Here, we characterize the impact and type of AI produced by both environments over the last 25 years and establish several patterns. We find that articles published by teams consisting exclusively of industry researchers tend to get greater attention, with a higher chance of being highly cited and citation-disruptive, and several times more likely to produce state-of-the-art models. In contrast, we find that exclusively academic teams publish the bulk of AI research and tend to produce higher novelty work, with single papers having several times higher likelihood of being unconventional and atypical. The respective impact-novelty advantages of industry and academia are robust to controls for subfield, team size, seniority, and prestige. We find that academic-industry collaborations produce the most impactful work overall but do not have the novelty level of academic teams. Together, our findings identify the unique and nearly irreplaceable contributions that both academia and industry make toward the progress of AI.
The interaction between shale bedding planes and fluids significantly weakens their structural integrity, profoundly affecting borehole stability in shale reservoirs. However, traditional analyses often overlook fluid intrusion from the borehole into the bedding planes, leading to an inaccurate understanding of the mechanisms behind shale deterioration and inadequate guidance for drilling engineering design. This study models the process of drilling fluid permeating bedding shale through fluid intrusion experiments. It evaluates how forces acting on the bedding plane and the drilling cycle affect strength evolution, deriving rules governing changes in the mechanical parameters of both the shale matrix and the bedding planes. We developed a borehole stability calculation model that incorporates bedding plane considerations by integrating the established rules for mechanical parameter changes. The model analyzes the effects of the bedding plane, well inclination angle, wellbore azimuth angle, bedding plane inclination angle, and drilling cycle on the collapse pressure and collapse area with different types of drilling fluids. The results indicate that the presence of bedding planes significantly influences borehole stability. Therefore, both matrix and bedding plane damage should be considered to accurately calculate the collapse pressure and area. The well inclination angle, wellbore azimuth angle, and bedding plane inclination angle also impact borehole stability. It is recommended that the horizontal section of the wellbore be drilled in the direction of the minimum horizontal in situ stress. As the drilling cycle extends, the collapse pressure gradually increases, with the largest increase occurring in the direction of the minimum stress. Additionally, the increase in collapse pressure is greater when using water-based drilling fluid than when using oil-based drilling fluid. 
These findings provide theoretical insights for drilling engineering design in bedding shale environments, aiming to enhance borehole drilling safety.
Kalunga Sina-Nduku Tryphene, Deko Oyema Bruno, Link Bukasa Muamba
et al.
Petrographic and petrophysical characterizations of the pre-salt Chela Formation of the Nsiamfumu and Liawenda fields in the onshore Coastal Basin of the D.R. Congo were performed. These characterizations, carried out as part of the static modelling of the reservoirs, involved reading log measurements, thin-section observations of rock samples taken from this formation, log calculations and interpretation, CPI (Computer Processed Interpretation), and DST (Drill Stem Testing) from well tests.
The following observations were obtained:
• Well Lw-1: the formation is characterized by dolomitic limestone and micaceous black shale with a thickness of 34 m, showing heavy oil indications, an average porosity of 17%, and a salinity greater than 300 g/L;
• Well Lw-2: the formation is characterized by islands of sand, coarse sandstone and a few quartz pebbles with a thickness of 10 m, showing evidence of light oil and gas and an average porosity of 24%;
• Well Lw-3: the formation is dominated by micaceous shale and dolomite interbeds with a thickness of 14 m, showing little evidence of hydrocarbons, only salt water and residual oils;
• Well Ns-1: the formation is characterized by beige to grey dolomite, grey shale and white sand with a thickness of 6 m, showing evidence of light oil and an average porosity of 20%.
The electronics industry has made remarkable progress over the past 25 years in reducing the emission intensity of long-lived volatile fluorinated compounds (FCs) that typically represent 80 to 90% of uncontrolled direct (scope 1) greenhouse gas (GHG) emissions during the manufacturing of semiconductor, display, and photovoltaic devices. However, while Normalized Emission Rates (NERs) have decreased in terms of CO2-equivalent emissions per surface area of electronic devices produced, absolute FC emissions from the sector have continued to grow at a compound annual rate of 3.4% between 1995 and 2020. Despite these trends, industry has not, to date, renewed its sectoral commitments to strengthen global FC emission reduction goals for the 2020–2030 decade, and it is unlikely that recently announced net-zero emission objectives from a few leading companies can reverse the industry's upward emission trends in the near term. Meanwhile, the persisting gap between “top-down” atmospheric-measurement-based FC emission estimates and “bottom-up” emission estimates is increasingly concerning, as recent studies suggest that the gap is likely due, in part, to an underestimation of FC emissions from the electronics sector. Thus, the accuracy of industry-average (Tier 2) emission factors is increasingly questionable. Considering that most FCs persist essentially permanently in the atmosphere on a human time scale, the electronics industry needs to reassert its collective leadership on climate action, increase its ambition to reduce absolute emissions, and ground net-zero commitments in science by embarking on a concerted effort to monitor, report, and verify its process and abatement emission factors.
To this effect, this article provides practicable solutions to cross-check bottom-up and top-down emission factors at the facility level and suggests that further implementing cost-effective FC abatement technologies, possibly in conjunction with a sectoral cap-and-trade mechanism, can help achieve residual FC emission levels compatible with net-zero neutralization principles and the 1.5 °C objective of the Paris Agreement.
Ješić Milica, Martinović Bojan, Stančić Stefan
et al.
Gas wells, particularly those situated onshore, play a vital role in the global energy sector by supplying a significant portion of natural gas. However, operational challenges, notably gas hydrate formation, pose substantial issues, leading to complications such as flowline blockages and unexpected well shutdowns. Gas hydrates, crystalline structures resembling ice, form under specific conditions of low temperature and high pressure. This paper explores the complex process of hydrate formation in gas wells, emphasizing the challenges it presents and the need for specialized strategies to address these issues. The primary focus is a case study of an onshore gas well experiencing recurrent hydrate-related problems. Leveraging PipeSim software, a well model is developed, followed by a sensitivity analysis under various operational scenarios. The study investigates mitigation strategies, including choke position adjustments and methanol introduction, crucial for the safe production of oil and gas fields. The significance of this study lies in its aim to optimize well performance and mitigate risks associated with hydrate formation. Findings contribute to existing knowledge and offer practical solutions for industry practitioners and researchers dealing with onshore gas wells. The paper's structure includes a review of related work, details on the experimental setup and results, and concluding remarks. The perennial challenge of hydrate formation in gas wells necessitates a case-specific assessment and individualized approaches. Nodal analysis and well modeling software have become indispensable tools for engineers in developing preventative measures. This paper presents a methodological approach using a specific well as an example, evaluating the effectiveness of three methodologies: downhole choke installation, methanol dosing, and well transfer to a high-pressure separator.
Muhamad Tahriri Rozaini, Denys I. Grekov, Mohamad Azmi Bustam
et al.
HKUST-1 is a metal-organic framework (MOF) that is widely studied as an adsorbent for CO<sub>2</sub> capture because of its high adsorption capacity and good CO<sub>2</sub>/CH<sub>4</sub> selectivity. However, the numerous synthesis routes for HKUST-1 often yield the MOF in powder form, which limits its application in industry. Here, we report the shaping of HKUST-1 powder via the extrusion method, using bio-sourced polylactic acid (PLA) as a binder. The composite was characterized by XRD, FTIR, TGA and SEM analyses. The specific surface area was determined from the N<sub>2</sub> adsorption isotherm, whereas the gas adsorption capacities were investigated via measurements of CO<sub>2</sub> and CH<sub>4</sub> isotherms up to 10 bar at ambient temperature. The material characterization reveals that the composite preserves HKUST-1’s crystalline structure, morphology and textural properties. Furthermore, the CO<sub>2</sub> and CH<sub>4</sub> adsorption isotherms show that there is no degradation of gravimetric gas adsorption capacity after shaping, and the composite yields an isosteric adsorption heat similar to that of pristine HKUST-1 powder. However, some trade-offs could be observed: the composite exhibits a lower bulk density than pristine HKUST-1 powder, and PLA has no impact on pristine HKUST-1’s moisture stability. Overall, this study demonstrates the possibility of shaping commercial HKUST-1 powder, using PLA as a binder, into a larger solid-state-form adsorbent that is suitable for the separation of CO<sub>2</sub> from CH<sub>4</sub> with well-preserved pristine MOF gas-adsorption performance.
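Single-component isotherms like those measured above are commonly reduced to a Langmuir model, from which an ideal CO<sub>2</sub>/CH<sub>4</sub> selectivity can be estimated. The sketch below uses entirely hypothetical uptake data, not the paper's measurements, to illustrate the standard procedure.

```python
# Illustrative reduction of single-component CO2 and CH4 isotherms to a
# Langmuir model and an ideal CO2/CH4 selectivity. Uptake data are synthetic.
import numpy as np

def fit_langmuir(p, q):
    """Fit q = qmax * b * p / (1 + b * p) via the linearized form
    p/q = p/qmax + 1/(qmax*b), solved by least squares."""
    A = np.vstack([p, np.ones_like(p)]).T
    slope, intercept = np.linalg.lstsq(A, p / q, rcond=None)[0]
    qmax = 1.0 / slope
    b = slope / intercept
    return qmax, b

p = np.array([0.5, 1, 2, 4, 6, 8, 10.0])     # pressure, bar
q_co2 = 8.0 * 0.9 * p / (1 + 0.9 * p)        # synthetic CO2 uptake, mmol/g
q_ch4 = 6.0 * 0.15 * p / (1 + 0.15 * p)      # synthetic CH4 uptake, mmol/g

qmax_co2, b_co2 = fit_langmuir(p, q_co2)
qmax_ch4, b_ch4 = fit_langmuir(p, q_ch4)

# Henry's-law (low-pressure) ideal selectivity: (qmax*b)_CO2 / (qmax*b)_CH4.
selectivity = (qmax_co2 * b_co2) / (qmax_ch4 * b_ch4)
print(round(selectivity, 1))   # 8.0 for these synthetic parameters
```

Because the synthetic data are exactly Langmuir-shaped, the fit recovers the generating parameters; with real isotherms, goodness of fit should be checked before comparing Henry's-law slopes.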
This work makes a new attempt to answer the following questions: 1) what aquatic biological resources do the Russian Far-Eastern Seas and the North Pacific possess, and in what quantity? 2) which of them, and how many, are caught annually? 3) are these resources managed rationally in Russia? 4) what is the reason for the current situation, and can it be improved? Based on the data of trawl surveys, a GIS is created with maps of the distribution of biological resources, separately for actually harvested, potentially commercial, and non-commercial ones. The biomass, potential yield and its cost are calculated for the Chukchi, Bering, Okhotsk, and Japan Seas and the North-West Pacific. These water bodies are ranked by their value for commercial fisheries. The obtained estimates are compared with data on annual catch, export, import, and consumption of fish products in Russia. For each water body, the possibility of larger landings is assessed in terms of the harvested species biomass. The harvest from the Okhotsk Sea alone can reach more than 5.2 million tons, which exceeds the «strategic ceiling» for the entire Russian fishery proposed in the «Strategy for development of the fisheries complex of Russian Federation for the period to 2030». For the whole water area under consideration, this level can be exceeded by 3.5–4.7 times, or even by 5.6 times if the potentially commercial species are exploited. The dynamics of domestic catch, trade and consumption of the biological resources are analyzed. The conclusion is that socioeconomic conditions, rather than the state of biological resources, are the reason that Russia still does not harvest more. Some measures are proposed to improve the management of the Russian fishery industry.
The obtained results can also be used to maintain rational use of the biological resources, food security, and nature protection, including evaluation of the cost of biological resources in certain water bodies and assessment of damage caused to nature and biological resources by pollution, construction, oil and gas production, or technogenic accidents.
Neural networks and machine learning have long been used by almost everyone in daily life, perhaps not always consciously. When a social network's algorithm identifies the faces of people in a photo, or a voice assistant helps us search for information, machine learning techniques underpin these activities.
In recent years, neural networks have been finding more and more applications in the fields of oil and gas exploration and production. This article illustrates an application of neural networks to the analysis of seismic data for an active oilfield: predicting a 3D cube of petrophysical properties to further detail the geological model and search for additional hydrocarbon accumulations.
One of the key conditions for successful prediction of petrophysical properties using neural networks is a wide sample of well data for effective training of the non-linear operator. In our case, since this is a producing field, more than 100 wells were available, which fully meets the requirements of the algorithm. Another important condition for applying this technique is having high-quality well ties for the wells used; this step of the workflow is also described in the article.
A distinct feature of neural network analysis, in contrast to classical inversion, is that it does not use a seismic wavelet. The neural network automatically determines an operator that best describes the correlation between several seismic traces in the wellbore area and the log curve. This reduces analysis time and produces express results when the above-mentioned conditions are met, making the neural network technique an effective tool for dynamic analysis of seismic data.
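The core idea described above — a network learning a non-linear operator from seismic amplitudes near a well to a petrophysical log value — can be sketched in miniature. This is not the authors' workflow: the data below are synthetic, and the network, window size, and training settings are illustrative choices.

```python
# Minimal sketch: a one-hidden-layer network learns a non-linear mapping
# from a 9-sample "seismic window" (features) to a "log value" (target).
# All data are synthetic; a real study would use wells with verified ties.
import numpy as np

rng = np.random.default_rng(0)

# 100 synthetic "wells": 9 seismic amplitudes each, target generated by a
# hidden non-linear rule plus measurement noise.
X = rng.normal(size=(100, 9))
y = np.tanh(X @ rng.normal(size=9)) + 0.05 * rng.normal(size=100)

# Network parameters: 9 inputs -> 16 tanh units -> 1 output.
W1 = rng.normal(scale=0.5, size=(9, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=16);      b2 = 0.0
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    err = (h @ W2 + b2) - y               # prediction error
    # Backpropagate the mean-squared-error gradient (full batch).
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h**2)
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"training MSE: {mse:.4f}")
```

In practice the trained operator is applied trace by trace across the full seismic volume to produce the predicted 3D property cube, with held-out wells used to validate the prediction.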
Technological innovation is one of the most important variables in the evolution of the textile industry system. As the innovation process changes, so do the degree of technological diffusion and the state of competitive equilibrium in the textile industry system, which leads to fluctuations in the economic growth of the industry system. The fluctuations resulting from innovation are complex, irregular and imperfectly cyclical. The study of the chaos model of innovation accumulation in the evolution of the textile industry can provide theoretical guidance for technological innovation in the industry and inform the interaction between government and the textile enterprises themselves. It is found that reasonable government regulation parameters contribute to the accelerated accumulation of innovation in the textile industry.
We analytically derive the transport tensor of thermal conductivity in an ultracold, but not yet quantum degenerate, gas of Bosonic lanthanide atoms using the Chapman-Enskog procedure. The tensor coefficients inherit an anisotropy from the anisotropic collision cross section for these dipolar species, manifest in their dependence on the dipole moment, dipole orientation, and $s$-wave scattering length. These functional dependencies open up a pathway for control of macroscopic gas phenomena via tuning of the microscopic atomic interactions. As an illustrative example, we analyze the time evolution of a temperature hot-spot which shows preferential heat diffusion orthogonal to the dipole orientation, a direct consequence of anisotropic thermal conduction.
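The hot-spot behavior described above can be illustrated numerically. The toy model below is not the paper's kinetic-theory result: it simply evolves the anisotropic heat equation with hypothetical diffusivities, larger along x than along y, mimicking faster conduction orthogonal to the dipole orientation, and shows that the spot spreads preferentially along the high-conductivity axis.

```python
# Toy illustration (hypothetical parameters): anisotropic diffusion of a
# temperature hot-spot, dT/dt = kx * d2T/dx2 + ky * d2T/dy2 with kx > ky.
import numpy as np

n, steps, dt, dx = 64, 200, 0.1, 1.0
kx, ky = 1.0, 0.25                      # assumed diffusivities (kx > ky)

coord = np.arange(n) - n / 2
X, Y = np.meshgrid(coord, coord, indexing="ij")
T = np.exp(-(X**2 + Y**2) / 8.0)        # initial Gaussian hot-spot

for _ in range(steps):
    # Explicit finite differences with periodic boundaries; the time step
    # satisfies the stability bound dt*(kx+ky)/dx**2 <= 0.5.
    d2x = np.roll(T, 1, 0) - 2 * T + np.roll(T, -1, 0)
    d2y = np.roll(T, 1, 1) - 2 * T + np.roll(T, -1, 1)
    T = T + dt * (kx * d2x + ky * d2y) / dx**2

# Second moments of the temperature field measure the spread per axis.
var_x = float((T * X**2).sum() / T.sum())
var_y = float((T * Y**2).sum() / T.sum())
print(var_x > var_y)   # True: heat spreads farther along the high-k axis
```

For a Gaussian initial condition each second moment grows linearly in time at a rate set by the corresponding diffusivity, so the anisotropy of the conductivity tensor is directly visible in the aspect ratio of the evolved hot-spot.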
Bin Han, Mohammad Asif Habibi, Bjoern Richerzhagen
et al.
With the Fifth Generation (5G) mobile communication system recently rolled out in many countries, the wireless community is now setting its eyes on the next era: the Sixth Generation (6G). Inheriting from 5G its focus on industrial use cases, 6G is envisaged to become the infrastructural backbone of future intelligent industry. In particular, a combination of 6G and the emerging technology of Digital Twins (DT) will give impetus to the next evolution of Industry 4.0 (I4.0) systems. This article provides a survey of the research area of 6G-empowered industrial DT systems. With a novel vision of a 6G industrial DT ecosystem, this survey discusses the ambitions and potential applications of industrial DT in the 6G era, identifying the emerging challenges as well as the key enabling technologies. The introduced ecosystem is intended to bridge the gaps between humans, machines, and the data infrastructure, and thereby enable numerous novel application scenarios.
Global energy consumption has reached unprecedented levels over the last century due to population and economic growth. There have been significant changes in the global energy economy aimed at reducing greenhouse gas emissions and air pollutants. Due to this trend, many countries around the world are promoting electric technologies as fuel-saving alternatives. The Israeli energy industry integrates renewable sources into its supply system and streamlines consumption. Nevertheless, Israelis know too little about smart meters, energy storage systems, and other modern power-grid technologies that enable a decentralized approach to energy management referred to as distributed energy systems (DES). Using distributed energy systems to generate energy on-site and manage loads can reduce costs, improve reliability, and secure revenue. An effective public education program can help prepare public opinion and reduce barriers to smart use and energy efficiency in the home. We present educating schoolchildren as a way to prepare the public in the community to accept distributed energy systems and renewable energy. In challenging times, it is vital to make great efforts and to remember that change begins with education, and that the best way to achieve intelligent usage and energy efficiency is to start with our children.
Evangelia Siouti, Ksakousti Skyllakou, Ioannis Kioutsioukis
et al.
Air pollution forecasting systems are useful tools for the reduction in human health risks and the eventual improvement of atmospheric quality on regional or urban scales. The SmartAQ (Smart Air Quality) forecasting system combines state-of-the-art meteorological and chemical transport models to provide detailed air pollutant concentration predictions at a resolution of 1 × 1 km<sup>2</sup> for the urban area of interest for the next few days. The Weather Research and Forecasting (WRF) mesoscale numerical weather prediction model is used to produce meteorological fields and the PMCAMx (Particulate Matter Comprehensive Air quality Model with extensions) chemical transport model for the simulation of air pollution. SmartAQ operates automatically in real time and provides, in its current configuration, a three-day forecast of the concentration of tens of gas-phase air pollutants (NO<sub>x</sub>, SO<sub>2</sub>, CO, O<sub>3</sub>, volatile organic compounds, etc.), the complete aerosol size/composition distribution, and the source contributions for all primary and secondary pollutants. The system simulates the regional air quality in Europe at medium spatial resolution and can focus, using high resolution, on any urban area of the continent. The city of Patras in Greece is used for the first SmartAQ application, taking advantage of the available Patras’ dense low-cost sensor network for PM<sub>2.5</sub> (particles smaller than 2.5 μm) concentration measurements. 
Advantages of SmartAQ include (a) a high horizontal spatial resolution of 1 × 1 km<sup>2</sup> for the simulated urban area; (b) advanced treatment of organic aerosol volatility and chemistry; (c) use of an updated emission inventory that includes not only the traditional sources (industry, transport, agriculture, etc.), but also biomass burning from domestic heating and cooking; and (d) forecasting not only of the pollutant concentrations, but also of the source contributions for each of them, using the Particulate matter Source Apportionment Technology (PSAT) algorithm.