This research presents an innovative 330-ml beverage packaging solution that merges accessibility, design, and ergonomics for user comfort. The design incorporates shape principles, clear typography, and minimalist colours and symbols, balancing visual appeal and functionality. As demand for attractive options grows, this study investigates consumer preferences through surveys and interviews. It identifies key factors influencing choices and refines the design through quality function deployment (QFD) and product design development (PDD) methodologies. This approach integrates shape design, typography, and colour theory to meet aesthetic and functional needs. The packaging features a bear-paw-shaped watermark for stability and an ergonomic design for comfortable handling. A minimalist label, with fruit-inspired illustrations, modern fonts, and matte or glossy finishes, enhances visual appeal. The 'Trendy-Bottle Design Platform' aids creation with templates, 3D visualisation, and collaboration tools. Key elements, such as brand details, nutritional facts, allergen information, a recycling sign, and sustainability symbols, ensure functionality and alignment with consumer needs. From a customer perspective, trendy bottles should offer ergonomic designs, spill-proof functionality, eco-friendly materials, and affordability. Manufacturing considerations include optimising shelf space: rectangular bottles save shelf space despite higher costs, whereas rounded bottles save material but increase shelf costs. Collaboration ensures moulds meet aesthetic and functional needs, improving appeal.
A robust and pragmatic technique was developed to classify flow pathways during long-term waterflooding operations in a hydrocarbon reservoir. Specifically, pore structure analysis, wettability tests, relative permeability tests, and long-term waterflooding experiments were conducted and integrated. The effects of pore-throat structures, displacement rates, crude oil viscosities, and wettability on oil displacement efficiency across different flow pathways were then systematically investigated, allowing flow pathways to be classified into primary and secondary ones. For the former, pore-throat structure significantly affects displacement efficiency: for mouth-bar microfacies, cores with larger pore-throat radii and lower fractal dimensions exhibit superior displacement performance, whereas the point-bar microfacies exhibits greater sensitivity to variations in injection parameters. Increasing the injection rate from 0.2 mL/min to 0.5 mL/min can lead to a 7.31% improvement in oil recovery. High-viscosity crude oil also leads to an overall decline in displacement efficiency, with a more pronounced reduction observed in the point-bar microfacies, suggesting that complex pore-throat structures are more sensitive to viscous resistance. For the latter, wettability has the dominant impact, with an increase in oil recovery of 7.12% when the wettability index is increased from 0.17 to 0.21 in the point-bar microfacies.
This systematic review explores how business intelligence (BI) and operations research (OR) help organizations ensure sustainable practices in supply chain management (SCM). Drawing on 56 peer-reviewed studies, this review synthesizes how BI tools support sustainability by transforming large and complex datasets into actionable insights, enhancing transparency, improving forecasting, optimizing production and inventory, reducing waste, and enabling circular economy practices. Complementarily, OR provides methodological rigor through optimization models, simulation, and multicriteria decision-making, enabling organizations to balance economic, environmental, and social objectives in supply chain design and operations. The findings reveal that BI and OR jointly contribute to 11 of the 17 United Nations Sustainable Development Goals (SDGs), demonstrating their strategic relevance for global sustainable development. This paper’s contribution is twofold: it consolidates fragmented academic research through an integrative framework clarifying how BI and OR reinforce sustainability within SCM, and it provides practitioners with evidence of how these tools can generate both operational efficiency and a competitive advantage while meeting environmental and social responsibilities. Future research should focus on bridging existing gaps in the literature and advancing the practical applications of these technologies.
This article describes the development of an application for generating 3D models of bearings used in the design of machine tool spindles. The models are created in CATIA V5 using parametric modelling, which allows bearing dimensions and types to be modified easily according to the type selected from the database. The application has two main parts: spindle design and bearing generation. In the future, we plan to develop other headstock components, such as spindle front ends, seals, and clamping elements, in the same way. The user interface is simple and designed for fast work. The software has a modular structure and can be easily expanded and modified.
Making computers automatically extract latent scientific knowledge from the literature is highly desirable for future materials and chemical research in the artificial intelligence era. Herein, a natural language processing (NLP)-based machine learning technique is employed to build language models and automatically extract hidden information regarding perovskite solar cell (PSC) materials from 29,060 publications. The concept that PSCs contain light-absorbing materials, electron-transporting materials, and hole-transporting materials is successfully learned by the NLP-based machine learning model without a time-consuming human expert training process. The NLP model highlights a hole-transporting material that receives insufficient attention in the literature, which is then examined via density functional theory calculations to provide an atomistic view of the perovskite/hole-transporting layer heterostructures and their optoelectronic properties. Finally, the above results are confirmed by device experiments. The present study demonstrates the viability of NLP as a universal machine learning tool for extracting useful information from existing publications.
Hassan Al Garni, Arunachalam Sundaram, Anjali Awasthi et al.
A major design challenge for a grid-integrated photovoltaic power plant is to generate maximum power under varying loads, irradiance, and outdoor climatic conditions using competitive algorithm-based controllers. The objective of this study is to review experimentally validated advanced maximum power point tracking algorithms for enhancing power generation. A comprehensive analysis of 14 of the most advanced metaheuristics and 17 hybrid homogeneous and heterogeneous metaheuristic techniques is carried out, along with a comparison of algorithm complexity, maximum power point tracking capability, tracking frequency, accuracy, and maximum power extracted from PV systems. The results show that maximum power point tracking controllers mostly use conventional algorithms; however, metaheuristic algorithms and their hybrid variants are found to be superior to conventional techniques under varying environmental conditions. Grey Wolf Optimization combined with Perturb & Observe, together with Jaya-Differential Evolution, are found to be the most competitive techniques. The study shows that standard testing and evaluation procedures can be further developed for comparing metaheuristic algorithms and their hybrid variants when developing advanced maximum power point tracking controllers. The identified algorithms are found to enhance power generation by grid-integrated commercial solar power plants. The results are of importance to the solar industry and researchers worldwide.
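The conventional baseline that these metaheuristics are compared against, Perturb & Observe, reduces to a simple hill-climbing rule on the P-V curve. The following is a minimal sketch of one P&O update step; the step size, the variable names, and the quadratic stand-in for a real panel's P-V characteristic are illustrative assumptions, not any reviewed controller's implementation:

```python
def pv_power(v):
    """Toy concave P-V curve with its maximum power point at v = 17 V
    (an illustrative stand-in for a real PV panel characteristic)."""
    return 80.0 - (v - 17.0) ** 2

def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O update: keep perturbing the voltage in the direction
    that increased power; reverse direction when power drops."""
    dp, dv = p - p_prev, v - v_prev
    if dp == 0:
        return v  # no change in power: hold the operating point
    if (dp > 0) == (dv > 0):
        return v + step  # power rose when voltage moved this way: continue
    return v - step      # power fell: reverse

# Track the maximum power point from a cold start at 10 V.
v_prev, v = 10.0, 10.5
for _ in range(100):
    v_prev, v = v, perturb_and_observe(v, pv_power(v), v_prev, pv_power(v_prev))
```

The fixed step size illustrates the classic P&O trade-off the review alludes to: the tracker settles into a small oscillation around the maximum power point rather than converging exactly, which is one motivation for the metaheuristic and hybrid variants.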
In his Cynefin classification of system behaviours, Snowden (2007) posits that his Simple (Clear) and even Complicated system categories are sufficiently understandable to be amenable to the traditional methods of system analysis and modelling. Here, relationships between entities in the system are clear and unambiguous, and the entities behave predictably. So, we can build mathematical “models” to describe how the system is expected to function in different situations.
Leaving aside his fourth category, Chaotic, as too hard for the moment, this paper focusses on the third category, Complex systems. Yaneer Bar-Yam (1998) defined "complex systems" as systems that "have multiple interacting components, whose collective behaviour cannot be simply inferred from the behaviour of components."
So, the traditional system modelling approach of decomposition into components, attempting to build up a picture of system performance from individual pieces alone, is no longer appropriate for this class. The problem is that most of the methods we currently employ, and particularly the methods that attempt to make the case that these systems are safe (such as fault trees, FMEA, and root cause analysis), rely on this decomposition into components and attempt to make the parts more reliable individually. Even the more systemic approaches, such as Reason’s (1990) Swiss Cheese Model and Leveson’s (2004) System-Theoretic Process Analysis (STPA), encourage adding layers of protection, barriers, strengthened control loops and better safety-critical systems, in the hope that they are making the systems safer. But without a valid model to test the effectiveness of these add-ons, we cannot be sure that they will not have the opposite effect, making the systems even more complex, unreliable and unpredictable.
For complex systems, the careful gathering and documentation of system components and their relationships is still a necessary first step, but we need an additional capacity to model these complex systems, to test them out, and to prove that they work safely and successfully in the real world, not merely to draw fixed component relationships as system “models” (wiring diagrams).
In the last 10 years, an original approach developed by Hollnagel (2012), which models systems as sets of interactive, interdependent “functions” (abstracted from, and independent of, component details), has been developed to the point where it can take the basic data and structures from current component-focussed system engineering “models”, such as MBSE (Goldberg, 1994), and pull them together into a dynamic system model from which analysts can discern how the system really works in practice and predict the emergent behaviours characteristic of complex systems.
 The paper describes how this methodology developed and how it is currently used to model real situations, to analyse incidents and anticipate events.
Shafik Kiraga, Md Nasim Reza, Milon Chowdhury et al.
Site-specific measurements of the crop yield during harvesting are essential for successfully implementing precision management techniques. This study aimed to estimate the mass of radish tubers using the impact principle under simulated vibration and sloped-field harvesting conditions with a laboratory test bench. These conditions included the conveyor speed (CS), impact plate layout (IP), falling height onto the impact plate (FH), the plate angle relative to the horizontal (PH), the field slope, and the vibration of the harvesting machine. Two layouts of impact-type sensors were fabricated and tested, one with a single load cell (SL) and the other with two load cells (DL). An adjustable slope platform and a vibration table equipped with vibration blades were utilized to simulate the slope and vibration effects, respectively. Calibrations were conducted to verify the accuracy of the sensor outputs, processed with the finite impulse response and moving average filters. Radish mass was estimated using an asymmetrically trimmed mean method. The relative percentage error (RE), standard error (SE), coefficient of determination (R²), and analysis of variance (ANOVA) were used to assess the impact plate performance. The results indicated that the SE for both impact plates was less than 4 g in the absence of vibration and slope conditions. The R² for the single and double impact plates ranged from 0.58 to 0.89 and 0.69 to 0.81, respectively. The FH had no significant impact, while the PH significantly affected the mass measurements for both impact plates. On the other hand, the CS significantly affected the plate performance, except for the double-load-cell impact plate. Both vibration and slope affected the mass measurements, with RE values of 9.89% and 13.92%, respectively. The RE for filtered radish signals was reduced from 9.13% to 5.42%.
The tests demonstrated the feasibility of utilizing the impact principle to assess the mass of radishes, opening up possibilities for the development of yield-monitoring systems for crops harvested in a similar manner.
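The asymmetrically trimmed mean used for mass estimation can be sketched as follows. The trim fractions and the load-cell readings below are hypothetical; the idea is that vibration-induced impact spikes mostly inflate the high tail of the samples, so more is trimmed from that side:

```python
def asym_trimmed_mean(samples, low_frac=0.05, high_frac=0.25):
    """Asymmetrically trimmed mean: discard different fractions of the
    lowest and highest sorted samples before averaging. A larger high-side
    trim suppresses one-sided spikes (e.g. vibration-induced impacts)."""
    s = sorted(samples)
    n = len(s)
    lo = int(n * low_frac)
    hi = n - int(n * high_frac)
    core = s[lo:hi]
    return sum(core) / len(core)

# Hypothetical load-cell samples (gram-equivalents) for one radish,
# contaminated by three high-side vibration spikes.
readings = [100.0] * 20 + [300.0, 320.0, 350.0]
estimate = asym_trimmed_mean(readings, low_frac=0.0, high_frac=0.2)
```

Here the plain mean of the contaminated samples would overestimate the mass by roughly 30 g, while the asymmetric trim recovers the uncontaminated level.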
Acoustic emission (AE) signal processing and interpretation are essential in mining engineering to acquire source information about AE events. However, AE signals obtained from coal mine monitoring systems often contain nonlinear noise, limiting the effectiveness of conventional analysis methods. To address this issue, a novel denoising approach using enhanced variational mode decomposition (VMD) and fuzzy entropy is proposed in this study. The denoised AE signal’s spectral multifractal features are analyzed. The optimization algorithm based on VMD with a weighted frequency index is introduced to avoid mode mixing and outperform other decomposition methods. The characteristic parameter Δα of the AE spectral multifractal parameter serves as an early warning indicator of coal instability. These findings contribute to the accurate extraction of time–frequency features and provide insights for on-site AE signal processing.
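Fuzzy entropy, used here alongside VMD to identify informative modes, quantifies the regularity of a time series via a soft template-matching rule. Below is a self-contained sketch; the Gaussian membership function and the parameter choices m = 2 and r = 0.2·σ follow one common formulation of fuzzy entropy and are assumptions, not necessarily the authors' exact settings:

```python
import math
import random

def _phi(ts, m, r):
    """Mean fuzzy membership over all pairs of length-m templates."""
    temps = []
    for i in range(len(ts) - m + 1):
        seg = ts[i:i + m]
        mu = sum(seg) / m
        temps.append([x - mu for x in seg])  # remove each template's baseline
    total, count = 0.0, 0
    for i in range(len(temps)):
        for j in range(i + 1, len(temps)):
            d = max(abs(a - b) for a, b in zip(temps[i], temps[j]))
            total += math.exp(-((d / r) ** 2))  # Gaussian membership degree
            count += 1
    return total / count

def fuzzy_entropy(ts, m=2, r_factor=0.2):
    """FuzzyEn = ln phi(m) - ln phi(m+1); larger values mean less regularity."""
    mu = sum(ts) / len(ts)
    sd = (sum((x - mu) ** 2 for x in ts) / len(ts)) ** 0.5
    r = r_factor * sd if sd > 0 else r_factor
    return math.log(_phi(ts, m, r)) - math.log(_phi(ts, m + 1, r))

# A smooth periodic signal should score lower than Gaussian noise.
smooth = [math.sin(2 * math.pi * i / 25.0) for i in range(200)]
rng = random.Random(0)
noisy = [rng.gauss(0.0, 1.0) for _ in range(200)]
fe_smooth = fuzzy_entropy(smooth)
fe_noisy = fuzzy_entropy(noisy)
```

In a VMD-based denoiser of the kind described, a score like this can be computed per decomposed mode, and modes whose entropy resembles that of noise are discarded before reconstruction.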
This study converted the hybridized oil produced from a blend of seed oil and animal waste fats to biodiesel using a catalyst developed from palm kernel empty burnt bunch ash (PKEBBA). The hybridized oil was obtained via the specific gravity method, and the properties of the oils were determined. The developed catalyst was characterized using SEM, FTIR, XRF-FT, BET adsorption, and qualitative analysis. Process optimization was carried out using RSM-CCD and ANN-GA with reference to four variables, namely reaction period, catalyst concentration, reaction temperature, and E-OH/OMR. The kinetic and thermodynamic parameters of the transesterification reaction were also determined. The developed catalyst was recycled and reused, while the quality of the biodiesel was examined with a view to determining its potential to replace conventional diesel. Results showed low viscosity and acid value of the hybridized oil, which was obtained in a single-stage conversion. The mix ratio of the hybridized oil was found to be 33:34:33 with respect to pumpkin seed oil, goat fat, and poultry waste fat. The developed heterogeneous catalyst contained CaCO3 as the major component of the PKEBBA. Process optimization showed that ANN-GA gave a better optimum validated yield of 99.20% (wt./wt.) than RSM-CCD (98.44% wt./wt.). The considered design variables were mutually significant at p < 0.0001. The rate constant was 0.0177 min⁻¹, while the thermodynamic parameters at the highest temperature (348 K) were ΔGr = 101.38 kJ/mol, ΔHr = −5.82 × 10⁻⁵ kJ/mol, and ΔSr = −291.32 J/(mol·K). Reusability tests showed that catalyst performance declined after 7 cycles. The produced biodiesel has fuel properties similar to conventional diesel. The study concluded that the hybridization of oils for biodiesel conversion is viable.
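The reported thermodynamic parameters are mutually consistent under the standard relation ΔG = ΔH − TΔS when ΔS is read in J/(mol·K). A quick numerical check, together with the first-order conversion law implied by the reported rate constant (function names, unit handling, and the 60-minute example time are illustrative):

```python
import math

def gibbs_free_energy(dH_kJ_per_mol, dS_J_per_mol_K, T_K):
    """Gibbs relation dG = dH - T*dS, returning kJ/mol
    (dH in kJ/mol, dS in J/(mol*K))."""
    return dH_kJ_per_mol - T_K * (dS_J_per_mol_K / 1000.0)

def first_order_conversion(k_per_min, t_min):
    """Fractional conversion for first-order kinetics: x = 1 - exp(-k*t)."""
    return 1.0 - math.exp(-k_per_min * t_min)

# Reproduce the reported dG at 348 K from the reported dH and dS.
dG = gibbs_free_energy(-5.82e-5, -291.32, 348.0)

# Conversion implied by k = 0.0177 min^-1 after a hypothetical 60 min run.
x_60 = first_order_conversion(0.0177, 60.0)
```

The check recovers ΔG ≈ 101.38 kJ/mol, confirming that the entropy term dominates the free-energy change at the tested temperature.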
This study investigates a road-rail intermodal routing problem in a hub-and-spoke network. A carbon cap-and-trade policy is accommodated within the routing to reduce carbon dioxide emissions. Multiple time windows are employed to enhance customer flexibility and achieve on-time pickup and delivery services. Road service flexibility and the resulting truck operations optimization are explored by combining truck departure time planning under traffic restrictions and speed optimization with the routing. To enhance the feasibility and optimality of the problem optimization, the routing problem is formulated in a fuzzy environment where capacity and the carbon trading price rate are trapezoidal fuzzy parameters. Based on the customer-centric objective setting, a fuzzy nonlinear optimization model and its linear reformulation are given to formulate the proposed routing problem, which combines distribution route design, time window selection and truck operations optimization. A robust possibilistic programming approach is developed to optimize the routing problem by obtaining its robust solutions. A case study is presented to demonstrate the feasibility of the proposed approaches. The results show that the multiple time windows and truck operations optimization can lower the total costs, enhance the optimality robustness and reduce the carbon dioxide emissions of the routing optimization. The sensitivity analysis finds that increasing the lower bound of the confidence level in the robust possibilistic programming model improves the robustness and environmental sustainability of the routing optimization but worsens its economy.
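A trapezoidal fuzzy parameter (a₁, a₂, a₃, a₄) such as the capacity or carbon trading price rate here is often defuzzified via its expected value, the average of its four defining points. The sketch below shows this common operator; it is one standard choice in possibilistic programming, not necessarily the exact operator of the paper's robust possibilistic formulation:

```python
def trapezoidal_expected_value(a1, a2, a3, a4):
    """Expected value of a trapezoidal fuzzy number (a1 <= a2 <= a3 <= a4):
    the average of its four defining points, a common defuzzification
    in (robust) possibilistic programming models."""
    return (a1 + a2 + a3 + a4) / 4.0

# Hypothetical fuzzy carbon trading price rate (currency units per tonne):
# "around 25-30, certainly between 20 and 45".
crisp_price = trapezoidal_expected_value(20.0, 25.0, 30.0, 45.0)
```

The right-skewed example gives a crisp price above the most-plausible interval's midpoint, which is exactly how a fuzzy parameter lets the model hedge against unfavourable realizations.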
Alexander Genser, Noel Hautle, Michail Makridis et al.
A reliable estimation of the traffic state in a network is essential, as it is the input to any traffic management strategy. Using the same type of sensor throughout a large network is not feasible; as a result, data from different sources must be fused for the same location. However, estimating the traffic state while combining input data from multiple sensors is complex for several reasons, such as variable specifications per sensor type, different noise levels, and heterogeneous data inputs. To assess sensor accuracy and propose a fusion methodology, we organized a video measurement campaign in an urban test area in Zurich, Switzerland. The work focuses on capturing traffic conditions in terms of traffic flows and travel times. The video measurements are processed (a) manually for ground truth and (b) with an algorithm for license plate recognition. Additional processing of data from established thermal imaging cameras and the Google Distance Matrix allows the accuracy and robustness of the various sensors to be evaluated. Finally, we propose a baseline MLR (multiple linear regression) estimation model (5% of ground truth) that is compared to a final MLR model fusing the 5% sample with conventional loop detector and traffic signal data. The comparison with the ground truth demonstrates the efficiency and robustness of the proposed assessment and estimation methodology.
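Both the baseline and the fusion models are multiple linear regressions, which can be fitted by ordinary least squares. A dependency-free sketch via the normal equations follows; the feature layout (loop detector flow and a signal-timing share predicting travel time) and the toy data are hypothetical, and a production pipeline would use a numerical library instead:

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    with an intercept column prepended; solved by Gaussian elimination."""
    n = len(X)
    Xa = [[1.0] + list(row) for row in X]
    k = len(Xa[0])
    A = [[sum(Xa[r][i] * Xa[r][j] for r in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(Xa[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):  # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta  # [intercept, coefficient_1, coefficient_2, ...]

# Hypothetical fused inputs: (loop detector flow, green-time share) -> travel time.
X = [(1.0, 2.0), (2.0, 1.0), (3.0, 5.0), (4.0, 2.0), (5.0, 7.0), (6.0, 1.0)]
y = [2.0 + 3.0 * x1 - 1.0 * x2 for x1, x2 in X]
beta = ols_fit(X, y)
```

On noiseless data the fit recovers the generating coefficients exactly, which makes the sketch easy to verify; the fusion step in the paper amounts to widening X with additional sensor-derived columns.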
Bryn C. Taylor, Franck Lejzerowicz, Marion Poirel et al.
ABSTRACT Lifestyle factors, such as diet, strongly influence the structure, diversity, and composition of the microbiome. While we have witnessed over the last several years a resurgence of interest in fermented foods, no study has specifically explored the effects of their consumption on gut microbiota in large cohorts. To assess whether the consumption of fermented foods is associated with a systematic signal in the gut microbiome and metabolome, we used a multi-omic approach (16S rRNA amplicon sequencing, metagenomic sequencing, and untargeted mass spectrometry) to analyze stool samples from 6,811 individuals from the American Gut Project, including 115 individuals specifically recruited for their frequency of fermented food consumption for a targeted 4-week longitudinal study. We observed subtle but statistically significant differences between consumers and nonconsumers in beta diversity as well as differential taxa between the two groups. We found that the metabolome of fermented food consumers was enriched with conjugated linoleic acid (CLA), a putatively health-promoting molecule. Cross-omic analyses between metagenomic sequencing and mass spectrometry suggest that CLA may be driven by taxa associated with fermented food consumers. Collectively, we found modest yet persistent signatures associated with fermented food consumption that appear present in multiple -omic types which motivate further investigation of how different types of fermented food impact the gut microbiome and overall health. IMPORTANCE Public interest in the effects of fermented food on the human gut microbiome is high, but limited studies have explored the association between fermented food consumption and the gut microbiome in large cohorts. Here, we used a combination of omics-based analyses to study the relationship between the microbiome and fermented food consumption in thousands of people using both cross-sectional and longitudinal data. 
We found that fermented food consumers have subtle differences in their gut microbiota structure, which is enriched in conjugated linoleic acid, thought to be beneficial. The results suggest that further studies of specific kinds of fermented food and their impacts on the microbiome and health will be useful.
The performance of the i-vector-based speaker-aware training method is poor because MFCC features, which have relatively poor robustness, are used as the input for i-vector extraction. To solve this problem, an improved i-vector-based speaker-aware training method is proposed. First, a low-dimensional feature extraction method based on SVD is proposed; the features extracted by this method then replace MFCC, allowing better i-vectors to be extracted. Experimental results show that, on the Vystadial_cz corpus, compared with the DNN-HMM speech recognition system and the original i-vector-based speaker-aware training method, the recognition performance of this method is improved by 1.62% and 1.52%, respectively; on the WSJ corpus, the recognition performance is improved by 3.9% and 1.48%, respectively.
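SVD-based dimensionality reduction of the kind described projects each high-dimensional feature vector onto the leading singular directions of the feature matrix. Below is a pure-Python sketch that finds the top direction by power iteration on the Gram matrix AᵀA and reduces each vector to one coordinate; the rank-1 projection and the toy data are illustrative, as the paper's method presumably retains several components:

```python
def top_singular_direction(rows, iters=100):
    """Leading right-singular vector of the data matrix, obtained by
    power iteration on the Gram matrix G = A'A (pure-Python sketch)."""
    m = len(rows[0])
    G = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(G[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(rows, v):
    """Reduce each feature vector to its coordinate along direction v."""
    return [sum(a * b for a, b in zip(r, v)) for r in rows]

# Toy 2-D feature matrix whose variance lies almost entirely along
# the first coordinate, so the top direction should be close to (1, 0).
feats = [(3.0, 0.1), (-3.0, -0.1), (2.0, 0.1), (-2.0, -0.1)]
direction = top_singular_direction(feats)
reduced = project(feats, direction)
```

Keeping only the leading directions discards low-variance (often noise-dominated) components, which is the intuition behind replacing raw MFCCs with SVD-derived features for more robust i-vector extraction.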
Scholarly affinities are one of the most fundamental hidden dynamics that drive scientific development. Some affinities are actual, and consequently can be measured through classical academic metrics such as co-authoring. Other affinities are potential, and therefore do not leave visible traces in information systems; for instance, some peers may share interests without actually knowing it. This article illustrates the development of a map of affinities for academic collectives, designed to be relevant to three audiences: the management, the scholars themselves, and the external public. Our case study involves the School of Architecture, Civil and Environmental Engineering of EPFL, hereinafter ENAC. The school consists of around 1,000 scholars, 70 laboratories, and 3 institutes. The actual affinities are modeled using the data available from the information systems reporting publications, teaching, and advising, whereas the potential affinities are addressed through text mining of the publications. The major challenge in designing such a map is to represent the multi-dimensionality and multi-scale nature of the information. The affinities are not only computed from heterogeneous sources of information; they also apply at different scales. The map, thus, shows local affinities inside a given laboratory, as well as global affinities among laboratories. This article presents a graphical grammar to represent affinities. Its effectiveness is illustrated by two realizations of the design proposal: an interactive online system in which the map can be parameterized, and a large-scale carpet of 250 square meters.
In both cases, we discuss how the materiality influences the representation of data, in particular the way key questions could be appropriately addressed considering the three target audiences: the insights gained by the management and their consequences in terms of governance, the understanding of the scholars’ own positioning in the academic group in order to foster opportunities for new collaborations and, eventually, the interpretation of the structure from a general public to evaluate the relevance of the tool for external communication.
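Potential affinities derived from text mining of publications are commonly scored with TF-IDF weighting and cosine similarity. The sketch below illustrates the idea; the smoothed IDF, whitespace tokenization, and one-line "publication records" are simplifying assumptions, since the ENAC pipeline is not described at this level of detail:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Term frequency x smoothed inverse document frequency weights
    (whitespace tokenization; smoothing keeps every weight positive)."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))
    n = len(docs)
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: c * (math.log((1 + n) / (1 + df[t])) + 1.0)
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical one-line publication profiles for three laboratories.
abstracts = [
    "urban morphology and sustainable design",
    "sustainable urban design strategies",
    "soil mechanics and foundation testing",
]
vecs = tfidf_vectors(abstracts)
```

Two labs with overlapping vocabularies score high even if they never co-author, which is precisely what makes such a measure suitable for surfacing potential, as opposed to actual, affinities.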
Najim Abid Jassim, Asst. Prof. Dr.; Athraa Hameed Turki, MSc student
Chilled ceiling systems offer potential for overall capital savings. The main aim of the present research is to investigate the thermal performance of an indirect-contact closed-circuit cooling tower (ICCCCT) used with a chilled ceiling, to gain deeper knowledge in this important field of engineering, which has traditionally been applied in various industrial and HVAC systems. To this end, experimental work was carried out on the ICCCCT used with a chilled ceiling, and the thermal performance of the closed wet cooling tower operating with the chilled ceiling was investigated both experimentally and theoretically. Different experimental tests were conducted by varying the controlling parameters to investigate their effects on ICCCCT characteristics such as tower cooling capacity, chilled ceiling cooling capacity, tower saturation efficiency, mass transfer coefficient and heat transfer coefficient. The following controlling parameters were varied during the experiments: spray water flow rate (90 to 150 kg/hr), ambient air wet-bulb temperature (12 to 18 °C), and chilled ceiling flow rate (2 to 6 L/min).
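Tower saturation efficiency, one of the characteristics studied, is conventionally defined as the achieved water cooling range divided by the maximum thermodynamically possible range, which is bounded by the inlet air wet-bulb temperature. A small sketch of this definition (the sample temperatures are hypothetical, chosen within the tested 12 to 18 °C wet-bulb range):

```python
def saturation_efficiency(t_water_in, t_water_out, t_wet_bulb):
    """Tower saturation (thermal) efficiency: the achieved water
    temperature drop divided by the maximum possible drop, whose lower
    limit is the inlet air wet-bulb temperature (all values in deg C)."""
    return (t_water_in - t_water_out) / (t_water_in - t_wet_bulb)

# Hypothetical operating point: water cooled from 35 to 29 deg C
# with inlet air at a 15 deg C wet-bulb temperature.
eff = saturation_efficiency(35.0, 29.0, 15.0)
```

An efficiency of 1 would mean the outlet water reaches the wet-bulb temperature, the theoretical limit of evaporative cooling; real towers such as the ICCCCT operate well below it.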