F. Michael Bartlett, Khalid Backtash
Results for "Engineering (General). Civil engineering (General)"
Showing 20 of ~8,093,587 results · from CrossRef, DOAJ, arXiv, Semantic Scholar
Ali Abbasi Godarzi, Khadijeh Abdi Rokni
Energy intensity is one of the most important characteristics of Iran's energy system. Iran is among the most energy-intensive countries in the world, largely because of high energy consumption in the household sector. In this article, we present a non-linear model that considers three scenarios for managing reductions in household energy demand. The model identifies a rational percentage reduction in household energy consumption that, first, eliminates the imbalance between energy production and consumption and, second, yields a rational amount of profit under the various reduction scenarios. These profits are derived from three scenarios. In the first scenario, the reduction in household energy consumption is allocated to reducing energy demand in the industrial sector, generating profit through the value added created there. In the second scenario, all benefits from reducing energy consumption in the household sector are devoted to energy exports, yielding profit from this source. Finally, in the third scenario, the reduced consumption lowers energy supply and, consequently, energy supply costs. To conduct a comprehensive study, combinations of these scenarios have also been modeled and investigated. The model results indicate that with a 25% reduction in household energy consumption over the 2024-2034 timeframe, the energy imbalance is eliminated, and allocating 5% of this reduction entirely to the industrial sector results in profit equivalent to $164.18 billion. In the combined scenarios, the greater the share of the first scenario, the higher the resulting profit; the optimum is reached in the pure first scenario.
Huimin Li, Lingfeng Li, Bin Liu et al.
High-Speed Trains (HSTs) have emerged as a mainstream mode of transportation in China, owing to their exceptional safety and efficiency. Ensuring the reliable operation of HSTs is of paramount economic and societal importance. Bearings are critical rotating mechanical components of the transmission system, making their fault diagnosis a topic of extensive attention. This paper provides a systematic review of image encoding-based bearing fault diagnosis methods tailored to the condition monitoring of HSTs. First, it categorizes the image encoding techniques applied in the field of bearing fault diagnosis. Then, it reviews state-of-the-art studies, encompassing both monomodal image conversion and multimodal image fusion approaches. Finally, it highlights current challenges and proposes future research directions to advance intelligent fault diagnosis in HSTs, aiming to provide a valuable reference for researchers and engineers in the field of intelligent operation and maintenance.
Ibrahim Abbas, Aboelnour Abdalla, Areej Almuneef et al.
This study investigates generalized thermoelastic interaction in porous asphaltic materials subjected to thermal loading, using a fractional model with time-delay effects. The framework incorporates the Riemann-Liouville fractional derivative to account for memory-dependent heat conduction, extending classical thermoelasticity into a more accurate and comprehensive domain. The Lord-Shulman model with one relaxation time is adopted to describe the coupling between mechanical and thermal responses. The governing equations are solved using the Laplace transform and the eigenvalues approach, and the Stehfest algorithm is employed for numerical inversion. A detailed analysis is presented for the temperature distribution, displacement, and stress fields in both solid and liquid phases of the porous medium under traction-free and thermally loaded boundary conditions. The numerical calculations show how different sets of fractional parameters affect the temperature, stress, and displacement in the solid and liquid phases. Finally, the visual representation of the data illustrates the distinctions between the fractional poro-thermoelasticity and classical coupled thermoelasticity formulations.
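The Stehfest inversion step used above can be sketched compactly; the following is a minimal illustration (the function name, the choice N = 12, and the test transform are our own, not the paper's):

```python
import math

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s) at t.
    N must be even; N = 12 is a common choice for smooth transforms."""
    ln2 = math.log(2.0)
    half = N // 2
    total = 0.0
    for i in range(1, N + 1):
        v = 0.0  # Stehfest weight V_i
        for k in range((i + 1) // 2, min(i, half) + 1):
            v += (k ** half * math.factorial(2 * k)
                  / (math.factorial(half - k) * math.factorial(k)
                     * math.factorial(k - 1) * math.factorial(i - k)
                     * math.factorial(2 * k - i)))
        total += (-1) ** (half + i) * v * F(i * ln2 / t)
    return ln2 / t * total

# Sanity check: the Laplace transform of e^{-t} is 1/(s + 1)
approx = stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0)
```

Inverting F(s) = 1/(s + 1) at t = 1 recovers e^(-1), a standard sanity check for the weights before applying them to a coupled thermoelastic system.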
The Truyen Tran, Thu Minh Tran, Xuan Tung Nguyen et al.
This study presents predictions of the durability of lightweight concrete exposed to a chloride environment under pre-compressive load. The research employs Keramzit aggregate as the coarse aggregate for the lightweight concrete. Following a 28-day curing period in water, the concrete specimens are subjected to varying levels of pre-compressive stress. Rapid Chloride Permeability Testing is then conducted to determine the chloride diffusion coefficient. Drawing on the experimental findings, the study establishes a correlation between the chloride diffusion coefficient and the pre-compressive stress level. Furthermore, Monte-Carlo simulation is employed to assess the influence of stochastic variables on the corrosion likelihood of concrete structures using lightweight aggregates, and thereby to appraise their operational lifespan. These stochastic variables encompass the chloride diffusion coefficient, surface chloride concentration, critical chloride concentration, concrete protection layer thickness, and a coefficient contingent on environmental conditions.
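The Monte-Carlo lifespan assessment described above can be illustrated with a minimal sketch based on the error-function solution of Fick's second law; all distributions and parameter values below are hypothetical placeholders, not the study's data:

```python
import math
import random

def corrosion_probability(t_years, n_samples=20000, seed=1):
    """Monte-Carlo estimate of the probability that the chloride content at
    the reinforcement exceeds the critical content after t_years.
    Concentration profile from Fick's second law:
        C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t))))
    All distributions and values below are illustrative placeholders."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        D = rng.lognormvariate(math.log(5e-12), 0.3)  # diffusion coefficient, m^2/s
        Cs = rng.gauss(0.60, 0.10)                    # surface chloride, % binder mass
        Ccr = rng.gauss(0.40, 0.05)                   # critical chloride content
        cover = rng.gauss(0.045, 0.005)               # protection layer thickness, m
        t = t_years * 365.25 * 24.0 * 3600.0          # design life in seconds
        C = Cs * (1.0 - math.erf(cover / (2.0 * math.sqrt(D * t))))
        if C >= Ccr:
            failures += 1
    return failures / n_samples
```

Because chloride content at the rebar grows with time for any fixed sample, the estimated corrosion probability increases monotonically with the design life.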
Dinesh Eswararaj, Ajay Babu Nellipudi, Vandana Kollati
The automotive industry generates vast amounts of data from sensors, telemetry, diagnostics, and real-time operations. Efficient data engineering is critical to handle challenges of latency, scalability, and consistency. Modern data lakehouse formats, namely Delta Parquet, Apache Iceberg, and Apache Hudi, offer features such as ACID transactions, schema enforcement, and real-time ingestion, combining the strengths of data lakes and warehouses to support complex use cases. This study presents a comparative analysis of Delta Parquet, Iceberg, and Hudi using real-world time-series automotive telemetry data with fields such as vehicle ID, timestamp, location, and event metrics. The evaluation considers modeling strategies, partitioning, CDC support, query performance, scalability, data consistency, and ecosystem maturity. Key findings show that Delta Parquet provides strong ML readiness and governance, Iceberg delivers high performance for batch analytics and cloud-native workloads, while Hudi is optimized for real-time ingestion and incremental processing. Each format exhibits tradeoffs in query efficiency, time-travel, and update semantics. The study offers insights for selecting or combining formats to support fleet management, predictive maintenance, and route optimization. Using structured datasets and realistic queries, the results provide practical guidance for scaling data pipelines and integrating machine learning models in automotive applications.
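The record-level upsert semantics that distinguish these formats (Hudi in particular) can be modeled with a toy in-memory sketch; the field names (`vehicle_id`, `timestamp`) follow the telemetry schema mentioned above, but this is plain Python for illustration, not any format's actual API:

```python
def upsert(table, records, key="vehicle_id"):
    """Merge incoming change records into a table keyed on `key`, keeping
    the newest version of each row by its `timestamp` field. This mimics
    record-level upsert behavior at the level of plain dicts."""
    index = {row[key]: row for row in table}
    for rec in records:
        current = index.get(rec[key])
        if current is None or rec["timestamp"] >= current["timestamp"]:
            index[rec[key]] = rec
    return sorted(index.values(), key=lambda r: r[key])
```

An existing row is replaced only by a newer version of itself, while unseen keys are inserted, which is exactly the CDC-style merge that append-only data lakes lack.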
Daisuke Kikuta, Hiroki Ikeuchi, Kengo Tajiri
Chaos Engineering (CE) is an engineering technique aimed at improving the resilience of distributed systems. It involves intentionally injecting faults into a system to test its resilience, uncover weaknesses, and address them before they cause failures in production. Recent CE tools automate the execution of predefined CE experiments. However, planning such experiments and improving the system based on the experimental results still remain manual processes that are labor-intensive and require multi-domain expertise. To address these challenges and enable anyone to build resilient systems at low cost, this paper proposes ChaosEater, a system that automates the entire CE cycle with Large Language Models (LLMs). It predefines an agentic workflow according to a systematic CE cycle and assigns subdivided processes within the workflow to LLMs. ChaosEater targets CE for software systems built on Kubernetes. Therefore, the LLMs in ChaosEater complete CE cycles through software engineering tasks, including requirement definition, code generation, testing, and debugging. We evaluate ChaosEater through case studies on small- and large-scale Kubernetes systems. The results demonstrate that it consistently completes reasonable CE cycles at low time and monetary cost. Its cycles are also qualitatively validated by human engineers and LLMs.
Hashini Gunatilake, John Grundy, Rashina Hoda et al.
Empathy plays a crucial role in software engineering (SE), influencing collaboration, communication, and decision-making. While prior research has highlighted the importance of empathy in SE, there is limited understanding of how empathy manifests in SE practice, what motivates SE practitioners to demonstrate empathy, and the factors that influence empathy in SE work. Our study explores these aspects through 22 interviews and a large-scale survey with 116 software practitioners. Our findings provide insights into the expression of empathy in SE, the drivers behind empathetic practices, SE activities where empathy is perceived as useful or not, and the other factors that influence empathy. In addition, we offer practical implications for SE practitioners and researchers, offering a deeper understanding of how to effectively integrate empathy into SE processes.
Dilli Ram Bhattarai, Cristina Poleacovschi
Zeshan Faiz, Shumaila Javeed, Iftikhar Ahmed et al.
The major goal of this research study is to solve the fractional order Wolbachia invasive model (FWIM) by developing a computational framework based on the Bayesian regularization backpropagation neural network (BRB-NN) approach. The population of mosquitoes is categorized into two classes, Wolbachia-infected mosquitoes and Wolbachia-uninfected mosquitoes. We also incorporate incomplete cytoplasmic incompatibility and imperfect maternal transmission. We investigate the effects of the fractional order derivative (α) and the reproduction rate of Wolbachia-infected mosquitoes (ϕw) on the dynamics of mosquitoes. The proposed Bayesian regularization backpropagation scheme is applied to three distinct cases using 80% and 20% of the created dataset for training and testing, respectively, with 15 hidden neurons. Comparisons of the results are presented to verify the validity of the proposed technique for solving the model. The Bayesian regularization approach is used to lower the mean square error (MSE) for the fractional order Wolbachia invasive model. The achieved results are based on MSE, correlation, state transitions, error histograms, and regression analysis to confirm the effectiveness of the suggested approach. Additionally, the absolute error values confirm the accuracy of the designed approach.
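The underlying two-class mosquito dynamics can be sketched with a simple Euler integration; the equations below are a generic logistic-competition stand-in for illustration only, not the paper's exact fractional FWIM, and all parameter values are hypothetical:

```python
def simulate(phi_w=0.6, phi_u=0.7, mu=0.1, K=1000.0, steps=5000, dt=0.01):
    """Euler integration of a simplified two-class mosquito model
    (w: Wolbachia-infected, u: uninfected). Cytoplasmic incompatibility is
    crudely modeled by scaling uninfected births by the factor u / (w + u)."""
    w, u = 200.0, 800.0
    for _ in range(steps):
        n = w + u
        dw = phi_w * w * (1.0 - n / K) - mu * w
        du = phi_u * u * (u / n) * (1.0 - n / K) - mu * u
        w += dt * dw
        u += dt * du
    return w, u
```

A neural-network surrogate such as BRB-NN would be trained on trajectories generated this way (with a fractional-derivative solver in place of plain Euler).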
Huaixin LI, Changgen YAN, Jiale XIE et al.
Objective: Pile-soil interaction plays a critical role in slope support engineering. Since the contact surface represents the weakest link in the system, analyzing the influence of the soil shear area on the mechanical properties and the constitutive model of the contact surface between soil and structure enhances the understanding of pile-soil interactions. Methods: Three sets of ring-shaped samples are initially cast to evaluate the impact of the soil shear area on the interface strength characteristics between soil and structure. Each sample has a height of 1.00 cm, an outer diameter of 6.12 cm, and inner diameters of 0 cm, 3.50 cm, and 4.98 cm, respectively. Subsequently, corresponding segments of 300-mesh sandpaper are adhered to the sample surfaces using a robust adhesive. The remolded soil is then dried, pulverized, sieved through a 2 mm mesh, and adjusted to a moisture content of 20%. It is cured for 24 hours prior to sample preparation. Different soil-structure interface samples with varying soil shear areas are then fabricated using specially designed equipment, and shear tests are conducted using the electric ZLB-1 strain-controlled direct shear apparatus manufactured by Nanjing Soil Instrument Co., Ltd. The test employs the fast shear method, with a shear rate of 0.8 mm/min and a shear displacement of 7 mm. During the tests, the soil shear area ratio (ρ) at the interface between soil and structure is controlled at 0, 0.33, 0.66, and 1.00, with normal stresses of 100, 200, 300, and 400 kPa, respectively. Results and Discussions: The peak strength of the soil-structure interface increases linearly with increasing normal stress and soil shear area ratio. The shear area ratio of soil significantly influences the stress-strain curve of the sample. When the soil shear area ratio is ρ = 0 or ρ = 1.00, the shear stress-displacement curve of the sample exhibits a hardening behavior.
Conversely, when the shear area ratio is 0 < ρ < 1.00, the curve demonstrates a softening behavior. This primarily occurs because the soil strength exceeds the interface strength between the soil and the structure. Under various soil shear area conditions, the strength at the soil-structure interface initially derives from the soil itself. Once the soil's shear strength reaches its maximum, the sample's shear stress is substantially reduced, exhibiting a stress-softening phenomenon. As the soil shear area ratio at the soil-structure interface gradually increases, the shear strength at the interface approaches that of the soil. Consequently, the interface effect leads to a decrease in shear strength compared to that of the soil shear surface. The shear strength of the soil shear surface primarily arises from the interactions between soil particles, including rotation, interlocking, and biting. In contrast, the shear strength at the soil-structure interface comprises two components: one part is generated by the friction, interlocking, and biting between the soil particles and the structure's surface, while the other arises from the interactions among soil particles near the shear zone on the structure's surface. During the shear process of the specimen, the proportion of shear strength is contributed by inter-particle shear resistance within the soil, and the interface shear resistance undergoes dynamic changes, exhibiting variability in the shear strength mechanism at the soil-structure interface. The total damage to the soil shear surface results from loading damage, whereas the total damage at the soil-structure interface is divided into equivalent initial damage, loading damage, and coupling damage caused by the interaction of the two. The smaller the soil shear area ratio ρ, the greater the equivalent initial damage at the soil-structure interface.
By establishing a damage evolution relationship between the soil shear surface and the soil-structure interface, the latter is equated to the soil shear surface with initial damage, describing the influence of soil shear area on the strength characteristics of the interface. Based on the assumption that the strength of both the soil-structure interface and the soil shear surface follows a two-parameter Weibull probability distribution during the shear process of the sample, a damage strength model of the soil-structure interface based on equivalent damage is proposed using statistical damage theory. This model primarily consists of the fitting parameter B, which controls peak strength, and the fitting parameter C, which controls softening characteristics. Compared to the soil shear plane, the pre-set interface leads to significant changes in pores and microcracks during the shear process of the soil-structure interface. Consequently, the total damage to the soil-structure interface is considerably higher than that of the soil shear plane initially, and the macroscopic performance is that the strength of the soil-structure interface is significantly lower than that of the soil shear plane. The proposed model is validated by comparing it with experimental data, demonstrating that it accurately represents both softening and hardening stress-strain behaviors at the soil-structure interface. Conclusions: This study explores how the soil shear area influences the mechanical properties of the soil-structure interface and proposes a zero-thickness interface model to predict this behavior, supported by experimental evidence. This model effectively fits the nonlinear relationship at the soil-structure interface under different soil shear area ratios, making it suitable for programmed calculations in finite element software.
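The two-parameter damage law can be illustrated with one common Weibull-based functional form; B and C play the roles described above, but the exact expression and parameter values here are illustrative assumptions, not the paper's fitted model:

```python
import math

def shear_stress(delta, k=100.0, B=4.0, C=1.5):
    """Statistical-damage stress-displacement law for the interface:
        tau(delta) = k * delta * exp(-(delta / B) ** C)
    where the damage D = 1 - exp(-(delta / B) ** C) follows a two-parameter
    Weibull distribution; B sets the peak strength, C the softening rate."""
    return k * delta * math.exp(-((delta / B) ** C))

# Stress over the 0-7 mm shear displacement range used in the tests
curve = [shear_stress(0.1 * i) for i in range(71)]
```

With these parameters the curve rises to a peak and then softens, the behavior reported for intermediate shear area ratios (0 < ρ < 1.00).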
Johannes Schleiss, Aditya Johri
In this practice paper, we propose a framework for integrating AI into disciplinary engineering courses and curricula. The use of AI within engineering is an emerging but growing area and the knowledge, skills, and abilities (KSAs) associated with it are novel and dynamic. This makes it challenging for faculty who are looking to incorporate AI within their courses to create a mental map of how to tackle this challenge. In this paper, we advance a role-based conception of competencies to assist disciplinary faculty with identifying and implementing AI competencies within engineering curricula. We draw on prior work related to AI literacy and competencies and on emerging research on the use of AI in engineering. To illustrate the use of the framework, we provide two exemplary cases. We discuss the challenges in implementing the framework and emphasize the need for an embedded approach where AI concerns are integrated across multiple courses throughout the degree program, especially for teaching responsible and ethical AI development and use.
Bohui Zhang, Valentina Anita Carriero, Katrin Schreiberhuber et al.
Ontology engineering (OE) in large projects poses a number of challenges arising from the heterogeneous backgrounds of the various stakeholders, domain experts, and their complex interactions with ontology designers. This multi-party interaction often creates systematic ambiguities and biases in the elicitation of ontology requirements, which directly affect the design and evaluation, and may jeopardise the target reuse. Meanwhile, current OE methodologies strongly rely on manual activities (e.g., interviews, discussion pages). After collecting evidence on the most crucial OE activities, we introduce OntoChat, a framework for conversational ontology engineering that supports requirement elicitation, analysis, and testing. By interacting with a conversational agent, users can steer the creation of user stories and the extraction of competency questions, while receiving computational support to analyse the overall requirements and test early versions of the resulting ontologies. We evaluate OntoChat by replicating the engineering of the Music Meta Ontology, and collecting preliminary metrics on the effectiveness of each component from users. We release all code at https://github.com/King-s-Knowledge-Graph-Lab/OntoChat.
Jinqi Luo, Tianjiao Ding, Kwan Ho Ryan Chan et al.
Large Language Models (LLMs) are being used for a wide variety of tasks. While they are capable of generating human-like responses, they can also produce undesirable output including potentially harmful information, racist or sexist language, and hallucinations. Alignment methods are designed to reduce such undesirable outputs via techniques such as fine-tuning, prompt engineering, and representation engineering. However, existing methods face several challenges: some require costly fine-tuning for every alignment task; some do not adequately remove undesirable concepts, failing alignment; some remove benign concepts, lowering the linguistic capabilities of LLMs. To address these issues, we propose Parsimonious Concept Engineering (PaCE), a novel activation engineering framework for alignment. First, to sufficiently model the concepts, we construct a large-scale concept dictionary in the activation space, in which each atom corresponds to a semantic concept. Given any alignment task, we instruct a concept partitioner to efficiently annotate the concepts as benign or undesirable. Then, at inference time, we decompose the LLM activations along the concept dictionary via sparse coding, to accurately represent the activations as linear combinations of benign and undesirable components. By removing the latter ones from the activations, we reorient the behavior of the LLM towards the alignment goal. We conduct experiments on tasks such as response detoxification, faithfulness enhancement, and sentiment revising, and show that PaCE achieves state-of-the-art alignment performance while maintaining linguistic capabilities.
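The decompose-and-remove idea behind PaCE can be illustrated in a toy orthogonal special case; PaCE itself uses sparse coding over a large non-orthogonal concept dictionary, so the function below is a simplified sketch with hypothetical names:

```python
def remove_concepts(activation, dictionary, undesirable):
    """Decompose an activation vector along an (assumed orthonormal) concept
    dictionary and rebuild it without the undesirable atoms."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    cleaned = list(activation)
    for name, atom in dictionary.items():
        if name in undesirable:
            coef = dot(activation, atom)  # component along this concept atom
            cleaned = [c - coef * a for c, a in zip(cleaned, atom)]
    return cleaned
```

Removing only the undesirable components leaves the benign directions of the activation untouched, which is the property that preserves linguistic capability.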
Johannes Schleiss, Aditya Johri, Sebastian Stober
Building up competencies in working with data and tools of Artificial Intelligence (AI) is becoming more relevant across disciplinary engineering fields. While the adoption of tools for teaching and learning, such as ChatGPT, is garnering significant attention, integration of AI knowledge, competencies, and skills within engineering education is lacking. Building upon existing curriculum change research, this practice paper introduces a systems perspective on integrating AI education within engineering through the lens of a change model. In particular, it identifies core aspects that shape AI adoption on a program level as well as internal and external influences using existing literature and a practical case study. Overall, the paper provides an analysis frame to enhance the understanding of change initiatives and builds the basis for generalizing insights from different initiatives in the adoption of AI in engineering education.
Isaiah Lahr, Saghir Alfasly, Peyman Nejat et al.
Searching for similar images in archives of histology and histopathology images is a crucial task that may aid in patient matching for various purposes, ranging from triaging and diagnosis to prognosis and prediction. Whole slide images (WSIs) are highly detailed digital representations of tissue specimens mounted on glass slides. Matching WSI to WSI can serve as the critical method for patient matching. In this paper, we report an extensive analysis and validation of four search methods, bag of visual words (BoVW), Yottixel, SISH, and RetCCL, together with some of their potential variants. We analyze their algorithms and structures and assess their performance. For this evaluation, we utilized four internal datasets (1269 patients) and three public datasets (1207 patients), totaling more than 200,000 patches from 38 different classes/subtypes across five primary sites. Certain search engines, for example BoVW, exhibit notable efficiency and speed but suffer from low accuracy. Conversely, search engines like Yottixel demonstrate efficiency and speed, providing moderately accurate results. Recent proposals, including SISH, display inefficiency and yield inconsistent outcomes, while alternatives like RetCCL prove inadequate in both accuracy and efficiency. Further research is imperative to address the dual aspects of accuracy and minimal storage requirements in histopathological image search.
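The BoVW retrieval idea can be sketched in a few lines: each patch is mapped to a codeword id, a slide becomes a histogram over the codebook, and slides are compared by cosine similarity (a toy illustration, not any of the evaluated engines):

```python
import math
from collections import Counter

def bovw_similarity(patch_words_a, patch_words_b):
    """Cosine similarity between two slides represented as bags of visual
    words (each patch already mapped to a codeword id)."""
    ha, hb = Counter(patch_words_a), Counter(patch_words_b)
    dot = sum(ha[w] * hb[w] for w in set(ha) | set(hb))
    na = math.sqrt(sum(v * v for v in ha.values()))
    nb = math.sqrt(sum(v * v for v in hb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Histogram comparison like this is fast and compact, which matches the reported speed of BoVW; its low accuracy comes from discarding spatial and fine-grained feature information.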
Yuan Huang, Yinan Chen, Xiangping Chen et al.
The rapid development of deep learning techniques, improved computational power, and the availability of vast training data have led to significant advancements in pre-trained models and large language models (LLMs). Pre-trained models based on architectures such as BERT and Transformer, as well as LLMs like ChatGPT, have demonstrated remarkable language capabilities and found applications in software engineering. Software engineering tasks can be divided into many categories, among which generative tasks are of the most concern to researchers; pre-trained models and LLMs possess powerful language representation and contextual awareness capabilities, enabling them to leverage diverse training data and adapt to generative tasks through fine-tuning, transfer learning, and prompt engineering. These advantages make them effective tools for generative tasks, in which they have demonstrated excellent performance. In this paper, we present a comprehensive literature review of generative tasks in SE using pre-trained models and LLMs. We categorize SE generative tasks based on software engineering methodologies and summarize the advanced pre-trained models and LLMs involved, as well as the datasets and evaluation metrics used. Additionally, we identify key strengths, weaknesses, and gaps in existing approaches, and propose potential research directions. This review aims to provide researchers and practitioners with an in-depth analysis of, and guidance on, the application of pre-trained models and LLMs to generative tasks within SE.
Davide Slaghenaufi, Giovanni Luzzini, Matteo Borgato et al.
In this work, the aromatic characterization of commercially available Prosecco wines with a price range between EUR 7 and 13 was carried out. These wines came from three different areas of origin: Valdobbiadene, Asolo and Treviso. Seventy volatile compounds were identified and quantified in the wines. Quantitatively, the wines were mainly characterized by compounds of fermentation origin (alcohols, acids, esters), and C6-alcohols, and to a lesser extent, terpenes, low molecular weight volatile sulfur compounds (VSC), and benzenoids. To determine their impact on the aroma of Prosecco wine, the respective OAVs were calculated. The molecules with higher OAV were ethyl hexanoate, isoamyl acetate, and β-damascenone. More generally, esters, responsible for fruity notes, seemed to play a major role in the aroma of Prosecco wine. Investigation into the possible effect of different production zones indicated 16 significantly different compounds accounting for differences between the various areas of origin of the wines, being mostly VSC, esters and C6-alcohols. A sensory evaluation through a sorting task highlighted the formation of clusters; wine samples were divided into two main groups partially attributable to the areas of origin. From a chemical point of view, cluster A was richer in esters, while cluster B had, on average, higher concentrations of compounds associated with wine aging such as cyclic terpenes, norisoprenoids (TDN and vitispirane), and VSC.
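The OAV computation referred to above is a simple ratio of a compound's concentration to its odor perception threshold; the numbers in the example are placeholders, not the paper's measured values:

```python
def odor_activity_values(concentrations, thresholds):
    """OAV = concentration / odor perception threshold (same units for both).
    Compounds with OAV > 1 are considered likely aroma contributors."""
    return {name: concentrations[name] / thresholds[name]
            for name in concentrations}
```

Ranking compounds by OAV rather than raw concentration is what allows low-abundance but low-threshold molecules such as β-damascenone to emerge as key aroma drivers.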
Ana Paula Jesus, Marta Ferreira Dias, Margarida Coelho
This paper explores the correlation between respondents' concerns regarding climate change, their eagerness to adopt an alternative fuel vehicle (AFV), and their responsiveness to incentives. Seen as a solution for cleaner mobility and greenhouse gas reduction in urban areas globally, AFVs still hold a modest market share in Europe. Among many reasons, the purchase price seems to be one of the most challenging to overcome, and incentives are considered a way to mitigate this price barrier. The results of a survey of 444 respondents carried out by the authors led them to conclude that participants agree that AFVs contribute to tackling climate change. They also deduced that the vehicle price represents an obstacle for lower-income households. Furthermore, the study revealed that the latter are less prone to buy an alternative fuel vehicle than higher-income families (59% against 80%). The authors also inferred that, generally, households are more receptive to incentives or benefits based on up-front discounts or exemptions that directly impact price and immediate savings, such as tax exemptions (value-added tax and circulation tax), fuel discounts, and purchase incentives. However, some differences were observed between income segments. For instance, the reduction or exemption of loan interest is among the most popular incentives for lower incomes, whilst higher incomes favour scrappage and non-financial incentives. Finally, in line with other studies, as upper incomes are less dependent on incentives and benefits to carry out the purchase, the authors put forward a differential and progressive approach for incentive instruments targeting lower incomes, allowing broader and more equitable access to low-carbon technology.
Hrvoje Belani, Petar Solic, Toni Perkovic
Ontologies serve as one of the formal means to represent and model knowledge in computer science, electrical engineering, systems engineering, and other related disciplines. Within requirements engineering, ontologies may be used for the formal representation of system requirements. In the Internet of Things (IoT), ontologies may be used to represent sensor knowledge and describe the semantics of acquired data. Designing an ontology comprehensive enough, with an appropriate level of knowledge expressiveness, to serve multiple purposes, from system requirements specification to modeling knowledge based on data from IoT sensors, is one of the great challenges. This paper proposes an approach towards ontology-based requirements engineering for well-being, aging and health supported by the Internet of Things. The ontology design does not aim at creating a new ontology, but at extending an appropriate existing one, SAREF4EHAW, in order to align it with well-being, aging and health concepts and to structure the knowledge within the domain. Other contributions include a conceptual formulation of Well-Being, Aging and Health and a related taxonomy, as well as the concept of One Well-Being, Aging and Health. New attributes and relations are proposed for the ontology extension, along with an updated list of use cases and particular ontological requirements not covered by the original ontology. Future work envisions the full specification of the new ontology extension, as well as structuring system requirements and sensor measurement parameters to follow description logic.
Page 18 of 404,680