M. Hobday
Results for "Industrial engineering. Management engineering"
Showing 20 of ~11,151,169 results · from DOAJ, arXiv, Semantic Scholar, CrossRef
Antonio Ramón Gómez García, Francisco Luis Rivas Flor
Occupational accident injuries remain a significant problem in Ecuador's construction sector, especially in provinces with high construction activity and large working populations. This research seeks to identify the causes of the differences in accident incidence between Guayas and Pichincha. Using data from 2014 to 2023, age-standardized incidence rates (ASIR) and incidence rate ratios (IRR) were calculated. In addition, a questionnaire was designed and administered to explore differences between experts (Mann-Whitney U test; Cohen's Kappa index). The results show that Guayas presents higher ASIRs and double the IRR compared with Pichincha. Experts from Guayas identified macro-level factors as predominant, whereas those in Pichincha focused on micro-level factors. No significant differences were found at the meso level. The disparities may stem from uneven enforcement of regulations and from cultural attitudes toward safety. Strengthening labor inspection in Guayas and conducting national studies for a broader understanding are suggested.
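A minimal sketch of the abstract's two summary statistics, a directly age-standardized incidence rate (ASIR) and an incidence rate ratio (IRR). The age bands, case counts, person-years, and standard-population weights below are invented for illustration; the paper's actual data and standardization scheme are not reproduced here.

```python
# Hypothetical direct age standardization: weighted sum of
# age-specific rates against a standard population.

def asir(cases, person_years, std_weights):
    """Age-standardized incidence rate via direct standardization."""
    rates = [c / py for c, py in zip(cases, person_years)]
    return sum(r * w for r, w in zip(rates, std_weights))

# Three illustrative age bands; standard weights sum to 1.
weights = [0.4, 0.4, 0.2]
guayas = asir([120, 200, 80], [10_000, 12_000, 5_000], weights)
pichincha = asir([60, 90, 40], [11_000, 13_000, 6_000], weights)
irr = guayas / pichincha  # rate ratio between the two provinces
print(f"ASIR Guayas={guayas:.5f}, Pichincha={pichincha:.5f}, IRR={irr:.2f}")
```

The IRR is then simply the ratio of the two standardized rates, which is what allows a "double the IRR" style comparison between provinces.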
Francisco Javier Luque-Hernández, Sergio Aquino-Britez, Josefa Díaz-Álvarez et al.
Evolutionary algorithms are extensively used to solve optimisation problems. However, it is important to consider and reduce their energy consumption, bearing in mind that programming languages also significantly affect energy efficiency. This research work compares the execution of four frameworks—ParadisEO (C++), ECJ (Java), DEAP and Inspyred (Python)—running on two different architectures: a laptop and a server. The study follows a design that combines three population sizes (2^6, 2^10, 2^14 individuals) and three crossover probabilities (0.01, 0.2, 0.8) applied to four benchmarks (OneMax, Sphere, Rosenbrock and Schwefel). This work makes a relevant methodological contribution by providing a consistent implementation of the metric η = fitness/kWh.
This metric has been systematically applied across the four frameworks, thereby establishing a standardized and replicable protocol for evaluating the energy efficiency of evolutionary algorithms. The CodeCarbon software was used to estimate energy consumption, measured via RAPL counters. This unified metric also indicates algorithmic productivity. The experimental results show that the server speeds up the number of generations by a factor of approximately 2.5, but its energy consumption is four to seven times higher. Therefore, on average, the energy efficiency of the laptop is five times higher. The results support two conclusions: computing power does not guarantee sustainability, and population size is a key factor in balancing solution quality and energy use.
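A minimal sketch of the paper's efficiency metric η = fitness/kWh. The fitness and energy figures below are invented; the paper estimates energy with CodeCarbon over RAPL counters, which is not reproduced here.

```python
# Hypothetical computation of the unified metric η = fitness / kWh.

def efficiency(best_fitness: float, energy_kwh: float) -> float:
    """Fitness obtained per kilowatt-hour consumed (η)."""
    return best_fitness / energy_kwh

# Illustrative comparison: a faster machine can still lose on η
# if it draws proportionally more energy, as in the abstract's
# laptop-vs-server finding.
laptop = efficiency(best_fitness=1000.0, energy_kwh=0.02)
server = efficiency(best_fitness=1000.0, energy_kwh=0.10)
print(laptop / server)  # → 5.0
```

Because η normalizes solution quality by energy rather than time, a 2.5× speedup is outweighed by a 4–7× rise in consumption, which is exactly the trade-off the abstract reports.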
Long Chu
This study proposes a method for the analysis and optimization of low-carbon sports behavior patterns based on deep learning. Combining a sports behavior recognition algorithm with implicit label supervision and a module for extracting key individual characteristics, the method effectively integrates spatial and temporal features, significantly enhancing the ability to recognize complex movement patterns. The study addresses the need for sustainable sports analytics, advancing low-carbon behavior recognition in professional basketball. Training and testing on the NCAA and UCF-BBall datasets show that the proposed model achieves 89.3% and 85.7% accuracy in sports behavior identification, respectively. The proposed method excels in identifying various movement patterns in basketball games, validating its effectiveness in recognizing sports behavior patterns, improving athletic efficiency, and reducing unnecessary energy consumption.
Benedikt Dornauer, Michael Felderer, Mehrdad Saadatmand et al.
Modern software systems undergo frequent updates, continuously evolving with new versions and variants to offer new features, improve functionality, and expand usability. Given the rapid pace of software evolution, organizations require effective tools and methods to mitigate the challenges associated with these changes, also called deltas. To address these challenges, the international SmartDelta Project joined industry and academia to develop and test solutions for incremental development and quality assurance. This paper provides insights into the SmartDelta project achievements and highlights one main contribution: the SmartDelta Methodology, a domain-unspecific concept for delta management in incremental software engineering. This methodology enables companies to identify gaps in their continuous engineering environment across six stages and helps to discover new tools in various technical areas. Additionally, the paper presents seven selected tools at different stages of the methodology.
Bertrand Meyer
Vibe coding, the much-touted use of AI techniques for programming, faces two overwhelming obstacles: the difficulty of specifying goals ("prompt engineering" is a form of requirements engineering, one of the toughest disciplines of software engineering); and the hallucination phenomenon. Programs are only useful if they are correct or very close to correct. The solution? Combine the creativity of artificial intelligence with the rigor of formal specification methods and the power of formal program verification, supported by modern proof tools.
Fei-Lung Huang, Kai-Ying Chen, Wei-Hao Su
A smart city is an area where the Internet of Things is used effectively through sensors. The data used by a smart city can be collected through cameras, sensors, and similar devices. Intelligent video surveillance (IVS) systems integrate multiple networked cameras for automatic surveillance purposes. Such systems can analyze and monitor video data and perform automatic functions required by users. This study performed main path analysis (MPA) to explore the development trends of IVS research. First, relevant articles were retrieved from the Web of Science database. Next, MPA was performed to analyze development trends in relevant research, and g-index and h-index values were analyzed to identify influential journals. Cluster analysis was then performed to group similar articles, and Wordle was used to display the keywords of each group in word clouds. These keywords served as the basis for naming their corresponding groups. Data mining and statistical analysis yielded six major IVS research topics, namely video cameras, background modeling, closed-circuit television, multiple cameras, person re-identification, and privacy, security, and protection. These topics can boost the future innovation and development of IVS technology and contribute to smart transportation, smart cities, and other applications. According to the study results, predictions were made regarding developments in IVS research to provide recommendations for future research.
Claudia Cavallaro, Carolina Crespi, Vincenzo Cutello et al.
This paper introduces an agent-based model grounded in the ACO algorithm to investigate the impact of partitioning ant colonies on algorithmic performance. The exploration focuses on understanding the roles of group size and number within a multi-objective optimization context. The model consists of a colony of memory-enhanced ants (ME-ANTS) which, starting from a given position, must collaboratively discover the optimal path to the exit point within a grid network. The colony can be divided into groups of different sizes and its objectives are maximizing the number of ants that exit the grid while minimizing path costs. Three distinct analyses were conducted: an overall analysis assessing colony performance across different-sized groups, a group analysis examining the performance of each partitioned group, and a pheromone distribution analysis discerning correlations between temporal pheromone distribution and ant navigation. From the results, a dynamic correlation emerged between the degree of colony partitioning and solution quality within the ACO algorithm framework.
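A minimal pheromone-update sketch of the ACO mechanism the abstract builds on: all edges evaporate, then each ant deposits pheromone inversely proportional to its path cost, so cheaper paths are reinforced more strongly. This is illustrative only; the paper's ME-ANTS model additionally uses ant memory, colony partitioning into groups, and a grid exit objective, none of which are modeled here.

```python
# Hypothetical ACO pheromone update on an edge -> pheromone map.

def update_pheromone(tau, paths, rho=0.1, q=1.0):
    """Evaporate every edge by rho, then deposit q/cost along each path."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)          # evaporation
    for path, cost in paths:
        for edge in path:
            tau[edge] = tau.get(edge, 0.0) + q / cost  # deposit
    return tau

tau = {("a", "b"): 1.0, ("b", "c"): 1.0}
# Two ants: the cheaper path (cost 2) reinforces its edge more than
# the costlier one (cost 4).
paths = [([("a", "b")], 2.0), ([("b", "c")], 4.0)]
update_pheromone(tau, paths)
print(tau)
```

Partitioning the colony, as studied in the paper, amounts to running deposits for each group separately, which changes how quickly pheromone concentrates on good paths.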
Logan Murphy, Torin Viger, Alessio Di Sandro et al.
In critical software engineering, structured assurance cases (ACs) are used to demonstrate how key properties (e.g., safety, security) are supported by evidence artifacts (e.g., test results, proofs). ACs can also be studied as formal objects in themselves, such that formal methods can be used to establish their correctness. Creating rigorous ACs is particularly challenging in the context of software product lines (SPLs), wherein a family of related software products is engineered simultaneously. Since creating individual ACs for each product is infeasible, AC development must be lifted to the level of product lines. In this work, we propose PLACIDUS, a methodology for integrating formal methods and software product line engineering to develop provably correct ACs for SPLs. To provide rigorous foundations for PLACIDUS, we define a variability-aware AC language and formalize its semantics using the proof assistant Lean. We provide tool support for PLACIDUS as part of an Eclipse-based model management framework. Finally, we demonstrate the feasibility of PLACIDUS by developing an AC for a product line of medical devices.
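As a hedged illustration of what "formalizing AC semantics in a proof assistant" can look like, the Lean sketch below models an assurance case as a tree of claims supported by evidence, with each node carrying a presence condition over features to capture product-line variability. All names and definitions are hypothetical, not PLACIDUS's actual formalization.

```lean
-- Illustrative only: a variability-aware assurance-case tree.
abbrev Feature := String
-- A presence condition decides whether a node applies to a product,
-- given the product's selected features.
abbrev PresenceCond := List Feature → Bool

inductive AC where
  | evidence : String → PresenceCond → AC            -- evidence artifact
  | claim    : String → PresenceCond → List AC → AC  -- claim + support
```

Deriving the AC of one concrete product then corresponds to pruning every subtree whose presence condition evaluates to false for that product's feature selection.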
Julian Frattini, Michael Unterkalmsteiner, Davide Fucci et al.
Tools constitute an essential contribution to natural language processing for requirements engineering (NLP4RE) research. They are executable instruments that make research usable and applicable in practice. In this chapter, we first introduce a systematic classification of NLP4RE tools to improve the understanding of their types and properties. Then, we extend an existing overview with a systematic summary of 126 NLP4RE tools published between April 2019 and June 2023 to ease reuse and evolution of existing tools. Finally, we provide instructions on how to create, maintain, and disseminate NLP4RE tools to support a more rigorous management and dissemination.
Bradley P. Allen, Filip Ilievski
Knowledge engineering is the process of creating and maintaining knowledge-producing systems. Throughout the history of computer science and AI, knowledge engineering workflows have been widely used given the importance of high-quality knowledge for reliable intelligent agents. Meanwhile, the scope of knowledge engineering, as apparent from its target tasks and use cases, has been shifting, together with its paradigms such as expert systems, semantic web, and language modeling. The intended use cases and supported user requirements between these paradigms have not been analyzed globally, as new paradigms often satisfy prior pain points while possibly introducing new ones. The recent abstraction of systemic patterns into a boxology provides an opening for aligning the requirements and use cases of knowledge engineering with the systems, components, and software that can satisfy them best. This paper proposes a vision of harmonizing the best practices in the field of knowledge engineering by leveraging the software engineering methodology of creating reference architectures. We describe how a reference architecture can be iteratively designed and implemented to associate user needs with recurring systemic patterns, building on top of existing knowledge engineering workflows and boxologies. We provide a six-step roadmap that can enable the development of such an architecture, providing an initial design and outcome of the definition of architectural scope, selection of information sources, and analysis. We expect that following through on this vision will lead to well-grounded reference architectures for knowledge engineering, will advance the ongoing initiatives of organizing the neurosymbolic knowledge engineering space, and will build new links to the software architectures and data science communities.
Sergey Oktyabrinovich Gladkov
We propose a mathematical model of droplet evaporation based on the kinetic approach. The basic equation of the theory is obtained from the law of conservation of the full power of the vapour–liquid system, which has not been done before. We find the range of droplet sizes over which a droplet is stable and compare the results with the known traditional ones. We give numerical estimates of the critical size of the fine-dispersed phase up to which ordinary evaporation takes place (that is, for the Knudsen number Kn = l/R the inequality Kn ≪ 1 must hold, where l is the mean free path of a molecule and R is the droplet radius). We also give the optimal droplet size, the most effective from the point of view of technical use in extinguishing flammable oil transformers.
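A quick numerical check of the continuum criterion quoted in the abstract: ordinary evaporation requires Kn = l/R ≪ 1. The values below are assumptions for illustration (l ≈ 68 nm is a commonly quoted mean free path for air molecules at ambient conditions; the droplet radius is arbitrary).

```python
# Hypothetical Knudsen-number check for the continuum-evaporation regime.

def knudsen(mean_free_path_m: float, radius_m: float) -> float:
    """Kn = l / R, dimensionless."""
    return mean_free_path_m / radius_m

l = 68e-9   # molecular mean free path (m), assumed ambient-air value
R = 50e-6   # droplet radius (m), assumed
kn = knudsen(l, R)
print(f"Kn = {kn:.2e}")  # far below 1, so ordinary evaporation applies
```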
Esmat Momeni
Purpose: Identifying, accessing, and evaluating course content, along with the skills of extracting, reasoning, and inferring, lead to the organization and presentation of materials in teaching. The purpose of the current research is to enhance the educational content of Bruner's curriculum by incorporating information literacy skills and critical thinking. Method: The current research is fundamental in terms of its purpose and was conducted using the comparative qualitative content analysis method with a directional approach. Research data were collected in six stages through the study of library documents. The validity of the research rests on content validity and reliability, utilizing library texts and documents. Reliability is ensured through a detailed description of the steps and of the information analysis, as well as through the principle of verifiability, by maintaining documents related to the different stages of the research. Finally, the principle of accuracy is intended to convey the research results accurately and in detail. Findings: The research findings indicated that the ability to identify information needs and analyze effectively empowers learners to respond to both typical and atypical questions within the content. The skill of obtaining the required information and the ability to infer encourage learners to engage critically with the content, prompting them to problematize and build upon previous concepts at the start of the lesson. The ability to critique and evaluate information sources, along with inferential reasoning skills, can transform content from simple to complex by presenting new concepts in a prominent and specific manner. The skill of optimal utilization of information sources and the ability of inductive reasoning are essential for learners to comprehend concepts by identifying similarities and differences, promoting practical guessing based on the content.
The skill of complying with legal and ethical principles in information use, along with the skill of evaluation, fosters awareness in learners and lays the foundation for active participation, teamwork, self-direction, and independence. Conclusion: In this process, learners draw on the fourteen cases of the educational form of the curriculum, integrating the skills of information literacy and thinking. This integration helps them think, establish connections among pieces of information, and leads to greater understanding, deeper learning, and the assimilation of new content concepts. It is suggested to enhance the educational content of Bruner's curriculum by incorporating critical thinking skills and information literacy.
Bradley P. Allen, Filip Ilievski, Saurav Joshi
Knowledge engineering is the process of creating and maintaining knowledge-producing systems. Throughout the history of computer science and AI, knowledge engineering workflows have been widely used because high-quality knowledge is assumed to be crucial for reliable intelligent agents. However, the landscape of knowledge engineering has changed, presenting four challenges: unaddressed stakeholder requirements, mismatched technologies, adoption barriers for new organizations, and misalignment with software engineering practices. In this paper, we propose to address these challenges by developing a reference architecture using a mainstream software methodology. By studying the requirements of different stakeholders and eras, we identify 23 essential quality attributes for evaluating reference architectures. We assess three candidate architectures from recent literature based on these attributes. Finally, we discuss the next steps towards a comprehensive reference architecture, including prioritizing quality attributes, integrating components with complementary strengths, and supporting missing socio-technical requirements. As this endeavor requires a collaborative effort, we invite all knowledge engineering researchers and practitioners to join us.
L. Siddharth, Jianxi Luo
Aiming to support Retrieval-Augmented Generation (RAG) in the design process, we present a method to identify explicit engineering design facts, {head entity :: relationship :: tail entity}, from patented artefact descriptions. Given a sentence with a pair of entities (based on noun phrases) marked in a unique manner, our method extracts the relationship that is explicitly communicated in the sentence. For this task, we create a dataset of 375,084 examples and fine-tune language models for relation identification (token classification) and elicitation (sequence-to-sequence). The token classification approach achieves up to 99.7% accuracy. Upon applying the method to a domain of 4,870 fan system patents, we populate a knowledge base of over 2.93 million facts. Using this knowledge base, we demonstrate how Large Language Models (LLMs) are guided by explicit facts to synthesise knowledge and generate technical and cohesive responses when sought out for knowledge retrieval tasks in the design process.
Ioan Doroftei, Mircea Nitulescu, Doina Pisla et al.
The Joint International Conference of the 13th IFToMM International Symposium on Science of Mechanisms and Machines (SYROM 2022) and the XXV International Conference on Robotics (ROBOTICS 2022), https://syrom-robot.upt.ro, was organized by the Mechanical Engineering, Mechatronics and Robotics Department at the Mechanical Engineering Faculty, “Gheorghe Asachi” Technical University of Iasi, Romania, with the support of the: Romanian Association for the Science of Mechanisms and Machines (ARoTMM), Robotics Society of Romania (SRR), and Technical Sciences Academy of Romania (ASTR).
Faten Chaabane, Jalel Ktari, Tarek Frikha et al.
With the onset of the COVID-19 pandemic and the succession of its waves, the transmission of this disease and the number of deaths caused by it have been increasing. Despite the various vaccines, the COVID-19 virus is still contagious and dangerous for affected people. One of the remedies is precaution, and particularly social distancing. In the same vein, this paper proposes a remote voting system that must be secure, anonymous, irreversible, accessible, and simple to use. It allows voters to vote for their candidate without having to perform the operation on site. The system will be used for university elections, and particularly for student elections. We propose a platform based on a decentralized system that uses two blockchains communicating with each other: the public Ethereum blockchain and the private Quorum blockchain. The private blockchain is institution-specific; it sends the necessary data to the public blockchain, which manages different data related to the universities and the ministry. The system hashes data with the SHA-256 algorithm to provide both security and integrity. Motivated by the high energy consumption of blockchains and by recent performance improvements in low-power hardware, a test is performed on a low-power embedded platform, the Raspberry Pi 4, showing that the blockchain can be used with limited resources.
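A minimal sketch of the SHA-256 fingerprinting step the abstract mentions (note that SHA-256 is a hash, not encryption: it provides integrity, not confidentiality). The field names and payload layout are invented; the real system's ballot format and its Ethereum/Quorum interaction are not reproduced here.

```python
# Hypothetical ballot fingerprint using Python's standard hashlib.
import hashlib
import json

def ballot_digest(voter_id: str, candidate: str, nonce: str) -> str:
    """Deterministic SHA-256 digest of a canonicalized ballot payload."""
    payload = json.dumps(
        {"voter": voter_id, "candidate": candidate, "nonce": nonce},
        sort_keys=True,  # canonical key order so equal ballots hash equally
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

d1 = ballot_digest("v001", "alice", "n42")
d2 = ballot_digest("v001", "alice", "n42")
print(d1 == d2, len(d1))  # same input -> same 64-hex-char digest
```

Storing such digests on-chain lets anyone verify a ballot was not altered without revealing its plaintext, which is the integrity property the abstract appeals to.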
Boli Chen, Xiao Pan, Simos A. Evangelou
This paper develops energy management (EM) control for series hybrid electric vehicles (HEVs) that include an engine start-stop system (SSS). The objective of the control is to optimally split the energy between the sources of the powertrain and achieve fuel consumption minimization. In contrast to existing works, a fuel penalty is used to more realistically characterize SSS engine restarts, enabling more realistic design and testing of control algorithms. The paper first derives two important analytic results: a) analytic EM optimal solutions of fundamental and commonly used series HEV frameworks, and b) a proof of optimality of charge-sustaining operation in series HEVs. It then proposes a novel heuristic control strategy, the hysteresis power threshold strategy (HPTS), by amalgamating simple and effective control rules extracted from the suite of derived analytic EM optimal solutions. The decision parameters of the control strategy are few in number and freely tunable. The overall control performance can be fully optimized for different HEV parameters and driving cycles by a systematic tuning process, while also targeting charge-sustaining operation. The performance of HPTS is evaluated and benchmarked against existing methodologies, including dynamic programming (DP) and a recently proposed state-of-the-art heuristic strategy. The results show the effectiveness and robustness of HPTS and indicate its potential to serve as a benchmark strategy for high-fidelity HEV models, where DP is no longer applicable due to computational complexity.
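An illustrative hysteresis threshold rule, sketching the general idea behind a "hysteresis power threshold" strategy: the engine starts only above a high power-demand threshold and stops only below a low one, so it does not chatter on and off around a single set point (each restart costs a fuel penalty). This is not the paper's actual HPTS; the thresholds and rule shape are invented tuning parameters.

```python
# Hypothetical two-threshold (hysteresis) engine on/off rule.

def hpts_step(engine_on: bool, power_demand_kw: float,
              p_on: float = 30.0, p_off: float = 10.0) -> bool:
    """Return the next engine state given current state and demand."""
    if not engine_on and power_demand_kw > p_on:
        return True            # demand high: start the engine
    if engine_on and power_demand_kw < p_off:
        return False           # demand low: stop the engine
    return engine_on           # inside the hysteresis band: hold state

state = False
for demand in [5, 20, 35, 20, 8]:   # illustrative demand trace (kW)
    state = hpts_step(state, demand)
    print(demand, state)
```

The gap between `p_on` and `p_off` is what suppresses frequent restarts; tuning that gap per vehicle and driving cycle mirrors the systematic tuning process the abstract describes.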
Tavian Barnes, Ken Jen Lee, Cristina Tavares et al.
The traditional path to a software engineering career involves a post-secondary diploma in Software Engineering, Computer Science, or a related field. However, many software engineers take a non-traditional path to their career, starting from other industries or fields of study. This paper proposes a study on barriers faced by software engineers with non-traditional educational and occupational backgrounds, and possible mitigation strategies for those barriers. We propose a two-stage methodology, consisting of an exploratory study, followed by a validation study. The exploratory study will involve a grounded-theory-based qualitative analysis of relevant Reddit data to yield a framework around the barriers and possible mitigation strategies. These findings will then be validated using a survey in the validation study. Making software engineering more accessible to those with non-traditional backgrounds will not only bring about the benefits of functional diversity, but also serves as a method of filling in the labour shortages of the software engineering industry.
Manuel De Stefano, Fabiano Pecorelli, Dario Di Nucci et al.
Quantum computing is no longer only a scientific interest but is rapidly becoming an industrially available technology that can potentially overcome the limits of classical computation. Over the last years, all major companies have provided frameworks and programming languages that allow developers to create their quantum applications. This shift has led to the definition of a new discipline called quantum software engineering, which is called upon to define novel methods for engineering large-scale quantum applications. While the research community is successfully embracing this call, we notice a lack of systematic investigations into the state of the practice of quantum programming. Understanding the challenges that quantum developers face is vital to precisely define the aims of quantum software engineering. Hence, in this paper, we first mine all the GitHub repositories that make use of the most used quantum programming frameworks currently on the market and then conduct coding analysis sessions to produce a taxonomy of the purposes for which quantum technologies are used. Second, we conduct a survey study involving the contributors of the considered repositories, which aims to elicit the developers' opinions on the current adoption and challenges of quantum programming. On the one hand, the results highlight that the current adoption of quantum programming is still limited. On the other hand, there are many challenges that the software engineering community should carefully consider: these do not strictly pertain to technical concerns but also socio-technical matters.
Page 41 of 557559