Results for "Machine design and drawing"
Showing 20 of ~3,326,697 results · from DOAJ, CrossRef, Semantic Scholar
Hao Wang, Jundi Wang, Hong Chen et al.
This paper presents a dual-channel coupled radial magnetic field resolver (DCCRMFR). The excitation winding and signal winding of this resolver are arranged in an orthogonal-phase structure, and the number of turns and the distribution of the four-phase signal winding are designed accordingly. The rotor uses a double-wave magnetically conductive structure. The variable-reluctance mechanism between the stator and the rotor is derived analytically, establishing the feasibility of achieving variable reluctance by changing the coupling area. The inductance of the DCCRMFR is derived theoretically via the winding function method and combined with finite element simulation to obtain the inductance variation law and verify the correctness of the resolver design. Simulation analysis of the DCCRMFR output signal is then conducted to extract the total harmonic distortion (THD) of the envelope of the electromotive force (EMF) output from the signal winding. Taking THD as the optimization objective, an optimized DCCRMFR simulation model is obtained by analyzing the air-gap length between the stator and the rotor and the thickness ratio of the rotor. Finally, experimental measurements on a two-pole-pair DCCRMFR prototype are compared with simulation results to verify the correctness of the structural design and optimization.
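The optimization objective above, THD of the signal-winding EMF envelope, is a standard spectral quantity. As a hedged illustration (not the authors' code), a minimal NumPy sketch of estimating THD from an extracted envelope might look like this; the sampling rate, fundamental frequency, and test signal are all assumed for the example:

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=10):
    """Estimate total harmonic distortion of a periodic signal.

    THD = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def amp_at(f):
        # take the peak bin in a small window around the target frequency,
        # since windowing spreads energy over neighboring bins
        idx = np.argmin(np.abs(freqs - f))
        lo, hi = max(idx - 2, 0), idx + 3
        return spectrum[lo:hi].max()

    fund = amp_at(f0)
    harmonics = [amp_at(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(np.sum(np.square(harmonics))) / fund

# Example: a 50 Hz envelope with a small 3rd-harmonic component
fs = 10_000
t = np.arange(0, 1, 1 / fs)
envelope = np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 150 * t)
print(f"THD = {thd(envelope, fs, 50):.3f}")  # about 0.05
```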
Keisuke Tozuka, Bastien Poitrimol, Genki Sasaki et al.
Abstract This paper introduces an innovative integrated model designed to generalize texture models for enhanced tactile rendering in virtual environments. The integrated model comprises two regression layers. The first layer represents the relationship between probing velocity, force, and vibrational characteristics. The second layer models the correspondence between physically grounded parameters of the target texture (such as transient frequency, displacement wavelength, and displacement amplitude) and the first-layer regression model. We present extensive data analysis to validate the model's effectiveness. The results show that it achieves high similarity in spectral reproduction for reference data and moderate similarity for unmeasured textures. The model offers real-time performance, general applicability, and realism for various applications in tactile design and virtual content development.
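As a rough illustration of the two-layer idea (a sketch under assumptions, not the paper's model), the first layer can be any regressor from probing conditions to a vibration feature, and the second layer a regressor from texture parameters to the first layer's fitted weights. All data, ranges, and coefficients below are synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Layer 1 (per texture): (probing velocity, force) -> vibration amplitude.
# Hypothetical recorded interactions for one measured texture.
X1 = rng.uniform([0.01, 0.1], [0.3, 5.0], size=(200, 2))  # velocity (m/s), force (N)
y1 = 0.8 * X1[:, 0] + 0.2 * X1[:, 1] + rng.normal(0, 0.01, 200)
layer1 = LinearRegression().fit(X1, y1)
print("fitted layer-1 weights for one texture:", layer1.coef_)

# Layer 2 (across textures): texture parameters -> layer-1 coefficients.
# Hypothetical descriptors (wavelength, amplitude) paired with the layer-1
# weights that would be fitted for each of several measured textures.
texture_params = rng.uniform([0.1, 0.01], [2.0, 0.5], size=(30, 2))
layer1_coefs = np.column_stack([0.5 * texture_params[:, 0],
                                0.3 * texture_params[:, 1]])
layer1_coefs += rng.normal(0, 0.01, (30, 2))
layer2 = LinearRegression().fit(texture_params, layer1_coefs)

# Predict layer-1 weights for an *unmeasured* texture, enabling rendering
# without a direct recording of that texture.
new_texture = np.array([[1.2, 0.2]])
print("predicted layer-1 weights:", layer2.predict(new_texture)[0])
```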
Balasubramanian Sriram, Saeed Shirazi, Christos Kalyvas et al.
This study presents a machine learning-enhanced optimization framework for proton exchange membrane fuel cells (PEMFCs), designed to address critical challenges in dynamic load adaptation and thermal management for automotive applications. A high-fidelity model of a 65-cell stack (45 V, 133.5 A, 6 kW) is developed in MATLAB/Simulink, integrating four core subsystems: PID-controlled fuel delivery, humidity-regulated air supply, an electrochemical-thermal stack model (incorporating Nernst voltage and activation, ohmic, and concentration losses), and a 97.2%-efficient SiC MOSFET-based DC/DC boost converter. The framework employs the NSGA-II algorithm to optimize key operational parameters (membrane hydration λ = 12–14, cathode stoichiometry λO₂ = 1.5–3.0, and cooling flow rate 0.5–2.0 L/min) to balance efficiency, voltage stability, and dynamic performance. The optimized model achieves a 38% reduction in model-data discrepancies (RMSE < 5.3%) compared to experimental data from the Toyota Mirai, and demonstrates a 22% improvement in dynamic response, recovering from 0 to 100% load steps within 50 ms with a voltage deviation of less than 0.15 V. Peak performance includes 77.5% oxygen utilization at 250 L/min air flow (1.1236 V/cell) and 99.89% hydrogen utilization at a nominal voltage of 48.3 V, yielding a peak power of 8112 W at 55% stack efficiency. Furthermore, fuzzy-PID control of fuel ramping (50–85 L/min in 3.5 s) and thermal management (ΔT < 1.5 °C via 1.0–1.5 L/min cooling) reduces computational overhead by 29% in the resulting digital twin platform. The framework demonstrates compliance with ISO 14687-2 and SAE J2574 standards, offering a scalable and efficient solution for next-generation fuel cell electric vehicles (FCEVs) aligned with global decarbonization targets, including the EU's 2035 CO₂ neutrality mandate.
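For readers unfamiliar with NSGA-II in this setting, a minimal sketch using the pymoo library might look as follows; the decision-variable bounds mirror the ranges quoted in the abstract, but the two objective functions are toy placeholders, not the paper's stack model:

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class FuelCellProblem(ElementwiseProblem):
    """Toy stand-in for the PEMFC trade-off: maximize efficiency (as a
    negated minimization objective) while minimizing voltage deviation."""

    def __init__(self):
        # decision variables: membrane hydration, cathode stoichiometry,
        # coolant flow rate -- bounds taken from the abstract's ranges
        super().__init__(n_var=3, n_obj=2,
                         xl=np.array([12.0, 1.5, 0.5]),
                         xu=np.array([14.0, 3.0, 2.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        hydration, stoich, coolant = x
        efficiency = 0.4 + 0.01 * hydration + 0.02 * stoich       # placeholder
        v_deviation = abs(stoich - 2.0) * 0.1 + abs(coolant - 1.2) * 0.05
        out["F"] = [-efficiency, v_deviation]

res = minimize(FuelCellProblem(), NSGA2(pop_size=40), ("n_gen", 50), seed=1)
print(res.F[:5])  # a sample of the resulting Pareto front
```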
Bakly Ali AK., Hadi Nizar J., Krynke Marek et al.
This study highlights the growing collaboration between materials, engineering, and pharmaceutical research, enhancing drug efficacy and reducing side effects by optimizing polymer carriers for stability and compatibility. Polyvinyl alcohol (PVA), Rhumix, glycerol, and citric acid were used as a carrier, drug, plasticizer, and stabilizer to create polymer-drug composites at varying temperatures. A twin-screw extruder was used to mix, melt, and extrude 60% PVA, 30% Rhumix, 9.9% glycerol, and 0.1% citric acid at (160, 170, and 180)°C with a screw speed of 50 rpm. DSC, FTIR, and optical/digital microscopy techniques characterized the composites. Results showed smooth extrusion of the PVA/drug composites, with the addition of plasticizers resulting in lower Tg and Tm. The extruded compounds exhibited varying colours and surface properties. The bonding values remained stable, indicating no significant interaction. DSC curves revealed two Tg values, indicating compatibility and immiscibility. Microscope images demonstrated improved drug dispersion at 160°C. Notably, the selected components, particularly PVA and glycerol, are widely recognised for their biocompatibility and low toxicity, as confirmed by previous studies, which support the potential suitability of these compounds for biomedical applications.
Dumitru Alexandru BODISLAV, Raluca Iuliana GEORGESCU
This paper explores the growing intersection between machine learning systems and human neuroeconomic processes, examining how AI-driven environments influence decision-making at the neural level. Drawing on insights from computational neuroscience, cognitive psychology, and neuroeconomics, the study outlines how reinforcement learning architectures employed in AI align structurally with the brain's valuation systems. It highlights the modulation of neural circuits – such as the ventral striatum, prefrontal cortex, and anterior cingulate cortex – through algorithmic feedback, personalization, and reward optimization mechanisms. The paper argues that prolonged engagement with predictive technologies can shape cognitive autonomy, reward sensitivity, and exploratory behaviour, with potential long-term implications for cognitive sovereignty. To address these concerns, the authors propose neuroadaptive and ethically aligned AI design principles that preserve decision-making agency, cognitive flexibility, and mental wellbeing. The study contributes to the emerging field of neuroeconomic design and suggests a paradigm shift toward human-compatible AI systems.
Dang Dinh Son
Abstract Background Parkinson's disease (PD) is a progressive neurodegenerative disorder that severely affects patients' quality of life. Early and accurate diagnosis is essential for timely intervention. Traditional diagnostic approaches rely heavily on subjective clinical evaluation. Objective This study aims to evaluate various machine learning (ML) models for PD detection using spiral drawing data, focusing on preprocessing, feature extraction, and statistical validation. Methods The dataset, sourced from the UCI Machine Learning Repository, includes 122 samples (61 PD patients and 61 controls). Five ML classifiers were evaluated: Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbors (KNN), Logistic Regression (LR), and Gradient Boosting (XGBoost). Preprocessing included normalization, Principal Component Analysis (PCA), and SelectKBest. Model evaluation was done using 10-fold cross-validation, hyperparameter tuning with GridSearchCV, and metrics such as accuracy, precision, recall, F1-score, ROC, and AUC. Results Gradient Boosting achieved the highest performance (91.2% accuracy, AUC 0.95). Dimensionality reduction and feature selection significantly enhanced model performance, particularly for SVM and KNN. Conclusion Advanced ML techniques, when combined with proper preprocessing, show significant promise in PD diagnosis using spiral drawing data. Future research should explore deep learning and multimodal integration.
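The preprocessing-plus-tuning workflow described here maps directly onto a scikit-learn pipeline. A minimal sketch, assuming synthetic stand-in data of the same shape as the study's (122 balanced samples) and showing only the SVM branch of the five classifiers:

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

# Stand-in data mimicking the abstract's shape: 122 samples, balanced classes.
X, y = make_classification(n_samples=122, n_features=20,
                           weights=[0.5, 0.5], random_state=42)

pipe = Pipeline([
    ("scale", StandardScaler()),  # normalization
    ("pca", PCA()),               # dimensionality reduction
    ("clf", SVC()),               # one of the five evaluated classifiers
])

grid = GridSearchCV(
    pipe,
    param_grid={"pca__n_components": [5, 10, 15],
                "clf__C": [0.1, 1, 10],
                "clf__kernel": ["rbf", "linear"]},
    cv=10,                        # 10-fold cross-validation
    scoring="roc_auc",
)
grid.fit(X, y)
print(grid.best_params_, f"AUC={grid.best_score_:.3f}")
```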
Ilia Panfilov, Dmitry Tikhonenko, Kirill Kravtsov et al.
This article discusses the principles of modeling the casting process for a "flywheel" part made of SCH20 cast iron. A detailed overview of the casting drawing is provided, along with the 3D model designed from this drawing. The machining allowances for the part are analyzed. The characteristics of the obtained alloy are presented, on the basis of which the technological yield of sound castings is calculated. The article concludes with the resulting casting yield.
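The technological yield mentioned here is conventionally the mass of the finished casting divided by the total metal poured (casting plus gating system and risers). A one-function sketch with hypothetical flywheel masses, for illustration only:

```python
def technological_yield(casting_mass_kg, gating_and_riser_mass_kg):
    """Yield = mass of the finished casting / total metal poured, in percent."""
    poured = casting_mass_kg + gating_and_riser_mass_kg
    return 100.0 * casting_mass_kg / poured

# Hypothetical flywheel numbers, not taken from the article.
print(f"{technological_yield(42.0, 14.0):.1f}%")  # 75.0%
```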
Shayla Lee, Wendy Ju
This research explores whether the interaction between adversarial robots and creative practitioners can push artists to rethink their initial ideas. It also explores how working with these robots may influence artists' views of machines designed for creative tasks or collaboration. Many existing robots developed for creativity and the arts focus on complementing creative practices, but what if robots challenged ideas instead? To begin investigating this, I designed UnsTable, a robot drawing desk that moves the paper while participants (N=19) draw to interfere with the process. This inquiry invites further research into adversarial robots designed to challenge creative practitioners.
Francesco Sovrano, Salvatore Sapienza, M. Palmirani et al.
On 21 April 2021, the European Commission proposed the first legal framework on Artificial Intelligence (AI) to address the risks posed by this emerging method of computation. The Commission proposed a Regulation known as the AI Act. The proposed AI Act covers not only machine learning but also long-established expert systems and statistical models. Under the proposed AI Act, new obligations are set to ensure transparency, lawfulness, and fairness. Their goal is to establish mechanisms that ensure quality at launch and throughout the whole life cycle of AI-based systems, thus providing the legal certainty that encourages innovation and investment in AI systems while preserving fundamental rights and values. A standardisation process is ongoing: several entities (e.g., ISO) and scholars are discussing how to design systems that comply with the forthcoming Act, and explainability metrics play a significant role. Specifically, the AI Act sets new minimum requirements of explicability (transparency and explainability) for the AI systems labelled as "high-risk" in Annex III. These requirements call for technical explanations that convey the right amount of information in a meaningful way. This paper investigates how such technical explanations can be deemed to meet the minimum requirements set by the law and expected by society. To answer this question, we propose an analysis of the AI Act aiming to understand (1) what specific explicability obligations are set and who shall comply with them, and (2) whether any metric for measuring the degree of compliance of such explanatory documentation could be designed. Moreover, by envisaging the legal (or ethical) requirements that such a metric should possess, we discuss how to implement them in a practical way. More precisely, drawing inspiration from recent advancements in the theory of explanations, our analysis proposes that metrics measuring the kind of explainability endorsed by the proposed AI Act shall be risk-focused, model-agnostic, goal-aware, intelligible, and accessible. We then discuss the extent to which these requirements are met by the metrics currently under discussion.
Somnath Lahiri, Jing Ren, Xianke Lin
Considerable research has been conducted in recent years on stereo depth estimation, advancing the traditional approach to the point where it competes well with other depth estimation methods despite a few remaining drawbacks. Substantial progress in accuracy and depth computation speed has been made over this period. Stereo depth estimation can be trained in various modes (supervised, self-supervised, or unsupervised) before deployment for real-time performance, with the choice depending on the application and the availability of training datasets. Deep learning has given stereo depth estimation new life in the form of enhanced accuracy and image quality, with some methods successfully reducing residual errors in stages. Depth estimation from a single RGB image, by contrast, is intricate because it is an ill-posed problem lacking geometric constraints and prone to ambiguity. Nevertheless, monocular depth estimation (MDE) has gained popularity in recent years, with appreciable improvements in depth map accuracy and computational efficiency. These gains stem largely from convolutional neural networks (CNNs) and other deep learning methods, which strengthen feature extraction and enhance depth map quality and MDE accuracy. Many recent MDE algorithms produce depth maps with better clarity and detail around edges and fine boundaries, helping to delineate thin structures. This paper reviews recent deep learning-based stereo and monocular depth prediction techniques, emphasizing the successes achieved so far, the challenges associated with them, and the developments that can be expected in the near future.
Jemala Marek
The current state of technological development shows that most inventions arise in response to specific circumstances, such as new business risks, changes in legislation, or crisis events. Reactions to new social trends, business models, problematic processes, or competing goods and services are also among the main drivers of technological innovation. But before these technologies are implemented, adequate regulations should be in place to prevent them from negatively impacting businesses or society. This study pursues two main practical research objectives: first, to analyze 100 leading technology companies to identify the main trends and issues in technology innovation management, as well as their links to the economy and society; and second, to identify the most important technological innovation trends and the corresponding macro-factors best suited to realistic innovation assessment. As a result of this study, the innovation progress of certain nations (China, the U.S., Japan, and the EU) was measured to a certain extent. The research focuses on depicting the key macro-factors that can characterize more complex technological innovation development and certain regional differences. A further aim of this study is to raise awareness of technological innovation and cooperation across countries. The research was carried out from 2022 to 2024.
Makoto Jinno, Ryosuke Nonoyama, Yasuteru Sakurai et al.
Abstract Polymerase chain reaction (PCR) is an effective method for diagnosing infectious diseases and has been the primary method throughout the novel coronavirus disease (COVID-19) pandemic. PCR testing (from specimen collection to result acquisition) involves sample pretreatment, nucleic acid extraction, and the PCR procedure. Automating the pretreatment process is crucial to mitigate the risk of infection for workers and to reduce the likelihood of misdiagnosis triggered by sample contamination, particularly when handling centrifuge tubes, cryopreservation tubes, and microtubes. Robotic systems have been engineered to automate cell culture and PCR-based diagnosis, but they are predominantly designed for screw-capped containers, leaving a notable gap in automation solutions for microtubes with press-type caps. To address this gap, we developed a versatile microtube capper/decapper system. At the same time, many manual operations using microtubes are routinely conducted in clinical tests and biological experiments. Compared with screw-type caps for centrifuge and cryopreservation tubes, press-type microtube caps present a considerably higher risk of the worker's fingers contacting the inside of the cap and/or generating airborne droplets. Despite these contamination and infection risks, which can compromise diagnostic/experimental accuracy and worker safety, devices for opening and closing microtube caps without direct contact remain lacking. Therefore, leveraging the technology from the versatile microtube capper/decapper system developed for laboratory automation, we created a manually operated microtube capper/decapper system tailored for personnel in clinical and biological laboratories. In this study, we first examined the required specifications and prerequisites for a manual microtube capper/decapper, clarifying the operating methods and procedures, operating environment, device size, accompanying functions, and related factors. Based on these specifications and preconditions, we carried out the mechanical and control design of a conceptual model, manufactured a prototype, and confirmed its basic functions and performance. Compliance with the required specifications and preconditions, and the usefulness of the proposed manual microtube capper/decapper, were validated through various experiments and demonstrations. With the proposed device, even small-scale operations, which are challenging to streamline, can be performed nearly as efficiently as fully manual operation. Although operation time was not reduced, the ability to open and close microtubes without manual contact is crucial for improving diagnostic and experimental accuracy and for reducing the burden on, and enhancing the safety of, laboratory personnel. Because microtubes are used in a wide range of clinical tests and biological experiments, we believe the proposed system can markedly reduce the workload for personnel across numerous clinical and biological laboratories.
Jingxue Bi, Yunjia Wang, Baoguo Yu et al.
Several Wireless Fidelity (WiFi) fingerprint datasets based on Received Signal Strength (RSS) have been shared for indoor localization, but they cannot meet all the demands of WiFi RSS-based localization. This work presents a supplementary open dataset for RSS-based WiFi indoor localization, called SODIndoorLoc, covering three multi-floor buildings. The dataset includes dense and uniformly distributed Reference Points (RPs), with the average distance between adjacent RPs smaller than 1.2 m. The locations and channel information of the pre-installed Access Points (APs) are summarized in SODIndoorLoc, and computer-aided design drawings of each floor are provided. SODIndoorLoc supplies nine training and five testing sheets. Four standard machine learning algorithms and their variants (eight in total) are explored to evaluate positioning accuracy; the best average positioning accuracy is about 2.3 m. SODIndoorLoc can therefore be treated as a supplement to UJIIndoorLoc with a consistent format. The dataset can be used for clustering, classification, and regression to compare the performance of different WiFi RSS-based indoor positioning applications, e.g., high-precision positioning, building and floor recognition, fine-grained scene identification, range model simulation, and rapid dataset construction.
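A typical baseline on such a fingerprint dataset is weighted k-nearest-neighbors regression from RSS vectors to coordinates. The sketch below uses random stand-in data (so the predicted position is meaningless) purely to show the workflow:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical fingerprint database: rows are reference points, columns are
# RSS values (dBm) from each access point; targets are 2-D coordinates (m).
rng = np.random.default_rng(0)
rss_train = rng.uniform(-90, -30, size=(500, 12))
xy_train = rng.uniform(0, 50, size=(500, 2))

# Distance-weighted kNN is a standard baseline for RSS fingerprinting.
knn = KNeighborsRegressor(n_neighbors=4, weights="distance")
knn.fit(rss_train, xy_train)

rss_query = rng.uniform(-90, -30, size=(1, 12))
print("estimated position (m):", knn.predict(rss_query)[0])
```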
Raphaël Gyory, David Restrepo Amariles, Gregory Lewkowicz et al.
Wentai Zhang, Joe Joseph, Quan Chen et al.
We present a new data generation method to facilitate automatic machine interpretation of 2D engineering part drawings. While such drawings are a common medium for clients to encode design and manufacturing requirements, the lack of computer support for interpreting them automatically forces part manufacturers to resort to laborious manual interpretation, which in turn severely limits processing capacity. Although recent advances in trainable computer vision methods may enable automatic machine interpretation, applying such methods to engineering drawings remains challenging due to a lack of labeled training data. As a step toward addressing this challenge, we propose a constrained data synthesis method that generates an arbitrarily large set of synthetic training drawings from only a handful of labeled examples. Our method is based on randomizing the dimension sets subject to two major constraints that ensure the validity of the synthetic drawings. The effectiveness of our method is demonstrated in the context of a binary component segmentation task with a proposed list of descriptors. An evaluation of several image segmentation methods trained on our synthetic dataset shows that our approach to data generation can boost segmentation accuracy and the generalizability of machine learning models to unseen drawings.
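As a loose illustration of constrained randomization (the paper's actual constraints and descriptors differ), one might jitter each labeled dimension while enforcing simple validity checks; the labels, jitter factor, and constraints below are all hypothetical:

```python
import random

def synthesize_dimension_set(base_dims, jitter=0.15):
    """Randomize a labeled dimension set while keeping the drawing plausible.

    The two constraints below stand in for the paper's validity constraints:
    (1) every dimension stays strictly positive, and (2) the hole diameter
    never exceeds the part width.
    """
    dims = {d["label"]: d["value"] * random.uniform(1 - jitter, 1 + jitter)
            for d in base_dims}
    dims = {k: max(v, 0.1) for k, v in dims.items()}   # constraint 1
    if dims["hole_dia"] >= dims["width"]:              # constraint 2
        dims["hole_dia"] = 0.5 * dims["width"]
    return {k: round(v, 2) for k, v in dims.items()}

base = [{"label": "hole_dia", "value": 8.0}, {"label": "width", "value": 40.0}]
for _ in range(3):  # generate arbitrarily many synthetic variants
    print(synthesize_dimension_set(base))
```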
Ming-An Chung, Kuo-Chun Tseng, Ing-Peng Meiy
This paper proposes a simple, compact antenna that provides X-band and Ku-band coverage for low-earth-orbit (LEO) satellite systems in an Internet of Vehicles (IoV) setting. The antenna is designed on an Arlon DiClad 880 substrate. Its structure consists of an inverted-triangle geometry and an inverted U-shaped slot. The antenna measures 12.5 × 5 mm², and the substrate measures 30 × 13 × 0.254 mm³. The antenna is easy to fabricate at low cost. The measured reflection coefficient (below −10 dB) shows that the operating band covers the X-band (10.87–12.76 GHz) and the Ku-band (15.19–16.02 GHz), and the measured and simulated results agree fairly well. The antenna efficiency is about 50–80.8% in the X-band and about 50–74% in the Ku-band. The gains are about 3.34–6.08 dBi in the X-band and 3.50–4.65 dBi in the Ku-band, with a peak gain of 6.08 dBi. The design achieves low cost and small dimensions for autonomous vehicles and vehicle-networking communication equipment, providing good wireless transmission from vehicles to base stations in the IoV.
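For context, the −10 dB matching criterion used here corresponds to a VSWR of roughly 1.92. A short conversion sketch:

```python
def vswr_from_s11_db(s11_db):
    """Convert a reflection coefficient in dB to VSWR."""
    gamma = 10 ** (s11_db / 20)          # linear magnitude of S11
    return (1 + gamma) / (1 - gamma)

# The -10 dB matching criterion corresponds to VSWR < ~1.92.
print(f"VSWR at -10 dB: {vswr_from_s11_db(-10):.2f}")
```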
Roman Ildusovich Ilyasov
This paper describes a universal method proposed by the author for the approximate analytical calculation of the main parameters of synchronous electrical machines, including superconducting ones. Traditional analytical methods for calculating the parameters needed to build a phasor diagram of an electrical machine require computing all dimensions of the active zone, the tooth-slot zone, and the end regions of the armature windings; all sizes and local magnetic-circuit saturation states are needed to calculate the magnetic conductivities. Traditional analytical methods also rely on empirical formulas and non-physical coefficients and can handle only standard machines with classic tooth-slot zones and armature winding types. Drawing a phasor diagram by traditional methods yields the angle between the electromotive force and the voltage, which is an internal machine parameter of little significance to users. Applying modern computer simulation programs requires a preliminary analytical calculation to obtain all dimensions of the three-dimensional model, and FEM simulation programs are expensive and demand costly high-performance computers and highly skilled personnel. Fast analytical techniques are also needed to check the correctness of automatic computer simulation results. The proposed analytical method makes it possible to quickly obtain all the main parameters of a newly designed machine (including superconducting and non-traditional designs) without a detailed calculation of the dimensions of the tooth-slot zone and armature end-windings. Characteristic load-angle values are set from simple calculations, and the desired quantities, obtained graphically, are the inductive reactances of the armature winding and the inductive voltage drop across it. The results of practical significance calculated from the voltage diagram are the inductor magnetomotive force needed to maintain the nominal load voltage regardless of the magnitude (including double overload) and type of the connected load, and the main dimensions of the active zone.
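For readers unfamiliar with the phasor diagram in question, the simplified round-rotor relation E = V + jXs·I (armature resistance neglected) already yields the load angle the text refers to. A toy sketch with hypothetical per-unit values, not the author's method:

```python
import cmath
import math

# Simplified round-rotor phasor relation: E = V + j*Xs*I, with armature
# resistance neglected. All values below are hypothetical per-unit quantities.
V = 1.0 + 0j                              # terminal voltage phasor (reference)
I = cmath.rect(0.8, math.radians(-25))    # armature current, lagging PF
Xs = 1.2                                  # synchronous reactance (p.u.)

E = V + 1j * Xs * I                       # internal EMF phasor
delta = math.degrees(cmath.phase(E))      # load angle between E and V

print(f"|E| = {abs(E):.3f} p.u., load angle = {delta:.1f} deg")
```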
Yuri Nakao, Simone Stumpf, Subeida Ahmed et al.
Ensuring fairness in artificial intelligence (AI) is important to counteract bias and discrimination in far-reaching applications. Recent work has started to investigate how humans judge fairness and how to support machine learning (ML) experts in making their AI models fairer. Drawing inspiration from an Explainable AI (XAI) approach called "explanatory debugging" used in interactive machine learning, our work explores designing interpretable and interactive human-in-the-loop interfaces that allow ordinary end-users without any technical or domain background to identify potential fairness issues and possibly fix them in the context of loan decisions. Through workshops with end-users, we co-designed and implemented a prototype system that allowed end-users to see why predictions were made, and then to change weights on features to "debug" fairness issues. We evaluated the use of this prototype system through an online study. To investigate the implications of diverse human values about fairness around the globe, we also explored how cultural dimensions might play a role in using this prototype. Our results contribute to the design of interfaces to allow end-users to be involved in judging and addressing AI fairness through a human-in-the-loop approach.
Roberta Fischli
This article extends property-owning democracy to the digital realm and introduces “data-owning democracy,” a new political economic regime characterized by the wide distribution of data as capital among citizens. Drawing on republican theory and acknowledging data's unique role in the digital economy, it proposes a two-tier model that combines different modes of data ownership and corresponding rights. The first layer of “data-owning democracy” is characterized by a digital public infrastructure that enables citizens to collectively generate data and have a say in how their citizen data are used. In the second layer, individuals automatically receive machine-readable copies of their data whenever they are generated—a slightly more advanced form of the European Union's existing right to data portability (Art. 20). With its focus on empowerment, data-owning democracy is designed to be complementary to existing data protection regulations. It also illustrates how political theory more broadly, and republican theory specifically, can be instructive for specifying the normative components of a new political economy dealing with questions of empowerment and digital rights.
Page 32 of 166,335