As artificial intelligence (AI) becomes more deeply integrated into society and people rely on it more heavily, higher demands are placed on its safety and ethical standards. This article surveys the development of AI technology and its potential safety and ethical challenges across various fields. On the safety side, it analyzes the risk of adversarial attacks in depth and discusses strengthening model robustness through adversarial training and data augmentation techniques; it further recommends measures such as data encryption and differential privacy to address data privacy and security concerns. On the ethical side, the paper identifies the origins of algorithmic bias and argues for mitigating it through rigorous testing, validation, and regulatory frameworks, and it highlights the importance of greater transparency and explainability in AI systems for building public trust. Finally, the paper stresses the need to define accountability for AI behavior and suggests establishing laws and regulations that effectively govern AI applications. In conclusion, the study argues that AI development should give weight to safety and ethical considerations, so that the combination of technical intervention, legal supervision and social responsibility can effectively promote the sustainable development of artificial intelligence.
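As a minimal sketch of the adversarial training the article recommends (not its specific method), the following PyTorch step perturbs a batch with the fast gradient sign method (FGSM) and trains on the perturbed inputs; the perturbation budget `eps` and the [0, 1] input range are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_step(model, x, y, optimizer, eps=0.03):
    """One step of FGSM adversarial training: perturb each input in the
    direction that maximally increases the loss, then train the model on
    the perturbed batch so it learns to resist such attacks.
    eps and the [0, 1] input range are illustrative assumptions."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()  # FGSM perturbation
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```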
The increasing prominence of concepts such as Smart Production and the Industrial Internet of Things (IIoT) within the context of Industry 4.0 has introduced a new set of requirements for the engineering of industrial systems, including support for dynamic environments, timeliness guarantees, heterogeneity, interoperability and reliability. These requirements are further exacerbated at the network level by the notable rise in the number and variety of devices involved. To stay competitive in this ever-changing industrial landscape while boosting productivity, it is vital to meet those requirements by combining established protocols with emerging technologies. Software-Defined Networking (SDN) is a leading traffic management paradigm that offers flexibility for complex industrial networks, enabling efficient resource allocation and dynamic reconfiguration. Message Queuing Telemetry Transport (MQTT) is a low-overhead application-layer protocol gaining popularity in the IoT and IIoT; however, its Quality-of-Service (QoS) policies do not support timeliness requirements. This article presents a framework that seamlessly integrates SDN and MQTT, enhancing network management flexibility while satisfying the real-time requirements found in industrial environments. It leverages the User Properties of MQTTv5 to let publishers specify real-time requirements. MQTT traffic is intercepted by a Network Manager that extracts the real-time information and instructs an SDN controller to deploy corresponding network reservations; MQTT traffic across multiple edge networks is propagated by selected brokers using multicasting. Extensive experiments validate the proposed approach, demonstrating its superiority over MQTT and Direct Multicast-MQTT (DM-MQTT) in latency reduction. A response time analysis, validated experimentally, confirms robust performance across the evaluated metrics.
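To make the idea concrete, here is a minimal sketch of attaching real-time requirements to an MQTTv5 PUBLISH via User Properties, the mechanism the framework builds on. It uses the paho-mqtt Python client; the property keys ("period_ms", "deadline_ms"), the broker address, and the topic are hypothetical, since the paper's exact naming is not reproduced here.

```python
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

# paho-mqtt 1.x style; paho-mqtt 2.x additionally takes a
# CallbackAPIVersion as the first constructor argument.
client = mqtt.Client(client_id="sensor-01", protocol=mqtt.MQTTv5)
client.connect("broker.example.local", 1883)  # hypothetical broker

props = Properties(PacketTypes.PUBLISH)
# Each assignment appends one (key, value) User Property pair.
props.UserProperty = ("period_ms", "100")    # assumed key: publishing period
props.UserProperty = ("deadline_ms", "20")   # assumed key: end-to-end deadline

# A Network Manager intercepting this message could parse the properties
# and instruct the SDN controller to reserve resources for the flow.
client.publish("plant/cell1/pressure", b"41.7", qos=1, properties=props)
```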
Massive amounts of data drive the performance of deep learning models, but in practice data resources are often highly dispersed and bound by data privacy and security concerns, making it difficult for multiple data sources to share their local data directly. Because data resources are hard to aggregate effectively, model training lacks support, and how data sources can collaborate to aggregate the value of their data resources is therefore an important research question. However, existing distributed-collaborative-learning architectures still face serious challenges when collaborating nodes lack mutual trust, and security and trust issues strongly affect the confidence and willingness of data sources to participate in collaboration. Blockchain technology provides trusted distributed storage and computing, and combining it with collaboration between data sources to build trusted distributed-collaborative-learning architectures is a highly valuable applied research direction. We propose a trusted distributed-collaborative-learning mechanism based on blockchain smart contracts. Firstly, the mechanism uses smart contracts to define and encapsulate the collaborative behaviours, relationships and norms between distributed collaborative nodes. Secondly, we propose a model-fusion method based on feature fusion, which replaces the direct sharing of local data resources with distributed collaborative model training and organises distributed data resources for distributed collaboration to improve model performance. Finally, to verify the trustworthiness and usability of the proposed mechanism, on the one hand we formally model and verify the smart contract using Coloured Petri Nets, proving that the mechanism satisfies the expected trustworthiness properties; on the other hand, we evaluate the feature-fusion-based model-fusion method on different datasets and collaboration scenarios, and implement a typical collaborative-learning case for a comprehensive analysis and validation of the mechanism. The experimental results show that the proposed mechanism provides a trusted and fair collaboration infrastructure for distributed-collaboration nodes that lack mutual trust and organises decentralised data resources for collaborative model training to develop effective global models.
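As a rough illustration of feature-fusion-based model fusion (a generic sketch, not the paper's exact method or its on-chain coordination), each node shares only a locally trained feature extractor, and a shared head is trained on the concatenated features; `X_local`, `X_shared`, and the scikit-learn models are illustrative placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def train_local_extractor(X_local, y_local):
    """Each node trains a small network on its own data and shares only
    the model parameters, never the raw local data."""
    return MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=0).fit(X_local, y_local)

def extract_features(mlp, X):
    """First-hidden-layer activations serve as the node's learned features."""
    return np.maximum(X @ mlp.coefs_[0] + mlp.intercepts_[0], 0)  # ReLU

def fuse_and_train(extractors, X_shared, y_shared):
    """Fusion: concatenate every node's features on a commonly available
    dataset and train a shared classification head on top."""
    fused = np.hstack([extract_features(m, X_shared) for m in extractors])
    return LogisticRegression(max_iter=1000).fit(fused, y_shared)
```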
In the modern economy, digitalization has become a key component of the socio-economic development of the regions of the Russian Federation. Enterprises across industries face the need to process large amounts of data, which greatly complicates data management and makes the analysis of artificial intelligence technologies increasingly relevant. Training employees for industrial processes is a major challenge in any industry. Effective human resource management requires an accurate assessment and representation of available competencies, as well as an effective mapping of the competencies required for specific positions; competencies enable a company to achieve strong production and economic results. The aim of the study is to develop a structural model of a predictive expert system for managing data on the competencies of a modern manager by combining artificial and human intelligence, which can serve as a decision support tool for managers in real conditions to improve the efficiency of a particular enterprise. The study of the demand for managers and the requirements for candidates in the Russian Federation and the Republic of Tatarstan was conducted on data from HeadHunter, the largest Russian Internet recruitment company. To develop the structural model of the proposed expert system, information from specialized scientific publications in the Russian and foreign literature indexed in the Web of Science and Scopus databases was used. The expert system will allow a manager to find the best options for deploying employees and to predict the development of the enterprise as a whole and of its individual divisions, significantly increasing the key performance indicators of any company.
Reducing the size of the training set by replacing it with a condensed set is a widely adopted practice for enhancing the efficiency of instance-based classifiers while trying to maintain high classification accuracy. This can be achieved with data reduction techniques, also known as prototype selection or generation algorithms. Although the literature offers numerous algorithms that effectively address single-label classification problems, most are not applicable to multilabel data, where an instance can belong to multiple classes, and the well-known transformation methods cannot be combined with a data reduction technique for various reasons. The Condensed Nearest Neighbor rule is a popular parameter-free single-label prototype selection algorithm, and the IB2 algorithm is its one-pass variation. This paper proposes variations of these algorithms for multilabel data. Through an experimental study on nine distinct datasets, supported by statistical tests, we demonstrate that the eight proposed approaches (four per algorithm) offer significant reduction rates without compromising classification accuracy.
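For reference, here is a minimal sketch of the single-label originals that the paper extends (the multilabel variants themselves are not reproduced): the Condensed Nearest Neighbor rule keeps only instances that the current condensed store misclassifies and repeats passes until stable, while IB2 makes a single pass.

```python
import numpy as np

def nn_predict(S_X, S_y, x):
    """1-NN prediction using the current condensed store S."""
    return S_y[np.argmin(np.linalg.norm(S_X - x, axis=1))]

def cnn_condense(X, y):
    """Condensed Nearest Neighbor rule (single-label): add every instance
    the store misclassifies; repeat until a full pass adds nothing."""
    S_X, S_y = [X[0]], [y[0]]            # seed the store with one instance
    changed = True
    while changed:
        changed = False
        for xi, yi in zip(X, y):
            if nn_predict(np.array(S_X), np.array(S_y), xi) != yi:
                S_X.append(xi); S_y.append(yi); changed = True
    return np.array(S_X), np.array(S_y)

def ib2_condense(X, y):
    """IB2: the one-pass variation — a single sweep, no repeat passes."""
    S_X, S_y = [X[0]], [y[0]]
    for xi, yi in zip(X[1:], y[1:]):
        if nn_predict(np.array(S_X), np.array(S_y), xi) != yi:
            S_X.append(xi); S_y.append(yi)
    return np.array(S_X), np.array(S_y)
```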
The purpose of this study is to determine the effect of auditor professional ethics and audit fees on audit quality.
Design/methodology/approach: This study performed descriptive statistical analysis with a quantitative approach, using multiple linear regression analysis in SPSS 24.
Research findings: The results of this study indicate that professional ethics and audit fees influence audit quality.
Theoretical contribution/Originality: This study differs from previous research in the analysis technique used and in its research object: here the objects are the public accounting firms (Kantor Akuntan Publik, KAP) located in Pontianak and Bandung, whose complex business structures make the research worth pursuing. Given the issues above, and considering the importance of professional ethics and the high sensitivity of audit fees, the authors were motivated to re-examine the topic with the KAPs in Pontianak and Bandung as respondents.
Research limitations and implications: The researchers acknowledge limitations in this study that call for improvement and development in future research. The independent variables do not yet contribute strongly to the dependent variable: the coefficient of determination (R²) is 66.6%, with the remaining 33.4% explained by variables outside the model, so future researchers are advised to add independent variables that could theoretically have a greater effect on audit quality. In addition, the data collected were based on each respondent's perception of the research instrument items, which may introduce bias or misperception.
Economics as a science, Management. Industrial management
Albertina Paula Monteiro, Francisco Barbosa, Amélia Silva, et al.
Based on legitimacy and stakeholder theories, this research analyzes the environmental information disclosure of Portuguese companies. Specifically, it explores the level of environmental information disclosure, whether (environmentally sensitive) industry membership influences the level of disclosure of ecological matters, and whether this impacts companies' performance/profitability. Using content analysis, we developed two indices to assess the level of environmental disclosure in companies' mandatory and voluntary reporting, and we analyzed the relationships between variables with the PROCESS macro for SPSS. The results show that (1) the level of environmental disclosure by companies listed on the Lisbon stock exchange evolved positively between 2015 and 2017; (2) industry has no significant relationship with profitability; and (3) environmental disclosure acts as a mediating variable in the relationship between industry and profitability. This research highlights differences in the tendency to disclose environmental matters when reporting is prepared under an accounting framework versus voluntarily, and assesses the mediating role of the environmental disclosure index in the relationship between industry and performance.
Three levels, namely the device level, the connection level, and the systems management level, are frequently used to conceptualize intelligent machinery and Industry 4.0 [...]
Water quality evaluation is the basic step for implementing water environmental management policies: a water body is assigned to a specific water quality category according to multiple water quality parameters. To address this problem, an improved Naive Bayes classification method is proposed that gives different attributes different weights, weakening the Naive Bayes conditional-independence assumption and bringing the classification results closer to the actual categories. First, referring to data released by the national surface water quality automatic monitoring stations, 500 water quality records were selected as samples, and an evaluation system with four indicators was established: dissolved oxygen, permanganate index, ammonia nitrogen and total phosphorus. The improved Naive Bayes method was then trained and evaluated on the samples, and its classification performance was verified by five-fold cross-validation. The results show that the accuracy, precision, recall and F1 score of the improved method reach 96.0%, 95.9%, 93.8% and 94.8% respectively, outperforming the standard Naive Bayes classifier on water quality data classification, which can provide a useful reference for water quality classification problems encountered in practical engineering.
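A minimal sketch of the attribute-weighting idea (the paper's exact weighting scheme is not reproduced, so the weights are assumed given): each attribute's log-likelihood term is scaled by its weight, relaxing the equal-importance consequence of the conditional-independence assumption.

```python
import numpy as np

class WeightedNaiveBayes:
    """Gaussian Naive Bayes scoring P(c | x) ∝ P(c) · Π_j P(x_j | c)^{w_j},
    so attributes with larger weights w_j contribute more to the decision."""

    def fit(self, X, y, weights):
        self.classes_ = np.unique(y)
        self.w_ = np.asarray(weights, dtype=float)
        self.priors_, self.means_, self.vars_ = [], [], []
        for c in self.classes_:
            Xc = X[y == c]
            self.priors_.append(len(Xc) / len(X))
            self.means_.append(Xc.mean(axis=0))
            self.vars_.append(Xc.var(axis=0) + 1e-9)  # variance smoothing
        return self

    def predict(self, X):
        scores = []
        for p, mu, var in zip(self.priors_, self.means_, self.vars_):
            # per-attribute Gaussian log-likelihood, scaled by the weights
            ll = -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var)
            scores.append(np.log(p) + (self.w_ * ll).sum(axis=1))
        return self.classes_[np.argmax(np.column_stack(scores), axis=1)]
```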
Information technology, Management information systems
PT. Telkom Akses Pontianak has an inventory information system that has been in use for some time. During the research, several findings emerged, such as gaps in information about material availability, ineffective handling of goods-issue data that affects the company's periodic reports, and suboptimal use of the available human resources. These problems motivated an audit of the information system in use. The audit follows the COBIT 5 framework using the MEA (Monitor, Evaluate and Assess) domain, producing capability levels for each MEA sub-domain together with a gap analysis. The capability levels are 3.83 for sub-domain MEA01, 3.60 for MEA02, and 3.69 for MEA03, averaging 3.70, which corresponds to the Predictable Process level: the audited processes already operate within defined limits to achieve their process goals. The gap analysis gives 1.2 for MEA01, 1.4 for MEA02, and 1.3 for MEA03, averaging 1.3, which means the company still needs to improve its inventory information system to obtain optimal results for all stakeholders.
Electronic computers. Computer science, Management information systems
Aditya Utama, Mohammad Pramono Hadi, Emilya Nurjani
The widespread drought in Trenggalek Regency in 2019 needs to be analyzed to reduce negative impacts and to serve as a monitoring tool for anticipating future drought events. The Standardized Precipitation Index (SPI) is a drought analysis method that quantifies the rainfall deficit at various time scales and was used to identify the distribution of drought in Trenggalek Regency. This study used rainfall data from 13 rain stations for the period 1990-2019 and agricultural production data for 2019. The calculations show that the highest SPI value occurred in March, at the highly wet level with a value of 2.11, and the lowest occurred in May, at the extremely dry level with a value of -2.31. The results were then mapped in ArcGIS using Inverse Distance Weighted (IDW) interpolation to identify the spatial distribution of drought.
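For orientation, a minimal SPI sketch (simplified: operational SPI implementations fit the gamma distribution separately per calendar month, which is omitted here): rainfall is aggregated over the chosen time scale, a gamma distribution is fitted to the non-zero totals, and the cumulative probabilities are mapped onto the standard normal distribution.

```python
import numpy as np
from scipy import stats

def spi(precip, scale=3):
    """Standardized Precipitation Index of a monthly rainfall series.
    Zeros are handled with the mixed distribution H(x) = q + (1 - q) G(x),
    where q is the probability of a zero total."""
    precip = np.asarray(precip, dtype=float)
    agg = np.convolve(precip, np.ones(scale), mode="valid")  # e.g. SPI-3 sums
    nonzero = agg[agg > 0]
    q = 1 - len(nonzero) / len(agg)                # probability of no rain
    a, loc, b = stats.gamma.fit(nonzero, floc=0)   # gamma fit to wet totals
    cdf = q + (1 - q) * stats.gamma.cdf(agg, a, loc=loc, scale=b)
    cdf = np.clip(cdf, 1e-6, 1 - 1e-6)             # keep ppf finite
    return stats.norm.ppf(cdf)   # e.g. 2.11 = highly wet, -2.31 = extremely dry
```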
Information technology, Electronic computers. Computer science
Research increasingly suggests that climate change has intensified the frequency of droughts, floods, and other environmental disasters across sub-Saharan Africa. In response to the resulting array of climate-induced challenges, various stakeholders are working collectively to build climate resilience in rural and urban communities and trans-continentally. This paper examines key climate resilience-building projects that have been implemented across sub-Saharan Africa through multi-stakeholder partnerships, using a vulnerability assessment approach to examine their strategic value in mitigating climate shocks and long-term environmental changes. There are still many challenges to building climate resilience in the region, but through multi-stakeholder partnerships, sub-Saharan African nations are expanding their capacity to pool resources and build collective action aimed at financing and scaling up innovative climate solutions. This article contributes to ongoing interdisciplinary academic, management, and policy discourses on global climate adaptation focused on the populations and landscapes most at risk.
Ethnology. Social and cultural anthropology, Organizational behaviour, change and effectiveness. Corporate culture
Gilgen-Ammann, Rahel, Schweizer, Theresa, Wyss, Thomas
Background: Elite athletes and recreational runners rely on the accuracy of global navigation satellite system (GNSS)–enabled sport watches to monitor and regulate training activities. However, there is a lack of scientific evidence regarding the accuracy of such sport watches.
Objective: The aim was to investigate the accuracy of the distances recorded by eight commercially available sport watches from Apple, Coros, Garmin, Polar, and Suunto when assessed in different areas and at different speeds, and to evaluate potential parameters that affect measurement quality.
Methods: Altogether, 3 × 12 measurements in urban, forest, and track and field areas were obtained while walking, running, and cycling under various outdoor conditions.
Results: The selected reference distances ranged from 404.0 m to 4296.9 m. Across all measurement areas combined, the systematic errors (±limits of agreement) ranged between 3.7 (±195.6) m and –101.0 (±231.3) m, and the mean absolute percentage errors ranged from 3.2% to 6.1%. Only the GNSS receivers from Polar showed overall errors <5%. Generally, the recorded distances were significantly underestimated (all P values <.04) and less accurate in the urban and forest areas, whereas they were overestimated but with good accuracy in 75% (6/8) of the sport watches in the track and field area. Furthermore, the data assessed during running showed significantly higher error rates in most devices compared with the walking and cycling activities.
Conclusions: The recorded distances might be underestimated by up to 9%. However, all investigated sport watches can be recommended, especially for distance recordings in open areas.
Information technology, Public aspects of medicine
Network traffic exhibits a high level of variability over short periods of time. This variability negatively impacts the accuracy of anomaly-based network intrusion detection systems (IDS) built with predictive models in a batch learning setup. This work investigates how adapting the discrimination threshold of model predictions to the evaluated traffic improves the detection rates of these intrusion detection models. Specifically, this research studied the adaptability of three well-known machine learning algorithms: C5.0, Random Forest and Support Vector Machine. Each algorithm's ability to adapt its prediction threshold was assessed and analysed under different scenarios that simulated real-world settings using a prospective sampling approach. Multiple IDS datasets were used, including a newly generated dataset (STA2018). The research demonstrated empirically the importance of threshold adaptation in improving the accuracy of detection models when the training and evaluation traffic have different statistical properties. Tests analysed the effects of feature selection and data balancing on model accuracy when different significant traffic features were used, and the effects of threshold adaptation on accuracy were statistically analysed. Of the three algorithms, Random Forest was the most adaptable and achieved the highest detection rates.
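A minimal sketch of the threshold-adaptation idea (the paper's exact adaptation criterion is not reproduced; maximizing F1 on the most recent labelled window is an assumption, and the synthetic traffic is purely illustrative): instead of the fixed 0.5 cutoff of a batch-learning setup, the discrimination threshold is retuned per traffic window as the traffic distribution drifts.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def adapt_threshold(scores, labels, grid=np.linspace(0.05, 0.95, 19)):
    """Pick the discrimination threshold that maximizes F1 on a recently
    labelled traffic window (assumed criterion)."""
    return grid[int(np.argmax([f1_score(labels, scores >= t) for t in grid]))]

rng = np.random.default_rng(0)
def make_traffic(n, shift=0.0):
    """Synthetic two-feature 'traffic' whose distribution can drift."""
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None] + shift, scale=1.0, size=(n, 2))
    return X, y

model = RandomForestClassifier(n_estimators=100, random_state=0)
X_train, y_train = make_traffic(2000)
model.fit(X_train, y_train)

threshold = 0.5                                   # initial batch-mode cutoff
for shift in (0.0, 0.4, 0.8):                     # drifting evaluation windows
    X_win, y_win = make_traffic(500, shift)
    scores = model.predict_proba(X_win)[:, 1]     # attack probability
    preds = scores >= threshold                   # detect with current cutoff
    print(f"shift={shift:.1f}  F1={f1_score(y_win, preds):.3f}")
    threshold = adapt_threshold(scores, y_win)    # retune for the next window
```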
Intangible cultural heritage is regional, unique, and living. Its protection and inheritance face many challenges, which can be addressed by integration with scenic areas, and the construction of digital scenic areas is an effective way to achieve a win-win outcome. Based on an analysis of the resource characteristics and market positioning of the Celadon Cultural Industrial Park, construction objectives and coping strategies for the scenic area are proposed by systematically introducing digital technology into scenic-area planning, project initiation, the tourism management model, and other aspects, to create a culture-first, three-dimensional virtual reality scenic area.
This article presents theoretical assumptions about the measurement of relational capital. Its aims are to indicate the nature of inter-organizational relationships, define relational capital and its components, present methods and tools for measuring relational capital (including example evaluation indicators), outline the competences needed to develop relational capital in the long term, and show how to apply the measurement methods.
Management. Industrial management, Management information systems
Y. A. Kapitanyuk, A. A. Kapitonov, S. A. Chepinsky
The article deals with the development of a trajectory control system for an omnidirectional mobile robot. This kind of robot allows each degree of freedom to be controlled separately thanks to the special design of its wheels, which greatly simplifies spatial control tasks and makes it possible to focus directly on algorithm development. The control law synthesis is based on the kinematic model of a solid body on a plane. The desired trajectory is defined as a smooth implicit function in a fixed coordinate system. The control design uses a differential-geometric method that applies a nonlinear transformation of the original model into a task-oriented form describing the longitudinal motion along the trajectory and the orthogonal deviation from it. Proportional controllers with direct compensation of the nonlinear terms are synthesized for the transformed model. The main results are nonlinear control algorithms and experimental data. A practical implementation of the considered control laws on the Robotino mobile robot by Festo Didactics demonstrates the workability of the approach. Straight-line motion and movement along a circle are presented as desired trajectories; the majority of practical mobile robot control tasks can be implemented by combining them.
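A minimal kinematic sketch of the approach (illustrative, not the authors' full differential-geometric derivation; the circle trajectory, gains, and speeds are assumptions): with the desired trajectory given as an implicit function phi(x, y) = 0, the velocity command moves the robot along the level set's tangent while a proportional term drives the orthogonal deviation to zero.

```python
import numpy as np

# Desired trajectory as a smooth implicit function phi(x, y) = 0;
# here a circle of radius R around the origin (illustrative choice).
R = 1.0
phi      = lambda x, y: x**2 + y**2 - R**2
grad_phi = lambda x, y: np.array([2 * x, 2 * y])

def velocity_command(x, y, v_ref=0.2, k_e=1.5):
    """Omnidirectional kinematic control: advance along the curve's tangent
    at speed v_ref while a proportional term pushes the (approximate)
    orthogonal deviation e = phi / |grad phi| to zero."""
    g = grad_phi(x, y)
    n = g / np.linalg.norm(g)           # unit normal to the level set
    t = np.array([-n[1], n[0]])         # unit tangent (normal rotated 90°)
    e = phi(x, y) / np.linalg.norm(g)   # signed orthogonal deviation
    return v_ref * t - k_e * e * n      # (vx, vy) in the fixed frame

# Simple Euler simulation from an off-trajectory start: phi converges to 0,
# i.e. the robot settles onto the circle while moving along it.
pos, dt = np.array([1.3, 0.0]), 0.01
for _ in range(3000):
    pos = pos + dt * velocity_command(*pos)
print(pos, phi(*pos))
```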