Results for "Instruments and machines"

Showing 20 of ~632094 results · from DOAJ, arXiv, Semantic Scholar, CrossRef

DOAJ Open Access 2026
A review of optimization strategies for deep and machine learning in diabetic macular edema

A. M. Mutawa, Khalid Sabti, Bibin Shalini Sundaram Thankaleela et al.

Diabetic macular edema (DME) is a primary contributor to visual impairment in diabetic patients, necessitating precise and prompt analysis for optimal treatment. Recent breakthroughs in deep learning (DL) and machine learning (ML) have yielded promising outcomes in ophthalmic image analysis. However, researchers often overlook the significance of optimization algorithms in enhancing the efficacy of their models for DME-related tasks. This review aims to identify, assess, and integrate existing work on the application of DL and ML, with emphasis on the integration and impact of optimization algorithms in enhancing their efficacy, robustness, and performance for DME in the fields of computer science and engineering. The population, intervention, comparison, and outcome framework was employed to structure a clear and comprehensive analysis. The methodological quality of the included studies was evaluated using the Joanna Briggs Institute Critical Appraisal Tools. The Auto-Metric Graph Neural Network achieved the highest accuracy of 99.57% for combined diabetic retinopathy-DME grading, illustrating the efficacy of hybrid architectures augmented by meta-heuristic optimizers such as Jaya and ant colony optimization. Successful deployment, however, depends on overcoming hurdles such as the low mean average precision in lesion identification (0.1540) of YOLO-based models on the test set, and on improved clinical interpretability to foster clinician trust. A Sankey diagram visualizes the flow of quantities between the entities of the survey. Systematic review registration: B. (2025, November 2). A Review of Optimization Strategies for Deep and Machine Learning in DME. Retrieved from osf.io/qsh4j.

Electronic computers. Computer science
DOAJ Open Access 2025
Assessing patient preferences for medical decision making - a comparison of different methods

Jakub Fusiak, Andreas Wolkenstein, Verena S. Hoffmann

Background: Patient preferences are a critical component of shared decision-making (SDM), particularly when choosing between treatment options with differing risks and outcomes. Many methods exist to elicit these preferences, but their complexity, usability, and acceptance vary. Objective: We aim to gain insight into the acceptance, effort, and preferences of participants regarding five different methods of preference assessment. Additionally, we investigate the influence of health status, experiences within the health system, and demographic factors on the results. Methods: We conducted a cross-sectional online survey including five preference elicitation methods: best-worst scaling, direct weighting, PAPRIKA (Potentially All Pairwise Rankings of all Possible Alternatives), time trade-off, and standard gamble. The questionnaire was distributed via academic and patient advocacy mailing lists, reaching both healthy individuals and those with acute or chronic illnesses. Participants rated each method using six standardized statements on a 5-point Likert scale. Additional items assessed general acceptance of algorithm-assisted preference assessments and the clarity of the questionnaire. Results: Of 258 initiated questionnaires, 123 (48%) were completed and included in the analysis. Participants were diverse in age, gender, and health status, but predominantly highly educated and digitally literate. Across all measures, the PAPRIKA method received the highest ratings for clarity, usability, and perceived ability to express preferences. Simpler methods (best-worst scaling, direct weighting) were rated as less useful for capturing nuanced preferences, while abstract utility-based methods (standard gamble, time trade-off) were seen as cognitively demanding. Subgroup analyses showed minimal variation across demographic groups. Most participants (82%) could imagine using at least one of the presented methods in real clinical settings, but also emphasized the importance of physician involvement in interpreting results. Conclusion: The interactive PAPRIKA method best balanced cognitive demand and expressiveness and was preferred by most participants. Structured methods for preference elicitation may enhance SDM when integrated into clinical workflows and supported by healthcare professionals. Further research is needed to evaluate their use in real-world decisions and among more diverse patient populations.

Medicine, Public aspects of medicine
DOAJ Open Access 2025
Attention-based functional-group coarse-graining: a deep learning framework for molecular prediction and design

Ming Han, Ge Sun, Paul F. Nealey et al.

Machine learning (ML) offers considerable promise for the design of new molecules and materials. In real-world applications, the design problem is often domain-specific, and suffers from insufficient data, particularly labeled data, for ML training. In this study, we report a data-efficient, deep-learning framework for molecular discovery that integrates a coarse-grained functional-group representation with a self-attention mechanism to capture intricate chemical interactions. Our approach exploits group-contribution concepts to create a graph-based intermediate representation of molecules, serving as a low-dimensional embedding that substantially reduces the data demands typically required for training. Using a self-attention mechanism to learn the subtle but highly relevant chemical context of functional groups, the method proposed here consistently outperforms existing approaches for predictions of multiple thermophysical properties. In a case study focused on adhesive polymer monomers, we train on a limited dataset comprising only 6,000 unlabeled and 600 labeled monomers. The resulting chemistry prediction model achieves over 92% accuracy in forecasting properties directly from SMILES strings, exceeding the performance of current state-of-the-art techniques. Furthermore, the latent molecular embedding is invertible, enabling the design pipeline to automatically generate new monomers from the learned chemical subspace. We illustrate this functionality by targeting several properties, including high and low glass transition temperatures (Tg), and demonstrate that our model can identify new candidates with values that surpass those in the training set. The ease with which the proposed framework navigates both chemical diversity and data scarcity offers a promising route to accelerate and broaden the search for functional materials.

Materials of engineering and construction. Mechanics of materials, Computer software
arXiv Open Access 2025
A Synthetic Instrumental Variable Method: Using the Dual Tendency Condition for Coplanar Instruments

Ratbek Dzhumashev, Ainura Tursunalieva

Traditional instrumental variable (IV) methods often struggle with weak or invalid instruments and rely heavily on external data. We introduce a Synthetic Instrumental Variable (SIV) approach that constructs valid instruments using only existing data. Our method leverages a data-driven dual tendency (DT) condition to identify valid instruments without requiring external variables. SIV is robust to heteroscedasticity and can determine the true sign of the correlation between endogenous regressors and errors, an assumption typically imposed in empirical work. Through simulations and real-world applications, we show that SIV improves causal inference by mitigating common IV limitations and reducing dependence on scarce instruments. This approach has broad implications for economics, epidemiology, and policy evaluation.

en stat.ME, math.ST
arXiv Open Access 2025
TransVFC: A Transformable Video Feature Compression Framework for Machines

Yuxiao Sun, Yao Zhao, Meiqin Liu et al.

Video is increasingly transmitted for downstream machine vision tasks rather than for human viewing. While widely deployed Human Visual System (HVS)-oriented video coding standards like H.265/HEVC and H.264/AVC are efficient, they are not optimal for Video Coding for Machines (VCM) scenarios, leading to unnecessary bitrate expenditure. Academic and technical exploration within the VCM domain has produced several strategies, yet conspicuous limitations remain in their adaptability to multi-task scenarios. To address this challenge, we propose a Transformable Video Feature Compression (TransVFC) framework. It offers a compress-then-transfer solution and includes a video feature codec and Feature Space Transform (FST) modules. In particular, the temporal redundancy of video features is squeezed by the codec through the scheme-based inter-prediction module. Then, the codec implements perception-guided conditional coding to minimize spatial redundancy and help the reconstructed features align with downstream machine perception. After that, the reconstructed features are transferred to new feature spaces for diverse downstream tasks by FST modules. To accommodate a new downstream task, only one lightweight FST module needs to be trained, avoiding retraining and redeploying the upstream codec and downstream task networks. Experiments show that TransVFC achieves high rate-task performance for diverse tasks of different granularities. We expect our work to provide valuable insights for video feature compression in multi-task scenarios. The code is at https://github.com/Ws-Syx/TransVFC.

DOAJ Open Access 2024
Analogue Computation Converter for Nonhomogeneous Second-Order Linear Ordinary Differential Equation

Gabriel Nicolae Popa, Corina Maria Diniș

Among many other applications, electronic converters can be used with sensors with analogue outputs (DC voltage). This article presents an analogue computation converter with two DC voltages at the inputs (one input changes the frequency of the output signal, the other changes its amplitude) that provides a periodic sinusoidal signal (with variable frequency and amplitude) at the output. The analogue computation converter is based on a nonhomogeneous second-order linear ordinary differential equation, which it solves in the analogue domain. The converter consists of analogue multipliers and operational amplifiers, composed of seven function circuits: two analogue multiplication circuits, two analogue addition circuits, one non-inverting amplifier, and two integration circuits (with RC time constants). At the output of an oscillator is a sinusoidal signal which depends on the DC voltages applied to the two inputs (0 ÷ 10 V): at one input, a DC voltage is applied to linearly change the sinusoidal output frequency (up to tens of kHz, according to the two time constants), and at the other input, a DC voltage is applied to linearly change the amplitude of the oscillator output signal (up to 10 V). It can be used with sensors which have a DC output voltage that must be converted to a sine wave signal with variable frequency and amplitude, with the aim of transmitting information over longer distances through wires. This article presents the detailed theory of operation, simulations, and experiments of the analogue computation converter.
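The behaviour the abstract describes can be checked numerically: an oscillator governed by a second-order linear ODE, with angular frequency set by one DC input and amplitude by the other. The mapping constants, time step, and voltage scaling below are illustrative assumptions, not values from the paper; this is a minimal simulation sketch, not the analogue circuit itself.

```python
import numpy as np

def oscillator(v_freq, v_amp, t_end=1e-3, dt=1e-7, k=2 * np.pi * 1e4):
    """Integrate y'' + w^2 y = 0: v_freq (0-10 V) sets frequency, v_amp the amplitude."""
    w = k * v_freq / 10.0       # assumed linear mapping of DC input to rad/s
    y, dy = 0.0, w * v_amp      # initial slope w*A yields y(t) = A*sin(w*t)
    out = []
    for _ in range(int(t_end / dt)):
        # semi-implicit Euler: update velocity first, then position,
        # which keeps the oscillation amplitude stable over many cycles
        dy -= w * w * y * dt
        y += dy * dt
        out.append(y)
    return np.array(out)

# 5 V on the frequency input (here ~5 kHz), 2 V on the amplitude input
sig = oscillator(v_freq=5.0, v_amp=2.0)
```

With these illustrative constants the output peaks near 2 (the amplitude input) while the frequency scales linearly with the other input, mirroring the two-input behaviour described above.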

Electronic computers. Computer science
DOAJ Open Access 2024
Quantum Criticality Under Imperfect Teleportation

Pablo Sala, Sara Murciano, Yue Liu et al.

Entanglement, measurement, and classical communication together enable teleportation of quantum states between distant parties, in principle, with perfect fidelity. To what extent do correlations and entanglement of a many-body wave function transfer under imperfect teleportation protocols? We address this question for the case of an imperfectly teleported quantum critical wave function, focusing on the ground state of a critical Ising chain. We demonstrate that imperfections, e.g., in the entangling gate adopted for a given protocol, effectively manifest as weak measurements acting on the otherwise pristinely teleported critical state. Armed with this perspective, we leverage and further develop the theory of measurement-altered quantum criticality to quantify the resilience of critical-state teleportation. We identify classes of teleportation protocols for which imperfection (i) preserves both the universal long-range entanglement and correlations of the original quantum critical state, (ii) weakly modifies these quantities away from their universal values, and (iii) obliterates long-range entanglement altogether while preserving power-law correlations, albeit with a new set of exponents. We also show that mixed states describing the average over a series of sequential imperfect teleportation events retain pristine power-law correlations due to a “built-in” decoding algorithm, though their entanglement structure measured by the negativity depends on errors similarly to individual protocol runs. These results may allow one to design teleportation protocols that optimize against errors—highlighting a potential practical application of measurement-altered criticality.

Physics, Computer software
arXiv Open Access 2024
Learning Decision Policies with Instrumental Variables through Double Machine Learning

Daqian Shao, Ashkan Soleymani, Francesco Quinzan et al.

A common issue in learning decision-making policies in data-rich settings is spurious correlations in the offline dataset, which can be caused by hidden confounders. Instrumental variable (IV) regression, which utilises a key unconfounded variable known as the instrument, is a standard technique for learning causal relationships between confounded action, outcome, and context variables. Most recent IV regression algorithms use a two-stage approach, where a deep neural network (DNN) estimator learnt in the first stage is directly plugged into the second stage, in which another DNN is used to estimate the causal effect. Naively plugging in the estimator can cause heavy bias in the second stage, especially when regularisation bias is present in the first-stage estimator. We propose DML-IV, a non-linear IV regression method that reduces the bias in two-stage IV regressions and effectively learns high-performing policies. We derive a novel learning objective to reduce bias and design the DML-IV algorithm following the double/debiased machine learning (DML) framework. The learnt DML-IV estimator has a strong convergence rate and $O(N^{-1/2})$ suboptimality guarantees that match those when the dataset is unconfounded. DML-IV outperforms state-of-the-art IV regression methods on IV regression benchmarks and learns high-performing policies in the presence of instruments.
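The bias that IV regression removes can be seen in a toy linear version of the two-stage idea the abstract builds on (this is plain two-stage least squares on synthetic data, not the DML-IV algorithm; all coefficients are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
u = rng.standard_normal(n)            # hidden confounder
z = rng.standard_normal(n)            # instrument: moves x, affects y only via x
x = 0.8 * z + u + 0.3 * rng.standard_normal(n)
y = 2.0 * x + 1.5 * u + 0.3 * rng.standard_normal(n)   # true causal effect = 2.0

# Naive OLS slope is biased because u drives both x and y
ols = np.cov(x, y)[0, 1] / np.var(x)

# Stage 1: project x onto the instrument; Stage 2: regress y on the fitted values
stage1 = np.cov(z, x)[0, 1] / np.var(z)
x_hat = stage1 * z
tsls = np.cov(x_hat, y)[0, 1] / np.var(x_hat)
```

Here `ols` lands well above 2.0 while `tsls` recovers the causal effect; DML-IV extends this two-stage logic with DNN stages, cross-fitting, and a debiased objective.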

en cs.LG, stat.ML
arXiv Open Access 2024
Review of detector requirements: some challenges for the present

Luca Pasquini, Dinko Milaković

Astrophysics demands higher precision in measurements across photometry, spectroscopy, and astrometry. Several science cases necessitate not only precision but also a high level of accuracy. We highlight the challenges involved, particularly in achieving spectral fidelity, which refers to our ability to accurately replicate the input spectrum of an astrophysical source. Beyond wavelength calibration, this encompasses correcting observed spectra for atmospheric, telescope, and instrumental signatures. Elevating spectral fidelity opens avenues for addressing fundamental questions in physics and astrophysics. We delve into specific science cases, critically analyzing the prerequisites for conducting crucial observations. Special attention is given to the requirements for spectrograph detectors, their calibrations and data reduction. Importantly, these considerations align closely with the needs of photometry and astrometry.

en astro-ph.IM
arXiv Open Access 2024
Predicting loss-of-function impact of genetic mutations: a machine learning approach

Arshmeet Kaur, Morteza Sarmadi

The innovation of next-generation sequencing (NGS) techniques has significantly reduced the price of genome sequencing, lowering barriers to future medical research; it is now feasible to apply genome sequencing to studies where it would have previously been cost-inefficient. Identifying damaging or pathogenic mutations in vast amounts of complex, high-dimensional genome sequencing data may be of particular interest to researchers. Thus, this paper's aim was to train machine learning models on the attributes of a genetic mutation to predict LoFtool scores (which measure a gene's intolerance to loss-of-function mutations). These attributes included, but were not limited to, the position of a mutation on a chromosome, changes in amino acids, and changes in codons caused by the mutation. Models were built using the univariate feature selection technique f-regression combined with K-nearest neighbors (KNN), Support Vector Machine (SVM), Random Sample Consensus (RANSAC), Decision Trees, Random Forest, and Extreme Gradient Boosting (XGBoost). These models were evaluated using five-fold cross-validated averages of r-squared, mean squared error, root mean squared error, mean absolute error, and explained variance. The findings of this study include multiple trained models with testing-set r-squared values of 0.97.
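The modelling recipe above (univariate f-regression feature selection feeding one of several regressors, scored by five-fold cross-validated r-squared) maps directly onto a scikit-learn pipeline. The sketch below uses synthetic data and illustrative feature counts, and shows only the KNN branch of the six regressors the paper compares:

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the mutation-attribute table (counts are illustrative)
X, y = make_regression(n_samples=600, n_features=20, n_informative=5,
                       random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_regression, k=5)),   # univariate f-regression filter
    ("knn", KNeighborsRegressor(n_neighbors=5)),  # one of the six regressors tried
])

# Five-fold cross-validated r-squared, as in the evaluation described above
scores = cross_val_score(pipe, X, y, cv=5, scoring="r2")
```

Swapping `KNeighborsRegressor` for `SVR`, `RANSACRegressor`, tree ensembles, or XGBoost reproduces the rest of the comparison grid.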

en q-bio.GN, cs.LG
CrossRef Open Access 2023
Portable Instruments Based on NIR Sensors and Multivariate Statistical Methods for a Semiautomatic Quality Control of Textiles

Jordi-Roger Riba, Rita Puig, Rosa Cantero

Near-infrared (NIR) spectroscopy is a widely used technique for determining the composition of textile fibers. This paper analyzes the possibility of using low-cost portable NIR sensors based on InGaAs PIN photodiode array detectors to acquire the NIR spectra of textile samples. The NIR spectra are then processed by applying a sequential application of multivariate statistical methods (principal component analysis, canonical variate analysis, and the k-nearest neighbor classifier) to classify the textile samples based on their composition. This paper tries to solve a real problem faced by a knitwear manufacturer, which arose because different pieces of the same garment were made with “identical” acrylic yarns from two suppliers. The sweaters had a composition of 50% acrylic, 45% wool, and 5% viscose. The problem occurred after the garments were dyed, where different shades were observed due to the different origins of the acrylic yarns. This is a challenging real-world problem for two reasons. First, there is the need to differentiate between acrylic yarns of different origins, which experts say cannot be visually distinguished before garments are dyed. Second, measurements are made in the field using portable NIR sensors rather than in a controlled laboratory using sophisticated and expensive benchtop NIR spectrometers. The experimental results obtained with the portable sensors achieved a classification accuracy of 95%, slightly lower than the 100% obtained with the high-performance laboratory benchtop NIR spectrometer. The results presented in this paper show that portable NIR sensors combined with appropriate multivariate statistical classification methods can be effectively used for on-site textile quality control.
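The sequential chain described above (dimensionality reduction, then a discriminant projection, then k-nearest-neighbor classification) can be sketched with scikit-learn. The spectra below are synthetic, the supplier offset is invented for illustration, and linear discriminant analysis stands in for canonical variate analysis (the two are closely related but not identical):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Two "suppliers": the same base spectrum plus a small class-dependent offset
base = np.sin(np.linspace(0, 6, 128))
X = np.vstack([base + 0.05 * c + 0.02 * rng.standard_normal(128)
               for c in (0, 1) for _ in range(60)])
y = np.repeat([0, 1], 60)

pipe = Pipeline([
    ("pca", PCA(n_components=10)),               # compress the NIR spectrum
    ("cva", LinearDiscriminantAnalysis()),       # LDA as a stand-in for CVA
    ("knn", KNeighborsClassifier(n_neighbors=3)),
])
acc = cross_val_score(pipe, X, y, cv=5).mean()
```

On this synthetic separable data the chain classifies nearly perfectly; the paper's point is that the same pipeline holds up on noisier spectra from low-cost portable sensors.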

DOAJ Open Access 2023
Superconvergence Analysis of Discontinuous Galerkin Methods for Systems of Second-Order Boundary Value Problems

Helmi Temimi

In this paper, we present an innovative approach to solve a system of boundary value problems (BVPs), using the newly developed discontinuous Galerkin (DG) method, which eliminates the need for auxiliary variables. This work is the first in a series of papers on DG methods applied to partial differential equations (PDEs). By consecutively applying the DG method to each space variable of the PDE using the method of lines, we transform the problem into a system of ordinary differential equations (ODEs). We investigate the convergence criteria of the DG method on systems of ODEs and generalize the error analysis to PDEs. Our analysis demonstrates that the DG error's leading term is determined by a combination of specific Jacobi polynomials in each element. Thus, we prove that DG solutions are superconvergent at the roots of these polynomials, with an order of convergence of $O(h^{p+2})$.

Electronic computers. Computer science
DOAJ Open Access 2023
Predictive digital twin for optimizing patient-specific radiotherapy regimens under uncertainty in high-grade gliomas

Anirban Chaudhuri, Graham Pash, David A. Hormuth et al.

We develop a methodology to create data-driven predictive digital twins for optimal risk-aware clinical decision-making. We illustrate the methodology as an enabler for an anticipatory personalized treatment that accounts for uncertainties in the underlying tumor biology in high-grade gliomas, where heterogeneity in the response to standard-of-care (SOC) radiotherapy contributes to sub-optimal patient outcomes. The digital twin is initialized through prior distributions derived from population-level clinical data in the literature for a mechanistic model's parameters. Then the digital twin is personalized using Bayesian model calibration for assimilating patient-specific magnetic resonance imaging data. The calibrated digital twin is used to propose optimal radiotherapy treatment regimens by solving a multi-objective risk-based optimization under uncertainty problem. The solution leads to a suite of patient-specific optimal radiotherapy treatment regimens exhibiting varying levels of trade-off between the two competing clinical objectives: (i) maximizing tumor control (characterized by minimizing the risk of tumor volume growth) and (ii) minimizing the toxicity from radiotherapy. The proposed digital twin framework is illustrated by generating an in silico cohort of 100 patients with high-grade glioma growth and response properties typically observed in the literature. For the same total radiation dose as the SOC, the personalized treatment regimens lead to median increase in tumor time to progression of around six days. Alternatively, for the same level of tumor control as the SOC, the digital twin provides optimal treatment options that lead to a median reduction in radiation dose by 16.7% (10 Gy) compared to SOC total dose of 60 Gy. The range of optimal solutions also provide options with increased doses for patients with aggressive cancer, where SOC does not lead to sufficient tumor control.

Electronic computers. Computer science
DOAJ Open Access 2023
Logical Reasoning Based on Residual Attention Multi-scale Relation Network

XIONG Zhongmin, ZENG Qi, LU Peng, WANG Zhenhua, ZHENG Zongsheng

Logical reasoning is the ability to perceive patterns and connections between visual elements. Endowing computers with human-like reasoning ability is a critical area of research; state-of-the-art deep neural networks have achieved superhuman performance in image processing and other fields. However, the concept of logical reasoning through images requires further research. To address the insufficient feature extraction and generalization of the Multi-scale Relation Network (MRNet), an improved logical reasoning method, called Residual Attention Multi-scale Relation Network (ResAMRNet), is proposed. In the backbone network, shallow features are integrated into the deep network training process by utilizing residual structures that combine skip and long-skip connections. This reduces the loss of feature information and improves the feature extraction capability of the model. In the reasoning module, a channel attention mechanism and residuals are combined to detect the relationship features between image rows. It can differentiate the significance of each feature channel, learn attention weights adaptively, and extract key features. In this study, a Double-pooled Efficient Channel Attention (DECA) mechanism is proposed that incorporates global maximum pooling to further capture object feature information and improve generalization. Experimental results on representative logical reasoning datasets, Relational and Analogical Visual rEasoNing (RAVEN) and Improved RAVEN (I-RAVEN), show that the accuracy of the proposed method on these datasets is higher by 8.3 and 18.1 percentage points, respectively, than that of MRNet, demonstrating strong logical reasoning capabilities.

Computer engineering. Computer hardware, Computer software
arXiv Open Access 2023
Multi-Observables and Multi-Instruments

Stan Gudder

This article introduces the concepts of multi-observables and multi-instruments in quantum mechanics. A multi-observable $A$ (multi-instrument $\mathcal{I}$) has an outcome space of the form $Ω=Ω_1\times\cdots\timesΩ_n$ and is denoted by $A_{x_1\cdots x_n}$ ($\mathcal{I}_{x_1\cdots x_n}$) where $(x_1,\ldots ,x_n)\inΩ$. We also call $A$ ($\mathcal{I}$) an $n$-observable ($n$-instrument) and when $n=2$ we call $A$ ($\mathcal{I}$) a bi-observable (bi-instrument). We point out that bi-observables and bi-instruments have been considered in past literature, but the more general case appears to be new. In particular, two observables (instruments) have been defined to coexist or be compatible if they possess a joint bi-observable (bi-instrument). We extend this definition to $n$ observables and $n$ instruments by considering joint marginals of $n$-observables and joint reduced marginals of $n$-instruments. We show that an $n$-instrument measures a unique $n$-observable and if a finite number of instruments coexist, then their measured observables coexist. We prove that there is a close relationship between a nontrivial $n$-observable and its parts. Moreover, a similar result holds for instruments. We next show that a natural definition for the tensor product of a finite number of instruments exists and possesses reasonable properties. We then discuss sequential products of a finite number of observables and instruments. We present various examples such as Kraus, Holevo and Lüders instruments.

en quant-ph
arXiv Open Access 2023
Weak Identification with Many Instruments

Anna Mikusheva, Liyang Sun

Linear instrumental variable regressions are widely used to estimate causal effects. Many instruments arise from the use of "technical" instruments and more recently from the empirical strategy of "judge design". This paper surveys and summarizes ideas from recent literature on estimation and statistical inference with many instruments for a single endogenous regressor. We discuss how to assess the strength of the instruments and how to conduct weak-identification-robust inference under heteroskedasticity. We establish new results for a jack-knifed version of the Lagrange Multiplier (LM) test statistic. Furthermore, we extend the weak-identification-robust tests to settings with both many exogenous regressors and many instruments. We propose a test that properly partials out many exogenous regressors while preserving the re-centering property of the jack-knife. The proposed tests have correct size and good power properties.

en econ.EM
arXiv Open Access 2023
Stochastic errors in quantum instruments

Darian McLaren, Matthew A. Graydon, Joel J. Wallman

Fault-tolerant quantum computation requires non-destructive quantum measurements with classical feed-forward. Many experimental groups are actively working towards implementing such capabilities and so they need to be accurately evaluated. As with unitary channels, an arbitrary imperfect implementation of a quantum instrument is difficult to analyze. In this paper, we define a class of quantum instruments that correspond to stochastic errors and thus are amenable to standard analysis methods. We derive efficiently computable upper- and lower-bounds on the diamond distance between two quantum instruments. Furthermore, we show that, for the special case of uniform stochastic instruments, the diamond distance and the natural generalization of the process infidelity to quantum instruments coincide and are equal to a well-defined probability of an error occurring during the measurement.

en quant-ph
DOAJ Open Access 2022
Inventory Information System Audit Using Cobit 5 Domain MEA at PT. Telkom Akses Pontianak

Noor Hellyda Hermawati, Susy Rosyida

PT. Telkom Akses Pontianak operates an Inventory information system. During the study, several findings emerged: gaps in information on material availability, an ineffective process for recording outgoing goods that affects the company's periodic reports, and underutilization of existing human resources. These problems motivated an audit of the information system in use. The audit follows the COBIT 5 framework using the MEA domain, yielding capability levels for each MEA sub-domain together with a gap analysis. The capability values are 3.83 for sub-domain MEA 01, 3.60 for MEA 02, and 3.69 for MEA 03, averaging 3.70, which corresponds to the Predictable Process level, meaning the audited processes run within defined limits to achieve their goals. The gap analysis yields 1.2 for sub-domain MEA 01, 1.4 for MEA 02, and 1.3 for MEA 03, averaging 1.3, meaning the company still needs to improve its Inventory information system to deliver optimal results for all stakeholders.

Electronic computers. Computer science, Management information systems
DOAJ Open Access 2022
Strategic guidelines for the development of enterprises of the construction sector

Nikolay Chepachenko, Marina Yudenko, Anna Gospodinova et al.

The current trend of globalization of the world economy necessitates the use of high-tech developments and innovations that allow achieving strategic goals at the national, regional, and sectoral levels. The prerequisites of the study are determined by the urgency of finding solutions to problematic issues of forming and implementing priority strategic guidelines for the development of enterprises of the construction sector, designed to ensure an adequate contribution to the strategic vector of advanced industrial, technological, and socio-economic development of the construction industry and the national economy. This determines the need to form and implement priority strategic guidelines for the development of enterprises, mainly by increasing the technological and innovative potentials that constitute the economic potential of enterprises in the "Construction" activity. The purpose of the study is to identify strategic guidelines for the development of enterprises of the construction sector that meet the targets of the fourth scientific and technological revolution and the achievement of strategic goals for the development of national economies. The findings reveal the key signs of development inherent in the nature of material objects and economic entities. This allowed us to propose a systematization of the formation of priority strategic guidelines for the economic development of construction enterprises, reflecting their relationship with the targets for achieving national goals and strategic objectives for the development of various countries' economies and meeting the targets of the fourth scientific and technological revolution, Industry 4.0. The practical implications refer to enterprises of the construction sector.

Electronic computers. Computer science, Economics as a science

Page 7 of 31605