Results for "Computer applications to medicine. Medical informatics"
Showing 20 of ~12,371,377 results · from CrossRef, DOAJ, Semantic Scholar
In this issue of npj Digital Medicine, Abramoff and colleagues report the findings from a prospective study that evaluates the performance of a diabetic retinopathy diagnostic system (IDx-DR) in a primary care setting. This represents an important clinical milestone as, in April 2018, these results were used to form the basis for FDA approval of the system, thus becoming the first fully autonomous AI-based system approved for marketing in the USA. Given the potentially transformative potential of AI for healthcare (in particular a technique referred to as “deep learning”)—but also its associated hype—this lays an important foundation for future translation of such technologies to routine clinical practice. Deep learning uses artificial neural networks—so-called because of their superficial resemblance to biological neural networks—as computational models to discover intricate structure in large, high-dimensional datasets. Although first espoused in the 1980s, deep learning has come to prominence in recent years, driven in large part by the power of graphics processing units (GPUs) originally developed for video gaming, cloud computing, and the increasing availability of large, carefully annotated datasets. Since 2012, deep learning has brought seismic changes to the technology industry, with major breakthroughs in areas as diverse as image and speech recognition, natural language translation, robotics, and even self-driving cars. In 2015, Scientific American listed deep learning as one of their ‘world changing’ ideas for the year. Deep learning is particularly well suited to image classification tasks and so has huge potential in medical imaging applications— scans, slides, skin lesions and the patterns in medical practice that occur frequently and are associated with screening, triage, diagnosis, and monitoring. A number of recent research studies have demonstrated this potential in multiple domains, albeit in retrospective in silico settings. 
The work reported by Abramoff et al. is an important milestone as the first of its kind to be performed in a prospective real-world clinical environment, and using a product that will be commercially available rather than a research prototype. The need for external validation studies is well recognized in the machine learning community; however, there may be less awareness of the additional specific value provided by a prospective clinical study, as well as the time, effort, and considerable costs that such studies entail. Prospective, noninterventional studies, such as that described by Abramoff and colleagues, will likely be fundamental to addressing questions about automated diagnosis efficacy. However, such studies will not address the issue of clinical effectiveness: do patients directly benefit from the use of such AI systems? In the case of diabetic retinopathy, the question might be: do patients ultimately have good, or at least non-inferior, visual outcomes when this system is used? This is not a trivial point. Computer-aided detection (CAD) systems for mammography were approved by the FDA in 1998, and by 2008 74% of all screening mammograms in the Medicare population were interpreted using this technology. However, nearly 20 years later a large study concluded "CAD does not improve diagnostic accuracy of mammography and may result in missed cancers. These results suggest that insurers pay more for computer-aided detection with no established benefit to women." To properly address this issue, prospective interventional studies should be required. Of course, such randomized clinical trials may not be feasible or warranted in every case; however, it will be incumbent on the clinical community to engage with this question. A further important point is that, historically, diagnostic accuracy studies have often been poorly reported.
With the likely further clinical translation of AI systems, it will become increasingly important for STARD, and other trial reporting guidelines, to be both followed and regularly updated. The clinical research community also has blind spots. In particular, there is a lack of awareness of the so-called 'AI chasm': the gulf between developing a scientifically sound algorithm and deploying it in any meaningful real-world application. It is one thing to develop an algorithm that works well on a small dataset from a specific population; it is quite another to develop one that will generalize to other populations and across different imaging modalities. There is also a large gulf between the experimental code produced for a proof-of-concept research study and the eventual code to be used in a product with regulatory approvals. The latter constitutes a medical device and so must typically be rewritten from the ground up, with a quality management system in place, and in compliance with Good Manufacturing Practice. The time, expertise, and expense associated with this can be considerable and likely not possible for clinicians without an industry partner or other significant commercial support. It is also important to highlight that many aspects of the regulatory processes for AI are still evolving and that there is uncertainty about the implications of this, both for the planning of clinical trials and for commercial development. Firstly, it is worth explicitly pointing out a prevalent misconception about AI diagnostic systems. Although these systems typically learn by being trained on large amounts of labelled images, at some point this process is stopped and diagnostic thresholds are set. In the work by Abramoff and colleagues, the software was locked prior to the clinical trial; after this point, the software behaves in a similar fashion to non-AI diagnostic systems. That is to say, the auto-didactic aspect of the algorithm is no longer doing 'on the job' learning.
It may be some years before clinical trial methodologies and regulatory frameworks have evolved to deal with algorithms capable of learning on a case-by-case basis in a real-world setting. Secondly, it is worth highlighting that IDx-DR was reviewed under the FDA's De Novo premarket review pathway. This is a regulatory pathway for low- to moderate-risk devices that are novel and for which there is no legally marketed predicate device. The bar for subsequent approval of diabetic retinopathy AI diagnostic systems is likely to be higher. While this study is undoubtedly a milestone, and an important benchmark for future research, it is also important to touch on
Guangdong FU, Lifeng PENG, Zhihao ZHANG et al.
This research utilizes a deep learning-based image generation algorithm to generate pseudo-sagittal STIR sequences from sagittal T1WI and T2WI MR images. The evaluations include both subjective assessments by two physicians and objective analyses, measuring image quality through signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) in regions of interest (ROIs) of five different tissues. Further analyses, including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and COR, establish a strong correlation between the generated STIR sequences and the gold standard, with Bland-Altman analysis indicating pixel consistency. The findings indicate that the deep learning-generated STIR sequences not only align with but potentially surpass the gold standard in terms of image quality and clinical diagnostic capabilities. Moreover, the approach demonstrates promise for clinical implementation, offering reduced scan time and enhanced imaging efficiency.
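The fidelity metrics named in this abstract (MAE, PSNR) are standard and easy to state concretely. As a minimal sketch, not the authors' evaluation pipeline, the two can be computed over a pair of images like so (the toy 2x2 arrays are illustrative):

```python
import numpy as np

def mae(ref, gen):
    """Mean absolute error between a reference and a generated image."""
    return float(np.mean(np.abs(ref.astype(float) - gen.astype(float))))

def psnr(ref, gen, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((ref.astype(float) - gen.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))

# Toy 2x2 "images": identical except one pixel off by 4.
ref = np.array([[10, 20], [30, 40]], dtype=np.uint8)
gen = np.array([[10, 20], [30, 44]], dtype=np.uint8)
print(mae(ref, gen))            # 1.0
print(round(psnr(ref, gen), 2)) # ~42.11 dB
```

SSIM is more involved (windowed means, variances, and covariance); library implementations such as those in scikit-image are typically used rather than hand-rolled code.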
Soumil Jain, Megan Armstrong, John Luna et al.
Alexander Brehmer, Christopher Martin Sauer, Jayson Salazar Rodríguez et al.
Background: FHIR (Fast Healthcare Interoperability Resources) has been proposed to enable health data interoperability. So far, its applicability has been demonstrated for selected research projects with limited data. Objective: This study aimed to design and implement a conceptual medical intelligence framework to leverage real-world care data for clinical decision-making. Methods: A Python package for the use of multimodal FHIR data (FHIRPACK [FHIR Python Analysis Conversion Kit]) was developed and pioneered in 5 real-world clinical use cases: myocardial infarction, stroke, diabetes, sepsis, and prostate cancer. Patients were identified based on ICD-10 (International Classification of Diseases, Tenth Revision) codes, and outcomes were derived from laboratory tests, prescriptions, procedures, and diagnostic reports. Results were provided as browser-based dashboards. Results: For 2022, a total of 1,302,988 patient encounters were analyzed. (1) Myocardial infarction: in 72.7% (261/359) of cases, medication regimens fulfilled guideline recommendations. (2) Stroke: of 1277 patients, 165 received thrombolysis and 108 thrombectomy. (3) Diabetes: in 443,866 serum glucose and 16,180 glycated hemoglobin A1c measurements from 35,494 unique patients, the prevalence of dysglycemic findings was 39% (13,887/35,494). Among those with dysglycemia, a diagnosis was coded in 44.2% (6138/13,887) of the patients. (4) Sepsis: in 1803 patients, Staphylococcus epidermidis was the most frequently isolated pathogen (773/2672, 28.9%) and piperacillin-tazobactam the most frequently prescribed antibiotic (593/1593, 37.2%). (5) Prostate cancer: of 54 patients who received radical prostatectomy, three were identified as cases with prostate-specific antigen persistence or biochemical recurrence. Conclusions: Leveraging FHIR data through large-scale analytics can enhance health care quality and improve patient outcomes across 5 clinical specialties.
We identified (1) patients with sepsis requiring less broad antibiotic therapy, (2) patients with myocardial infarction who could benefit from statin and antiplatelet therapy, (3) patients who had a stroke with longer than recommended times to intervention, (4) patients with hyperglycemia who could benefit from specialist referral, and (5) patients with prostate cancer with early increases in cancer markers.
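FHIRPACK's own API is not shown in the abstract, but the core operation it describes, identifying patients from ICD-10-coded Condition resources, can be sketched against the standard FHIR searchset Bundle structure. The Bundle, codes, and function name below are illustrative, not taken from the package:

```python
def patients_with_icd10(bundle, code_prefix):
    """Collect subject references from a FHIR searchset Bundle of Condition
    resources whose ICD-10 coding starts with `code_prefix`
    (e.g. 'I21' for acute myocardial infarction)."""
    hits = set()
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "Condition":
            continue
        for coding in resource.get("code", {}).get("coding", []):
            if coding.get("code", "").startswith(code_prefix):
                hits.add(resource.get("subject", {}).get("reference"))
    return hits

# Minimal hand-built Bundle with two Condition entries.
bundle = {
    "resourceType": "Bundle", "type": "searchset",
    "entry": [
        {"resource": {"resourceType": "Condition",
                      "code": {"coding": [{"system": "http://hl7.org/fhir/sid/icd-10",
                                           "code": "I21.9"}]},
                      "subject": {"reference": "Patient/1"}}},
        {"resource": {"resourceType": "Condition",
                      "code": {"coding": [{"code": "E11.9"}]},
                      "subject": {"reference": "Patient/2"}}},
    ],
}
print(patients_with_icd10(bundle, "I21"))  # {'Patient/1'}
```

In practice such Bundles would be paged from a FHIR server's search endpoint; the dictionary traversal above is the part that stays the same.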
Vuokko Heikinheimo, Maija Tiitu, Arto Viinikka
Access to green spaces in urban regions is vital for the well-being of citizens. In this article, we present data on green space quality and path distances to different types of green spaces. The path distances represent green space accessibility using active travel modes (walking, cycling). The path distances were calculated using the pedestrian street network across the seven largest urban regions in Finland. We derived the green space typology from the Urban Atlas Data that is available across functional urban areas in Europe and enhanced it with national data on water bodies, conservation areas and recreational facilities and routes from Finland. We extracted the walkable street network from OpenStreetMap and calculated shortest paths to different types of green spaces using open-source Python programming tools. Network distances were calculated up to ten kilometers from each green space edge and the distances were aggregated into a 250 m × 250 m statistical grid that is interoperable with various statistical data from Finland. The geospatial data files representing the different types of green spaces, network distances across the seven urban regions, as well as the processing and analysis scripts are shared in an open repository. These data offer actionable information about green space accessibility in Finnish urban regions and support the integration of green space quality and active travel modes into further research and planning activities.
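The accessibility computation described here, shortest network paths from green space edges with a 10 km cutoff, is essentially single-source Dijkstra with a distance limit. As a hedged sketch (the study used OpenStreetMap tooling; the toy adjacency dict below stands in for the pedestrian street network):

```python
import heapq

def network_distances(graph, source, cutoff):
    """Single-source shortest path lengths (Dijkstra), dropping nodes
    farther than `cutoff`, mirroring the 10 km limit in the study.
    `graph` maps node -> {neighbor: edge_length_in_meters}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, {}).items():
            nd = d + w
            if nd <= cutoff and nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Toy street network (edge lengths in meters); 'park' is a green space edge.
streets = {
    "park": {"a": 200.0}, "a": {"park": 200.0, "b": 300.0},
    "b": {"a": 300.0, "c": 12000.0}, "c": {"b": 12000.0},
}
print(network_distances(streets, "park", cutoff=10000.0))
```

Node 'c' falls outside the cutoff and is excluded, just as grid cells more than 10 km from any green space edge would carry no distance value.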
Luca Cosmo, Anees Kazi, Seyed-Ahmad Ahmadi et al.
Recently, Graph Convolutional Networks (GCNs) have proven to be a powerful machine learning tool for Computer Aided Diagnosis (CADx) and disease prediction. A key component in these models is to build a population graph, where the graph adjacency matrix represents pair-wise patient similarities. Until now, the similarity metrics have been defined manually, usually based on meta-features like demographics or clinical scores. The definition of the metric, however, needs careful tuning, as GCNs are very sensitive to the graph structure. In this paper, we demonstrate for the first time in the CADx domain that it is possible to learn a single, optimal graph towards the GCN’s downstream task of disease classification. To this end, we propose a novel, end-to-end trainable graph learning architecture for dynamic and localized graph pruning. Unlike commonly employed spectral GCN approaches, our GCN is spatial and inductive, and can thus infer previously unseen patients as well. We demonstrate significant classification improvements with our learned graph on two CADx problems in medicine. We further explain and visualize this result using an artificial dataset, underlining the importance of graph learning for more accurate and robust inference with GCNs in medical applications.
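The manually defined population graphs that this paper argues against can be made concrete. A common hand-crafted recipe (illustrative only, not the authors' learned approach) scores patient pairs by how many discrete meta-features they share, prunes weak links, and normalizes rows:

```python
import numpy as np

def population_adjacency(meta, threshold):
    """Hand-crafted patient-similarity graph: the edge weight counts the
    discrete meta-features (e.g. sex, scanner site, age bin) two patients
    share; links at or below `threshold` are pruned, rows normalized."""
    n = meta.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                adj[i, j] = np.sum(meta[i] == meta[j])
    adj[adj <= threshold] = 0.0
    row_sums = adj.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # isolated patients keep a zero row
    return adj / row_sums

# Three patients, two meta-features each (e.g. sex, scanner site).
meta = np.array([[0, 1], [0, 1], [1, 1]])
A = population_adjacency(meta, threshold=1.5)
print(A)  # patients 0 and 1 are linked; patient 2 is isolated
```

The sensitivity the authors note is visible even here: shifting `threshold` rewires the graph discontinuously, which is exactly what motivates learning the structure end to end instead.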
Mohammad A.Y. Alqudah, Akram Al-Nosairy, Karem H. Alzoubi et al.
Background: Uncontrolled diabetes mellitus (DM) is accompanied by progressive cognitive deterioration mediated by neurodegeneration due to chronic hyperglycemia and subsequent oxidative damage. Oxidative stress damage due to high blood glucose is associated with functional and structural changes in the hippocampus, responsible for cognitive properties, leading to cognitive impairment. Edaravone is a potent antioxidant with neuroprotective properties that has mainly been used to treat amyotrophic lateral sclerosis and has been tested in many models associated with cognition deficits. Our study aimed to assess edaravone's potential neuroprotective activity to reverse memory impairment in a streptozotocin (STZ)-induced diabetic rat model. Methods: DM was induced by a single 50 mg/kg STZ intraperitoneal injection. Rats received edaravone intraperitoneally at 6 mg/kg/day, six days/week, for four weeks. The radial arm water maze behavioral test evaluated both learning and memory. Oxidative stress was evaluated by molecularly measuring hippocampus enzyme activities and biomarker levels. Results: Impairment of short- and long-term memory was observed in rats with STZ-induced diabetes, accompanied by decreased hippocampal superoxide dismutase and glutathione peroxidase activities and reduced/oxidized glutathione ratio. Edaravone significantly attenuated memory impairment (p < 0.05) and normalized oxidative stress biomarker levels (p < 0.05). Furthermore, STZ significantly increased hippocampal thiobarbituric acid reactive substance activity, which was also normalized by edaravone treatment (p < 0.05). However, STZ and edaravone did not affect catalase and brain-derived neurotrophic factor levels (p > 0.05). Conclusion: Edaravone prevented STZ-induced memory impairment and attenuated oxidative stress likely by restoring hippocampus antioxidant mechanisms.
Aristeidis Litos, Evangelia Intze et al.
Microbial time-series analysis typically examines the abundances of individual taxa over time and attempts to assign etiology to observed patterns. This approach assumes homogeneous groups in terms of profiles and response to external effectors. These assumptions are not always fulfilled, especially in complex natural systems like the microbiome of the human gut. It is well established that humans with otherwise the same demographic or dietary backgrounds can have distinct microbial profiles. We suggest an alternative approach to the analysis of microbial time-series, based on the following premises: 1) microbial communities are organized in distinct clusters of similar composition at any time point, 2) these intrinsic subsets of communities could have different responses to the same external effects, and 3) the fate of the communities is largely deterministic given the same external conditions. Therefore, tracking the transition of communities, rather than individual taxa, across these states can enhance our understanding of the ecological processes and allow the prediction of future states by incorporating applied effects. We implement these ideas into Cronos, an analytical pipeline written in R. Cronos' inputs are a microbial composition table (e.g., OTU table), their phylogenetic relations as a tree, and the associated metadata. Cronos detects the intrinsic microbial profile clusters at all time points, describes them in terms of composition, and records the transitions between them. Cluster assignments, combined with the provided metadata, are used to model the transitions and predict samples' fate under various effects. We applied Cronos to available data from growing infants' gut microbiomes, and we observe two distinct trajectories corresponding to breastfed and formula-fed infants that eventually converge to profiles resembling those of mature individuals. Cronos is freely available at https://github.com/Lagkouvardos/Cronos.
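The state-transition bookkeeping at the heart of this idea is simple to sketch. Cronos is written in R and its internals are not shown here; the following is an illustrative Python stand-in that tabulates empirical transition probabilities between cluster labels at consecutive time points (labels and trajectories are made up):

```python
from collections import Counter

def transition_matrix(state_series):
    """Empirical transition probabilities between cluster labels observed
    at consecutive time points (the community 'states' being tracked)."""
    counts = Counter()
    totals = Counter()
    for series in state_series:
        for a, b in zip(series, series[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return {pair: c / totals[pair[0]] for pair, c in counts.items()}

# Toy trajectories: two infants' cluster labels over four time points.
series = [["breastfed", "breastfed", "mixed", "mature"],
          ["formula", "mixed", "mixed", "mature"]]
T = transition_matrix(series)
print(T)
```

Given such a matrix, the "largely deterministic fate" premise amounts to high-probability rows, and prediction of a sample's next state reduces to following the most likely transition, optionally conditioned on metadata.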
Dujuan Li, Caixia Chen
Abstract Purpose: Surface electromyography (sEMG) is vulnerable to environmental interference, with a low recognition rate and poor stability. Electrocardiogram (ECG) signals, which carry rich information, were therefore fused with sEMG to improve the recognition rate of fatigue assessment during rehabilitation. Methods: Twenty subjects performed 150 min of Pilates rehabilitation exercise, during which ECG and sEMG signals were collected simultaneously. After necessary preprocessing, a classification model based on an improved particle swarm optimization support vector machine (IPSO-SVM) fusing sEMG and ECG data was established to identify three different fatigue states (Relaxed, Transition, Tired). The model effects of different classification algorithms (BPNN, KNN, LDA) and different fused data types were compared. Results: IPSO-SVM had obvious advantages in classifying the fused sEMG and ECG signals, with an average recognition rate of 87.83%. The recognition rates of the sEMG and ECG fusion feature classification models were 94.25%, 92.25%, and 94.25%. Recognition accuracy and model performance were significantly improved. Conclusion: The sEMG and ECG signals after feature fusion form a complementary mechanism, and IPSO-SVM can accurately detect the fatigue state during Pilates rehabilitation. On the same model, the recognition effect of fusing sEMG and ECG (Relaxed: 98.75%, Transition: 92.25%, Tired: 94.25%) is better than that of using the sEMG or ECG signal alone. This study provides technical support for developing relevant man–machine devices and improving the safety of Pilates rehabilitation.
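Feature-level fusion of the kind described, concatenating per-sample sEMG and ECG feature vectors before classification, can be illustrated with a toy example. A nearest-centroid classifier stands in for the IPSO-SVM here purely to keep the sketch self-contained; the feature values are invented:

```python
import numpy as np

def fuse(semg_feats, ecg_feats):
    """Feature-level fusion: concatenate per-sample sEMG and ECG features."""
    return np.hstack([semg_feats, ecg_feats])

def nearest_centroid_fit(X, y):
    """One centroid per fatigue-state label."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    """Assign each sample to the label of its closest centroid."""
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[l], axis=1) for l in labels])
    return [labels[i] for i in dists.argmin(axis=0)]

# Toy features: one sEMG feature (e.g. median frequency) and one ECG
# feature (e.g. heart rate) per sample, two fatigue states.
semg = np.array([[0.1], [0.2], [0.9], [1.0]])
ecg = np.array([[70.0], [72.0], [95.0], [99.0]])
y = np.array(["Relaxed", "Relaxed", "Tired", "Tired"])
model = nearest_centroid_fit(fuse(semg, ecg), y)
pred = nearest_centroid_predict(model, fuse(np.array([[0.15]]), np.array([[71.0]])))
print(pred)  # ['Relaxed']
```

The complementarity argument in the abstract corresponds to the fused vector carrying independent evidence from both modalities, so a sample ambiguous in one feature can still be separated by the other.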
Wan Mohd Azam Wan Mohd Yunus, Hanna-Maria Matinolli, Otto Waris et al.
Background: Studies have shown a high prevalence of depression during pregnancy, and there is also evidence that cognitive behavioral therapy (CBT) is one of the most effective psychosocial interventions. Emerging evidence from randomized controlled trials (RCTs) has shown that technology has been successfully harnessed to provide CBT interventions for other populations. However, very few studies have focused on their use during pregnancy. This approach has become increasingly important in many clinical areas due to the COVID-19 pandemic, and our study aimed to expand the knowledge in this particular clinical area. Objective: Our systematic review aimed to bring together the available research-based evidence on digitalized CBT interventions for depression symptoms during pregnancy. Methods: A systematic review of the Web of Science, Cochrane Central Register of Controlled Trials, CINAHL, MEDLINE, Embase, PsycINFO, Scopus, ClinicalTrials.gov, and EBSCO Open Dissertations databases was carried out from the earliest available evidence to October 27, 2021. Only RCT studies published in English were considered. The PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines were followed, and the protocol was registered on the Prospective Register of Systematic Reviews. The risk of bias was assessed using the revised Cochrane risk-of-bias tool for randomized trials. Results: The review identified 7 studies from 5 countries (the United States, China, Australia, Norway, and Sweden) published from 2015 to 2021. The sample sizes ranged from 25 to 1342 participants. The interventions used various technological elements, including text, images, videos, games, interactive features, and peer group discussions. They comprised 2 guided and 5 unguided approaches.
Using digitalized CBT interventions for depression during pregnancy showed promising efficacy, with the guided intervention showing higher effect sizes (Hedges g=1.21) than the unguided interventions (Hedges g=0.14-0.99). The acceptability of the digitalized CBT interventions was highly encouraging, based on user feedback. Attrition rates were low for the guided intervention (4.5%) but high for the unguided interventions (22.1%-46.5%). A high overall risk of bias was present for 6 of the 7 studies. Conclusions: Our search only identified a small number of digitalized CBT interventions for pregnant women, despite the potential of this approach. These showed promising evidence regarding efficacy and positive outcomes for depression symptoms, and user feedback was positive. However, the overall risk of bias suggests that the efficacy of the interventions needs to be interpreted with caution. Future studies need to consider how to mitigate these sources of bias. Digitalized CBT interventions can provide prompt, effective, evidence-based interventions for pregnant women. This review increases our understanding of the importance of digitalized interventions during pregnancy, including during the COVID-19 pandemic. Trial Registration: PROSPERO International Prospective Register of Systematic Reviews CRD42020216159; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=216159
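The Hedges g values quoted above are Cohen's d scaled by a small-sample correction factor; for two independent groups the standard formula is straightforward to compute (the means, SDs, and sample sizes below are invented for illustration, not taken from the review):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges g: Cohen's d with the small-sample correction
    J = 1 - 3 / (4*(n1 + n2) - 9)."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Example: intervention mean 10 (SD 2, n=30) vs control mean 8 (SD 2, n=30).
g = hedges_g(10, 2, 30, 8, 2, 30)
print(round(g, 3))  # 0.987
```

With equal SDs the pooled SD is just 2, so Cohen's d is 1.0, and the correction shrinks it slightly; the correction matters most in the small trials (n as low as 25) that this review includes.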
Rohan Khera, Bobak J. Mortazavi, Veer Sangha et al.
Abstract Diagnosis codes are used to study SARS-CoV-2 infections and COVID-19 hospitalizations in administrative and electronic health record (EHR) data. Using EHR data (April 2020–March 2021) at the Yale-New Haven Health System and the three hospital systems of the Mayo Clinic, computable phenotype definitions based on an ICD-10 diagnosis of COVID-19 (U07.1) were evaluated against positive SARS-CoV-2 PCR or antigen tests. We included 69,423 patients at Yale and 75,748 at Mayo Clinic with either a diagnosis code or a positive SARS-CoV-2 test. The precision and recall of a COVID-19 diagnosis for a positive test were 68.8% and 83.3%, respectively, at Yale, with higher precision (95%) and lower recall (63.5%) at Mayo Clinic, varying from 59.2% in Rochester to 97.3% in Arizona. For hospitalizations with a principal COVID-19 diagnosis, 94.8% at Yale and 80.5% at Mayo Clinic had an associated positive laboratory test, with a secondary diagnosis of COVID-19 identifying additional patients. These patients had a twofold higher in-hospital mortality than those identified by a principal diagnosis. Standardization of coding practices is needed before the use of diagnosis codes in clinical research and epidemiological surveillance of COVID-19.
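The precision and recall figures here treat the positive laboratory test as the reference standard for the diagnosis-code phenotype. The computation itself is simple to spell out; the counts below are hypothetical, not the study's:

```python
def precision_recall(n_code_and_test, n_code_only, n_test_only):
    """Precision and recall of a diagnosis-code phenotype against a
    positive laboratory test as the reference standard.
    n_code_and_test: patients with both the code and a positive test
    n_code_only:     patients with the code but no positive test
    n_test_only:     patients with a positive test but no code"""
    precision = n_code_and_test / (n_code_and_test + n_code_only)
    recall = n_code_and_test / (n_code_and_test + n_test_only)
    return precision, recall

# Hypothetical counts: 500 with code + positive test,
# 100 with code only, 150 with positive test only.
p, r = precision_recall(500, 100, 150)
print(round(p, 3), round(r, 3))  # 0.833 0.769
```

The Yale versus Mayo contrast then reads directly off these two ratios: a site that codes conservatively gains precision (fewer code-only patients) at the cost of recall (more test-only patients missed by the code).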
Brenton Button, Clare Cook, James Goertzen et al.
Background: Providing feedback to medical learners is a critical educational activity. Despite recognition of its importance, most research has focused on training preceptors to give feedback, which neglects the role of learners in receiving feedback. Delivering a combined professional development session for both preceptors and students may facilitate more effective feedback communication and improve both the quality and quantity of feedback. Objective: The objective of our research project is to examine the impact of a relational feedback intervention on both preceptors and students during a longitudinal integrated clerkship. Methods: Students and preceptors will attend a 2.5-hour combined professional development session, wherein they will be provided with educational tools for giving and receiving feedback within a coaching relationship and will practice feedback giving and receiving skills together. Before the combined session, students will be asked to participate in a 1-hour preparation session that will provide an orientation on their role in receiving feedback and their participation in the combined session. Students and preceptors will be asked to complete surveys immediately before and after the combined session. Preceptors will be asked to complete a follow-up assessment survey, and students will be asked to participate in a follow-up, student-only focus group. Anonymized clinical faculty teaching evaluations and longitudinal integrated clerkship program evaluations will also be used to assess the impact of the intervention. Results: As of March 1, 2022, a total of 66 preceptors and 29 students have completed the baseline and follow-up measures. Data collection is expected to conclude in December 2023. Conclusions: Our study is designed to contribute to the literature on the feedback process between preceptors and students within a clinical setting.
Including both the preceptors and the students in the same session will improve on the work that has already been conducted in this area, as the students and preceptors can further develop their relationships and co-construct feedback conversations. We will use social learning theory to interpret the results of our study, which will help us explain the results and potentially make the work generalizable to other fields. International Registered Report Identifier (IRRID): DERR1-10.2196/32829
Zhe Yang, Kun Jiang, Miaomiao Lou et al.
Abstract Background: Health data from different specialties or domains generally have diverse formats and meanings, which can cause semantic communication barriers when these data are exchanged among heterogeneous systems. As such, this study is intended to develop a national health concept data model (HCDM) and a corresponding system to facilitate healthcare data standardization and centralized metadata management. Methods: Based on 55 data sets (4640 data items) from 7 health business domains in China, a bottom-up approach was employed to build the structure and metadata for HCDM by referencing the HL7 RIM. According to ISO/IEC 11179, a top-down approach was used to develop and standardize the data elements. Results: HCDM adopted a three-level architecture of class, attribute, and data type, and consisted of 6 classes and 15 sub-classes. Each class had a set of descriptive attributes, and every attribute was assigned a data type. A total of 100 initial data elements (DEs) were extracted from HCDM, and 144 general DEs were derived from the corresponding initial DEs. Domain DEs were transformed by specializing general DEs using 12 controlled vocabularies, which were developed from HL7 vocabularies and actual health demands. A model-based system was successfully established to evaluate and manage the NHDD. Conclusions: HCDM provided a unified metadata reference for multi-source data standardization and management. This approach to defining health data elements was a feasible solution for healthcare information standardization, enabling healthcare interoperability in China.
Zhongan Zhang, Xu Zheng, Kai An et al.
Background: The China Hospital Information Network Conference (CHINC) is one of the most influential academic and technical exchange activities in medical informatics and medical informatization in China. It collects frontier ideas in medical information and has important reference value for analyzing the development of China's medical information industry. Objective: This study summarizes the current situation and future development of China's medical information industry and provides a reference for China and abroad by analyzing the characteristics of CHINC exhibitors in 2021. Methods: The list of enterprises and participating keywords were obtained from the official website of CHINC. Basic characteristics of the enterprises, industrial fields, applied technologies, company concepts, and other information were collected from the TianYanCha website and the VBDATA company library. Descriptive analysis was used to analyze the collected data, and we summarized future development directions. Results: A total of 205 enterprises officially participated in the exhibition. Most of the enterprises were newly founded; 61.9% (127/205) were founded in the past 10 years. The majority of these enterprises were from first-tier cities, and 79.02% (162/205) were from Beijing, Zhejiang, Guangdong, Shanghai, and Jiangsu Provinces. The median registered capital was 16.67 million RMB (about US $2.61 million), and there were 35 (72.2%) enterprises with a registered capital of more than 100 million RMB (about US $15.68 million), 17 (8.3%) of which were already listed. A total of 126 enterprises were found in the VBDATA company library, of which 39 (30.9%) were information technology vendors and 57 (45.2%) were application technology vendors. In addition, 16 of the 57 (28%) used artificial intelligence technology. Smart medicine and internet hospitals were the focus of the enterprises participating in this conference.
Conclusions: China's tertiary hospital informatization has basically completed the primary stage of construction. The average grade of hospital electronic medical records exceeds grade 3, and 78.13% of the provinces have reached grade 3 or above. The characteristics are as follows. On the one hand, China's medical information industry is focusing on the construction of smart hospitals, including intelligent systems supporting doctors' scientific research, diagnosis-related group intelligent operation systems, office automation systems supporting hospital management, single-disease clinical decision support systems assisting doctors' clinical care, and intelligent internet of things systems for logistics. On the other hand, the construction of compact county medical communities is becoming a new focus of enterprises under the guidance of practical needs and national policies to improve the quality of grassroots health services. In addition, whole-course management and digital therapy will also become new hotspots in the future.
Yahya Aziz, Dewi Wardani
Tina Kumra, Selvi Rajagopal, Kathleen Johnson et al.
Ideal management of chronic disease includes team-based primary care; however, primary care medical staff face a lack of training in nutritional counseling and lifestyle-based prevention. Interactive culinary medicine education has been shown to improve knowledge and confidence among medical students. The aim of this study was to determine whether a culinary medicine curriculum delivered to a multidisciplinary team of primary care medical staff and medical students in a community setting would improve self-reported efficacy in nutritional counseling and whether efficacy differed between participant roles. A 4-hour interactive workshop that took place within the neighborhood of a primary care medical home was delivered to medical staff and students. Participants completed a voluntary questionnaire before and after the workshop that addressed their attitudes and confidence in providing nutritional counseling to patients. Chi-square tests were run to determine statistically significant associations between participant role and survey question responses. Sign rank tests were run to determine whether pre-workshop responses differed significantly from post-workshop responses. Thirteen of seventeen responses related to attitudes and efficacy demonstrated significant improvement after the workshop compared with before. Significant differences noted between roles before the workshop disappeared when the same questions were asked after the workshop. Delivery of a culinary medicine curriculum to a primary care medical home team in a community setting is an innovative opportunity to collaboratively improve nutritional education and counseling in chronic disease prevention.
Anne-Marthe Sanders, Geneviève Richard, Knut Kolskår et al.
Maintaining high levels of daily activity and physical capability has been proposed as an important constituent of healthy brain and cognitive aging. Studies investigating the associations between brain health and physical activity in late life have, however, mainly been based on self-reported data or measures designed for clinical populations. In the current study, we examined cross-sectional associations between physical activity, recorded by an ankle-positioned accelerometer for seven days, physical capability (grip strength, postural control, and walking speed), and neuroimaging-based surrogate markers of brain health in 122 healthy older adults aged 65–88 years. We used a multimodal brain imaging approach offering complementary structural MRI-based indicators of brain health: global white matter fractional anisotropy (FA) and mean diffusivity (MD) based on diffusion tensor imaging, and subcortical and global brain age based on brain morphology inferred from T1-weighted MRI data. In addition, based on the results of the main analysis, a follow-up regression analysis tested for associations between the volume of key subcortical regions of interest (hippocampus, caudate, thalamus, and cerebellum) and daily steps, and a follow-up voxelwise analysis tested for associations between walking speed and FA across the white matter Tract-Based Spatial Statistics (TBSS) skeleton. The analyses revealed a significant association between global FA and walking speed, indicating higher white matter integrity in people with faster gait. Voxelwise analysis supported widespread significant associations. We also found a significant interaction between sex and subcortical brain age on the number of daily steps, indicating younger-appearing brains in more physically active women, with no significant associations among men.
These results provide insight into the intricate associations between different measures of brain and physical health in old age, and corroborate established public health advice promoting physical activity.
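The sex-by-brain-age interaction analysis described above can be sketched as an ordinary least-squares fit with an interaction term. Everything here is a simulation under assumed effect sizes, not the study's data or code:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 122  # matching the study's sample size

sex = rng.integers(0, 2, size=n)               # hypothetical coding: 0 = male, 1 = female
brain_age_gap = rng.normal(0.0, 5.0, size=n)   # predicted minus chronological brain age

# Simulate daily steps with a negative brain-age association in women only.
steps = 8000 - 150 * brain_age_gap * sex + rng.normal(0, 800, size=n)

# Design matrix: intercept, brain-age gap, sex, and their interaction term.
X = np.column_stack([np.ones(n), brain_age_gap, sex, brain_age_gap * sex])
beta, *_ = np.linalg.lstsq(X, steps, rcond=None)

print("interaction coefficient:", round(beta[3], 1))  # recovers the simulated effect (~ -150)
```

A negative interaction coefficient corresponds to the reported pattern: younger-appearing brains (lower brain-age gap) go with more daily steps in women but not in men.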
Leila Bazrafkan, Leila Mohammadinia, Mohammad Nikrou et al.
Background: Cost assessment with modern costing systems can be conducive to the efficiency of education. In this study, activity-based costing was applied to calculate the cost of in-person training for undergraduate nursing students and to evaluate the adjustable cost under virtual education. Methods: This was a descriptive, applied study using cross-sectional economic analysis based on an activity-based costing system. The statistical population comprised undergraduate nursing students in Lamerd, Iran. Data analysis was performed on data from a 4-year undergraduate program (2012 to 2016), compiled in three categories: educational costs, support costs, and cultural-welfare expenditure. Costs for face-to-face education served as the reference for comparison, and the adjustable costs of the alternative e-learning approach were then measured. Results: The total training cost at Lamerd Nursing School over the four-year program was US$382,761. The per capita cost for this period was US$13,693: US$8,659 for educational activities, US$3,933 for support, and US$1,077 for welfare. The highest per capita cost was therefore in education and the lowest in welfare. The curriculum cost of face-to-face training was calculated at US$664. Finally, virtual education was found to reduce the total cost by US$156,199. Conclusion: Activity-based costing is a new model that helps restructure the financial systems of universities, enabling senior management to make informed decisions about adjusting educational activities. One such decision is comparing the cost-effectiveness of in-person paramedical training with that of virtual training, which is especially important when adopting modern educational models.
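The reported per capita breakdown can be checked with simple arithmetic. The figures below are taken from the abstract; note that the three components sum to US$13,669, slightly below the reported US$13,693 total, presumably due to rounding in the original figures:

```python
# Per capita cost components reported in the abstract (US$).
costs = {
    "education": 8659,
    "support": 3933,
    "welfare": 1077,
}

# Sum of the components and each category's share of that sum.
per_capita_total = sum(costs.values())
print(f"per capita total: US${per_capita_total:,}")

for category, amount in costs.items():
    print(f"{category}: {amount / per_capita_total:.1%}")
```

This kind of per-activity decomposition is the core output of an activity-based costing exercise: each cost is traced to the activity that drives it before per capita shares are compared.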