Soil Organic Matter
S. Mandi, Somanath Nayak, Y. Shivay
et al.
February 01, 2021. This report summarizes some of the research activities conducted by the Minnesota Office for Soil Health, in partnership with University Extension and the Department of Soil, Water, and Climate, funded by a 3-year Conservation Innovation Grant from the Natural Resources Conservation Service (NRCS) with support from the Board of Water and Soil Resources. A key objective of this project is to gather representative soil health data from working farms with a range of locations, soil types, and management practices. These data will serve as an important baseline to help us evaluate the effectiveness of specific soil health tests and interpret data in light of relevant regional soil conditions in Minnesota. From 2019–2020 we collected >500 soil samples from a total of 27 participating farms across Minnesota. This report includes preliminary year 1 (2019) data for the following soil tests: • Soil organic matter % • pH • Phosphorus (P) • Potassium (K) • Potentially mineralizable nitrogen (PMN) • Soil respiration (potentially mineralizable carbon) (PMC)
805 citations
Environmental Science
Recommendations for blood pressure measurement in humans and experimental animals: Part 1: blood pressure measurement in humans: a statement for professionals from the Subcommittee of Professional and Public Education of the American Heart Association Council on High Blood Pressure Research.
T. Pickering, J. Hall, L. Appel
et al.
Validity and reliability of the Edmonton Frail Scale
D. Rolfson, S. Majumdar, R. Tsuyuki
et al.
Physician Office Visits for Low Back Pain: Frequency, Clinical Evaluation, and Treatment Patterns From a U.S. National Survey
L. Hart, R. Deyo, Daniel C. Cherkin
et al.
Guidelines on genetic evaluation and management of Lynch syndrome: a consensus statement by the US Multi-Society Task Force on Colorectal Cancer.
F. Giardiello, John I. Allen, J. Axilbund
et al.
The integration of competences for sustainable development in higher education: an analysis of bachelor programs in management
W. Lambrechts, Ingrid Mulà, Kim Ceulemans
et al.
Sola-Visibility-ISPM: Benchmarking Agentic AI for Identity Security Posture Management Visibility
Gal Engelberg, Konstantin Koutsyi, Leon Goldberg
et al.
Identity Security Posture Management (ISPM) is a core challenge for modern enterprises operating across cloud and SaaS environments. Answering basic ISPM visibility questions, such as understanding identity inventory and configuration hygiene, requires interpreting complex identity data, motivating growing interest in agentic AI systems. Despite this interest, there is currently no standardized way to evaluate how well such systems perform ISPM visibility tasks on real enterprise data. We introduce the Sola Visibility ISPM Benchmark, the first benchmark designed to evaluate agentic AI systems on foundational ISPM visibility tasks using a live, production-grade identity environment spanning AWS, Okta, and Google Workspace. The benchmark focuses on identity inventory and hygiene questions and is accompanied by the Sola AI Agent, a tool-using agent that translates natural-language queries into executable data exploration steps and produces verifiable, evidence-backed answers. Across 77 benchmark questions, the agent achieves strong overall performance, with an expert accuracy of 0.84 and a strict success rate of 0.77. Performance is highest on AWS hygiene tasks, where expert accuracy reaches 0.94, while results on Google Workspace and Okta hygiene tasks are more moderate, yet competitive. Overall, this work provides a practical and reproducible benchmark for evaluating agentic AI systems in identity security and establishes a foundation for future ISPM benchmarks covering more advanced identity analysis and governance tasks.
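As a hedged illustration of how benchmark metrics like "expert accuracy" and "strict success rate" can be tallied, the sketch below assumes a three-level grading scheme (full / partial / fail); this scheme is our own assumption for the example, not the benchmark's published rubric.

```python
def score(grades):
    """grades: one of 'full', 'partial', 'fail' per benchmark question.
    Expert accuracy gives partial credit; strict success requires 'full'."""
    n = len(grades)
    expert_accuracy = sum(1.0 if g == "full" else 0.5 if g == "partial" else 0.0
                          for g in grades) / n
    strict_success = sum(g == "full" for g in grades) / n
    return expert_accuracy, strict_success

# Invented toy grading of four questions
expert, strict = score(["full", "full", "partial", "fail"])
```

With partial credit, expert accuracy is always at least the strict success rate, mirroring the relation between the 0.84 and 0.77 figures reported above.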
Camera traps and deep learning enable efficient large‐scale density estimation of wildlife in temperate forest ecosystems
Maik Henrich, Christian Fiderer, Alisa Klamm
et al.
Abstract Automated detectors such as camera traps allow the efficient collection of large amounts of data for the monitoring of animal populations, but data processing and classification are a major bottleneck. Deep learning algorithms have gained increasing attention in this context, as they have the potential to dramatically decrease the time and effort required to obtain population density estimates. However, the robustness of such an approach has not yet been evaluated across a wide range of species and study areas. This study evaluated the application of DeepFaune, an open-source deep learning algorithm for the classification of European animal species, and camera trap distance sampling (CTDS) to a year-round dataset containing 895,019 manually classified photos from 10 protected areas across Germany. For all wild animal species and higher taxonomic groups on which DeepFaune was trained, the algorithm achieved an overall accuracy of 90%. The 95% confidence interval (CI) of the difference between the CTDS estimates based on manual and automated image classification contained zero for all species and seasons with a minimum sample size of 20 independent observations per study area, with two exceptions. Meta-regression revealed an average difference between the classification methods of −0.005 (95% CI: −0.205 to 0.196) animals/km2. Classification success correlated with the divergence of the population density estimates, but false negative and false positive detections had complex effects on the density estimates via different CTDS parameters. Therefore, metrics of classification performance alone are insufficient to assess the effect of deep learning classifiers on the population density estimation process, which should instead be followed through entirely for proper validation. In general, however, our results demonstrate that readily available deep learning algorithms can be used in largely unsupervised workflows for estimating population densities from camera trap data.
Developing and Evaluating an Inpatient Caregiver Support Program: Feasibility, Acceptability, and Perceived Impact
Joan M. Griffin, PhD, Lynne M. Vitagliano, MSW, LCSW, Angela K. Wold, MSW, LMSW
et al.
Objective: To evaluate the feasibility, acceptability, and perceived impact of a hospital-based Caregiver Support Program (CSP) that supports family care partners (FCPs) of seriously ill hospitalized patients by addressing their unmet needs through emotional support, help navigating the hospital, and referrals to trusted and vetted resources. Patients and Methods: The evaluation, conducted from October 15, 2021 to January 30, 2024, was designed using a 2-phase interrupted time series. Phase 1 included prelaunch and postlaunch surveys delivered to staff and FCPs on 2 hospital units. CSP procedures were refined based on phase 1 results, and then new procedures were evaluated in phase 2 on a third unit using the same survey procedures. Feasibility was measured by the number of FCPs seen, total visits, contact hours, and time per visit. Changes in staff and FCP acceptability and perceived care quality were assessed before and after the CSP’s implementation. Results: In phase 1, 253 FCPs received 282 hours of support, and 100% (8/8) of staff at postlaunch with knowledge about the CSP program strongly believed that it benefited patients and FCPs. In phase 2, 88% (53/61) of FCPs rated the program as very or extremely helpful. Care quality improved over time, but differences from prelaunch to postlaunch were not statistically significant. Conclusion: The CSP is a feasible and acceptable approach to support FCPs that is highly valued by both staff and FCPs. Program expansion may benefit both FCPs and staff by improving care quality, including communication between families, patients, and staff, and access to vetted support resources.
SHAP Stability in Credit Risk Management: A Case Study in Credit Card Default Model
Luyun Lin, Yiqing Wang
The rapid development of the consumer credit card market brings substantial regulatory and risk management challenges. Applications of advanced machine learning models raise concerns about model transparency and fairness for both financial institutions and regulators. In this study, we evaluate the consistency of a commonly used Explainable AI (XAI) technique, SHAP, for variable explanation in credit card probability-of-default models via a case study on credit card default prediction. The study shows that consistency is related to the variable importance level and hence provides practical recommendations for credit risk management.
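One simple, hypothetical way to quantify the kind of explanation consistency studied here is rank agreement (Spearman) between per-variable importance vectors, e.g. mean |SHAP| values from two refits of the same model; the importance numbers below are invented, and this is not necessarily the paper's metric.

```python
def rankdata(xs):
    """Assign ranks 1..n by ascending value (no tie handling in this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = float(r + 1)
    return ranks

def spearman(a, b):
    """Spearman rank correlation between two equal-length vectors."""
    ra, rb = rankdata(a), rankdata(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

imp_fit1 = [0.41, 0.22, 0.18, 0.09, 0.05]  # mean |SHAP| per variable, fit 1
imp_fit2 = [0.39, 0.25, 0.15, 0.11, 0.04]  # same variables, refit on a resample
consistency = spearman(imp_fit1, imp_fit2)
```

A consistency near 1 means the two fits agree on which variables matter most, even if the magnitudes drift; low values would flag unstable explanations for risk-governance review.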
Quantitative Risk Management in Volatile Markets with an Expectile-Based Framework for the FTSE Index
Abiodun Finbarrs Oketunji
This research presents a framework for quantitative risk management in volatile markets, specifically focusing on expectile-based methodologies applied to the FTSE 100 index. Traditional risk measures such as Value-at-Risk (VaR) have demonstrated significant limitations during periods of market stress, as evidenced during the 2008 financial crisis and subsequent volatile periods. This study develops an advanced expectile-based framework that addresses the shortcomings of conventional quantile-based approaches by providing greater sensitivity to tail losses and improved stability in extreme market conditions. The research employs a dataset spanning two decades of FTSE 100 returns, incorporating periods of high volatility, market crashes, and recovery phases. Our methodology introduces novel mathematical formulations for expectile regression models, enhanced threshold determination techniques using time series analysis, and robust backtesting procedures. The empirical results demonstrate that expectile-based Value-at-Risk (EVaR) consistently outperforms traditional VaR measures across various confidence levels and market conditions. The framework exhibits superior performance during volatile periods, with reduced model risk and enhanced predictive accuracy. Furthermore, the study establishes practical implementation guidelines for financial institutions and provides evidence-based recommendations for regulatory compliance and portfolio management. The findings contribute significantly to the literature on financial risk management and offer practical tools for practitioners dealing with volatile market environments.
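As a concrete illustration of the expectile idea behind EVaR, a minimal sketch (not the paper's implementation) computes a sample expectile by the asymmetric-least-squares fixed-point iteration; the `returns` series and the `tau` level are invented for the example.

```python
def expectile(xs, tau, tol=1e-10, max_iter=1000):
    """Solve e = argmin sum |tau - 1{x <= e}| * (x - e)^2 by fixed point:
    e = (sum of asymmetrically weighted x) / (sum of weights)."""
    e = sum(xs) / len(xs)  # start at the mean (the tau = 0.5 expectile)
    for _ in range(max_iter):
        num = den = 0.0
        for x in xs:
            w = tau if x > e else 1.0 - tau  # asymmetric weight
            num += w * x
            den += w
        e_new = num / den
        if abs(e_new - e) < tol:
            break
        e = e_new
    return e

# Invented daily return sample; a low-tau expectile serves as an EVaR-style
# loss threshold that, unlike a quantile, reacts to the magnitude of tail losses.
returns = [-0.031, -0.012, -0.004, 0.001, 0.003, 0.007, 0.012, 0.018, 0.025]
evar_threshold = expectile(returns, tau=0.05)
```

Because every observation enters the weighted average, the expectile is sensitive to how far the tail losses extend, which is the property the framework exploits over quantile-based VaR.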
Assessing the impact of clerkships on the growth of clinical knowledge
Chi Chang, Heather S. Laird-Fick, John D. Mitchell
et al.
Purpose: This study quantified the impact of clinical clerkships on medical students’ disciplinary knowledge using the Comprehensive Clinical Science Examination (CCSE) as a formative assessment tool. Methods: This study involved 155 third-year medical students in the College of Human Medicine at Michigan State University who matriculated in 2016. Disciplinary scores on their individual Comprehensive Clinical Science Examination reports were extracted by digitizing the bar charts using image processing techniques. Segmented regression analysis was used to quantify the differences in disciplinary knowledge before, during, and after clerkships in five disciplines: surgery, internal medicine, psychiatry, pediatrics, and obstetrics and gynecology (ob/gyn). Results: A comparison of the regression intercepts before and during their clerkships revealed that, on average, the participants improved the most in ob/gyn (β = 11.193, p < .0001), followed by psychiatry (β = 10.005, p < .001), pediatrics (β = 6.238, p < .0001), internal medicine (β = 1.638, p = .30), and improved the least in surgery (β = −2.332, p = .10). The regression intercepts of knowledge during their clerkships and after them, on the other hand, suggested that students’ average scores improved the most in psychiatry (β = 7.649, p = .008), followed by ob/gyn (β = 4.175, p = .06), surgery (β = 4.106, p = .007), and pediatrics (β = 1.732, p = .32). Conclusions: These findings highlight how clerkships influence the acquisition of disciplinary knowledge, offering valuable insights for curriculum design and assessment. This approach can be adapted to evaluate the effectiveness of other curricular activities, such as tutoring or intersessions.
The results have significant implications for educators revising clerkship content and for students preparing for the United States Medical Licensing Examination Step 2.
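The intercept comparisons in this study come from segmented regression; as a greatly simplified, hypothetical sketch (intercept-only segments, invented scores, not the study's data or model), the intercept shift at each phase boundary reduces to a difference in phase means.

```python
def intercept_shifts(before, during, after):
    """Intercept-only segmented model: each segment's intercept is its mean,
    so the shift at each boundary is the difference of adjacent phase means."""
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(during) - mean(before),  # shift at clerkship start
            mean(after) - mean(during))   # shift at clerkship end

# Invented CCSE-style scores for one discipline across the three phases
scores_before = [55.0, 57.0, 56.0]
scores_during = [66.0, 68.0, 67.0]
scores_after = [70.0, 72.0, 71.0]
start_shift, end_shift = intercept_shifts(scores_before, scores_during, scores_after)
```

The study's actual models also include slope terms over time, so this collapses the method to its core comparison only.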
Comparing efficacy and adherence of smartphone-guided exercises to conventional self-directed exercises for neck pain in office workers: A randomized controlled trial protocol.
Sana Salah, Anis Jellad, Manel Mili
et al.
Background: Self-exercises focusing on strength and endurance training, as well as self-mobilization, are effective in neck pain (NP). This study aims to investigate the differences in self-management between two workplace-based interventions: a smartphone application consisting of personalized neck exercises compared with a conventional approach (a paper-based self-exercise program) in office workers with chronic NP. Methods: The project is a prospective, superiority, randomized controlled trial. Fifty participants with chronic NP will be randomly assigned to the intervention group (IG, n = 25), utilizing the smartphone application, or the control group (CG, n = 25). The CG includes the use of a paper sheet with exercises and recommendations, and the IG includes the use of a smartphone application, which provides individualized exercise programs. Both protocols will last three months and will be preceded by an educational session at baseline for all participants. The main outcome measure is pain intensity evaluated according to the pain intensity numeric rating scale. Secondary outcomes are function evaluated according to the neck disability index, quality of life according to the short form 12, and participants' adherence to self-exercises. Outcome measures will be collected at baseline and at one and three months of follow-up. Discussion: The current project will evaluate the effectiveness of a smartphone application consisting of personalized neck exercises compared with a conventional approach for self-rehabilitation. The smartphone application will allow monitoring of the participants' status and help address the problem of adherence to self-exercises in chronic NP. Although some limitations may relate to the short follow-up duration, the study findings could help develop evidence-based knowledge about the impact of workplace interventions using new technologies in mitigating discomfort and promoting well-being among affected workers. Trial registration: ClinicalTrials.gov, NCT06485804. Registered on July 1, 2024.
Optimizing Cost Management in Construction Projects: A Sustainability Assessment Model Using Fuzzy Inference Systems (Case Study of the Apadana Project in the Persian Gulf Petrochemical Industries Company)
Ali Ebrahimi Kordlar, Hossein Safari, Mohammad Rozbeh
Objective: The construction industry has been increasingly criticized for its poor sustainability performance in recent decades, creating an opportunity for the sector to play a key role in global sustainability efforts. Rapid technological advancements and increasing construction project complexity have driven the need for flexible, sustainability-focused project management frameworks. This study introduces a fuzzy inference system designed to evaluate construction project sustainability, built on insights from extensive literature and expert input. Methods: To design the proposed model, the system inputs (criteria for evaluating the sustainability level of construction projects at various layers) were first identified. Next, the necessary if-then rules were developed based on expert opinions. The system output was determined in alignment with the research's final objective. By offering a comprehensive assessment of construction project sustainability, the model enables organizations to identify their strengths and weaknesses, assess their current position, and make informed decisions to enhance their sustainable performance. Results: The output of the research includes a detailed analysis of the sustainability performance of construction projects. The designed model, along with its measurement tools, provides an opportunity for leaders and managers in the construction industry who are concerned about sustainability to gradually enhance their sustainability status and advance the sustainability level of projects. The model consists of three subsystems, named the Direction, Execution, and Results subsystems, which emerged from the literature review and serve as inputs to the final level of the model. Conclusion: The designed model serves as a tool to identify and implement improvement methods and potential areas for project advancement from a sustainability perspective. By utilizing this model, the quality of project execution in line with sustainability indicators, addressing all three dimensions (economic, social, and environmental), improves continuously and proportionately.
Digital transformation in agricultural circulation: enhancing rural modernization and sustainability through technological innovation
Hengli Wang, Lili Zhang, Zhongyin An
Introduction: As digital transformation accelerates globally, the digitalization of agricultural product circulation (DAPC) is becoming a key driver of rural revitalization and sustainable agricultural development. Methods: This study introduces a digital agriculture product circulation index (DAPCI) to assess the level of digitalization in agricultural product circulation and the influence of digitalization on rural modernization in China. Additionally, this study develops a rural agricultural modernization development index (RAMDI) to measure the extent of modernization across 30 provinces (2012–2023). The entropy weight method and a spatial error model are applied to capture both direct and indirect effects. Results: The findings reveal that digitalization significantly enhances rural agricultural modernization (RAM), particularly in technologically advanced regions, with strong spatial spillover effects benefiting neighboring areas. The results further reveal that the digitalization of agricultural circulation positively correlates with improved rural economic development; green innovation and industrial structure optimization emerge as key mechanisms for driving both environmental sustainability and economic growth. Discussion: This research contributes to understanding how digital tools can reshape agricultural practices, making those practices more resilient, efficient, and environmentally friendly. By demonstrating the impact of digitalization on rural agricultural sustainability, this study highlights the importance of integrated technological innovations and management strategies for advancing sustainable agricultural development and climate resilience in rural economies.
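As a concrete sketch of the entropy weight method used to build composite indices like the DAPCI and RAMDI, the following minimal Python illustration derives indicator weights from information divergence; the tiny province-by-indicator matrix is invented.

```python
import math

def entropy_weights(matrix):
    """Rows = units (e.g. provinces), columns = indicators.
    Returns one weight per indicator; weights sum to 1."""
    n = len(matrix)
    divergences = []
    for col in zip(*matrix):
        lo, hi = min(col), max(col)
        # min-max normalisation (a constant column maps to all ones)
        norm = [(x - lo) / (hi - lo) if hi > lo else 1.0 for x in col]
        total = sum(norm)
        p = [x / total for x in norm]
        entropy = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        divergences.append(1.0 - entropy)  # degree of divergence
    total_div = sum(divergences) or 1.0
    return [d / total_div for d in divergences]

# Invented 3-province, 2-indicator example: a constant indicator carries
# no information and should receive (near-)zero weight.
weights = entropy_weights([[1.0, 10.0], [1.0, 20.0], [1.0, 30.0]])
```

The design choice the method embodies: indicators that vary more across units are assumed to discriminate better and therefore earn larger weights in the composite index.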
Nutrition. Foods and food supply, Food processing and manufacture
Prevalence and Associated Factors of Treatment Regimen Fatigue Among People Living with HIV/AIDS in China: A Cross-Sectional Survey
Liu B, Yang Y, Zhou H
et al.
Baohua Liu,1,* Yisi Yang,2,* Hongguo Zhou,3 Huan Liu,4 Zhenzhen Xu1 1Department of Elderly Care and Management, School of Health Services and Wellness, Ningbo College of Health Sciences, Ningbo, Zhejiang, People’s Republic of China; 2Institute for STD and HIV/AIDS Prevention and Control, Harbin Center for Disease Control and Prevention, Harbin, Heilongjiang, People’s Republic of China; 3Dean’s Office, Ningbo College of Health Sciences, Ningbo, Zhejiang, People’s Republic of China; 4School of Health Management, Harbin Medical University, Harbin, Heilongjiang, People’s Republic of China. *These authors contributed equally to this work. Correspondence: Hongguo Zhou; Huan Liu, Email zhou840512@163.com; liuhuan00813@163.com. Introduction: Treatment regimen fatigue (TRF) is universal among people living with HIV/AIDS. Long-term adherence to treatment regimens is crucial to maintaining the health and life span of such individuals. Objective: This study aimed to examine treatment regimen fatigue among people living with HIV/AIDS and the relevant factors. Methods: This cross-sectional study was conducted between January and December 2019 at two designated AIDS medical institutions in Harbin, China. A total of 717 valid samples were included in the study. The Treatment Regimen Fatigue Scale was used to measure treatment regimen fatigue. The participants responded to several questions regarding their demographic, clinical, and social psychological characteristics. Multivariate logistic regression assessed the relationship between TRF and associated factors. Odds ratios (OR) and 95% confidence intervals (CI) for OR were calculated. Results: The self-reported mean global score for the TRFS was −15.59 ± 22.90. After adjusting for location, education background, and monthly income, the logistic regression model indicated that depression (OR = 3.177, 95% CI = 2.180–4.629), other chronic diseases (OR = 1.786, 95% CI = 1.057–3.019), > 3 years of treatment (OR = 1.767, 95% CI = 1.203–2.594), having an intimate confidant (OR = 0.514, 95% CI = 0.347–0.760), life satisfaction (OR = 0.564, 95% CI = 0.365–0.870), living area (OR = 0.491, 95% CI = 0.295–0.817), and an undergraduate or above education level (OR = 0.568, 95% CI = 0.335–0.965) were associated factors for TRF. Conclusion: The prevalence of TRF among PLWHA in China is relatively high and is influenced by multiple factors, including psychosocial, clinical, and demographic characteristics. Social support, especially psychological support, for PLWHA should be strengthened. This study’s findings highlight the need to develop multilevel interventions to reduce TRF, addressing the complex needs of PLWHA and mitigating the adverse impact of TRF on HIV treatment outcomes. Further longitudinal research on factors of TRF should be conducted to strengthen and broaden the current findings. Keywords: treatment regimen fatigue, antiretroviral therapy adherence, HIV/AIDS, psychosocial factors
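For readers unfamiliar with how odds ratios and 95% CIs like those quoted above arise from a fitted logistic model, a small sketch follows; the coefficient is chosen so that exp(beta) roughly matches the depression OR reported above, while the standard error is invented for illustration.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald 95% CI:
    OR = exp(beta), CI = exp(beta +/- z * SE)."""
    or_point = math.exp(beta)
    return or_point, math.exp(beta - z * se), math.exp(beta + z * se)

# beta ~ ln(3.177) to mirror the depression OR above; SE is hypothetical
or_point, ci_lo, ci_hi = odds_ratio_ci(beta=1.156, se=0.192)
```

Because the CI is built on the log-odds scale and then exponentiated, it is asymmetric around the OR, which matches the shape of the intervals reported in the study.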
Project Risk Management from the bottom-up: Activity Risk Index
Fernando Acebes, Javier Pajares, Jose M Gonzalez-Varona
et al.
Project managers need to manage risks throughout the project lifecycle and, thus, need to know how changes in activity durations influence project duration and risk. We propose a new indicator (the Activity Risk Index, ARI) that measures the contribution of each activity to the total project risk while it is underway. In particular, the indicator informs us about what activities contribute the most to the project's uncertainty so that project managers can pay closer attention to the performance of these activities. The main difference between our indicator and other activity sensitivity metrics in the literature (e.g. cruciality, criticality, significance, or schedule sensitivity indices) is that our indicator is based on the Schedule Risk Baseline concept instead of on cost or schedule baselines. The new metric not only provides information at the beginning of the project, but also while it is underway. Furthermore, the ARI is the only one to offer a normalized result: if we add its value for each activity, the total sum is 100%.
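The ARI itself is defined through the Schedule Risk Baseline, which this abstract does not reproduce; the sketch below only illustrates the normalisation property (activity contributions summing to 100%) using variance shares on a purely serial schedule with invented triangular duration risks. It is not the paper's formula.

```python
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def activity_risk_shares(activities, n_runs=10000, seed=1):
    """activities: name -> (low, mode, high) triangular duration estimates.
    On a serial path, total duration variance is the sum of activity
    variances, so each activity's variance share is a normalized index."""
    rng = random.Random(seed)
    samples = {name: [rng.triangular(lo, hi, mode) for _ in range(n_runs)]
               for name, (lo, mode, hi) in activities.items()}
    var = {name: variance(s) for name, s in samples.items()}
    total = sum(var.values())
    return {name: 100.0 * v / total for name, v in var.items()}

# Invented three-activity serial schedule
activities = {"A": (4, 5, 9), "B": (2, 3, 4), "C": (1, 6, 14)}
shares = activity_risk_shares(activities)
```

Here the wide-ranged activity C dominates the index, which is the kind of signal the ARI is meant to surface for management attention while the project is underway.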
Lessons From Model Risk Management in Financial Institutions for Academic Research
Mahmood Alaghmandan, Olga Streltchenko
In this paper, we discuss aspects of model risk management in financial institutions which could be adopted by academic institutions to improve the process of conducting academic research, identify and mitigate existing limitations, decrease the possibility of erroneous results, and prevent fraudulent activities.
Beyond probability-impact matrices in project risk management: A quantitative methodology for risk prioritisation
Fernando Acebes, José Manuel González-Varona, Adolfo López-Paredes
et al.
Project managers who deal with risk management are often faced with the difficult task of determining the relative importance of the various sources of risk that affect the project. This prioritisation is crucial to direct management efforts and ensure higher project profitability. Risk matrices are tools widely recognised by academics and practitioners in various sectors for assessing and ranking risks according to their likelihood of occurrence and impact on project objectives. However, the existing literature highlights several limitations to the use of risk matrices. In response to these weaknesses, this paper proposes a novel approach for prioritising project risks. Monte Carlo Simulation (MCS) is used to perform a quantitative prioritisation of risks with the simulation software MCSimulRisk. Together with the definition of project activities, the simulation includes the identified risks by modelling their probability and impact on cost and duration. With this novel methodology, a quantitative assessment of the impact of each risk is provided, as measured by the effect that it would have on project duration and total project cost. This allows critical risks to be differentiated according to their impact on project duration, which may differ if cost is taken as the priority objective. This proposal is interesting for project managers because they will, on the one hand, know the absolute impact of each risk on their project duration and cost objectives and, on the other hand, be able to discriminate the impact of each risk independently on the duration objective and the cost objective.
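A minimal sketch of Monte Carlo risk prioritisation in the spirit described above (not MCSimulRisk itself): each risk fires with a probability and adds a delay, and risks are ranked by their simulated marginal effect on mean project duration. All probabilities and impacts are invented for the example.

```python
import random

def simulate_duration(base, risks, rng):
    """One run: each risk fires with its probability and adds its delay."""
    return base + sum(r["delay"] for r in risks if rng.random() < r["prob"])

def prioritise(base, risks, n_runs=20000, seed=7):
    """Rank risks by marginal mean delay: mean duration with all risks
    minus mean duration with that risk removed."""
    rng = random.Random(seed)
    mean_all = sum(simulate_duration(base, risks, rng)
                   for _ in range(n_runs)) / n_runs
    impacts = {}
    for i, r in enumerate(risks):
        others = risks[:i] + risks[i + 1:]
        mean_without = sum(simulate_duration(base, others, rng)
                           for _ in range(n_runs)) / n_runs
        impacts[r["name"]] = mean_all - mean_without
    return sorted(impacts.items(), key=lambda kv: -kv[1])

risks = [
    {"name": "late permit", "prob": 0.3, "delay": 20},
    {"name": "supplier slip", "prob": 0.6, "delay": 5},
    {"name": "design rework", "prob": 0.1, "delay": 8},
]
ranking = prioritise(base=100, risks=risks)
```

Note how the ranking follows expected delay (probability x impact) rather than either dimension alone, which is the advantage a quantitative prioritisation holds over reading a probability-impact matrix cell by cell.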