Results for "cs.MS"

Showing 20 of ~278,571 results · from CrossRef, Semantic Scholar

CrossRef Open Access 2025
Managing Brand Equity In The Digital Era: A Strategic Approach

Adv. Hardik M. Goradiya, Ms. Nilam Hardik Goradiya, Ms. Yashahshree Datar et al.

In today's digital world, managing brand equity has become a key strategic goal for businesses that want to stay relevant in the market and keep customers loyal over time. This article examines the complex aspects of brand equity management in the digital age, highlighting the amalgamation of conventional branding strategies with innovative digital tools and platforms. It looks at how social media, content marketing, working with influencers, and real-time data affect how people see brands and their worth. The research examines the difficulties of ensuring brand consistency across digital touchpoints while also exploring the benefits of personalised experiences and interactive communication. From a strategic point of view, the research shows how important authenticity, putting the customer first, and making decisions based on data are for keeping and growing brand equity. The results add to the ongoing conversation about how to manage digital brands and give marketers useful information on how to improve brand equity in the face of technical changes and changing customer expectations.

S2 Open Access
Symbolic PBPK-PDE Modeling Using Open-Source Julia Tools

A. Elmokadem, Daniel Kirouac, Tim Knab et al.

Objectives: Physiologically based pharmacokinetic (PBPK) models provide a mechanistic characterization of a drug's distribution in the body. Most PBPK models in the literature use ordinary differential equations (ODEs), which ignore the spatial distribution of a drug. Spatial distribution, however, can be critical to understanding the PK of some drugs, such as topical preparations, inhaled treatments, and antitumor therapies. As such, investigators have tried to combine PBPK models, represented as ODEs, with partial differential equations (PDEs) to capture the spatial component of drug distribution. PBPK-PDE models are, however, challenging to build. The current work demonstrates a framework to build PBPK-PDE models using the open-source Julia [1] package ModelingToolkit.jl [2], which can symbolically represent equations and simplify PDE model coding and integration with ODEs. A PBPK-PDE model of naphthalene diffusion from the skin into the body is used as an example of the framework.
Methods: The PBPK-PDE model used in this work was a simplified version of a previously published naphthalene PBPK-PDE model [3]. The PBPK model described the distribution of topically administered naphthalene from the skin compartment into the circulation and remaining compartments (lung, liver, fat, poorly perfused and richly perfused tissues). The skin compartment was dissected into an outer well where naphthalene was introduced, stratum corneum (SC), and viable epidermis (VE). The diffusion of naphthalene across the one-dimensional space of the SC was represented as a PDE. The Julia open-source package MethodOfLines.jl [4] was used to automatically discretize the PDE problem. Boundary conditions were set to equilibrium conditions between the well and the outermost layer of the SC, and between the VE and the innermost layer of the SC.
Results: The PBPK-PDE model was able to characterize the diffusion of naphthalene in the different SC layers as well as its penetration into the systemic circulation following dermal administration. The concentration of naphthalene versus time was demonstrated at each point of the discretized one-dimensional SC space.
Conclusions: A framework using the Julia open-source tool ModelingToolkit.jl was developed to build PBPK-PDE models in a simple and intuitive way. A naphthalene PBPK-PDE model was used as a proof of concept, while the framework is generally applicable to the variety of pharmacometric models where a spatial component is critical to understanding activity.
Citations:
1. Bezanson J, Edelman A, Karpinski S, Shah VB. Julia: A Fresh Approach to Numerical Computing. SIAM Rev. 2017;59:65–98.
2. Ma Y, Gowda S, Anantharaman R, Laughman C, Shah V, Rackauckas C. ModelingToolkit: A Composable Graph Transformation System For Equation-Based Modeling. arXiv [cs.MS]. 2021. Available: http://arxiv.org/abs/2103.05244
3. Kapraun DF, Schlosser PM, Nylander-French LA, Kim D, Yost EE, Druwe IL. A Physiologically Based Pharmacokinetic Model for Naphthalene With Inhalation and Skin Routes of Exposure. Toxicol Sci. 2020;177:377–391.
4. Jones AW, Hyett C, Rackauckas C. MethodOfLines.jl: Automatic finite difference PDE discretization and solving with Julia. SciML. doi:10.5281/zenodo.11186853
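The diffusion step at the heart of the abstract above, a drug concentration evolving across a one-dimensional stratum corneum grid with fixed boundary values, can be illustrated with a generic method-of-lines sketch. The paper itself uses Julia's MethodOfLines.jl to generate this discretization automatically; the Python below, with entirely hypothetical grid and diffusivity values, only shows the underlying finite-difference technique:

```python
import numpy as np

def diffuse_1d(c0, D, dx, dt, n_steps, c_left, c_right):
    """Explicit finite-difference (method-of-lines) solution of
    dc/dt = D * d2c/dx2 on a 1-D grid with fixed boundary values,
    analogous to drug diffusion across the stratum corneum."""
    c = np.array(c0, dtype=float)
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme is unstable for D*dt/dx^2 > 0.5"
    for _ in range(n_steps):
        c[0], c[-1] = c_left, c_right   # equilibrium-style boundary values
        c[1:-1] += r * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = c_left, c_right
    return c

# Hypothetical values: 21 grid points, boundary concentrations 1.0 / 0.0
profile = diffuse_1d(c0=np.zeros(21), D=1e-3, dx=0.05, dt=1.0,
                     n_steps=5000, c_left=1.0, c_right=0.0)
```

With fixed boundary values of 1.0 (well side) and 0.0 (epidermis side), the profile relaxes toward the linear steady state of the diffusion equation.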

S2 Open Access
How to Make a Salad? Rethinking Pharmacometric/QSP Model Composition Using Open-Source Julia Tools

A. Elmokadem, Daniel Kirouac, Tim Knab et al.

Objectives: Pharmacometric and systems pharmacology models are often modular: different, independent components can be joined together to form a more complex model. The process of combining and reusing model components can be challenging with no clear framework, and, as such, investigators often resort to rewriting models from scratch rather than reusing the individual components. Additionally, model components may be written in different notations, such as ordinary differential equations (ODEs) or reactions, depending on the most convenient way to represent a system. This adds further complexity to the model composition process. A framework is presented that allows an investigator to seamlessly combine different model components, each represented in its own notation, and reuse these independent components to create multiple combinations of integrated models, just like mixing the components of a salad.
Methods: Julia [1] open-source tools, namely ModelingToolkit.jl [2] and Catalyst.jl [3], were used to present a convenient framework for pharmacometric model composition. The symbolic-numeric model representation of ModelingToolkit.jl and the reaction notation provided by Catalyst.jl allowed for seamless composition of independent model components written as ODEs or reactions.
Results: The framework was demonstrated by composing different model components (e.g., pharmacokinetic (PK), pharmacodynamic (PD), and physiological organ models) to build larger models (e.g., PKPD, physiologically based PK (PBPK), and quantitative systems pharmacology (QSP) models). Both ODE and reaction notations were combined into integrated PKPD and QSP models, with examples drawn from bispecific T cell engagers, viral dynamics, and drug-drug interactions (DDI). The framework enabled seamless transitions from in vitro to in vivo murine to clinical settings for a bispecific T cell engager application [4].
Conclusions: A framework based on Julia open-source tools was proposed in this work to allow for seamless pharmacometric and QSP model composition. This framework enables model reusability and translation using convenient and flexible model notation.
Citations:
1. Bezanson J, Edelman A, Karpinski S, Shah VB. Julia: A Fresh Approach to Numerical Computing. SIAM Rev. 2017;59:65–98.
2. Ma Y, Gowda S, Anantharaman R, Laughman C, Shah V, Rackauckas C. ModelingToolkit: A Composable Graph Transformation System For Equation-Based Modeling. arXiv [cs.MS]. 2021. Available: http://arxiv.org/abs/2103.05244
3. Loman TE, Ma Y, Ilin V, Gowda S, Korsbo N, Yewale N, et al. Catalyst: Fast and flexible modeling of reaction networks. PLoS Comput Biol. 2023;19:e1011530.
4. Betts A, Haddish-Berhane N, Shah DK, van der Graaf PH, Barletta F, King L, et al. A translational quantitative systems pharmacology model for CD3 bispecific molecules: Application to quantify T cell-mediated tumor cell killing by P-cadherin LP DART®. AAPS J. 2019;21(4):66.
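The composition pattern described above, independent model components reused unchanged inside a larger model, can be sketched without any Julia tooling. The authors' framework relies on ModelingToolkit.jl and Catalyst.jl; the minimal Python below, with made-up rate constants, only illustrates the idea of wiring an unchanged PK component into a PD component:

```python
def pk_rhs(t, y, ke):
    # Independent PK component: one compartment, first-order elimination.
    A, = y
    return [-ke * A]

def pd_rhs(t, y, A, kin, kout, imax, ic50):
    # Independent PD component: indirect response driven by drug amount A.
    R, = y
    inhibition = imax * A / (ic50 + A)
    return [kin * (1 - inhibition) - kout * R]

def composed_rhs(t, y, ke, kin, kout, imax, ic50):
    # Composition: both components are reused unchanged.
    A, R = y
    dA, = pk_rhs(t, [A], ke)
    dR, = pd_rhs(t, [R], A, kin, kout, imax, ic50)
    return [dA, dR]

def euler(rhs, y0, t_end, dt, *args):
    # Simple fixed-step integrator, sufficient for illustration.
    y, t = list(y0), 0.0
    while t < t_end:
        dy = rhs(t, y, *args)
        y = [yi + dt * dyi for yi, dyi in zip(y, dy)]
        t += dt
    return y

# Hypothetical dose of 100 units; baseline response kin/kout = 10.
A_end, R_end = euler(composed_rhs, [100.0, 10.0], 24.0, 0.01,
                     0.1, 1.0, 0.1, 0.9, 5.0)
```

The same `pk_rhs` could be reused verbatim in a different composition, which is the reusability point the abstract makes.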

CrossRef 2026
ROBUST AND VERIFIABLE LLMS FOR HIGH-STAKES DECISION-MAKING (HEALTHCARE, DEFENSE, FINANCE)

Manish Bolli, Sai Srinivas Matta et al. (MS in CS Candidate, University of Central Missouri, USA)

Robust and verifiable large language models (LLMs) are increasingly considered for high-stakes decision-support in healthcare, defense, and finance, yet empirical evidence on their reliability, security, and audit readiness remains limited. This quantitative study evaluated four LLM system configurations—baseline, retrieval-grounded, schema/rule-constrained, and tool-augmented verification—across 360 domain-specific cases and 5,760 evaluated case-instances under clean, perturbation, out-of-distribution, and adversarial conditions. Descriptive and multivariable analyses showed that tool-augmented verification achieved the highest overall task correctness at 80% on clean inputs, compared to 64% for baseline, while maintaining higher decision stability under perturbations at 81% versus 61%. Evidence support rates increased from 58% in baseline outputs to 82% in tool-augmented configurations, and schema validity exceeded 94% under constrained outputs across domains. Under adversarial testing, retrieval-grounded systems exhibited the highest policy violation rate at 18.9%, whereas schema/rule-constrained and tool-augmented systems reduced violations to 7.2% and 6.9%, respectively. However, stricter controls increased false refusals, rising from 2.3% in baseline to 7.0% in schema-constrained configurations. Mixed-effects regression results indicated that tool augmentation more than doubled the odds of task correctness relative to baseline, while schema constraints reduced policy violations by nearly 50%. Out-of-distribution conditions reduced correctness across all configurations, with the smallest degradation observed in tool-augmented systems. Overall, the findings demonstrated that robustness and verifiability in high-stakes LLM decision-support depended on layered grounding, constraint enforcement, and deterministic verification mechanisms, and that measurable tradeoffs emerged between security controls and operational utility across domains.

CrossRef 2025
FEDERATED LEARNING FOR PRIVACY-PRESERVING HEALTHCARE DATA SHARING: ENABLING GLOBAL AI COLLABORATION

Sai Srinivas Matta, Manish Bolli et al. (MS in CS Candidate, Campbellsville University, USA)

This study provides a comprehensive systematic review of federated learning as a framework for privacy-preserving healthcare data sharing and its potential to enable global artificial intelligence collaboration. In total, 124 peer-reviewed articles were examined following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to ensure transparency, rigor, and reproducibility. The review highlights how federated learning has evolved from conceptual discussions to practical applications across multiple healthcare domains, including medical imaging, electronic health records, biosignals, and genomic analysis. Key findings indicate that federated architectures, particularly server–client models, have become the dominant deployment strategy, while peer-to-peer approaches are gaining attention for their resilience and decentralization. Privacy-preserving mechanisms—such as differential privacy, secure aggregation, and cryptographic computation—emerged as central to ensuring compliance with regulatory and ethical standards, with adaptive strategies allowing for an effective balance between confidentiality and model utility. Evidence from multi-institutional collaborations shows that federated learning not only improves predictive performance but also enhances inclusivity, enabling smaller or resource-limited institutions to contribute meaningfully without relinquishing data ownership. At the same time, empirical studies identified adversarial risks such as gradient inversion, membership inference, and poisoning attacks, underscoring the necessity for layered safeguards and strong governance structures. Collectively, the findings demonstrate that federated learning is more than a technical innovation; it represents a socio-technical paradigm that integrates privacy, equity, and collaboration into the development of global healthcare AI. 
This review positions federated learning as a cornerstone for building secure, ethical, and scalable artificial intelligence systems that address the dual imperatives of advancing medical innovation while safeguarding patient confidentiality.
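The server-client architecture the review identifies as dominant reduces, at its core, to a federated-averaging loop: clients train on private data, and only weights travel to the server. This is a minimal FedAvg illustration on synthetic data, not any specific system from the review:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of
    least-squares linear regression on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """One federated round: clients train locally, then the server
    averages their weights, weighted by local sample count."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Two synthetic "hospitals" share a model without sharing raw data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))
w = np.zeros(2)
for _ in range(30):
    w = fed_avg(w, clients)
```

Only `w` ever leaves a client; the privacy mechanisms the review discusses (differential privacy, secure aggregation) would be layered on top of this exchange.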

CrossRef 2024
AI-AUGMENTED CYBERSECURITY: GRAPH NEURAL NETWORKS FOR PREDICTING NATION-STATE CYBERATTACKS

Sai Srinivas Matta, Manish Bolli et al. (MS in CS Candidate, Campbellsville University, USA)

Nation-state cyberattacks represent one of the most complex and evolving threats to global security, often leveraging sophisticated strategies that exploit structural vulnerabilities across interconnected digital ecosystems. Traditional machine learning models, while effective for anomaly detection and malware classification, struggle to capture the relational and temporal dependencies inherent in coordinated cyber campaigns. This study explores the integration of Graph Neural Networks (GNNs) into AI-augmented cybersecurity frameworks to enhance predictive capabilities against nation-state cyberattacks. By modeling cyber infrastructures, threat intelligence, and attack pathways as graph-structured data, GNNs can identify latent patterns and interdependencies between threat actors, targets, and tactics. The proposed framework incorporates multi-source data, including network telemetry, open-source intelligence, and historical incident reports, to construct dynamic attack graphs that evolve in near real time. Experimental evaluations demonstrate that GNN-based models outperform conventional deep learning architectures in forecasting multi-stage intrusions, achieving higher precision in distinguishing state-sponsored campaigns from generic cyber threats. Furthermore, explainability modules embedded within the GNN pipeline improve interpretability by revealing critical nodes, links, and features driving predictions, thereby supporting actionable decision-making for security analysts and policymakers. This work underscores the strategic potential of AI-augmented approaches in advancing national resilience, providing early-warning capabilities, and enabling proactive defense strategies against adversarial state actors.
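The message-passing idea the abstract relies on, each node updating its representation from its neighbors' features, can be shown with a single graph-convolution layer. This is a generic Kipf-and-Welling-style GCN layer on a toy four-host graph, not the paper's architecture:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer:
    H' = ReLU( D^(-1/2) (A + I) D^(-1/2) H W )."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# Toy "attack graph": 4 hosts in a chain, features = [compromised, critical]
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.array([[1, 0], [0, 0], [0, 1], [0, 0]], dtype=float)
W = np.eye(2)                                  # identity weights for clarity
H1 = gcn_layer(A, H, W)   # each host now mixes in its neighbors' features
```

After one layer, the host adjacent to the compromised node picks up a nonzero "compromised" signal while the host two hops away does not; stacking layers propagates the signal further, which is how such models capture multi-stage attack paths.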

CrossRef 2023
TRUSTWORTHY AI: EXPLAINABILITY & FAIRNESS IN LARGE-SCALE DECISION SYSTEMS

Sai Srinivas Matta, Manish Bolli et al. (MS in CS Candidate, Campbellsville University, USA)

This study examined the critical roles of explainability and fairness in advancing trustworthy artificial intelligence (AI) within large-scale decision systems. As AI technologies increasingly shape consequential decisions in domains such as healthcare, finance, employment, and judicial processes, ensuring transparency, equity, and legitimacy has become paramount. Drawing on a comprehensive review of 152 peer-reviewed studies, this research synthesized conceptual foundations, methodological advancements, and empirical findings to build a robust framework for understanding how explainability and fairness jointly contribute to trustworthiness. A quantitative research design was employed, incorporating large-scale datasets and multi-phase statistical analyses to evaluate how explanation fidelity, stability, and sparsity influence comprehension, trust, and perceived fairness, and how fairness interventions impact model performance and equity outcomes. Results demonstrated that explanation fidelity significantly enhanced user comprehension, while stability strongly predicted trust, highlighting the importance of consistent and faithful explanations in shaping user confidence. Fairness metrics such as demographic parity and equal opportunity gaps were powerful predictors of perceived fairness, and reductions in these disparities substantially increased user acceptance of AI decisions. Interaction analyses revealed that combining counterfactual explanations with fairness constraints produced synergistic effects, improving both equity and trust without excessively compromising predictive performance. The study also quantified trade-offs, showing that fairness interventions slightly reduced accuracy but delivered substantial gains in legitimacy and social acceptability. Human-centered outcomes such as trust and reliance were closely linked to technical measures, illustrating that the social impact of AI is deeply intertwined with its design.
By integrating findings across technical, ethical, and behavioural dimensions, this study contributed new empirical evidence and theoretical insights into how explainability and fairness shape trustworthy AI. The results provide a comprehensive foundation for designing, evaluating, and governing AI systems that are transparent, equitable, and socially aligned in large-scale decision-making contexts.

CrossRef 2022
EXPLAINABLE REINFORCEMENT LEARNING FOR HIGH-STAKES DECISION SYSTEMS: DEVELOPING INTERPRETABLE RL MODELS FOR AUTONOMOUS VEHICLES, HEALTHCARE, OR FINANCE

Sai Srinivas Matta, Manish Bolli et al. (MS in CS Candidate, Campbellsville University, USA)

The study on Explainable Reinforcement Learning (XRL) for High-Stakes Decision Systems: Developing Interpretable RL Models for Autonomous Vehicles, Healthcare, or Finance was conducted to investigate how interpretability in reinforcement learning enhances performance, trust, and accountability in critical decision-making environments. This research reviewed and synthesized findings from 126 peer-reviewed papers spanning the past decade, focusing on the integration of explainability mechanisms into reinforcement learning models applied to safety-critical and ethically sensitive domains. The study aimed to identify quantitative relationships between key explainability constructs—fidelity, stability, and comprehensibility—and measurable human or system outcomes such as decision accuracy, response time, trust calibration, and accountability perception. Using a mixed quantitative framework, the research combined simulation-based performance data, human-centered evaluation metrics, and statistical modeling to assess how explainable RL architectures perform compared to non-explainable counterparts. The findings revealed that explainable reinforcement learning models consistently outperformed traditional opaque systems across all three domains. In autonomous vehicles, explanations improved driver response times and reduced intervention rates; in healthcare, they enhanced clinician confidence and treatment decision accuracy; and in finance, they improved risk-adjusted returns and investor trust. Regression and correlation analyses demonstrated that explanation fidelity strongly predicted decision accuracy, while explanation stability and comprehensibility were significant predictors of trust and accountability. Furthermore, repeated-measures ANOVA confirmed statistically significant improvements in user trust and performance under explainable conditions, supported by large effect sizes.
The study also identified several persistent challenges, including the trade-off between interpretability and performance, variability in user comprehension, and limitations in real-time explanation delivery. Overall, the review and empirical analysis provided a comprehensive understanding of how explainable reinforcement learning contributes to safer, more transparent, and ethically accountable AI-driven decision systems. The insights derived from the 126 reviewed studies establish a robust foundation for developing future XRL frameworks capable of balancing performance optimization with human interpretability in complex, high-stakes environments such as autonomous vehicles, clinical systems, and financial analytics.

CrossRef 2022
BLOCKCHAIN-BASED DECENTRALIZED IDENTITY FOR CROSS-BORDER AUTHENTICATION: ENHANCING CYBERSECURITY AND IMMIGRATION APPLICATIONS

Sai Srinivas Matta, Manish Bolli et al. (MS in CS Candidate, Campbellsville University, USA)

This study conducts a meta-analysis of scholarly and policy literature on blockchain as an enabler of decentralized digital identity with a specific focus on cross-border authentication and immigration contexts. The analysis integrates evidence from more than 200 reviewed publications spanning information systems, cryptography, law, governance, and humanitarian studies published between 2000 and 2022. Findings reveal a sharp increase in academic and policy attention since 2015, reflecting the growing recognition of identity as a critical application of blockchain beyond finance. Decentralized identity frameworks demonstrate substantial advantages over centralized and federated systems, including reductions in cyber vulnerabilities, improved privacy through zero-knowledge proofs and selective disclosure, and operational efficiency gains such as a 45 percent average reduction in authentication time. Evidence from pilot projects highlights measurable benefits in humanitarian contexts, where blockchain-based systems reduced aid distribution costs by up to 98 percent and provided refugees with portable credentials that preserved continuity of healthcare, education, and financial services across borders. Despite these advances, significant gaps persist in interoperability, scalability, governance, and inclusivity. Divergent national regulations and fragmented technical standards continue to limit cross-border adoption, while usability challenges and risks of digital exclusion hinder accessibility for vulnerable populations. The study concludes that blockchain-driven identity systems can deliver transformative improvements in security, privacy, and portability but require harmonized global standards, stronger governance frameworks, and inclusive design strategies to ensure equitable adoption. 
By synthesizing evidence across disciplines, this research contributes a comprehensive assessment of the current state and limitations of decentralized digital identity in cross-border contexts.

CrossRef 2021
POST-QUANTUM CRYPTOGRAPHY FRAMEWORKS FOR SECURING GLOBAL CLOUD SYSTEMS

Sai Srinivas Matta, Manish Bolli et al. (MS in CS Candidate, Campbellsville University, USA)

This study addresses the emerging problem that quantum attacks can undermine classical public key cryptography that secures global cloud platforms, leaving long lived, cross border data at risk, and evaluates how far organizations have progressed toward post quantum cryptography (PQC) frameworks. The purpose is to quantify PQC framework maturity and its security, compliance, and performance implications in cloud and enterprise cases. Using a quantitative, cross sectional, case-based design, survey data were collected from 220 organizations that provide or consume multi region cloud services, with key informants in security, cloud architecture, and compliance rating Likert five-point items. Core variables included PQC awareness, adoption intention, regulatory and contractual pressure, security governance capability, perceived performance impact, PQC framework maturity, perceived quantum resilient security posture, perceived regulatory compliance, and perceived operational performance. Descriptive statistics, reliability and validity tests, Pearson correlations, and multiple regression models with sector, size, region, and deployment model as controls were applied. Results show moderate PQC maturity (mean 3.21) but higher awareness (3.68) and adoption intention (3.55). PQC maturity correlated strongly with quantum resilient security posture (r = 0.68) and regulatory compliance (r = 0.64), and significantly predicted both outcomes (β = 0.59 and β = 0.55, R² = 0.47 and 0.42). Regulatory pressure (β = 0.34) and governance capability (β = 0.18) were also significant drivers of maturity. These findings imply that building systematic, governance anchored PQC frameworks can measurably strengthen cloud security and compliance while maintaining acceptable performance, guiding prioritized, phased PQC migration for global cloud providers and enterprise users.

CrossRef 2020
Role of Emotional Intelligence Dimensions in Stress Detection

Sarika K. Swami*, Mukta G. Dhopeshwarkar et al. (M.Phil. (C.S.), Department of CS & IT, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, MS, India)

Emotional Intelligence (EI) plays a vital role in our day-to-day life, helping us manage our emotions in positive ways. The objective of the present research paper is to study the Intra PA, Inter PA, Intra PM, and Inter PM dimensions of a gathered EI dataset and their impact on stress detection. The goal of this paper is a gender-wise comparative evaluation of present-day society; for this, a dataset was created using a psychometric test. Statistical analysis of this self-created database found that females should improve their EI dimensions to overcome stress, and a t-test showed a statistically significant difference between males and females with respect to normal stress, though the result may differ if another parameter is introduced.

Page 1 of 13,929