Nonparametric Statistics for the Behavioral Sciences
S. Siegel
Machine learning: Trends, perspectives, and prospects
Michael I. Jordan, T. Mitchell
Making sense of implementation theories, models and frameworks
P. Nilsen
Background: Implementation science has progressed towards increased use of theoretical approaches to provide better understanding and explanation of how and why implementation succeeds or fails. The aim of this article is to propose a taxonomy that distinguishes between different categories of theories, models and frameworks in implementation science, to facilitate appropriate selection and application of relevant approaches in implementation research and practice and to foster cross-disciplinary dialogue among implementation researchers.
Discussion: Theoretical approaches used in implementation science have three overarching aims: describing and/or guiding the process of translating research into practice (process models); understanding and/or explaining what influences implementation outcomes (determinant frameworks, classic theories, implementation theories); and evaluating implementation (evaluation frameworks).
Summary: This article proposes five categories of theoretical approaches to achieve three overarching aims. These categories are not always recognized as separate types of approaches in the literature. While there is overlap between some of the theories, models and frameworks, awareness of the differences is important to facilitate the selection of relevant approaches. Most determinant frameworks provide limited "how-to" support for carrying out implementation endeavours since the determinants usually are too generic to provide sufficient detail for guiding an implementation process. And while the relevance of addressing barriers and enablers to translating research into practice is mentioned in many process models, these models do not identify or systematically structure specific determinants associated with implementation success. Furthermore, process models recognize a temporal sequence of implementation endeavours, whereas determinant frameworks do not explicitly take a process perspective of implementation.
Julia: A Fresh Approach to Numerical Computing
Jeff Bezanson, A. Edelman, S. Karpinski
et al.
Bridging cultures that have often been distant, Julia combines expertise from the diverse fields of computer science and computational science to create a new approach to numerical computing. Julia is designed to be easy and fast and questions notions generally held to be "laws of nature" by practitioners of numerical computing: (1) high-level dynamic programs have to be slow; (2) one must prototype in one language and then rewrite in another language for speed or deployment; (3) there are parts of a system appropriate for the programmer, and other parts that are best left untouched as they have been built by the experts. We introduce the Julia programming language and its design---a dance between specialization and abstraction. Specialization allows for custom treatment. Multiple dispatch, a technique from computer science, picks the right algorithm for the right circumstance. Abstraction, which is what good computation is really about, recognizes what remains the same after dif...
6361 citations
en
Mathematics, Computer Science
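The abstract's key mechanism, multiple dispatch, can be sketched in Python (which natively offers only single dispatch via `functools.singledispatch`). The registry below is a toy illustration of the idea, selecting an implementation by the runtime types of all arguments, and is not Julia's actual mechanism:

```python
# Toy multiple-dispatch registry: the implementation is chosen by the
# runtime types of *all* arguments, not just the first.
_registry = {}

def dispatch(*types):
    """Register an implementation under a (name, argument-types) key."""
    def wrap(fn):
        _registry[(fn.__name__, types)] = fn
        return fn
    return wrap

def call(name, *args):
    """Look up and invoke the implementation matching the argument types."""
    fn = _registry[(name, tuple(type(a) for a in args))]
    return fn(*args)

@dispatch(int, int)
def combine(a, b):
    return a + b          # integer addition

@dispatch(str, str)
def combine(a, b):
    return a + " " + b    # string concatenation

call("combine", 2, 3)      # → 5
call("combine", "ab", "cd")  # → "ab cd"
```

In Julia itself this selection is built into the language and resolved by the compiler, which is what lets generic high-level code stay fast.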
Fiji: an open-source platform for biological-image analysis
J. Schindelin, Ignacio Arganda-Carreras, E. Frise
et al.
58323 citations
en
Engineering, Medicine
The status of linguistics as a science
E. Sapir
1051 citations
en
Psychology
Science, perception, and reality
Wilfrid S. Sellars
Materials Science and Engineering: An Introduction
W. Callister
758 citations
en
Materials Science
The Matthew Matilda Effect in Science
M. Rossiter
Epistemic logic for AI and computer science
J. Meyer, W. Hoek
710 citations
en
Computer Science
Teachers' Beliefs About the Nature of Science and Their Relationship to Classroom Practice
N. Brickhouse
Computer Science Education in the Age of Generative AI
Russell Beale
Generative AI tools - most notably large language models (LLMs) like ChatGPT and Codex - are rapidly revolutionizing computer science education. These tools can generate, debug, and explain code, thereby transforming the landscape of programming instruction. This paper examines the profound opportunities that AI offers for enhancing computer science education in general, from coding assistance to fostering innovative pedagogical practices and streamlining assessments. At the same time, it highlights challenges including academic integrity concerns, the risk of over-reliance on AI, and difficulties in verifying originality. We discuss what computer science educators should teach in the AI era, how to best integrate these technologies into curricula, and the best practices for assessing student learning in an environment where AI can generate code, prototypes and user feedback. Finally, we propose a set of policy recommendations designed to harness the potential of generative AI while preserving the integrity and rigour of computer science education. Empirical data and emerging studies are used throughout to support our arguments.
Spiral Model Technique For Data Science & Machine Learning Lifecycle
Rohith Mahadevan
Analytics play an important role in modern business. Companies adapt data science lifecycles to their culture to seek productivity and improve their competitiveness. A data science lifecycle is an important contributing factor in starting and completing a data-dependent project. Data science and machine learning lifecycles comprise a series of steps involved in a project. A typical lifecycle is depicted as a linear or cyclical model, and a traditional data science lifecycle allows the process to start again after the end of the cycle is reached. This paper suggests a new technique for applying the data science lifecycle to business problems that have a clear end goal. The new technique, called the spiral technique, emphasizes versatility, agility and an iterative approach to business processes.
Mapping the changing structure of science through diachronic periodical embeddings
Zhuoqi Lyu, Qing Ke
Understanding the changing structure of science over time is essential to elucidating how science evolves. We develop diachronic embeddings of scholarly periodicals to quantify "semantic changes" of periodicals across decades, allowing us to track the evolution of research topics and identify rapidly developing fields. By mapping periodicals within a physical-life-health triangle, we reveal an evolving interdisciplinary science landscape, finding an overall trend toward specialization for most periodicals but increasing interdisciplinarity for bioscience periodicals. Analyzing a periodical's trajectory within this triangle over time allows us to visualize how its research focus shifts. Furthermore, by monitoring the formation of local clusters of periodicals, we can identify emerging research topics such as AIDS research and nanotechnology in the 1980s. Our work offers novel quantification in the science of science and provides a quantitative lens to examine the evolution of science, which may facilitate future investigations into the emergence and development of research fields.
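The core quantity in this abstract, a periodical's "semantic change" across decades, can be sketched as one minus the cosine similarity between its embedding vectors from two periods. The embeddings below are synthetic toy values, not the paper's data:

```python
import numpy as np

def semantic_change(v_old, v_new):
    """Semantic change = 1 - cosine similarity between two embeddings.
    0 means the periodical's position is unchanged; larger values mean
    a bigger shift in research focus."""
    v_old, v_new = np.asarray(v_old, float), np.asarray(v_new, float)
    cos = v_old @ v_new / (np.linalg.norm(v_old) * np.linalg.norm(v_new))
    return 1.0 - cos

emb_1980s = [0.9, 0.1, 0.0]   # toy embedding of a periodical in the 1980s
emb_1990s = [0.6, 0.6, 0.2]   # toy embedding of the same periodical a decade later
shift = semantic_change(emb_1980s, emb_1990s)
```

Ranking periodicals by this quantity is one plausible way to surface the "rapidly developing fields" the abstract mentions.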
Science Hierarchography: Hierarchical Organization of Science Literature
Muhan Gao, Jash Shah, Weiqi Wang
et al.
Scientific knowledge is growing rapidly, making it difficult to track progress and high-level conceptual links across broad disciplines. While tools like citation networks and search engines help retrieve related papers, they lack the abstraction needed to represent the density and structure of activity across subfields. We motivate SCIENCE HIERARCHOGRAPHY, the goal of organizing scientific literature into a high-quality hierarchical structure that spans multiple levels of abstraction -- from broad domains to specific studies. Such a representation can provide insights into which fields are well-explored and which are under-explored. To achieve this goal, we develop a hybrid approach that combines efficient embedding-based clustering with LLM-based prompting, striking a balance between scalability and semantic precision. Compared to LLM-heavy methods like iterative tree construction, our approach achieves superior quality-speed trade-offs. Our hierarchies capture different dimensions of research contributions, reflecting the interdisciplinary and multifaceted nature of modern science. We evaluate its utility by measuring how effectively an LLM-based agent can navigate the hierarchy to locate target papers. Results show that our method improves interpretability and offers an alternative pathway for exploring scientific literature beyond traditional search methods. Code, data and demo are available: https://github.com/JHU-CLSP/science-hierarchography
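The embedding-clustering half of the hybrid pipeline can be sketched with a tiny k-means over paper embeddings, yielding one coarse level of the hierarchy. The embeddings and cluster count below are synthetic; the paper pairs this step with LLM prompting to label and refine clusters:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means: assign points to nearest center, recompute means.
    Deterministic init from the first k points, for illustration only."""
    centers = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Squared distance of every point to every center
        dists = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

# Two synthetic "topics": papers embedded near (0, 0) and near (5, 5).
X = np.array([[0.1, 0.0], [0.0, 0.2], [5.1, 4.9], [4.8, 5.2]])
labels = kmeans(X, k=2)  # papers 0,1 land in one cluster; 2,3 in the other
```

Each resulting cluster would then be recursively split and passed to an LLM for a human-readable label, producing the multi-level structure the abstract describes.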
Generalization and the Rise of System-level Creativity in Science
Hongbo Fang, James Evans
Scientific progress has long been understood as recombinant, with breakthroughs arising when existing ideas are joined in new ways. Empirical work in this tradition has focused on the inputs to discovery, asking whether a paper draws together atypical or distant prior knowledge. Far less is known about how knowledge is supplied for downstream recombination, or how individual contributions are forged to play distinct and distant roles in the broader system of science. Using citation networks from tens of millions of publications in OpenAlex and the Web of Science, here we show that scientific contributions stably decompose into three functional types, foundations, extensions, and generalizations, distinguishable by the local structure of their forward citations. This decomposition of the 'functional role' of scientific work reveals a previously unseen pattern of scientific production: foundational and extensional work, which respectively build and elaborate within disciplines, dominated the post-war decades but have declined steadily since the early 1990s, while generalizations, meaning compressed and modular contributions reused in distant fields, have risen sharply. Stacked difference-in-differences analyses that exploit venues' transitions to online access and authors' adoption of large language models provide causal evidence that digital knowledge infrastructure is driving this shift. The locus of innovation has thus migrated from within what Simon might characterize as nearly decomposable disciplinary modules to the interfaces between them, recasting the much-discussed decline of disruption as a structural reorganization of science rather than a slowdown, and revealing a growing misalignment between how science now advances and how it is recognized and rewarded.
Slow‐Wave HMSIW‐SSPP Leaky‐Wave Antenna With Phase‐Shift Asymmetric Coupling for Continuous Beam Scanning
Yiming Zhang, Yuxi Liu, Sailing He
ABSTRACT A compact leaky‐wave antenna (LWA) with innovative phase‐shift asymmetric coupling for continuous beam scanning is presented. The antenna utilises a slow‐wave half‐mode substrate integrated waveguide with spoof surface plasmon polaritons (SW‐HMSIW‐SSPP) transmission line structure to achieve ultra‐compact dimensions in both longitudinal and lateral directions. The radiation characteristic is achieved using sinusoidal modulation on the SSPP structure. To enable continuous beam scanning through broadside, a novel and simple phase‐shift asymmetric coupling method is developed by placing sinusoidally modulated patches with π/2 phase shift on the metallised blind via‐hole arrays. This approach effectively suppresses the open stopband (OSB) and enables continuous beam scanning from backward to forward directions without radiation degradation at broadside. A prototype of the proposed LWA is fabricated and characterised. The measured results demonstrate that the antenna with 12 unit‐cells operates over a wide frequency range from 14.3 to 20.5 GHz with continuous beam scanning from −40° to +30°, while maintaining an ultra‐compact aperture of only 6.67 λ0 × 0.27 λ0.
Telecommunication, Electricity and magnetism
Multi-Resolution and Multi-Temporal Satellite Remote Sensing Analysis to Understand Human-Induced Changes in the Landscape for the Protection of Cultural Heritage: The Case Study of the MapDam Project, Syria
Nicodemo Abate, Diego Ronchi, Sara Elettra Zaia
et al.
This study presents a multi-resolution and multi-temporal remote sensing approach to assess human-induced changes in cultural landscapes, with a focus on the archaeological site of Amrit (Syria) within the MapDam project. By integrating satellite archives (KH, Landsat series, NASADEM) with ancillary geospatial data (OpenStreetMap) and advanced analytical methods, four decades (1984–2024) of land-use/land-cover (LULC) change and shoreline dynamics were reconstructed. Machine learning classification (Random Forest) achieved high accuracy (Test Accuracy = 0.94; Kappa = 0.89), enabling robust LULC mapping, while predictive modelling of urban expansion, calibrated through a Gradient Boosting Machine, attained a Figure of Merit of 0.157, confirming strong predictive reliability. The results reveal path-dependent urban growth concentrated on low-slope terrains (≤5°) and consistent with proximity to infrastructure, alongside significant shoreline regression after 1974. A Business-as-Usual projection for 2024–2034 estimates 8.676 ha of new anthropisation, predominantly along accessible plains and peri-urban fringes. Beyond quantitative outcomes, this study demonstrates the replicability and scalability of open-source, data-driven workflows using Google Earth Engine and Python 3.14, making them applicable to other high-risk heritage contexts. This transparent methodology is particularly critical in conflict zones or in regions where cultural assets are neglected due to economic constraints, political agendas, or governance limitations, offering a powerful tool to document and safeguard endangered archaeological landscapes.
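The two accuracy figures reported for the LULC classifier (Test Accuracy = 0.94; Kappa = 0.89) are standard metrics that can be computed directly from predicted and reference labels. The labels below are synthetic toy data, not the study's results:

```python
import numpy as np

def accuracy_and_kappa(y_true, y_pred):
    """Overall accuracy and Cohen's kappa for a classification.
    Kappa corrects the observed agreement p_o for the agreement p_e
    expected by chance from the marginal class frequencies."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    p_o = (y_true == y_pred).mean()                      # observed agreement
    p_e = sum((y_true == c).mean() * (y_pred == c).mean()  # chance agreement
              for c in classes)
    kappa = (p_o - p_e) / (1 - p_e)
    return p_o, kappa

# Toy LULC classes: 0 = urban, 1 = vegetation, 2 = water
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 1, 2, 1]
acc, kappa = accuracy_and_kappa(y_true, y_pred)
```

Reporting both metrics is common in remote sensing because kappa stays low when a classifier merely predicts the dominant land-cover class.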
The nature of science in science education : rationales and strategies
W. McComas
615 citations
en
Political Science
Science in Public: Communication, Culture and Credibility
John A. Palen, J. Gregory, Stephen W. Miller
607 citations
en
Biology, Political Science