Ethical and sustainable mathematics is localised: why global paradigms fail and culturally-situated practices are essential
Dennis Müller, Maurice Chiodo
This paper identifies several interconnected challenges preventing the move towards more ethical and sustainable mathematics education: the entrenched belief in mathematical neutrality, the difficulty of simultaneously reforming mathematics and its pedagogy, the gap between academic theory and classroom practice, and the need for epistemic decolonisation. In this context, we look at both bottom-up and top-down approaches, and argue that globalised frameworks such as the United Nations' Sustainable Development Goals are insufficient for this transformation, and that ethical and sustainable forms of mathematics ought not to be built on them as their (philosophical) foundation. These frameworks are often rooted in a Western-centric development paradigm that can perpetuate colonial hierarchies and fail to resolve inherent conflicts between economic growth and ecological integrity. As an alternative, this paper advocates embracing localised, culturally-situated mathematical practices. Using the Ethics in Mathematics Project as a case study within a Western, Global North institution, it illustrates a critical-pragmatic, multi-level strategy for fostering ethical consciousness within a specific research community, and shows how this may be achieved in otherwise adversarial circumstances.
Row and column detection complexities of character tables
Adrian Padellaro, Sanjaye Ramgoolam, Rak-Kyeong Seong
Character tables of finite groups and closely related commutative algebras have been investigated recently using new perspectives arising from the AdS/CFT correspondence and low-dimensional topological quantum field theories. Two important elements in these new perspectives are physically motivated definitions of quantum complexity for the algebras and a notion of row-column duality. These elements are encoded in properties of the character table of a group G and the associated algebras, notably the centre of the group algebra and the fusion algebra of irreducible representations of the group. Motivated by these developments, we define row and column versions of detection complexities for character tables, and investigate the relation between these complexities under the exchange of rows and columns. We observe regularities that arise in the statistical averages over small character tables and propose corresponding conjectures for arbitrarily large character tables.
Arithmetic and $k$-maximality of the cyclic free magma
Carles Cardó
We survey free magmas and we explore the structure of their submagmas. By equipping the cyclic free magma with a second distributive operation we obtain a ringoid-like structure with some primitive arithmetical properties. A submagma is $k$-maximal when there are only $k-1$ submagmas between it and the free magma itself. These two tools, arithmetic and maximality, allow us to study the lattice of the submagmas of a free magma.
Determination of Fraud Hexagon Theory and Audit Committee Characteristics in Detecting Financial Statement Fraud
Astri Hardirmaningrum, Abdul Rohman
Purpose: This study aims to determine the influence of elements from the fraud hexagon theory and characteristics of audit committees on detecting financial statement fraud.
Methodology/approach: The study uses secondary data sourced from the annual reports of manufacturing companies in the basic and chemical industry subsectors listed on the IDX for the 2019-2022 period.
Findings: The findings show that pressure has a positive effect and opportunity has a negative effect on financial statement fraud. Rationalization, capability, arrogance, collusion, and two characteristics of the audit committee, namely financial expertise and frequency of audit committee meetings, have no effect on financial statement fraud.
Practical and Theoretical contribution/Originality: These findings help researchers and business managers better understand the factors that lead to fraud through the fraud hexagon model and the characteristics of audit committees, so as to reduce the frequency and amount of losses due to fraud. The novelty of this study is its use of audit committee characteristics, namely financial expertise and frequency of meetings, as independent variables in detecting financial statement fraud.
Research Limitation: The independent variables in this study explain only 31.6% of the variation in the detection of financial statement fraud; the remaining 68.4% is explained by variables outside this research model. In addition, the results cannot be generalized because only one company sub-sector is examined.
Semantic Table Detection with LayoutLMv3
Ivan Silajev, Niels Victor, Phillip Mortimer
This paper presents an application of the LayoutLMv3 model to semantic table detection on financial documents from the IIIT-AR-13K dataset. The experiment was motivated by the fact that the official LayoutLMv3 paper reported no results for table detection using semantic information. We conclude that our approach did not improve the model's table detection capabilities, for which there are several possible reasons: the model's weights may have been unsuitable for our purpose, more time may have been needed to optimise the model's hyperparameters, or semantic information may simply not improve a model's table detection accuracy.
A Q# Implementation of a Quantum Lookup Table for Quantum Arithmetic Functions
Rajiv Krishnakumar, Mathias Soeken, Martin Roetteler
et al.
In this paper, we present Q# implementations of arbitrary single-variable fixed-point arithmetic operations for a gate-based quantum computer based on lookup tables (LUTs). In general, this is an inefficient way of implementing a function, since the number of inputs can be large or even infinite. However, if the input domain can be bounded and some error tolerance in the output is acceptable (both of which are often the case in practical use-cases), the quantum LUT implementation of certain quantum arithmetic functions can be more efficient than the corresponding reversible arithmetic implementations. We discuss the implementation of the LUT using Q# and its approximation errors. We then show examples of how to use the LUT to implement quantum arithmetic functions, and compare the resources required with the current state-of-the-art bespoke implementations of some commonly used arithmetic functions. The LUT implementation is designed for practitioners to use when implementing end-to-end quantum algorithms. In addition, given its well-defined approximation errors, it makes for a clear benchmark for evaluating the efficiency of bespoke quantum arithmetic circuits.
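The bounded-domain, error-tolerant trade-off behind the quantum LUT has a direct classical analogue. The following Python sketch (all names hypothetical; this is not the paper's Q# code) tabulates a function over a bounded fixed-point domain and shows that the approximation error shrinks with the table's resolution:

```python
import math

def build_lut(f, lo, hi, n_bits):
    """Tabulate f at 2**n_bits evenly spaced points on [lo, hi)."""
    n = 2 ** n_bits
    step = (hi - lo) / n
    return [f(lo + i * step) for i in range(n)], lo, step

def lut_eval(table, lo, step, x):
    """Approximate f(x) by the nearest-below table entry (truncation)."""
    i = int((x - lo) / step)
    i = min(max(i, 0), len(table) - 1)  # clamp to the bounded domain
    return table[i]

# 8 address bits => 256 entries over [0, pi/2); for sin, |f'| <= 1,
# so the truncation error is bounded by the step size.
table, lo, step = build_lut(math.sin, 0.0, math.pi / 2, 8)
approx = lut_eval(table, lo, step, 1.0)
err = abs(approx - math.sin(1.0))
```

Doubling `n_bits` halves `step` and hence the worst-case error, which mirrors the paper's point that well-defined approximation errors make LUTs a clean benchmark.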
Situating "Ethics in Mathematics" as a Philosophy of Mathematics Ethics Education
Dennis Müller
In this paper, we situate the educational movement of "Ethics in Mathematics," as outlined by the Cambridge University Ethics in Mathematics Project, in the wider area of mathematics ethics education. By focusing on the core message coming out of Ethics in Mathematics, its target group, and educational philosophy, we set it into relation with "Mathematics for Social Justice" and Paul Ernest's recent work on ethics of mathematics. We conclude that, although both Ethics in Mathematics and Mathematics for Social Justice appear antagonistic at first glance, they can be understood as complementary rather than competing educational strategies.
Treatment of Dye Wastewater Containing Chromium from Batik Industry using Coconut Shell Activated Carbon Adsorption
Aulia Qisti, Riza Agung Pribadi, Hamda Ali
et al.
Secured Steganographic Scheme Utilizing Fuzzy Threshold with Weighted Matrix
Sharmistha Jana, Biswapati Jana, Tzu-Chuen Lu
The idea of a similarity measure, also known as entropy measurement, is used to discriminate between dissimilar objects; mathematical, psychological, and fuzzy approaches have all been used to explore it. Building on these principles, a new fuzzy threshold-based steganographic system is designed that chooses pixel blocks for data embedding based on the degree of the pixel category. The secret data is then embedded through a sum-of-entry-wise multiplication operation using a predefined weighted matrix and the selected pixel block. During the extraction phase, the secret data is recovered using the required pixel range, fuzzy threshold, and weighted matrix, which together act as a shared secret key to improve security and robustness. The proposed technique is tested against steganographic attacks and several types of analysis to determine its imperceptibility and robustness, and various experimental tests demonstrate its efficacy and effectiveness. From a security standpoint, an attacker without the weighted matrix, threshold value, and reference table fails to retrieve the secret from the watermarked image. These results highlight strengths in image authentication, tamper detection, and digital forgery detection, all of which are essential in modern technological life. The system can benefit a wide range of government and business sectors, including health care, commercial security, defense, and intellectual property rights.
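The sum-of-entry-wise-multiplication embedding can be illustrated with a minimal sketch of the classic weighted-matrix idea it builds on. Here the weight matrix `W`, the block size, and the modulus are all illustrative choices, not the paper's actual parameters:

```python
R = 2            # bits embedded per block
MOD = 2 ** R     # weighted sum is taken modulo 2**R

# Hypothetical 2x2 weight matrix whose entries cover every nonzero residue 1..3,
# so any required change to the weighted sum can be made by adjusting one pixel.
W = [[1, 2],
     [3, 1]]

def weighted_sum(block):
    """Sum of entry-wise products of the pixel block with W, mod 2**R."""
    return sum(b * w
               for brow, wrow in zip(block, W)
               for b, w in zip(brow, wrow)) % MOD

def embed(block, secret):
    """Adjust at most one pixel so the weighted sum equals the secret."""
    block = [row[:] for row in block]
    delta = (secret - weighted_sum(block)) % MOD
    if delta:
        for i in range(2):
            for j in range(2):
                if W[i][j] % MOD == delta:
                    block[i][j] += 1  # raises the weighted sum by delta
                    return block
    return block

stego = embed([[10, 20], [30, 40]], 3)
recovered = weighted_sum(stego)  # extraction = recompute the weighted sum
```

Extraction needs only `W` and the modulus, which is why the weighted matrix can serve as part of a shared secret key: without it, recomputing the sum is not possible.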
Editorial: Minimizing Workplace Bias: What Surgeons, Scientists, and Their Organizations Can Do.
C. Rimnac
For clinicians and scientists, managing bias in the studies we design, conduct, and read is a topic that we deal with all the time. Perhaps because of this, we may imagine that we are less prone to carrying normal human biases into our places of work, and that we know how to handle it during interactions with our peers, staff, and patients. Indeed, much progress has been made in addressing overt or explicit bias in the workplace. And although it’s by no means fully behind us, gone are the days of “Want Ads” stating “only men need apply” or women being fired (explicitly) because they became pregnant. Unfortunately, there is ample evidence that explicit, and also unconscious (or more broadly, implicit) biases still exist in the doctor’s office, the hallways of hospitals, and the laboratory, and that these biases harm patients, providers, and scientists [8, 9, 11]. Both individuals and institutions have important roles to play to minimize bias and achieve fairer healthcare systems and workplaces. Explicit biases are easily understood as overt prejudices and attitudes about a group that an individual realizes (s)he holds; overt racism or misogyny are examples of explicit bias [14]. Unconscious and implicit biases are more subtle. Unconscious biases are associations or deeply held beliefs that drive our attitudes and behavior, even though at a conscious level we are not aware of them [4, 12]. Unconscious bias often develops early in life and can be reinforced by repeated social stereotypes; the persistent (and incorrect) idea that girls are biologically inferior at math or that Asian people are good at it are examples of unconscious bias [2, 10, 12]. Implicit bias is closely related to unconscious bias, but more broadly captures the notion that even when we recognize and understand on an intellectual level that a deeply held belief is inaccurate or false, it may still be hard to control its effect on our behavior. 
To carry the same example a bit further, an implicit bias would be demonstrated if an employer were to make a hiring decision predicated on the false belief that girls (and thus, women) are not good at math, despite being provided with evidence to the contrary [2] (Table 1). Certainly, we all have seen studies where identical resumes in science, engineering, or mathematics were evaluated either as though they were submitted by a man or a woman (only the names were changed), and even though everything else about them was identical, those that appeared to have been submitted by a man were rated more favorably and were more likely to be hired than those apparently submitted by a woman. It's likely that both unconscious and implicit biases were at work in those studies; it may not be easy (or even possible) in some cases to separate their effects. It's important to understand that implicit bias is pervasive, variable, and normal [1]. We all hold implicit biases. To better understand how deeply embedded implicit biases can be within us, take one (or more) of the Harvard Implicit Association Tests (IATs) (https://implicit.harvard.edu/implicit/takeatest.html). These tests cover a sobering range of topics, including Sexuality, Race, Age, Religion, Skin-Tone, Weight, Gender-Career, and Disability. The IATs can be a useful means by which to examine our own implicit biases.
The author certifies that neither she, nor any members of her immediate family, have any commercial associations (such as consultancies, stock ownership, equity interest, patent/licensing arrangements, etc.) that might pose a conflict of interest in connection with the submitted article. The opinions expressed are those of the writer, and do not reflect the opinion or policy of CORR or The Association of Bone and Joint Surgeons.
Handling Concept Drift for Predictions in Business Process Mining
Lucas Baier, Josua Reimold, Niklas Kühl
Predictive services nowadays play an important role across all business sectors. However, deployed machine learning models are challenged by data streams that change over time, a phenomenon described as concept drift, which can strongly degrade prediction quality. Concept drift is therefore usually handled by retraining the model. However, current research lacks recommendations on which data should be selected for retraining the machine learning model. We therefore systematically analyze different data selection strategies in this work. Subsequently, we instantiate our findings on a use case in process mining that is strongly affected by concept drift. We show that concept drift handling improves accuracy from 0.5400 to 0.7010, and we depict the effects of the different data selection strategies.
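The core design question the abstract raises, which past observations to retrain on, can be made concrete with a small sketch. The strategy names and the drift-point placeholder below are illustrative assumptions, not the paper's exact strategy set:

```python
def select_training_data(history, strategy, window=100):
    """Pick which past observations to retrain on after drift is detected.

    Illustrative strategies:
      - "full":       retrain on everything seen so far
      - "window":     retrain on only the most recent `window` observations
      - "post_drift": retrain on only the data after a detected drift point
    """
    if strategy == "full":
        return history
    if strategy == "window":
        return history[-window:]
    if strategy == "post_drift":
        drift_point = len(history) // 2  # placeholder for a drift detector's output
        return history[drift_point:]
    raise ValueError(f"unknown strategy: {strategy}")

history = list(range(500))          # stand-in for a logged event stream
recent = select_training_data(history, "window")
```

The trade-off is the usual one: "full" keeps stale pre-drift examples that may mislead the model, while "window" or "post_drift" discard data and risk retraining on too few observations.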
Classifying Cartographic Projections Based on Dynamic Analysis of Program Code
Florian Ledermann
Abstract. Analyzing a given map to identify its projection and other geometrical properties has long been an important aspect of cartographic analysis. If explicit information about the projection used in a particular map is not available, the properties of the cartographic transformation can sometimes be reconstructed from the map image. However, such a process of projection analysis requires significant manual labor and oversight. For digital maps, we usually expect the projection from geographic space to map space to have been calculated by a computer program. Such a program can be expected to contain the implementation of the mathematical rules of the projection and subsequent coordinate transformations such as translation and scaling. The program code, therefore, contains information that would allow an analyst to reliably identify map projections and other geometrical transformations applied to the input data. In the case of interactive online maps, the code generating the map is in fact delivered to the map user and could be used for cartographic analysis. The core idea of our novel method proposed for map analysis is to apply reverse engineering techniques on the code implementing the cartographic transformations in order to retrieve the properties of the applied map projection. However, automatic reasoning about computer code by way of static analysis (analyzing the source code without running it) is provably limited – for example, the code delivered to the map user may contain a whole library of different map projections, of which only a specific one may be actually used at runtime. Instead, we propose a dynamic analysis approach to observe and monitor the operations performed by the code as the program runs, and to retrieve the mathematical operations that have been used to calculate the coordinates of every graphical element on the map. 
The presented method produces, for every graphical element of the map, a transformation graph consisting of low-level mathematical operations. Cartographic projections can be identified as distinctive patterns in the transformation graph, and can be distinguished fully automatically by matching a set of predefined patterns against a particular graph. Projections vary widely in their arithmetic structure, and therefore in the structure of the corresponding transformation graphs extracted from program code. Some projections can be computed directly using continuous equations involving trigonometric functions. Other projections involve nonlinear equations that must be solved by approximation. Composite projections use different projections depending on some threshold value. Yet other projections, such as the Robinson projection, define a table of predefined values between which interpolation is used. In each of these cases, we expect to find the operations corresponding to the mathematical structure of the projection in the transformation graph extracted by the presented method. To verify the method, we implemented the patterns of several well-known cartographic projections based on the literature and applied them to the transformation graphs extracted from a variety of sample programs. To ensure a diversity of implementations, we evaluated programs using different and independent JavaScript implementations of projections, including the open source libraries D3.js, proj4js, Leaflet, OpenLayers, and informal implementations of example programs found online. For these case studies, we could successfully identify many projections by matching patterns in the transformation graph in a fully automated, unsupervised manner.
In the future, the proposed method may be further developed for many innovative application scenarios, such as building a “cartographic search engine” or constructing novel tools for semi-automatic cartographic analysis and review.
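The dynamic-analysis idea, observing the operations a program applies to coordinates and matching the recorded trace against known projection patterns, can be sketched in miniature. This toy (the `Traced` wrapper, operation names, and pattern are all illustrative, not the paper's actual machinery) traces a Mercator-style computation y = ln(tan(pi/4 + lat/2)) and recognizes it from its operation sequence:

```python
import math

class Traced:
    """Wrap a float and record every math operation applied to it."""
    def __init__(self, value, ops=()):
        self.value = value
        self.ops = list(ops)
    def apply(self, name, fn):
        return Traced(fn(self.value), self.ops + [name])

def mercator_y(lat):
    # y = ln(tan(pi/4 + lat/2)), written against the traced wrapper
    return (lat.apply("halve", lambda v: v / 2)
               .apply("shift_pi4", lambda v: v + math.pi / 4)
               .apply("tan", math.tan)
               .apply("log", math.log))

# A predefined pattern: the operation chain characteristic of Mercator.
MERCATOR_PATTERN = ["halve", "shift_pi4", "tan", "log"]

y = mercator_y(Traced(0.5))          # run the code, observing it dynamically
is_mercator = (y.ops == MERCATOR_PATTERN)
```

The value computed is unchanged by the tracing; the trace is a by-product of running the code, which is exactly why dynamic analysis sidesteps the static-analysis problem of dead code in bundled projection libraries.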
Methods and tools for GDPR Compliance through Privacy and Data Protection 4 Engineering: Specification and design of assurance tool for data protection and privacy
Final Comments: Money and "Pan-relationalism"
Héctor Vera
Foundations for Spread Page: review of existing concepts, solutions, and technologies capable of improving the effectiveness of conveying knowledge
T. Tarnawski, R. Kasprzyk, R. Waszkowski
Distance between arithmetic progressions and perfect squares
Tsz Ho Chan
In this paper, we study how close the terms of a finite arithmetic progression can get to a perfect square. The answer depends on the initial term, the common difference and the number of terms in the arithmetic progression.
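The quantity studied, how close the terms of a finite arithmetic progression get to a perfect square, is easy to compute directly for small cases. A brief Python sketch (the function name and brute-force approach are illustrative; the paper's results are analytic):

```python
import math

def min_square_distance(a, d, n):
    """Smallest |t - s| over terms t = a + k*d (0 <= k < n) and squares s."""
    best = None
    for k in range(n):
        t = a + k * d
        r = math.isqrt(max(t, 0))
        # nearest squares below and above t
        dist = min(abs(t - r * r), abs((r + 1) ** 2 - t))
        best = dist if best is None else min(best, dist)
    return best

# The progression 2, 9, 16, 23, 30 actually contains squares (9 and 16),
# while 3, 10, 17 only comes within distance 1 of one (4, 9, 16).
hit = min_square_distance(2, 7, 5)
near = min_square_distance(3, 7, 3)
```

As the abstract notes, the answer depends on the initial term `a`, the common difference `d`, and the number of terms `n`; the brute force makes that dependence easy to explore numerically.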
OMNIDATA and the Computerization of Scientific Data
D. R. Lide
on Petrochemical Engineering: Unleashing potential of Micro-Ice-GTL technology for lucrative capture of Methane gas emissions
V. Piven
Weaving Entrepreneurially Minded Learning Throughout a Civil Engineering Curriculum
A. Welker, K. Sample-Lord, J. Yost
Exam Survival Guide: Physical Chemistry
J. Vogt