REST APIs enable collaboration among microservices. A single fault in a REST API can bring down the entire microservice system and cause significant financial losses, underscoring the importance of REST API testing. Effectively testing REST APIs requires thoroughly exercising the functionalities behind them. To this end, existing techniques leverage REST specifications (e.g., Swagger or OpenAPI) to generate test cases. Using the resource constraints extracted from specifications, these techniques work well for testing simple, business-insensitive functionalities, such as resource creation, retrieval, update, and deletion. However, for complex, business-sensitive functionalities, these specification-based techniques often fall short, since exercising such functionalities requires additional business constraints that are typically absent from REST specifications. In this paper, we present LoBREST, a log-based, business-aware REST API testing technique that leverages historical request logs (HRLogs) to effectively exercise the business-sensitive functionalities behind REST APIs. To obtain compact operation sequences that preserve clean and complete business constraints, LoBREST first employs a locality-slicing strategy to partition HRLogs into smaller slices. Then, to ensure the effectiveness of the obtained slices, LoBREST enhances them in two steps: (1) adding slices for operations missing from HRLogs, and (2) completing missing resources within the slices. Finally, to improve test adequacy, LoBREST uses these enhanced slices as initial seeds to perform business-aware fuzzing. LoBREST outperformed eight tools (including ARAT-RL, Morest, and DeepREST) across 17 real-world services. It achieved top operation coverage on 16 services and line coverage on 15, averaging 2.1x and 1.2x improvements over the runner-up. LoBREST detected 108 5XX bugs, including 38 found by no other tool.
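The locality-slicing step can be illustrated with a minimal sketch. The field names (`session`, `ts`, `operation`) and the group-by-session-then-split-on-time-gap heuristic are assumptions for illustration, not LoBREST's actual strategy, which the paper defines over historical request logs:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LogEntry:
    session: str    # hypothetical session/user identifier
    ts: float       # request timestamp in seconds
    operation: str  # e.g. "POST /orders"

def locality_slices(log, max_gap=30.0):
    """Group HRLog entries per session, then split a session's stream
    wherever consecutive requests are more than max_gap seconds apart."""
    by_session = defaultdict(list)
    for e in sorted(log, key=lambda e: e.ts):
        by_session[e.session].append(e)
    slices = []
    for entries in by_session.values():
        current = [entries[0]]
        for e in entries[1:]:
            if e.ts - current[-1].ts > max_gap:
                slices.append(current)
                current = []
            current.append(e)
        slices.append(current)
    return slices
```

Each resulting slice is a compact operation sequence that could then serve as a fuzzing seed.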
Liyang Zhao, Olurotimi Seton, Himadeep Reddy Reddivari
et al.
The sales process involves sales functions converting leads or opportunities into customers and selling more products to existing customers. Optimizing the sales process is thus key to the success of any B2B business. In this work, we introduce a principled approach to sales optimization and business AI, namely Causal Predictive Optimization and Generation, which comprises three layers: 1) a prediction layer with causal ML; 2) an optimization layer with constraint optimization and contextual bandits; and 3) a serving layer with generative AI and a feedback loop for system enhancement. We detail the implementation and deployment of the system at LinkedIn, showcasing significant wins over legacy systems and sharing learnings and insights broadly applicable to this field.
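The optimization layer's contextual-bandit component can be illustrated with the simplest possible policy. The abstract does not specify which bandit algorithm is used; the epsilon-greedy rule below is a stand-in sketch, not LinkedIn's implementation:

```python
import random

def epsilon_greedy_choice(scores, epsilon=0.1, rng=random):
    """With probability epsilon, explore a uniformly random action;
    otherwise exploit the action with the highest predicted score
    (here, scores would come from the causal-ML prediction layer)."""
    if rng.random() < epsilon:
        return rng.randrange(len(scores))
    return max(range(len(scores)), key=scores.__getitem__)
```

In a feedback loop, observed outcomes for the chosen action would be fed back to retrain the prediction layer.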
This article examines the relationships between environmental and task uncertainty and the importance of controllers' activities. Survey data from 412 Dutch organizations operating in a wide variety of sectors are used to explore these relationships, focusing explicitly on the entire controllership function. The results show much variation in the importance of controllers' activities, both within and across organizations. Factor analysis identifies seven dimensions of the importance of controllers' activities, of which budgeting/reporting activities are the most important. Environmental and task uncertainty relate differently to these dimensions.
Business, Business mathematics. Commercial arithmetic. Including tables, etc.
The waste problem remains a major concern in Indonesia. Waste, especially plastic waste, comes from single-use packaging of daily necessities such as personal care and home care products. PT. Siklus Refil Indonesia, or Siklus, a retail company, offers a sustainable way of buying daily necessities through a refill method. Since April 2020, Siklus has operated in the Greater Jakarta area and has already reached 20,000 customers. However, Siklus must change its business model due to a regulation from the Food and Drug Supervisory Agency (BPOM), which warned the company not to sell personal care products that come into direct contact with skin. The warning led to declining customers, sales, and profit for Siklus. This research aims to determine a new business model for Siklus using the design thinking concept. Following this concept, the research empathizes with customers, defines customer needs, and ideates a business model. It then selects the new business model by creating a stepwise-selection matrix, prototypes the business model, and tests it. After this process, Return from Home is selected as the new business model for Siklus.
Although great progress has been made by previous table understanding methods, including recent approaches based on large language models (LLMs), they rely heavily on the premise that given tables must be converted into a certain text sequence (such as Markdown or HTML) to serve as model input. However, such high-quality textual table representations are difficult to access in some real-world scenarios, while table images are much more accessible. Therefore, how to directly understand tables using intuitive visual information is a crucial and urgent challenge for developing more practical applications. In this paper, we propose a new problem, multimodal table understanding, where the model needs to generate correct responses to various table-related requests based on the given table image. To facilitate both model training and evaluation, we construct a large-scale dataset named MMTab, which covers a wide spectrum of table images, instructions and tasks. On this basis, we develop Table-LLaVA, a generalist tabular multimodal large language model (MLLM), which significantly outperforms recent open-source MLLM baselines on 23 benchmarks under held-in and held-out settings. The code and data are available at https://github.com/SpursGoZmy/Table-LLaVA
Credit ratings are becoming one of the primary references for financial institutions to assess credit risk and accurately predict the likelihood of business failure of an individual or an enterprise. Financial institutions therefore depend on credit rating tools and services to help them predict the ability of creditors to meet financial obligations. Conventional credit rating is broadly categorized into two classes, namely good credit and bad credit. This approach lacks adequate precision for credit risk analysis in practice. Related studies have shown that data-driven machine learning algorithms outperform many conventional statistical approaches in solving this type of problem, in terms of both accuracy and efficiency. The purpose of this paper is to construct and validate a credit risk assessment model that uses Linear Discriminant Analysis as a dimensionality reduction technique to discriminate good creditors from bad ones, and to identify the best classifier for credit assessment of commercial banks based on real-world data. This will help commercial banks avoid monetary losses and prevent financial crises.
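The discriminant step can be sketched as a plain two-class Fisher LDA; the ridge term and midpoint threshold below are illustrative choices, not the paper's exact pipeline:

```python
import numpy as np

def lda_direction(X, y):
    """Two-class Fisher discriminant direction w = Sw^{-1}(mu1 - mu0),
    where Sw is the within-class scatter matrix."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    # small ridge term keeps Sw invertible on tiny samples
    return np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), mu1 - mu0)

def lda_classify(X, y, Xnew):
    """Project onto w and split at the midpoint of the projected class means
    (label 0 = good creditor, 1 = bad creditor, say)."""
    w = lda_direction(X, y)
    thresh = 0.5 * (X[y == 0].mean(axis=0) + X[y == 1].mean(axis=0)) @ w
    return (Xnew @ w > thresh).astype(int)
```

In practice the projected one-dimensional score, rather than the hard label, would feed whichever downstream classifier performs best.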
This is the second installment in a series of papers applying descriptive set theoretic techniques to both analyze and enrich classical functors from homological algebra and algebraic topology. In it, we show that the Čech cohomology functors $\check{\mathrm{H}}^n$ on the category of locally compact separable metric spaces each factor into (i) what we term their definable version, a functor $\check{\mathrm{H}}^n_{\mathrm{def}}$ taking values in the category $\mathsf{GPC}$ of groups with a Polish cover (a category first introduced in this work's predecessor), followed by (ii) a forgetful functor from $\mathsf{GPC}$ to the category of groups. These definable cohomology functors powerfully refine their classical counterparts: we show that they are complete invariants, for example, of the homotopy types of mapping telescopes of $d$-spheres or $d$-tori for any $d\geq 1$, and, in contrast, that there exist uncountable families of pairwise homotopy inequivalent mapping telescopes of either sort on which the classical cohomology functors are constant. We then apply the functors $\check{\mathrm{H}}^n_{\mathrm{def}}$ to show that a seminal problem in the development of algebraic topology, namely Borsuk and Eilenberg's 1936 problem of classifying, up to homotopy, the maps from a solenoid complement $S^3\backslash\Sigma$ to the $2$-sphere, is essentially hyperfinite but not smooth. In the course of this work, we record Borel definable versions of a number of classical results bearing on both the combinatorial and homotopical formulations of Čech cohomology; in aggregate, this work may be regarded as laying foundations for the descriptive set theoretic study of the homotopy relation on the space of maps from a locally compact Polish space to a polyhedron, a relation which embodies a substantial variety of classification problems arising throughout mathematics.
Tables on the Web contain a vast amount of knowledge in a structured form. To tap into this valuable resource, we address the problem of table retrieval: answering an information need with a ranked list of tables. We investigate this problem in two different variants, based on how the information need is expressed: as a keyword query or as an existing table ("query-by-table"). The main novel contribution of this work is a semantic table retrieval framework for matching information needs (keyword or table queries) against tables. Specifically, we (i) represent queries and tables in multiple semantic spaces (both discrete sparse and continuous dense vector representations) and (ii) introduce various similarity measures for matching those semantic representations. We consider all possible combinations of semantic representations and similarity measures and use these as features in a supervised learning model. Using two purpose-built test collections based on Wikipedia tables, we demonstrate significant and substantial improvements over state-of-the-art baselines.
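One of the similarity measures such a framework might combine is cosine similarity between sparse term-frequency representations. The `early_fusion` name and the bag-of-words embedding below are illustrative assumptions, not the paper's specific feature set:

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[w] * v.get(w, 0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def early_fusion(query_terms, table_terms):
    """Match a keyword query against a table's text by embedding both
    as bags of words and comparing in the same sparse space."""
    return cosine(Counter(query_terms), Counter(table_terms))
```

A supervised ranker would use many such scores, over both sparse and dense spaces, as features.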
There is a massive underserved market for small business lending in the US, with the Federal Reserve estimating over \$650B in unmet annual financing needs. Assessing the credit risk of a small business is key to making good decisions about whether to lend and on what terms. Large corporations have a well-established credit assessment ecosystem, but small businesses suffer from limited publicly available data and few (if any) credit analysts who cover them closely. We explore the applicability of deep-learning-based (DL-based) large corporate credit risk models to small business credit rating.
“Number” is an important learning dimension in primary mathematics education. It covers a large proportion of mathematical topics in the primary mathematics curriculum, and teachers use most of their class time to teach fundamental number concepts and basic arithmetic operations. This paper focuses on the nature of mathematics pedagogical content knowledge (MPCK) concerning arithmetic word problems. The aim of this qualitative research was to investigate how well the future primary school teachers in Hong Kong had been prepared to teach mathematical application problems for third and sixth graders. Nineteen pre-service teachers who majored in both mathematics and primary education were interviewed using two sets of scenario-based questions. The results revealed that innovative approaches were suggested for teaching third graders while the strategies suggested for teaching sixth graders were mostly based on a profound understanding of mathematical content knowledge. Many participants demonstrated sound knowledge about the sixth grader’s mathematical misconception, but most of them were unable to precisely indicate the third grader’s error in presenting a complete solution for a typical mathematics word problem. A deep understanding of elementary number theory seems to be a precondition for developing pre-service teachers’ MPCK in teaching arithmetic word problems.
Private blockchain is driving the creation of business networks, resulting in the creation of new value or new business models for the enterprises participating in the network. Such business networks form when enterprises come together to derive value through a network which is greater than the value that can be derived solely by any single company. This results in a setting that combines both competitive and cooperative behavior, which we call strategic coopetition. Traditionally, cooperative and competitive behavior have been analyzed separately in game theory. In this article, we provide a formal model that enables the joint analysis of these different types of behavior and the interdependencies between them. Using this model, we formally demonstrate and analyze the incentives for both cooperative and competitive behavior.
Documents are often used for knowledge sharing and preservation in business and science, within which are tables that capture most of the critical data. Unfortunately, most documents are stored and distributed as PDF or scanned images, which fail to preserve logical table structure. Recent vision-based deep learning approaches have been proposed to address this gap, but most still cannot achieve state-of-the-art results. We present Global Table Extractor (GTE), a vision-guided systematic framework for joint table detection and cell structure recognition, which can be built on top of any object detection model. With GTE-Table, we introduce a new penalty based on the natural cell containment constraint of tables to train our table network aided by cell location predictions. GTE-Cell is a new hierarchical cell detection network that leverages table styles. Further, we design a method to automatically label table and cell structure in existing documents to cheaply create a large corpus of training and test data. We use this to enhance PubTabNet with cell labels and to create FinTabNet, a real-world, complex dataset of scientific and financial tables with detailed table structure annotations, to help train and test structure recognition. Our framework surpasses previous state-of-the-art results on the ICDAR 2013 and ICDAR 2019 table competitions in both table detection and cell structure recognition, with a significant 5.8% improvement in the full table extraction system. Further experiments demonstrate a greater than 45% improvement in cell structure recognition when compared to a vanilla RetinaNet object detection model on our new out-of-domain FinTabNet.
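The cell-containment idea behind the GTE-Table penalty can be sketched as the average fraction of predicted cell area that falls outside the predicted table box. This formula is an illustration of the constraint, not the paper's exact loss term:

```python
def area(box):
    """Area of an axis-aligned box (x0, y0, x1, y1); empty boxes give 0."""
    x0, y0, x1, y1 = box
    return max(0.0, x1 - x0) * max(0.0, y1 - y0)

def intersection(a, b):
    """Area of the overlap between two axis-aligned boxes."""
    return area((max(a[0], b[0]), max(a[1], b[1]),
                 min(a[2], b[2]), min(a[3], b[3])))

def containment_penalty(table_box, cell_boxes):
    """Mean fraction of each predicted cell's area lying outside the
    table box; zero when every cell is fully contained."""
    pens = []
    for c in cell_boxes:
        a = area(c)
        pens.append(0.0 if a == 0 else 1.0 - intersection(table_box, c) / a)
    return sum(pens) / len(pens) if pens else 0.0
```

Adding such a term to the detection loss rewards table boxes that enclose the cells predicted for them.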
Roberto Bagnara, Abramo Bagnara, Fabio Biselli
et al.
Verification of programs using floating-point arithmetic is challenging on several counts. One of the difficulties of reasoning about such programs is due to the peculiarities of floating-point arithmetic: rounding errors, infinities, non-numeric objects (NaNs), signed zeroes, denormal numbers, different rounding modes, etc. One possibility to reason about floating-point arithmetic is to model a program computation path by means of a set of ternary constraints of the form z = x op y and use constraint propagation techniques to infer new information on the variables' possible values. In this setting, we define and prove the correctness of algorithms to precisely bound the value of one of the variables x, y or z, starting from the bounds known for the other two. We do this for each of the operations and for each rounding mode defined by the IEEE 754 binary floating-point standard, even in the case the rounding mode in effect is only partially known. This is the first time that such so-called filtering algorithms are defined and their correctness is formally proved. This is an important slab for paving the way to formal verification of programs that use floating-point arithmetic.
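The forward direction of such filtering is easy to illustrate for addition and subtraction: IEEE 754 rounding is monotone, so the extreme rounded results arise at the extreme inputs. The sketch below assumes round-to-nearest and finite bounds; NaNs, infinities, signed zeroes, and partially known rounding modes are exactly the cases the paper's algorithms handle and this sketch does not:

```python
def add_bounds(xlo, xhi, ylo, yhi):
    """Forward filtering for z = x (+) y under round-to-nearest.
    Rounding is monotone, and Python floats are IEEE 754 binary64, so
    the two additions below directly yield correctly rounded endpoints."""
    return xlo + ylo, xhi + yhi

def sub_bounds(xlo, xhi, ylo, yhi):
    """Forward filtering for z = x (-) y: the minimum pairs the smallest
    x with the largest y, and symmetrically for the maximum."""
    return xlo - yhi, xhi - ylo
```

The inverse direction (bounding x from bounds on z and y) is where the careful per-operation, per-rounding-mode case analysis of the paper is needed.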
This month we again present several Audit Research Summaries from the database of the American Accounting Association (www.auditingresearchsummaries.org). The first summary concerns a study by Svanberg and Öhman into the influence of charismatic leadership on auditor objectivity. The assumption is that charismatic leaders affect it negatively. Based on a survey among Swedish auditors, the researchers conclude that there is a positive relationship between the degree to which objectivity is perceived and the degree to which the client's executive is experienced as charismatic. This finding should be taken into account when accepting clients and assessing risks.
Andrea Burattin, Vered Bernstein, Manuel Neurauter
et al.
Business process models abstract complex business processes by representing them as graphical models. Their layout, solely determined by the modeler, affects their understandability. To support the construction of understandable models it would be beneficial to systematically study this effect. However, this requires a basic set of measurable key visual features, depicting the layout properties that are meaningful to the human user. The aim of this research is thus twofold. First, to empirically identify key visual features of business process models which are perceived as meaningful to the user. Second, to show how such features can be quantified into computational metrics, which are applicable to business process models. We focus on one particular feature, consistency of flow direction, and show the challenges that arise when transforming it into a precise metric. We propose three different metrics addressing these challenges, each following a different view of flow consistency. We then report the results of an empirical evaluation, which indicates which metric is more effective in predicting the human perception of this feature. Moreover, two other automatic evaluations describing the performance and the computational capabilities of our metrics are reported as well.
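One possible reading of "consistency of flow direction" can be sketched as the fraction of edges that follow the dominant left-to-right layout direction. This is an illustrative metric inspired by the feature, not necessarily any of the paper's three metrics verbatim:

```python
def flow_consistency(pos, edges):
    """Fraction of edges whose target node lies strictly to the right of
    its source, given node positions pos[name] = (x, y); an empty model
    is vacuously consistent."""
    if not edges:
        return 1.0
    forward = sum(1 for s, t in edges if pos[t][0] > pos[s][0])
    return forward / len(edges)
```

Backward edges (loops, rework flows) lower the score, which matches the intuition that they disrupt a reader's sense of flow.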
Three types of reciprocity laws for arithmetic surfaces are established. For those around a point or along a vertical curve, we first construct $K_2$ type central extensions, then introduce reciprocity symbols, and finally prove the law as an application of Parshin-Beilinson's theory of adelic complexes. For the reciprocity law along a horizontal curve, we first introduce a new type of arithmetic central extensions, then apply our arithmetic adelic cohomology theory and arithmetic intersection theory to prove the related reciprocity law. All this can be interpreted within the framework of arithmetic central extensions. We add an appendix dealing with some basic structures of such extensions.
Yelp online reviews are an invaluable source of information for users choosing where to visit or what to eat among numerous available options. But due to the overwhelming number of reviews, it is almost impossible for users to go through all of them and find the information they are looking for. To provide a business overview, one solution is to give the business a 1-5 star rating. This rating can be subjective and biased by users' personalities. In this paper, we predict a business rating based on user-generated review texts alone. This not only summarizes plentiful long review texts but also cancels out subjectivity. Selecting the restaurant category from the Yelp Dataset Challenge, we use a combination of three feature generation methods as well as four machine learning models to find the best prediction result. Our approach is to create a bag of words from the top frequent words in all raw text reviews, or from the top frequent words/adjectives in the results of Part-of-Speech (POS) analysis. Our results show a Root Mean Square Error (RMSE) of 0.6 for the combination of Linear Regression with either the top frequent words from raw data or the top frequent adjectives after Part-of-Speech tagging.
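The raw-text variant of this approach can be sketched end to end: build a vocabulary of top frequent words, count them per review, and fit linear regression by least squares. The function names and the tiny vocabulary size are illustrative, not the paper's configuration:

```python
import numpy as np
from collections import Counter

def top_words(texts, k):
    """Vocabulary: the k most frequent whitespace-separated words."""
    counts = Counter(w for t in texts for w in t.lower().split())
    return [w for w, _ in counts.most_common(k)]

def bow_matrix(texts, vocab):
    """Bag-of-words count matrix with a trailing bias column."""
    idx = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(texts), len(vocab) + 1))
    X[:, -1] = 1.0
    for r, t in enumerate(texts):
        for w in t.lower().split():
            if w in idx:
                X[r, idx[w]] += 1
    return X

def fit_predict(train_texts, stars, test_texts, k=50):
    """Least-squares linear regression from word counts to star ratings."""
    vocab = top_words(train_texts, k)
    w, *_ = np.linalg.lstsq(bow_matrix(train_texts, vocab),
                            np.asarray(stars, float), rcond=None)
    return bow_matrix(test_texts, vocab) @ w
```

Swapping the vocabulary for POS-filtered adjectives changes only the `top_words` step.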
We describe a high performance parallel implementation of a derivative pricing model, within which we introduce a new parallel method for the calibration of the industry standard SABR (stochastic-αβρ) stochastic volatility model using three strike inputs. SABR calibration involves a non-linear three dimensional minimisation and parallelisation is achieved by incorporating several assumptions unique to the SABR class of models. Our calibration method is based on principles of surface intersection, guarantees convergence to a unique solution and operates by iteratively refining a two dimensional grid with local mesh refinement. As part of our pricing model we additionally present a fast parallel iterative algorithm for the creation of dynamically sized cumulative probability lookup tables that are able to cap maximum estimated linear interpolation error. We optimise performance for probability distributions that exhibit clustering of linear interpolation error. We also make an empirical assessment of error propagation through our pricing model as a result of changes in accuracy parameters within the pricing model's multiple algorithmic steps. Algorithms are implemented on a GPU (graphics processing unit) using Nvidia's Fermi architecture. The pricing model targets the evaluation of spread options using copula methods, however the presented algorithms can be applied to a wider class of financial instruments.
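The lookup-table construction can be sketched as midpoint-driven grid refinement: split any interval whose linear-interpolation error at its midpoint exceeds the cap. The sequential stopping rule and bisection-based lookup below are illustrative simplifications of the paper's parallel, dynamically sized tables:

```python
import bisect

def build_table(cdf, lo, hi, tol, max_iter=20):
    """Refine a cumulative-probability grid until linear interpolation
    at every interval midpoint is within tol of the true cdf."""
    xs = [lo, hi]
    for _ in range(max_iter):
        new_xs, refined = [xs[0]], False
        for a, b in zip(xs, xs[1:]):
            m = 0.5 * (a + b)
            if abs(0.5 * (cdf(a) + cdf(b)) - cdf(m)) > tol:
                new_xs.append(m)   # midpoint error too large: split here
                refined = True
            new_xs.append(b)
        xs = new_xs
        if not refined:
            break
    return xs, [cdf(x) for x in xs]

def lookup(xs, ys, x):
    """Piecewise-linear interpolation of the tabulated cdf."""
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return (1 - t) * ys[i] + t * ys[i + 1]
```

Because only intervals with large midpoint error are split, the grid naturally clusters points where the distribution's curvature concentrates interpolation error.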