{"results":[{"id":"crossref_10.7717/peerj-cs.3628","title":"Gender and positional biases in LLM-based hiring decisions: evidence from comparative CV/résumé evaluations","authors":[{"name":"David Rozado"}],"abstract":"This study examines the choices made by Large Language Models (LLMs) when selecting professional candidates for a job based on their résumés or curricula vitae (CVs). In an experiment involving 22 leading LLMs, each model was systematically given a job description along with a pair of profession-matched CVs—one bearing a male first name, the other a female first name—and asked to select the more suitable candidate for the job. Each CV pair was presented twice, with names swapped to ensure that any observed preferences in candidate selection stemmed from gendered name cues. Despite equalized professional qualifications between genders, all LLMs consistently favored female-named candidates across 70 different professions. Adding an explicit gender field (male/female) to the CVs further increased the preference for female applicants. When gendered names were replaced with gender-neutral identifiers (i.e., Candidate A/B), several models displayed a slight preference for selecting “Candidate A”. Counterbalancing gender assignment between these gender-neutral identifiers resulted in gender parity in candidate selection. When asked to rate CVs in isolation rather than compare pairs, LLMs assigned slightly higher average scores to female CVs overall, but the effect size was negligible. Including preferred pronouns (he/him or she/her) next to a candidate’s name slightly increased the odds of the candidate being selected. Finally, most models exhibited a substantial positional bias to select the candidate listed first in the prompt. 
These findings underscore the need for caution when deploying LLMs in high-stakes autonomous decision-making contexts and raise doubts about whether LLMs consistently apply principled reasoning.","source":"CrossRef","year":2026,"language":"en","subjects":null,"doi":"10.7717/peerj-cs.3628","url":"https://doi.org/10.7717/peerj-cs.3628","pdf_url":"https://peerj.com/articles/cs-3628.pdf","is_open_access":true,"citations":1,"published_at":"","score":70.03},{"id":"ss_6700ce4ad2c9ecaf29396101b04f22c3b4ffd507","title":"Efficient detection of AI-generated scientific abstracts with a lightweight transformer","authors":[{"name":"Cuilian Zhang"},{"name":"Weijun Zhou"}],"abstract":"The rapid growth of advanced large language models challenges the authenticity of scientific work, which requires reliable methods for detecting AI-generated scientific text. This paper addresses this challenge by developing and evaluating an efficient text classifier. We first constructed a balanced dataset, focusing initially on the Computer Vision (cs.CV) domain, and subsequently expanding it to include four additional diverse scientific domains (totaling 5,000 abstracts), using human-written samples from arXiv and corresponding AI-generated versions created using Google’s Gemini 2.0 Flash. We then fine-tuned a lightweight Transformer model, DistilBERT, for the classification task. On the primary in-domain (cs.CV) test set, our approach achieved excellent performance, with an accuracy of 99.4% and an Area Under the ROC Curve of 0.9999. Subsequent cross-domain evaluations demonstrated robust generalization (Macro-F1 = 0.948). Further analysis revealed that our model surpasses traditional machine learning baselines not only in accuracy but also in robustness, as it learns deep semantic patterns rather than relying on superficial statistical cues. 
This work provides a practical, high-performance tool for safeguarding scientific authenticity and establishes a valuable benchmark for future research in AI text detection.","source":"Semantic Scholar","year":2026,"language":"en","subjects":["Medicine"],"doi":"10.1038/s41598-026-35203-3","url":"https://www.semanticscholar.org/paper/6700ce4ad2c9ecaf29396101b04f22c3b4ffd507","is_open_access":true,"citations":1,"published_at":"","score":70.03},{"id":"arxiv_2602.21425","title":"Automating Timed Up and Go Phase Segmentation and Gait Analysis via the tugturn Markerless 3D Pipeline","authors":[{"name":"Abel Gonçalves Chinaglia"},{"name":"Guilherme Manna Cesar"},{"name":"Paulo Roberto Pereira Santiago"}],"abstract":"Instrumented Timed Up and Go (TUG) analysis can support clinical and research decision-making, but robust and reproducible markerless pipelines are still limited. We present \\textit{tugturn.py}, a Python-based workflow for 3D markerless TUG processing that combines phase segmentation, gait-event detection, spatiotemporal metrics, intersegmental coordination, and dynamic stability analysis. The pipeline uses spatial thresholds to segment each trial into stand, first gait, turning, second gait, and sit phases, and applies a relative-distance strategy to detect heel-strike and toe-off events within valid gait windows. In addition to conventional kinematics, \\textit{tugturn} provides Vector Coding outputs and Extrapolated Center of Mass (XCoM)-based metrics. The software is configured through TOML files and produces reproducible artifacts, including HTML reports, CSV tables, and quality-assurance visual outputs. A complete runnable example is provided with test data and command-line instructions. 
This manuscript describes the implementation, outputs, and reproducibility workflow of \\textit{tugturn} as a focused software contribution for markerless biomechanical TUG analysis.","source":"arXiv","year":2026,"language":"en","subjects":["cs.CV"],"url":"https://arxiv.org/abs/2602.21425","pdf_url":"https://arxiv.org/pdf/2602.21425","is_open_access":true,"published_at":"2026-02-24T22:56:54Z","score":70},{"id":"arxiv_2601.15368","title":"Aligned Stable Inpainting: Mitigating Unwanted Object Insertion and Preserving Color Consistency","authors":[{"name":"Yikai Wang"},{"name":"Junqiu Yu"},{"name":"Chenjie Cao"},{"name":"Xiangyang Xue"},{"name":"Yanwei Fu"}],"abstract":"Generative image inpainting can produce realistic, high-fidelity results even with large, irregular masks. However, existing methods still face key issues that make inpainted images look unnatural. In this paper, we identify two main problems: (1) Unwanted object insertion: generative models may hallucinate arbitrary objects in the masked region that do not match the surrounding context. (2) Color inconsistency: inpainted regions often exhibit noticeable color shifts, leading to smeared textures and degraded image quality. We analyze the underlying causes of these issues and propose efficient post-hoc solutions for pre-trained inpainting models. Specifically, we introduce the principled framework of Aligned Stable inpainting with UnKnown Areas prior (ASUKA). To reduce unwanted object insertion, we use reconstruction-based priors to guide the generative model, suppressing hallucinated objects while preserving generative flexibility. To address color inconsistency, we design a specialized VAE decoder that formulates latent-to-image decoding as a local harmonization task. This design significantly reduces color shifts and produces more color-consistent results. We implement ASUKA on two representative inpainting architectures: a U-Net-based model and a DiT-based model. 
We analyze and propose lightweight injection strategies that minimize interference with the model's original generation capacity while ensuring the mitigation of the two issues. We evaluate ASUKA using the Places2 dataset and MISATO, our proposed diverse benchmark. Experiments show that ASUKA effectively suppresses object hallucination and improves color consistency, outperforming standard diffusion, rectified flow models, and other inpainting methods. Dataset, models and codes will be released in github.","source":"arXiv","year":2026,"language":"en","subjects":["cs.CV","eess.IV"],"url":"https://arxiv.org/abs/2601.15368","pdf_url":"https://arxiv.org/pdf/2601.15368","is_open_access":true,"published_at":"2026-01-21T17:57:18Z","score":70},{"id":"crossref_10.1038/s41598-023-27618-z","title":"Protective effects of chitosan based salicylic acid nanocomposite (CS-SA NCs) in grape (Vitis vinifera cv. ‘Sultana’) under salinity stress","authors":[{"name":"Mohammad Ali Aazami"},{"name":"Maryam Maleki"},{"name":"Farzad Rasouli"},{"name":"Gholamreza Gohari"}],"abstract":"AbstractSalinity is one of the most important abiotic stresses that reduce plant growth and performance by changing physiological and biochemical processes. In addition to improving the crop, using nanomaterials in agriculture can reduce the harmful effects of environmental stresses, particularly salinity. A factorial experiment was conducted in the form of a completely randomized design with two factors including salt stress at three levels (0, 50, and 100 mM NaCl) and chitosan-salicylic acid nanocomposite at three levels (0, 0.1, and 0.5 mM). The results showed reductions in chlorophylls (a, b, and total), carotenoids, and nutrient elements (excluding sodium) while proline, hydrogen peroxide, malondialdehyde, total soluble protein, soluble carbohydrate, total antioxidant, and antioxidant enzymes activity increased with treatment chitosan-salicylic acid nanocomposite (CS-SA NCs) under different level NaCl. 
Salinity stress reduced Fm', Fm, and Fv/Fm by damage to photosynthetic systems, but treatment with CS-SA NCs improved these indices during salinity stress. In stress-free conditions, applying the CS-SA NCs improved the grapes' physiological, biochemical, and nutrient elemental balance traits. CS-SA NCs at 0.5 mM had a better effect on the studied traits of grapes under salinity stress. The CS-SA nanoparticle is a biostimulant that can be effectively used to improve the grape plant yield under salinity stress.","source":"CrossRef","year":2023,"language":"en","subjects":null,"doi":"10.1038/s41598-023-27618-z","url":"https://doi.org/10.1038/s41598-023-27618-z","pdf_url":"https://www.nature.com/articles/s41598-023-27618-z.pdf","is_open_access":true,"citations":76,"published_at":"","score":69.28},{"id":"arxiv_2511.01194","title":"A Topology-Aware Graph Convolutional Network for Human Pose Similarity and Action Quality Assessment","authors":[{"name":"Minmin Zeng"}],"abstract":"Action Quality Assessment (AQA) requires fine-grained understanding of human motion and precise evaluation of pose similarity. This paper proposes a topology-aware Graph Convolutional Network (GCN) framework, termed GCN-PSN, which models the human skeleton as a graph to learn discriminative, topology-sensitive pose embeddings. Using a Siamese architecture trained with a contrastive regression objective, our method outperforms coordinate-based baselines and achieves competitive performance on AQA-7 and FineDiving benchmarks. 
Experimental results and ablation studies validate the effectiveness of leveraging skeletal topology for pose similarity and action quality assessment.","source":"arXiv","year":2025,"language":"en","subjects":["cs.CV","cs.AI"],"url":"https://arxiv.org/abs/2511.01194","pdf_url":"https://arxiv.org/pdf/2511.01194","is_open_access":true,"published_at":"2025-11-03T03:38:24Z","score":69},{"id":"arxiv_2503.04496","title":"Learning Object Placement Programs for Indoor Scene Synthesis with Iterative Self Training","authors":[{"name":"Adrian Chang"},{"name":"Kai Wang"},{"name":"Yuanbo Li"},{"name":"Manolis Savva"},{"name":"Angel X. Chang"},{"name":"Daniel Ritchie"}],"abstract":"Data driven and autoregressive indoor scene synthesis systems generate indoor scenes automatically by suggesting and then placing objects one at a time. Empirical observations show that current systems tend to produce incomplete next object location distributions. We introduce a system which addresses this problem. We design a Domain Specific Language (DSL) that specifies functional constraints. Programs from our language take as input a partial scene and object to place. Upon execution they predict possible object placements. We design a generative model which writes these programs automatically. Available 3D scene datasets do not contain programs to train on, so we build upon previous work in unsupervised program induction to introduce a new program bootstrapping algorithm. In order to quantify our empirical observations we introduce a new evaluation procedure which captures how well a system models per-object location distributions. We ask human annotators to label all the possible places an object can go in a scene and show that our system produces per-object location distributions more consistent with human annotators. 
Our system also generates indoor scenes of comparable quality to previous systems and while previous systems degrade in performance when training data is sparse, our system does not degrade to the same degree.","source":"arXiv","year":2025,"language":"en","subjects":["cs.GR","cs.CV","cs.LG"],"url":"https://arxiv.org/abs/2503.04496","pdf_url":"https://arxiv.org/pdf/2503.04496","is_open_access":true,"published_at":"2025-03-06T14:44:25Z","score":69},{"id":"arxiv_2510.16078","title":"ISO/IEC-Compliant Match-on-Card Face Verification with Short Binary Templates","authors":[{"name":"Abdelilah Ganmati"},{"name":"Karim Afdel"},{"name":"Lahcen Koutti"}],"abstract":"We present a practical match-on-card design for face verification in which compact 64/128-bit templates are produced off-card by PCA-ITQ and compared on-card via constant-time Hamming distance. We specify ISO/IEC 7816-4 and 14443-4 command APDUs with fixed-length payloads and decision-only status words (no score leakage), together with a minimal per-identity EEPROM map. Using real binary codes from a CelebA working set (55 identities, 412 images), we (i) derive operating thresholds from ROC/DET, (ii) replay enroll-\u003everify transactions at those thresholds, and (iii) bound end-to-end time by pure link latency plus a small constant on-card budget. Even at the slowest contact rate (9.6 kbps), total verification time is 43.9 ms (64 b) and 52.3 ms (128 b); at 38.4 kbps both are \u003c14 ms. At FAR = 1%, both code lengths reach TPR = 0.836, while 128 b lowers EER relative to 64 b. An optional +6 B helper (targeted symbol-level parity over empirically unstable bits) is latency-negligible. Overall, short binary templates, fixed-payload decision-only APDUs, and constant-time matching satisfy ISO/IEC transport constraints with wide timing margin and align with ISO/IEC 24745 privacy goals. 
Limitations: single-dataset evaluation and design-level (pre-hardware) timing; we outline AgeDB/CFP-FP and on-card microbenchmarks as next steps.","source":"arXiv","year":2025,"language":"en","subjects":["cs.CR","cs.AI","cs.CV"],"url":"https://arxiv.org/abs/2510.16078","pdf_url":"https://arxiv.org/pdf/2510.16078","is_open_access":true,"published_at":"2025-10-17T11:42:56Z","score":69},{"id":"arxiv_2511.14698","title":"HyMAD: A Hybrid Multi-Activity Detection Approach for Border Surveillance and Monitoring","authors":[{"name":"Sriram Srinivasan"},{"name":"Srinivasan Aruchamy"},{"name":"Siva Ram Krisha Vadali"}],"abstract":"Seismic sensing has emerged as a promising solution for border surveillance and monitoring; the seismic sensors that are often buried underground are small and cannot be noticed easily, making them difficult for intruders to detect, avoid, or vandalize. This significantly enhances their effectiveness compared to highly visible cameras or fences. However, accurately detecting and distinguishing between overlapping activities that are happening simultaneously, such as human intrusions, animal movements, and vehicle rumbling, remains a major challenge due to the complex and noisy nature of seismic signals. Correctly identifying simultaneous activities is critical because failing to separate them can lead to misclassification, missed detections, and an incomplete understanding of the situation, thereby reducing the reliability of surveillance systems. To tackle this problem, we propose HyMAD (Hybrid Multi-Activity Detection), a deep neural architecture based on spatio-temporal feature fusion. The framework integrates spectral features extracted with SincNet and temporal dependencies modeled by a recurrent neural network (RNN). In addition, HyMAD employs self-attention layers to strengthen intra-modal representations and a cross-modal fusion module to achieve robust multi-label classification of seismic events. 
We evaluate our approach on a dataset constructed from real-world field recordings collected in the context of border surveillance and monitoring, demonstrating its ability to generalize to complex, simultaneous activity scenarios involving humans, animals, and vehicles. Our method achieves competitive performance and offers a modular framework for extending seismic-based activity recognition in real-world security applications.","source":"arXiv","year":2025,"language":"en","subjects":["cs.CV","cs.LG","eess.SP"],"url":"https://arxiv.org/abs/2511.14698","pdf_url":"https://arxiv.org/pdf/2511.14698","is_open_access":true,"published_at":"2025-11-18T17:37:38Z","score":69},{"id":"arxiv_2508.16696","title":"DecoMind: A Generative AI System for Personalized Interior Design Layouts","authors":[{"name":"Reema Alshehri"},{"name":"Rawan Alotaibi"},{"name":"Leen Almasri"},{"name":"Rawan Altaweel"}],"abstract":"This paper introduces a system for generating interior design layouts based on user inputs, such as room type, style, and furniture preferences. CLIP extracts relevant furniture from a dataset, and a layout that contains furniture and a prompt are fed to Stable Diffusion with ControlNet to generate a design that incorporates the selected furniture. The design is then evaluated by classifiers to ensure alignment with the user's inputs, offering an automated solution for realistic interior design.","source":"arXiv","year":2025,"language":"en","subjects":["cs.GR","cs.AI"],"url":"https://arxiv.org/abs/2508.16696","pdf_url":"https://arxiv.org/pdf/2508.16696","is_open_access":true,"published_at":"2025-08-22T00:01:48Z","score":69},{"id":"arxiv_2510.17650","title":"ZACH-ViT: A Zero-Token Vision Transformer with ShuffleStrides Data Augmentation for Robust Lung Ultrasound Classification","authors":[{"name":"Athanasios Angelakis"},{"name":"Amne Mousa"},{"name":"Micah L. A. Heldeweg"},{"name":"Laurens A. Biesheuvel"},{"name":"Mark A. Haaksma"},{"name":"Jasper M. 
Smit"},{"name":"Pieter R. Tuinman"},{"name":"Paul W. G. Elbers"}],"abstract":"Differentiating cardiogenic pulmonary oedema (CPE) from non-cardiogenic and structurally normal lungs in lung ultrasound (LUS) videos remains challenging due to the high visual variability of non-cardiogenic inflammatory patterns (NCIP/ARDS-like), interstitial lung disease, and healthy lungs. This heterogeneity complicates automated classification as overlapping B-lines and pleural artefacts are common. We introduce ZACH-ViT (Zero-token Adaptive Compact Hierarchical Vision Transformer), a 0.25 M-parameter Vision Transformer variant that removes both positional embeddings and the [CLS] token, making it fully permutation-invariant and suitable for unordered medical image data. To enhance generalization, we propose ShuffleStrides Data Augmentation (SSDA), which permutes probe-view sequences and frame orders while preserving anatomical validity. ZACH-ViT was evaluated on 380 LUS videos from 95 critically ill patients against nine state-of-the-art baselines. Despite the heterogeneity of the non-cardiogenic group, ZACH-ViT achieved the highest validation and test ROC-AUC (0.80 and 0.79) with balanced sensitivity (0.60) and specificity (0.91), while all competing models collapsed to trivial classification. It trains 1.35x faster than Minimal ViT (0.62M parameters) with 2.5x fewer parameters, supporting real-time clinical deployment. 
These results show that aligning architectural design with data structure can outperform scale in small-data medical imaging.","source":"arXiv","year":2025,"language":"en","subjects":["cs.LG","cs.CV"],"url":"https://arxiv.org/abs/2510.17650","pdf_url":"https://arxiv.org/pdf/2510.17650","is_open_access":true,"published_at":"2025-10-20T15:26:38Z","score":69},{"id":"crossref_10.1016/j.dyepig.2023.111305","title":"Efficient adsorption of crystal violet (CV) dye onto benign chitosan-modified l-cysteine/bentonite (CS-Cys/Bent) bionanocomposite: Synthesis, characterization and experimental studies","authors":[{"name":"Rais Ahmad"},{"name":"Mohammad Osama Ejaz"}],"abstract":"","source":"CrossRef","year":2023,"language":"en","subjects":null,"doi":"10.1016/j.dyepig.2023.111305","url":"https://doi.org/10.1016/j.dyepig.2023.111305","is_open_access":true,"citations":63,"published_at":"","score":68.89},{"id":"ss_0cab739497afeb1885425e58325fd49353a732e4","title":"Steel ball surface inspection using modified DRAEM and machine vision","authors":[{"name":"Chun-Chin Hsu"},{"name":"Ya-Chen Hsu"},{"name":"Po-Chou Shih"},{"name":"Yong Yang"},{"name":"F. 
Tien"}],"abstract":"","source":"Semantic Scholar","year":2024,"language":"en","subjects":["Computer Science"],"doi":"10.1007/s10845-024-02370-x","url":"https://www.semanticscholar.org/paper/0cab739497afeb1885425e58325fd49353a732e4","is_open_access":true,"citations":5,"published_at":"","score":68.15},{"id":"crossref_10.3390/fib12100089","title":"The Influence of Abaca Fiber Treated with Sodium Hydroxide on the Deformation Coefficients Cc, Cs, and Cv of Organic Soils","authors":[{"name":"Carlos Contreras"},{"name":"Jorge Albuja-Sánchez"},{"name":"Oswaldo Proaño"},{"name":"Carlos Ávila"},{"name":"Andreina Damián-Chalán"},{"name":"Mateo Peñaherrera-Aguirre"}],"abstract":"This study shows the influence of the inclusion of abaca fiber (Musa Textilis) on the coefficients of consolidation, expansion, and compression for normally consolidated clayey silt organic soil specimens using reconstituted samples. For this purpose, abaca fiber was added according to the dry mass of the soil, in lengths (5, 10, and 15 mm) and concentrations (0.5, 1.0, and 1.5%) subjected to a curing process with sodium hydroxide (NaOH). The virgin and fiber-added soil samples were reconstituted as slurry, and one-dimensional consolidation tests were performed in accordance with ASTM D2435. The results showed a reduction in void ratio (compared to the soil without fiber) and an increase in the coefficient of consolidation (Cv) as a function of fiber concentration and length, with values corresponding to 1.5% and 15 mm increasing from 75.16 to 144.51 cm2/s. Although no significant values were obtained for the compression and expansion coefficients, it was assumed that the soil maintained its compressibility. The statistical analysis employed hierarchical linear models to assess the significance of the effects of incorporating fibers of varying lengths and percentages on the coefficients, comparing them with the control samples. 
Concurrently, mixed linear models were utilized to evaluate the influence of the methods for obtaining the Cv, revealing that Taylor’s method yielded more conservative values, whereas the Casagrande method produced higher values.","source":"CrossRef","year":2024,"language":"en","subjects":null,"doi":"10.3390/fib12100089","url":"https://doi.org/10.3390/fib12100089","is_open_access":true,"citations":3,"published_at":"","score":68.09},{"id":"ss_53bf8b97f4451a588c444f1a3ffefd92cf07647c","title":"Is there really a Citation Age Bias in NLP?","authors":[{"name":"H. Nguyen"},{"name":"Steffen Eger"}],"abstract":"Citations are a key ingredient of scientific research to relate a paper to others published in the community. Recently, it has been noted that there is a citation age bias in the Natural Language Processing (NLP) community, one of the currently fastest growing AI subfields, in that the mean age of the bibliography of NLP papers has become ever younger in the last few years, leading to `citation amnesia' in which older knowledge is increasingly forgotten. In this work, we put such claims into perspective by analyzing the bibliography of $\\sim$300k papers across 15 different scientific fields submitted to the popular preprint server Arxiv in the time period from 2013 to 2022. We find that all AI subfields (in particular: cs.AI, cs.CL, cs.CV, cs.LG) have similar trends of citation amnesia, in which the age of the bibliography has roughly halved in the last 10 years (from above 12 in 2013 to below 7 in 2022), on average. 
Rather than diagnosing this as a citation age bias in the NLP community, we believe this pattern is an artefact of the dynamics of these research fields, in which new knowledge is produced in ever shorter time intervals.","source":"Semantic Scholar","year":2024,"language":"en","subjects":["Computer Science"],"doi":"10.48550/arXiv.2401.03545","url":"https://www.semanticscholar.org/paper/53bf8b97f4451a588c444f1a3ffefd92cf07647c","is_open_access":true,"citations":3,"published_at":"","score":68.09},{"id":"ss_71e0f0da4e09e0405381c3e624ab9e51e92c9f05","title":"NLLG Quarterly arXiv Report 09/24: What are the most influential current AI Papers?","authors":[{"name":"Christoph Leiter"},{"name":"Jonas Belouadi"},{"name":"Yanran Chen"},{"name":"Ran Zhang"},{"name":"Daniil Larionov"},{"name":"Aida Kostikova"},{"name":"Steffen Eger"}],"abstract":"The NLLG (Natural Language Learning\u0026Generation) arXiv reports assist in navigating the rapidly evolving landscape of NLP and AI research across cs.CL, cs.CV, cs.AI, and cs.LG categories. This fourth installment captures a transformative period in AI history - from January 1, 2023, following ChatGPT's debut, through September 30, 2024. Our analysis reveals substantial new developments in the field - with 45% of the top 40 most-cited papers being new entries since our last report eight months ago and offers insights into emerging trends and major breakthroughs, such as novel multimodal architectures, including diffusion and state space models. Natural Language Processing (NLP; cs.CL) remains the dominant main category in the list of our top-40 papers but its dominance is on the decline in favor of Computer vision (cs.CV) and general machine learning (cs.LG). This report also presents novel findings on the integration of generative AI in academic writing, documenting its increasing adoption since 2022 while revealing an intriguing pattern: top-cited papers show notably fewer markers of AI-generated content compared to random samples. 
Furthermore, we track the evolution of AI-associated language, identifying declining trends in previously common indicators such as \"delve\".","source":"Semantic Scholar","year":2024,"language":"en","subjects":["Computer Science"],"doi":"10.48550/arXiv.2412.12121","url":"https://www.semanticscholar.org/paper/71e0f0da4e09e0405381c3e624ab9e51e92c9f05","is_open_access":true,"citations":3,"published_at":"","score":68.09},{"id":"crossref_10.47663/ibec.v3i1.233","title":"Utilization of E-Commerce and Digital Marketing in Increasing Customer Purchasing Decisions at CV. CS Lestari Jaya Kisaran","authors":[{"name":"Michael"},{"name":"Sri Rezeki"}],"abstract":"At CV. CS Lestari Jaya, a decline in sales is evident due to a continuous decrease in customer purchases. This drop in customer purchasing decisions may be linked to the company's limited use of e-commerce and digital marketing strategies. The aim of this study is to assess how the use of e-commerce and digital marketing can influence customer purchasing decisions at CV. CS Lestari Jaya Kisaran. This research is quantitative in nature, with a population of 149 customers who made purchases at CV. CS Lestari Jaya Kisaran in 2023. The sample size was determined using the Slovin formula with a 5% standard error, resulting in 109 samples. The findings indicate that both e-commerce and digital marketing have a significant impact on customer purchasing decisions, both individually and together. To enhance customer purchasing decisions, it is crucial for CV. CS Lestari Jaya Kisaran to effectively leverage e-commerce platforms and digital marketing strategies. 
Furthermore, running targeted paid advertising campaigns on these platforms can help reach specific audiences based on demographics and interests, increasing the likelihood of conversions","source":"CrossRef","year":2024,"language":"en","subjects":null,"doi":"10.47663/ibec.v3i1.233","url":"https://doi.org/10.47663/ibec.v3i1.233","pdf_url":"https://conference.eka-prasetya.ac.id/index.php/IBEC/article/download/233/110","is_open_access":true,"published_at":"","score":68},{"id":"arxiv_2412.12121","title":"NLLG Quarterly arXiv Report 09/24: What are the most influential current AI Papers?","authors":[{"name":"Christoph Leiter"},{"name":"Jonas Belouadi"},{"name":"Yanran Chen"},{"name":"Ran Zhang"},{"name":"Daniil Larionov"},{"name":"Aida Kostikova"},{"name":"Steffen Eger"}],"abstract":"The NLLG (Natural Language Learning \u0026 Generation) arXiv reports assist in navigating the rapidly evolving landscape of NLP and AI research across cs.CL, cs.CV, cs.AI, and cs.LG categories. This fourth installment captures a transformative period in AI history - from January 1, 2023, following ChatGPT's debut, through September 30, 2024. Our analysis reveals substantial new developments in the field - with 45% of the top 40 most-cited papers being new entries since our last report eight months ago and offers insights into emerging trends and major breakthroughs, such as novel multimodal architectures, including diffusion and state space models. Natural Language Processing (NLP; cs.CL) remains the dominant main category in the list of our top-40 papers but its dominance is on the decline in favor of Computer vision (cs.CV) and general machine learning (cs.LG). This report also presents novel findings on the integration of generative AI in academic writing, documenting its increasing adoption since 2022 while revealing an intriguing pattern: top-cited papers show notably fewer markers of AI-generated content compared to random samples. 
Furthermore, we track the evolution of AI-associated language, identifying declining trends in previously common indicators such as \"delve\".","source":"arXiv","year":2024,"language":"en","subjects":["cs.DL","cs.AI","cs.CL","cs.CV","cs.LG"],"url":"https://arxiv.org/abs/2412.12121","pdf_url":"https://arxiv.org/pdf/2412.12121","is_open_access":true,"published_at":"2024-12-02T22:10:38Z","score":68},{"id":"arxiv_2401.03545","title":"Is there really a Citation Age Bias in NLP?","authors":[{"name":"Hoa Nguyen"},{"name":"Steffen Eger"}],"abstract":"Citations are a key ingredient of scientific research to relate a paper to others published in the community. Recently, it has been noted that there is a citation age bias in the Natural Language Processing (NLP) community, one of the currently fastest growing AI subfields, in that the mean age of the bibliography of NLP papers has become ever younger in the last few years, leading to `citation amnesia' in which older knowledge is increasingly forgotten. In this work, we put such claims into perspective by analyzing the bibliography of $\\sim$300k papers across 15 different scientific fields submitted to the popular preprint server Arxiv in the time period from 2013 to 2022. We find that all AI subfields (in particular: cs.AI, cs.CL, cs.CV, cs.LG) have similar trends of citation amnesia, in which the age of the bibliography has roughly halved in the last 10 years (from above 12 in 2013 to below 7 in 2022), on average. 
Rather than diagnosing this as a citation age bias in the NLP community, we believe this pattern is an artefact of the dynamics of these research fields, in which new knowledge is produced in ever shorter time intervals.","source":"arXiv","year":2024,"language":"en","subjects":["cs.DL","cs.AI","cs.CL"],"url":"https://arxiv.org/abs/2401.03545","pdf_url":"https://arxiv.org/pdf/2401.03545","is_open_access":true,"published_at":"2024-01-07T17:12:08Z","score":68}],"total":116217,"page":1,"page_size":20,"sources":["CrossRef","DOAJ","arXiv","Semantic Scholar"],"query":"cs.CV"}