Neural networks have enabled state-of-the-art approaches to achieve incredible results on computer vision tasks such as object detection. However, such success relies heavily on costly computational resources, which prevents people with inexpensive devices from benefiting from this advanced technology. In this paper, we propose the Cross Stage Partial Network (CSPNet) to mitigate, from the network architecture perspective, the heavy inference computation required by previous works. We attribute the problem to duplicate gradient information within network optimization. The proposed networks respect the variability of the gradients by integrating feature maps from the beginning and the end of a network stage, which, in our experiments, reduces computation by 20% with equivalent or even superior accuracy on the ImageNet dataset, and significantly outperforms state-of-the-art approaches in terms of AP50 on the MS COCO object detection dataset. CSPNet is easy to implement and general enough to cope with architectures based on ResNet, ResNeXt, and DenseNet.
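A minimal PyTorch sketch of the cross-stage partial idea described above: the stage input is split along the channel dimension, only one part passes through the stacked blocks, and the two parts are re-joined by a transition layer. The layer names, the plain convolutional blocks, and the 50/50 split are illustrative assumptions rather than the authors' exact architecture.

```python
# Sketch of a cross-stage partial (CSP) stage: split, process one part,
# concatenate with the bypassed part, fuse with a transition convolution.
import torch
import torch.nn as nn


class CSPStage(nn.Module):
    def __init__(self, channels: int, num_blocks: int = 2):
        super().__init__()
        half = channels // 2
        # Stack standing in for the dense/residual blocks of the base network.
        self.blocks = nn.Sequential(
            *[nn.Sequential(
                nn.Conv2d(half, half, 3, padding=1, bias=False),
                nn.BatchNorm2d(half),
                nn.ReLU(inplace=True),
            ) for _ in range(num_blocks)]
        )
        # Transition layer that fuses the processed and bypassed parts.
        self.transition = nn.Conv2d(channels, channels, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        part1, part2 = torch.chunk(x, 2, dim=1)   # split the feature maps
        part2 = self.blocks(part2)                 # only this part is processed
        out = torch.cat([part1, part2], dim=1)     # merge start and end of the stage
        return self.transition(out)


if __name__ == "__main__":
    stage = CSPStage(channels=64)
    print(stage(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```

Because only half of the channels pass through the block stack, the stage roughly halves the computation of those blocks while both "ends" of the stage still contribute gradients, which is the intuition behind the reported savings.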
Despite the remarkable ability of large language models (LMs) to comprehend and generate language, they have a tendency to hallucinate and create factually inaccurate output. Augmenting LMs by retrieving information from external knowledge resources is one promising solution. Most existing retrieval augmented LMs employ a retrieve-and-generate setup that only retrieves information once based on the input. This is limiting, however, in more general scenarios involving generation of long texts, where continually gathering information throughout generation is essential. In this work, we provide a generalized view of active retrieval augmented generation, methods that actively decide when and what to retrieve across the course of the generation. We propose Forward-Looking Active REtrieval augmented generation (FLARE), a generic method that iteratively uses a prediction of the upcoming sentence to anticipate future content; this prediction is then used as a query to retrieve relevant documents and regenerate the sentence if it contains low-confidence tokens. We test FLARE along with baselines comprehensively over 4 long-form knowledge-intensive generation tasks/datasets. FLARE achieves superior or competitive performance on all tasks, demonstrating the effectiveness of our method. Code and datasets are available at https://github.com/jzbjyb/FLARE.
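A schematic sketch of the forward-looking retrieval loop described above, assuming hypothetical `generate_sentence` and `retrieve` callables and an illustrative confidence threshold; it sketches the idea rather than FLARE's released implementation.

```python
# Active retrieval loop: tentatively predict the next sentence, and if it
# contains low-confidence tokens, use it as a query to retrieve fresh
# documents and regenerate that sentence before committing it.
def flare_generate(question, generate_sentence, retrieve, threshold=0.5, max_sentences=20):
    answer = ""
    docs = retrieve(question)  # initial retrieval based only on the input
    for _ in range(max_sentences):
        # Tentative next sentence plus per-token probabilities (hypothetical API).
        sentence, token_probs = generate_sentence(question, docs, answer)
        if sentence is None:
            break  # generation finished
        if token_probs and min(token_probs) < threshold:
            # Low confidence: retrieve with the tentative sentence, then regenerate it.
            docs = retrieve(sentence)
            sentence, _ = generate_sentence(question, docs, answer)
        answer += sentence + " "
    return answer.strip()
```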
In this report, we present a framework for implementing an arbitrary n-outcome generalized quantum measurement (POVM) on an m-qubit register as a sequence of two-outcome measurements requiring only a single ancillary qubit. Our procedure offers a particular construction for the two-outcome partial measurements, which can be composed into a full implementation of the measurement on any gate architecture. In general, this implementation requires classical feedback; we present specific cases in which it does not. We apply the framework to unambiguous state discrimination and analyze possible strategies. In the simplest case, it reproduces the known construction if we opt to perform the conclusiveness measurement first. However, it also offers the possibility of performing the measurement for one of the state outcomes first, leaving the conclusiveness measurement for later. This shows the flexibility of the presented framework and opens possibilities for further optimization. We also discuss the biased qubit case as well as the general case of unambiguous quantum state discrimination in higher dimensions.
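For reference, the standard sequential decomposition of a POVM into two-outcome measurements can be written as follows; the report's specific single-ancilla construction may differ in its details.

```latex
% Reference form: sequential two-outcome decomposition of a POVM {E_1, ..., E_n}.
% Step 1: measure {E_1, I - E_1} with Kraus operators sqrt(E_1) and K = sqrt(I - E_1).
% On the "not 1" outcome, the state updates and the remaining elements are renormalized:
\begin{equation}
  \rho \;\mapsto\; \frac{K \rho K^{\dagger}}{\operatorname{tr}\!\left(K \rho K^{\dagger}\right)},
  \qquad
  E_i' \;=\; (\mathbb{I} - E_1)^{-1/2}\, E_i \,(\mathbb{I} - E_1)^{-1/2},
  \quad i = 2, \dots, n,
\end{equation}
% so that iterating two-outcome measurements reproduces the original statistics
% p_i = tr(rho E_i); each binary step can be realized with a single ancillary
% qubit via a Naimark-type dilation.
```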
Soil erosion and sediment buildup are key factors that accelerate the decline in the capacity and function of reservoirs, agricultural productivity, and water resources. To simulate runoff and sediment and to map high sediment-yielding sub-basins in the Gibe Gojeb catchment in southwest Ethiopia, this study used the Soil and Water Assessment Tool (SWAT) model. Calibration and validation were carried out using sediment and river flow data. Between 2003 and 2016, the catchment produced an average annual sediment load of 62.5 tons ha−1 yr−1, with loads ranging from 0.2 to 108.4 tons ha−1 yr−1. Sediment yields exceeded the acceptable threshold in 56 sub-basins (12.3 to 108.4 tons ha−1 yr−1), while 5 sub-basins remained within it (0.2 to 10 tons ha−1 yr−1). The sub-basins with very high to extremely severe sediment yields were sub-basins 1 to 30, 32 to 44, 47, 48, 50, 51, and 53 to 61. After thirteen years of operation, the 58,802 tons of sediment transferred annually from the catchment and deposited in the Gibe One reservoir have reduced its capacity by 5.7%. Sediment accumulation in a reservoir affects its functionality, power production, and capacity, as well as dam safety and the environment. The study's findings improve our understanding of sediment accumulation in reservoirs and provide the information needed for reservoir safety and integrated soil and water management.
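A back-of-envelope check of how the reported figures relate is shown below; the assumed dry bulk density of deposited sediment and the implied initial reservoir capacity are illustrative values, not results from the study.

```python
# Illustrative arithmetic linking the reported annual deposition, the 13-year
# operation period, and the 5.7 % capacity loss. The bulk density and the
# implied capacity are assumptions for demonstration only.
annual_deposit_t = 58_802        # tons of sediment deposited per year (reported)
years = 13                       # years of operation (reported)
capacity_loss_frac = 0.057       # 5.7 % capacity loss (reported)
bulk_density_t_per_m3 = 1.2      # assumed dry bulk density of deposited sediment

total_mass_t = annual_deposit_t * years
total_volume_m3 = total_mass_t / bulk_density_t_per_m3
implied_capacity_m3 = total_volume_m3 / capacity_loss_frac

print(f"deposited mass over {years} years: {total_mass_t:,.0f} t")
print(f"deposited volume (assumed density): {total_volume_m3:,.0f} m^3")
print(f"implied initial capacity: {implied_capacity_m3 / 1e6:.1f} million m^3")
```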
Marwah Alaofi, Negar Arabzadeh, Charles L. A. Clarke
et al.
In this chapter, we consider generative information retrieval evaluation from two distinct but interrelated perspectives. First, large language models (LLMs) themselves are rapidly becoming tools for evaluation, with current research indicating that LLMs may be superior to crowdsource workers and other paid assessors on basic relevance judgement tasks. We review past and ongoing related research, including speculation on the future of shared task initiatives, such as TREC, and a discussion on the continuing need for human assessments. Second, we consider the evaluation of emerging LLM-based generative information retrieval (GenIR) systems, including retrieval augmented generation (RAG) systems. We consider approaches that focus both on the end-to-end evaluation of GenIR systems and on the evaluation of a retrieval component as an element in a RAG system. Going forward, we expect the evaluation of GenIR systems to be at least partially based on LLM-based assessment, creating an apparent circularity, with a system seemingly evaluating its own output. We resolve this apparent circularity in two ways: 1) by viewing LLM-based assessment as a form of "slow search", where a slower IR system is used for evaluation and training of a faster production IR system; and 2) by recognizing a continuing need to ground evaluation in human assessment, even if the characteristics of that human assessment must change.
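A minimal sketch of the kind of LLM-based relevance assessment discussed above; the prompt wording, the 0–3 graded scale, and the `call_llm` helper are illustrative assumptions rather than a specific system from the chapter.

```python
# Ask an LLM (via a caller-supplied function) for a graded relevance judgement
# of a passage with respect to a query, and parse the first digit it returns.
def judge_relevance(query: str, passage: str, call_llm) -> int:
    prompt = (
        "You are a relevance assessor. Given a search query and a passage, "
        "output a single integer grade:\n"
        "0 = irrelevant, 1 = marginally relevant, 2 = relevant, 3 = highly relevant.\n\n"
        f"Query: {query}\nPassage: {passage}\nGrade:"
    )
    reply = call_llm(prompt)
    digits = [c for c in reply if c.isdigit()]
    return int(digits[0]) if digits else 0  # fall back to 0 on unparseable output
```

Judgements produced this way can then be aggregated into ranking metrics such as nDCG, either for end-to-end GenIR evaluation or for evaluating the retrieval component of a RAG system, with human assessment retained to ground and audit the automatic labels.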
Recent progress has shown that the generalization error of the Gibbs algorithm can be exactly characterized using the symmetrized KL information between the learned hypothesis and the entire training dataset. However, evaluating such a characterization is cumbersome, as it involves a high-dimensional information measure. In this paper, we address this issue by considering individual sample information measures within the Gibbs algorithm. Our main contribution lies in establishing the asymptotic equivalence between the sum of symmetrized KL information between the output hypothesis and individual samples and that between the hypothesis and the entire dataset. We prove this by providing explicit expressions for the gap between these measures in the non-asymptotic regime. Additionally, we characterize the asymptotic behavior of various information measures in the context of the Gibbs algorithm, leading to tighter generalization error bounds. An illustrative example is provided to verify our theoretical results, demonstrating that our analysis holds in broader settings.
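For orientation, the exact characterization referred to above is commonly stated in the following form; the notation here (inverse temperature γ, prior π, empirical risk L_E) is assumed rather than taken from the paper.

```latex
\begin{equation}
  P_{W \mid S}(w \mid s) \;=\;
  \frac{\pi(w)\, e^{-\gamma L_E(w, s)}}{\int \pi(w')\, e^{-\gamma L_E(w', s)}\, \mathrm{d}w'},
  \qquad
  \overline{\mathrm{gen}}\bigl(P_{W \mid S}, P_S\bigr) \;=\; \frac{I_{\mathrm{SKL}}(W; S)}{\gamma},
\end{equation}
% where I_SKL(W; S) = D(P_{W,S} || P_W P_S) + D(P_W P_S || P_{W,S}) is the
% symmetrized KL information. The individual-sample measures discussed above
% replace the full dataset S by single samples Z_i and sum over i.
```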
Temporal Graph Neural Networks, a new and trending area of machine learning, suffer from a lack of formal analysis. In this paper, information theory is used as the primary tool to provide a framework for the analysis of temporal GNNs. To this end, the concept of the information bottleneck is used and adjusted to be suitable for a temporal analysis of such networks. In particular, a new definition of the Mutual Information Rate is provided, and the potential use of this new metric in the analysis of temporal GNNs is studied.
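For reference, the standard forms that this line of work builds on are sketched below; the paper's temporal adjustments may differ in detail.

```latex
% Information-bottleneck objective (Z compresses X while staying predictive of Y,
% with beta trading off compression against prediction) and a mutual information
% rate between two stationary processes, as used for temporal analysis:
\begin{equation}
  \min_{p(z \mid x)} \; I(X; Z) - \beta\, I(Z; Y),
  \qquad
  \mathcal{I}(X; Y) \;=\; \lim_{T \to \infty} \frac{1}{T}\, I\bigl(X_{1:T};\, Y_{1:T}\bigr).
\end{equation}
```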
Many fields of science investigate states and processes as resources. Chemistry, thermodynamics, Shannon's theory of communication channels, and the theory of quantum entanglement are prominent examples. Questions addressed by these theories include: Which resources can be converted into which others? At what rate can many copies of one resource be converted into many copies of another? Can a catalyst enable a conversion? How to quantify a resource? We propose a general mathematical definition of resource theory. We prove general theorems about how resource theories can be constructed from theories of processes with a subclass of processes that are freely implementable. These define the means by which costly states and processes can be interconverted. We outline how various existing resource theories fit into our framework, which is a first step in a project of identifying universal features and principles of resource theories. We develop a few general results concerning resource convertibility.
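Schematically, and in generic notation that may differ from the paper's more general categorical formulation, the basic notions are:

```latex
% Free operations F define which conversions cost nothing; resources are ordered
% by convertibility, and asymptotic rates compare many copies of resources.
\begin{equation}
  x \;\succeq\; y \;\iff\; \exists\, f \in \mathcal{F} : f(x) = y,
  \qquad
  R(x \to y) \;=\; \sup\Bigl\{\tfrac{m}{n} \;:\; x^{\otimes n} \succeq y^{\otimes m}\Bigr\},
\end{equation}
% where asymptotic definitions typically also allow an error that vanishes as n
% grows. A resource monotone is any function M with M(f(x)) <= M(x) for all free
% f, giving one way to quantify a resource.
```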
Smart mobile devices and mobile apps have rolled out at a swift pace over the last decade, turning these devices into convenient and general-purpose computing platforms. Sensory data from smart devices are important resources for mobile services, and they are regarded as innocuous information that can be obtained without user permissions. In this article, we show that this seemingly innocuous information can cause serious privacy issues. First, we demonstrate that users' tap positions on the screens of smart devices can be identified from sensory data using deep learning techniques. Second, we show that tap-stream profiles can be collected for each type of app, so that a user's app usage habits can be accurately inferred. In our experiments, the sensory data and mobile app usage information of 102 volunteers were collected. The experimental results demonstrate that the prediction accuracy of tap position inference reaches at least 90 percent using convolutional neural networks. Furthermore, based on the inferred tap position information, users' app usage habits and passwords may be inferred with high accuracy.
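A minimal sketch of the kind of convolutional classifier described above: a window of motion-sensor readings around a tap is mapped to one of several screen regions. The window length, the six sensor channels (accelerometer plus gyroscope), and the layer sizes are illustrative assumptions, not the authors' exact network.

```python
# Small 1-D CNN that classifies a sensor window into a grid of screen regions.
import torch
import torch.nn as nn


class TapPositionCNN(nn.Module):
    def __init__(self, n_channels: int = 6, window: int = 128, n_regions: int = 20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (window // 4), n_regions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) window of sensory readings around a tap
        h = self.features(x)
        return self.classifier(h.flatten(1))  # logits over screen regions


if __name__ == "__main__":
    model = TapPositionCNN()
    logits = model(torch.randn(8, 6, 128))
    print(logits.shape)  # torch.Size([8, 20])
```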
Alan Diego Briem Stamm, Maria Salome Outes, Marta Alicia Fernandez Iriarte
et al.
Abstract
The process of establishing an unequivocal dental identification is based on recovering the greatest possible amount of post mortem information and subsequently comparing it with the victim's ante mortem records.
Teeth are tissues of the human body with a highly resistant structure, which allows them to withstand the onslaught of environmental effects such as fire, desiccation, decomposition, or prolonged immersion. In most natural disasters, as well as in man-made ones, dental records can help identify bodies that would be unrecognizable using traditional methodologies. In burned or charred corpses, it is imperative to preserve the recovered dental evidence so that its handling does not distort or even destroy it; for this reason, it is usually fixed and stabilized before being transported. Imaging resources constitute a solid strategy for preserving evidence, and they can be complemented by photographs and the taking of impressions. This article reviews several studies on burned or charred dental remains, filling materials, and prosthetic appliances, emphasizing their importance in the human identification process.
Keywords: Charred bodies, tooth, human identification, filling materials, forensic odontology, prosthetic restorations
Abstract
The process to establish an unequivocal dental identification is based on the recovery of the greatest possible amount of post mortem information, and its subsequent comparison with the ante mortem records of the victim.
Teeth are tissues of the human body with a highly resistant structure, which allows them to tolerate the onslaught of environmental effects such as fire, desiccation, decomposition, or prolonged immersion. In most natural disasters, and also in those caused by humans, dental records can help identify bodies that would be unrecognizable using traditional methodologies.
In burned or charred corpses, it is imperative to preserve the recovered dental evidence, so that its handling does not distort or even destroy it; this is why it is usually fixed and stabilized before being transported. Imaging resources constitute a solid strategy for the perpetuation of evidence, and they can be complemented by photographs and impression taking. This article reviews several studies on dental remains, filling materials, and burned or charred prosthetic devices, emphasizing their importance in the human identification process.
Key words: Charred bodies, forensic odontology, human identification, prosthetic restorations, filling materials, tooth.
Shintaro Minagawa, M. Hamed Mohammady, Kenta Sakai
et al.
Adiabatic measurements, followed by feedback and erasure protocols, have often been considered as a model to embody Maxwell's Demon paradox and to study the interplay between thermodynamics and information processing. Such studies have led to the conclusion, now widely accepted in the community, that Maxwell's Demon and the second law of thermodynamics can peacefully coexist because any gain provided by the demon must be offset by the cost of performing the measurement and resetting the demon's memory to its initial state. Statements of this kind are collectively referred to as second laws of information thermodynamics and have recently been extended to include quantum theoretical scenarios. However, previous studies in this direction have made several assumptions, particularly about the feedback process and the demon's memory readout, and thus arrived at statements that are not universally applicable and whose range of validity is not clear. In this work, we fill this gap by precisely characterizing the full range of quantum feedback control and erasure protocols that are overall consistent with the second law of thermodynamics. This leads us to conclude that the second law of information thermodynamics is indeed universal: it must hold for any quantum feedback control and erasure protocol, regardless of the measurement process involved, as long as the protocol is overall compatible with thermodynamics. Our comprehensive analysis not only encompasses new scenarios but also retrieves previous ones, doing so with fewer assumptions. This simplification contributes to a clearer understanding of the theory.
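For context, the statements usually grouped under "second laws of information thermodynamics" take the following textbook (Sagawa–Ueda) form; the work above establishes a more general, protocol-independent version with fewer assumptions.

```latex
% With I the mutual information gained about the system by the measurement:
\begin{equation}
  \langle W_{\mathrm{ext}} \rangle \;\le\; -\Delta F + k_{\mathrm{B}} T\, I
  \quad \text{(feedback)},
  \qquad
  \langle W_{\mathrm{meas}} \rangle + \langle W_{\mathrm{eras}} \rangle \;\ge\; k_{\mathrm{B}} T\, I
  \quad \text{(measurement and erasure)},
\end{equation}
% so that over a full cycle any work gained by the demon's feedback is offset by
% the cost of acquiring the information and resetting the demon's memory.
```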
Ronnie Wirestam, Anna Lundberg, Arthur Chakwizira
et al.
Background: Estimation of the oxygen extraction fraction (OEF) by quantitative susceptibility mapping (QSM) magnetic resonance imaging (MRI) is promising but requires systematic evaluation. Extraction of OEF-related information from the tissue residue function in dynamic susceptibility contrast MRI (DSC-MRI) has also been proposed. In this study, whole-brain OEF repeatability was investigated, as well as the relationships between QSM-based OEF and DSC-MRI-based parameters, i.e., mean transit time (MTT) and an oxygen extraction index, referred to as apparent OEF (AOEF). Method: Test-retest data were obtained from 20 healthy volunteers at 3 T. QSM maps were reconstructed from 3D gradient-echo MRI phase data, using morphology-enabled dipole inversion. DSC-MRI was accomplished using gradient-echo MRI at a temporal resolution of 1.24 s. Results: The whole-brain QSM-based OEF was (40.4±4.8) % and, in combination with a previously published cerebral blood flow (CBF) estimate, this corresponds to a cerebral metabolic rate of oxygen level of CMRO2 = 3.36 ml O2/min/100 g. The intra-class correlation coefficient [ICC(2,1)] for OEF test-retest data was 0.73. The MTT-versus-OEF and AOEF-versus-OEF relationships showed correlation coefficients of 0.61 (p = 0.004) and 0.52 (p = 0.019), respectively. Discussion: QSM-based OEF showed a convincing absolute level and good test-retest results in terms of the ICC. Moderate to good correlations between QSM-based OEF and DSC-MRI-based parameters were observed. The present results constitute an indicator of the level of robustness that can be achieved without applying extraordinary resources in terms of MRI equipment, imaging protocol, QSM reconstruction, and OEF analysis.
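A minimal NumPy sketch of the two-way random-effects, single-measurement ICC(2,1) used for the test-retest analysis above; the simulated OEF-like values and the implementation itself are illustrative, not the authors' analysis code.

```python
# Compute ICC(2,1) (Shrout & Fleiss, two-way random effects, single measurement)
# from an (n_subjects x k_sessions) matrix of test-retest measurements.
import numpy as np


def icc_2_1(data: np.ndarray) -> float:
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means

    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((data - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)               # between-subject mean square
    msc = ss_cols / (k - 1)               # between-session mean square
    mse = ss_error / ((n - 1) * (k - 1))  # residual mean square

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    subject_effect = rng.normal(40, 5, size=(20, 1))           # OEF-like values, ~40 %
    measurements = subject_effect + rng.normal(0, 2, size=(20, 2))
    print(f"ICC(2,1) = {icc_2_1(measurements):.2f}")
```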
Sequences of the complete mitochondrial cytochrome oxidase subunit I gene were used to identify Trichiurus species and examine their population genetic structure and demographic history along the coast of China. Three Trichiurus species were found. Trichiurus japonicus lives in colder waters along the continental shelves in the China Seas, while Trichiurus nanhaiensis lives in warmer waters along the continental slopes in the South China Sea, and Trichiurus brevis lives in shallow, warmer waters in the South China Sea. The migrations of these species were mainly associated with feeding and spawning preferences. Two major wintering and spawning grounds, in the East China Sea and the South China Sea, were found. All species showed a lack of population genetic structure resulting from their oceanodromous life cycle (the degree of population substructure index NST = 0.000–0.149), but the results of approximate Bayesian computational approaches suggested population declines or stabilization and differentiation. The TMRCA (time to the most recent common ancestor) results showed that during glaciations, the Yellow Sea and the East China Sea were completely exposed, and the South China Sea acted as a refugium. Thus, the populations of these three species experienced differentiation during glaciations. This study also examined the limitations of Bayesian skyline plot analysis.
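A minimal sketch of the kind of sequence summary statistic underlying population-genetic analyses like the one above: average pairwise differences per site (nucleotide diversity) computed from aligned COI sequences. The toy alignment and the plain-Python implementation are illustrative, not the study's data or pipeline.

```python
# Nucleotide diversity (pi): average pairwise differences per aligned site.
from itertools import combinations


def pairwise_diff(a: str, b: str) -> int:
    """Number of differing sites between two aligned sequences (gaps ignored)."""
    return sum(1 for x, y in zip(a, b) if x != y and x != "-" and y != "-")


def nucleotide_diversity(seqs: list[str]) -> float:
    """Average pairwise differences per site over all sequence pairs."""
    length = len(seqs[0])
    pairs = list(combinations(seqs, 2))
    return sum(pairwise_diff(a, b) for a, b in pairs) / (len(pairs) * length)


if __name__ == "__main__":
    alignment = [  # toy placeholder sequences, not COI data from the study
        "ATGCCTAGCTTAGGA",
        "ATGCCTAGCTTAGGA",
        "ATGCTTAGCTAAGGA",
        "ATGCCTAGCTAAGGA",
    ]
    print(f"pi = {nucleotide_diversity(alignment):.4f}")
```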
Science, General. Including nature conservation, geographical distribution
Fatemeh Azadi, Heshaam Faili, Mohammad Javad Dousti
Translation Quality Estimation (QE) is the task of predicting the quality of machine translation (MT) output without any reference. This task has gained increasing attention as an important component in the practical applications of MT. In this paper, we first propose XLMRScore, a cross-lingual counterpart of BERTScore computed via the XLM-RoBERTa (XLMR) model. This metric can be used as a simple unsupervised QE method; nevertheless, it faces two issues: first, untranslated tokens lead to unexpectedly high translation scores, and second, errors arise from mismatches between source and hypothesis tokens when applying greedy matching in XLMRScore. To mitigate these issues, we suggest replacing untranslated words with the unknown token and cross-lingually aligning the pre-trained model so that aligned words are represented closer to each other, respectively. We evaluate the proposed method on four low-resource language pairs of the WMT21 QE shared task, as well as a new English$\rightarrow$Persian (En-Fa) test dataset introduced in this paper. Experiments show that our method obtains results comparable to the supervised baseline in two zero-shot scenarios, i.e., with less than a 0.01 difference in Pearson correlation, while outperforming unsupervised rivals by more than 8% on average across all the low-resource language pairs.
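A simplified sketch of the greedy-matching score described above, computed with XLM-RoBERTa embeddings from the Hugging Face transformers library. It omits details such as IDF weighting and the proposed unknown-token replacement and cross-lingual alignment, so it illustrates the idea rather than reproducing the released implementation.

```python
# BERTScore-style greedy matching between source and MT hypothesis tokens,
# using contextual embeddings from xlm-roberta-base.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")
model.eval()


def embed(text: str) -> torch.Tensor:
    """Unit-normalized contextual token embeddings, special tokens dropped."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    return torch.nn.functional.normalize(hidden[1:-1], dim=-1)


def xlmr_score(source: str, hypothesis: str) -> float:
    """Greedy-matching F1 between source and hypothesis token embeddings."""
    src, hyp = embed(source), embed(hypothesis)
    sim = src @ hyp.T                      # cosine similarities (unit-normed)
    recall = sim.max(dim=1).values.mean()  # best hypothesis match per source token
    precision = sim.max(dim=0).values.mean()
    return float(2 * precision * recall / (precision + recall))


if __name__ == "__main__":
    print(xlmr_score("The cat sat on the mat.", "گربه روی فرش نشست."))
```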
Background: The use of information technology, including internet- and telephone-based resources, is becoming an alternative and supporting method of providing many forms of services in healthcare and health management settings. Telephone consultations provide a promising alternative and supporting service for face-to-face general practice care. The aim of this review is to collate, through a systematic review, the evidence on the use of telephone consultation as an alternative to face-to-face general practice visits. Methods: A systematic search of MEDLINE, CINAHL, The Cochrane Library, and the International Clinical Trials Registry Platform was performed using search terms for the intervention (telephone consultation) and the comparator (general practice). Systematic reviews and randomized controlled trials that compared telephone consultation with normal face-to-face consultation in general practice were included in this review. Papers were reviewed, assessed for quality (Cochrane Collaboration's 'Risk of bias' tool), and data were extracted and analysed. Results: Two systematic reviews and one RCT were identified and included in the analysis. The RCT (N = 388) involved patients requesting same-day appointments at two general practices; patients were randomized to a same-day face-to-face appointment or a telephone call-back consultation. There was a reduction in the time spent on consultations in the telephone group (1.5 min (0.6 to 2.4)), and patients in the telephone arm had 0.2 (0 to 0.3) more follow-up consultations than the face-to-face group. One systematic review focused on the effect of telephone consultation and triage on healthcare use, and included one RCT and one other observational study that examined telephone consultations. The other systematic review focused on patient access and included one RCT and four observational studies that examined telephone consultations. Both systematic reviews provided narrative interpretations of the evidence and concluded that telephone consultations provided an appropriate alternative to face-to-face consultations and reduced practice workload. Conclusion: There is a lack of high-level evidence for telephone consultations in a GP setting; however, current evidence suggests that telephone consultations, as an alternative to face-to-face general practice consultations, offer an appropriate option in certain settings. Systematic review registration: PROSPERO CRD42015025225
METHOD: A national cross-sectional online survey of Australian general practitioners was conducted in April and May 2020, with 572 respondents. RESULTS: The COVID-19 pandemic in Australia has resulted in major changes to general practice business models. Most practices have experienced increased workload and reduced income. DISCUSSION: Australian general practices have undertaken major innovation and realignment to respond to staff safety and patient care challenges during the COVID-19 pandemic. Increased administration, reduced billable time, managing staffing and pivoting to telehealth service provision have negatively affected practice viability. Major sources of information for general practice are primary care-specific, but many practices turn to colleagues for support and resources.
Purpose: The aim of the present study is to evaluate the impact of the scientific outputs of researchers at the Research Institute of Forests and Rangelands of Iran in the Scopus database, using the altmetric indicators of the PlumX tool.
Methodology: In terms of purpose, this is an applied study, and in terms of type, it is a descriptive scientometric study carried out using the library method and an altmetric approach. The research population consists of the scientific outputs of researchers at the Research Institute of Forests and Rangelands of Iran indexed in Scopus up to Esfand 1398 (March 2020). Bibliographic information, the citation indicator, and the altmetric indicators of the articles were extracted and stored in an Excel file. The results were presented using descriptive and analytical statistical methods in the form of tables and charts.
Findings: 6.81 percent of the examined outputs received citations. More than 90 percent of these outputs received attention in at least one of the 13 metrics belonging to the five altmetric indicator categories. The Usage indicator, with 61,481 occurrences, had the highest count, while the Social Media and Mentions indicators had the lowest. Among the various metrics of these indicators, the highest counts belonged to abstract views, readers, and full-text views. The results indicate a significant positive correlation between the altmetric indicators and the traditional citation indicator.
Conclusion: Researchers' activity on social media can increase the visibility of scientific works. Researchers can use this platform as a self-archiving tool.
Science (General), Information resources (General)