{"results":[{"id":"doaj_10.46298/jdmdh.9045","title":"Some Reflections on the Interface between Professional Machine Translation Literacy and Data Literacy","authors":[{"name":"Ralph Krüger"}],"abstract":"Due to the widespread use of data-driven neural machine translation, both by professional translators and layperson users, an adequate machine translation literacy on the part of the users of this technology is becoming more and more important. At the same time, the increasing datafication of both the private and the business sphere requires an adequate data literacy in modern society. The present article takes a closer look at machine translation literacy and data literacy and investigates the interface between the two concepts. This is done to lay the preliminary theoretical foundations for a didactic project aiming to develop didactic resources for teaching data literacy in its machine translation-specific form to students of BA programmes in translation/specialised communication.","source":"DOAJ","year":2023,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. Information resources"],"doi":"10.46298/jdmdh.9045","url":"https://jdmdh.episciences.org/9045/pdf","pdf_url":"https://jdmdh.episciences.org/9045/pdf","is_open_access":true,"published_at":"","score":67},{"id":"doaj_10.46298/jdmdh.9114","title":"La traduction littéraire automatique : Adapter la machine à la traduction humaine individualisée","authors":[{"name":"Damien Hansen"},{"name":"Emmanuelle Esperança-Rodier"},{"name":"Hervé Blanchon"},{"name":"Valérie Bada"}],"abstract":"La traduction automatique neuronale et son adaptation à des domaines spécifiques par le biais de corpus spécialisés ont permis à cette technologie d’intégrer bien plus largement qu’auparavant le métier et la formation des traducteur·trice·s. 
Si le paradigme neuronal (et le deep learning de manière générale) a ainsi pu investir des domaines parfois insoupçonnés, y compris certains où la créativité est de mise, celui-ci est moins marqué par un gain phénoménal de performance que par une utilisation massive auprès du public et les débats qu’il génère, nombre d’entre eux invoquant couramment le cas littéraire pour (in)valider telle ou telle observation. Pour apprécier la pertinence de cette technologie, et ce faisant surmonter les discours souvent passionnés des opposants et partisans de la traduction automatique, il est toutefois nécessaire de mettre l’outil à l’épreuve, afin de fournir un exemple concret de ce que pourrait produire un système entraîné spécifiquement pour la traduction d’œuvres littéraires. Inscrit dans un projet de recherche plus vaste visant à évaluer l’aide que peuvent fournir les outils informatiques aux traducteurs et traductrices littéraires, cet article propose par conséquent une expérience de traduction automatique de la prose qui n’a plus été tentée pour le français depuis les systèmes probabilistes et qui rejoint un nombre croissant d’études sur le sujet pour d’autres paires de langues. Nous verrons que si les résultats sont encourageants, ceux-ci laissent présager une tout autre manière d’envisager la traduction automatique, plus proche de la traduction humaine assistée par ordinateur que de la post-édition pure, et que l’exemple des œuvres de littérature soulève en outre des réflexions utiles pour la traduction dans son ensemble.","source":"DOAJ","year":2022,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. 
Information resources"],"doi":"10.46298/jdmdh.9114","url":"https://jdmdh.episciences.org/9114/pdf","pdf_url":"https://jdmdh.episciences.org/9114/pdf","is_open_access":true,"published_at":"","score":66},{"id":"doaj_10.46298/jdmdh.9226","title":"Hate speech, Censorship, and Freedom of Speech: The Changing Policies of Reddit","authors":[{"name":"Elissa Nakajima Wickham"},{"name":"Emily Öhman"}],"abstract":"This paper examines the shift in focus on content policies and user attitudes on the social media platform Reddit. We do this by focusing on comments from general Reddit users from five posts made by admins (moderators) on updates to Reddit Content Policy. All five concern the nature of what kind of content is allowed to be posted on Reddit, and which measures will be taken against content that violates these policies. We use topic modeling to probe how the general discourse for Redditors has changed around limitations on content, and later, limitations on hate speech, or speech that incites violence against a particular group. We show that there is a clear shift in both the contents and the user attitudes that can be linked to contemporary societal upheaval as well as newly passed laws and regulations, and contribute to the wider discussion on hate speech moderation.","source":"DOAJ","year":2022,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. Information resources"],"doi":"10.46298/jdmdh.9226","url":"https://jdmdh.episciences.org/9226/pdf","pdf_url":"https://jdmdh.episciences.org/9226/pdf","is_open_access":true,"published_at":"","score":66},{"id":"doaj_10.46298/jdmdh.9152","title":"Adapting vs. 
Pre-training Language Models for Historical Languages","authors":[{"name":"Enrique Manjavacas"},{"name":"Lauren Fonteyn"}],"abstract":"As large language models such as BERT are becoming increasingly popular in Digital Humanities (DH), the question has arisen as to how such models can be made suitable for application to specific textual domains, including that of 'historical text'. Large language models like BERT can be pretrained from scratch on a specific textual domain and achieve strong performance on a series of downstream tasks. However, this is a costly endeavour, both in terms of the computational resources as well as the substantial amounts of training data it requires. An appealing alternative, then, is to employ existing 'general purpose' models (pre-trained on present-day language) and subsequently adapt them to a specific domain by further pre-training. Focusing on the domain of historical text in English, this paper demonstrates that pre-training on domain-specific (i.e. historical) data from scratch yields a generally stronger background model than adapting a present-day language model. We show this on the basis of a variety of downstream tasks, ranging from established tasks such as Part-of-Speech tagging, Named Entity Recognition and Word Sense Disambiguation, to ad-hoc tasks like Sentence Periodization, which are specifically designed to test historically relevant processing.","source":"DOAJ","year":2022,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. Information resources"],"doi":"10.46298/jdmdh.9152","url":"https://jdmdh.episciences.org/9152/pdf","pdf_url":"https://jdmdh.episciences.org/9152/pdf","is_open_access":true,"published_at":"","score":66},{"id":"doaj_10.46298/jdmdh.9067","title":"Source or target first? 
Comparison of two post-editing strategies with translation students","authors":[{"name":"Lise Volkart"},{"name":"Sabrina Girletti"},{"name":"Johanna Gerlach"},{"name":"Jonathan David Mutal"},{"name":"Pierrette Bouillon"}],"abstract":"We conducted an experiment with translation students to assess the influence of two different post-editing (PE) strategies (reading the source segment or the target segment first) on three aspects: PE time, ratio of corrected errors and number of optional modifications per word. Our results showed that the strategy that is adopted has no influence on the PE time or ratio of corrected errors. However, it does have an influence on the number of optional modifications per word. Two other thought-provoking observations emerged from this study: first, the ratio of corrected errors showed that, on average, students correct only half of the MT errors, which underlines the need for PE practice. Second, the time logs of the experiment showed that when students are not forced to read the source segment first, they tend to neglect the source segment and almost do monolingual PE. This experiment provides new insight relevant to PE teaching as well as the designing of PE environments.","source":"DOAJ","year":2022,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. Information resources"],"doi":"10.46298/jdmdh.9067","url":"https://jdmdh.episciences.org/9067/pdf","pdf_url":"https://jdmdh.episciences.org/9067/pdf","is_open_access":true,"published_at":"","score":66},{"id":"doaj_TraduXio+Project%3A+Latest+Upgrades+and+Feedback","title":"TraduXio Project: Latest Upgrades and Feedback","authors":[{"name":"Philippe Lacour"},{"name":"Aurélien Bénel"}],"abstract":"TraduXio is a digital environment for computer assisted multilingual translation which is web-based, free to use and with an open source code. 
Its originality is threefold: whereas traditional technologies are limited to two languages (source/target), TraduXio enables the comparison of different versions of the same text in various languages; its concordancer provides relevant and multilingual suggestions through a classification of the source according to the history, genre and author; it uses collaborative devices (privilege management, forums, networks, history of modification, etc.) to promote collective (and distributed) translation. TraduXio is designed to encourage the diversification of language learning and to promote a reappraisal of translation as a professional skill. It can be used in many different ways, by very diverse kinds of people. In this presentation, I will present the recent developments of the software (its version 2.1) and illustrate how specific groups (language teaching, social sciences, literature) use it on a regular basis. In this paper, I present the technology but concentrate more on the possible uses of TraduXio, thus focusing on translators' feedback about their experience when working in this digital environment in a truly collaborative way.","source":"DOAJ","year":2021,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. Information resources"],"url":"https://jdmdh.episciences.org/7025/pdf","pdf_url":"https://jdmdh.episciences.org/7025/pdf","is_open_access":true,"published_at":"","score":65},{"id":"doaj_10.46298/jdmdh.6733","title":"TraduXio Project: Latest Upgrades and Feedback","authors":[{"name":"Philippe Lacour"},{"name":"Aurélien Bénel"}],"abstract":"TraduXio is a digital environment for computer assisted multilingual translation which is web-based, free to use and with an open source code. 
Its originality is threefold: whereas traditional technologies are limited to two languages (source/target), TraduXio enables the comparison of different versions of the same text in various languages; its concordancer provides relevant and multilingual suggestions through a classification of the source according to the history, genre and author; it uses collaborative devices (privilege management, forums, networks, history of modification, etc.) to promote collective (and distributed) translation. TraduXio is designed to encourage the diversification of language learning and to promote a reappraisal of translation as a professional skill. It can be used in many different ways, by very diverse kinds of people. In this presentation, I will present the recent developments of the software (its version 2.1) and illustrate how specific groups (language teaching, social sciences, literature) use it on a regular basis. In this paper, I present the technology but concentrate more on the possible uses of TraduXio, thus focusing on translators' feedback about their experience when working in this digital environment in a truly collaborative way.","source":"DOAJ","year":2021,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. 
Information resources"],"doi":"10.46298/jdmdh.6733","url":"http://jdmdh.episciences.org/6733/pdf","pdf_url":"http://jdmdh.episciences.org/6733/pdf","is_open_access":true,"published_at":"","score":65},{"id":"doaj_The+expansion+of+isms%2C+1820-1917%3A+Data-driven+analysis+of+political+language+in+digitized+newspaper+collections","title":"The expansion of isms, 1820-1917: Data-driven analysis of political language in digitized newspaper collections","authors":[{"name":"Jani Marjanen"},{"name":"Jussi Kurunmäki"},{"name":"Lidia Pivovarova"},{"name":"Elaine Zosa"}],"abstract":"Words with the suffix-ism are reductionist terms that help us navigate complex social issues by using a simple one-word label for them. On the one hand they are often associated with political ideologies, but on the other they are present in many other domains of language, especially culture, science, and religion. This has not always been the case. This paper studies isms in a historical record of digitized newspapers from 1820 to 1917 published in Finland to find out how the language of isms developed historically. We use diachronic word embeddings and affinity propagation clustering to trace how new isms entered the lexicon and how they relate to one another over time. We are able to show how they became more common and entered more and more domains. Still, the uses of isms as traditions for political action and thinking stand out in our analysis.","source":"DOAJ","year":2020,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. 
Information resources"],"url":"https://jdmdh.episciences.org/6728/pdf","pdf_url":"https://jdmdh.episciences.org/6728/pdf","is_open_access":true,"published_at":"","score":64},{"id":"doaj_Spoken+word+corpus+and+dictionary+definition+for+an+African+language","title":"Spoken word corpus and dictionary definition for an African language","authors":[{"name":"Wanjiku Nganga"},{"name":"Ikechukwu Achebe"}],"abstract":"The preservation of languages is critical to maintaining and strengthening the cultures and identities of communities, and this is especially true for under-resourced languages with a predominantly oral culture. Most African languages have a relatively short literary past, and as such the task of dictionary making cannot rely on textual corpora as has been the standard practice in lexicography. This paper emphasizes the significance of the spoken word and the oral tradition as repositories of vocabulary, and argues that spoken word corpora greatly outweigh the value of printed texts for lexicography. We describe a methodology for creating a digital dialectal dictionary for the Igbo language from such a spoken word corpus. We also highlight the language technology tools and resources that have been created to support the transcription of thousands of hours of Igbo speech and the subsequent compilation of these transcriptions into an XML-encoded textual corpus of Igbo dialects. The methodology described in this paper can serve as a blueprint that can be adopted for other under-resourced languages that have predominantly oral cultures.","source":"DOAJ","year":2020,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. 
Information resources"],"url":"https://jdmdh.episciences.org/6953/pdf","pdf_url":"https://jdmdh.episciences.org/6953/pdf","is_open_access":true,"published_at":"","score":64},{"id":"doaj_10.46298/jdmdh.6703","title":"Spoken word corpus and dictionary definition for an African language","authors":[{"name":"Wanjiku Nganga"},{"name":"Ikechukwu Achebe"}],"abstract":"The preservation of languages is critical to maintaining and strengthening the cultures and identities of communities, and this is especially true for under-resourced languages with a predominantly oral culture. Most African languages have a relatively short literary past, and as such the task of dictionary making cannot rely on textual corpora as has been the standard practice in lexicography. This paper emphasizes the significance of the spoken word and the oral tradition as repositories of vocabulary, and argues that spoken word corpora greatly outweigh the value of printed texts for lexicography. We describe a methodology for creating a digital dialectal dictionary for the Igbo language from such a spoken word corpus. We also highlight the language technology tools and resources that have been created to support the transcription of thousands of hours of Igbo speech and the subsequent compilation of these transcriptions into an XML-encoded textual corpus of Igbo dialects. The methodology described in this paper can serve as a blueprint that can be adopted for other under-resourced languages that have predominantly oral cultures.","source":"DOAJ","year":2020,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. 
Information resources"],"doi":"10.46298/jdmdh.6703","url":"https://jdmdh.episciences.org/6703/pdf","pdf_url":"https://jdmdh.episciences.org/6703/pdf","is_open_access":true,"published_at":"","score":64},{"id":"doaj_10.46298/jdmdh.6159","title":"The expansion of isms, 1820-1917: Data-driven analysis of political language in digitized newspaper collections","authors":[{"name":"Jani Marjanen"},{"name":"Jussi Kurunmäki"},{"name":"Lidia Pivovarova"},{"name":"Elaine Zosa"}],"abstract":"Words with the suffix-ism are reductionist terms that help us navigate complex social issues by using a simple one-word label for them. On the one hand they are often associated with political ideologies, but on the other they are present in many other domains of language, especially culture, science, and religion. This has not always been the case. This paper studies isms in a historical record of digitized newspapers from 1820 to 1917 published in Finland to find out how the language of isms developed historically. We use diachronic word embeddings and affinity propagation clustering to trace how new isms entered the lexicon and how they relate to one another over time. We are able to show how they became more common and entered more and more domains. Still, the uses of isms as traditions for political action and thinking stand out in our analysis.","source":"DOAJ","year":2020,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. 
Information resources"],"doi":"10.46298/jdmdh.6159","url":"https://jdmdh.episciences.org/6159/pdf","pdf_url":"https://jdmdh.episciences.org/6159/pdf","is_open_access":true,"published_at":"","score":64},{"id":"doaj_Deep+Learning+for+Period+Classification+of+Historical+Hebrew+Texts","title":"Deep Learning for Period Classification of Historical Hebrew Texts","authors":[{"name":"Chaya Liebeskind"},{"name":"Shmuel Liebeskind"}],"abstract":"In this study, we address the interesting task of classifying historical texts by their assumed period of writ-ing. This task is useful in digital humanity studies where many texts have unidentified publication dates.For years, the typical approach for temporal text classification was supervised using machine-learningalgorithms.  These algorithms require careful feature engineering and considerable domain expertise todesign a feature extractor to transform the raw text into a feature vector from which the classifier couldlearn to classify any unseen valid input.  Recently, deep learning has produced extremely promising re-sults for various tasks in natural language processing (NLP). The primary advantage of deep learning isthat human engineers did not design the feature layers, but the features were extrapolated from data witha general-purpose learning procedure. We investigated deep learning models for period classification ofhistorical texts. We compared three common models: paragraph vectors, convolutional neural networks (CNN) and recurrent neural networks (RNN), and conventional machine-learning methods. We demon-strate that the CNN and RNN models outperformed the paragraph vector model and the conventionalsupervised machine-learning algorithms.  In addition, we constructed word embeddings for each timeperiod and analyzed semantic changes of word meanings over time.","source":"DOAJ","year":2020,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. 
Information resources"],"url":"https://jdmdh.episciences.org/6525/pdf","pdf_url":"https://jdmdh.episciences.org/6525/pdf","is_open_access":true,"published_at":"","score":64},{"id":"doaj_10.46298/jdmdh.5864","title":"Deep Learning for Period Classification of Historical Hebrew Texts","authors":[{"name":"Chaya Liebeskind"},{"name":"Shmuel Liebeskind"}],"abstract":"In this study, we address the interesting task of classifying historical texts by their assumed period of writ-ing. This task is useful in digital humanity studies where many texts have unidentified publication dates.For years, the typical approach for temporal text classification was supervised using machine-learningalgorithms.  These algorithms require careful feature engineering and considerable domain expertise todesign a feature extractor to transform the raw text into a feature vector from which the classifier couldlearn to classify any unseen valid input.  Recently, deep learning has produced extremely promising re-sults for various tasks in natural language processing (NLP). The primary advantage of deep learning isthat human engineers did not design the feature layers, but the features were extrapolated from data witha general-purpose learning procedure. We investigated deep learning models for period classification ofhistorical texts. We compared three common models: paragraph vectors, convolutional neural networks (CNN) and recurrent neural networks (RNN), and conventional machine-learning methods. We demon-strate that the CNN and RNN models outperformed the paragraph vector model and the conventionalsupervised machine-learning algorithms.  In addition, we constructed word embeddings for each timeperiod and analyzed semantic changes of word meanings over time.","source":"DOAJ","year":2020,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. 
Information resources"],"doi":"10.46298/jdmdh.5864","url":"https://jdmdh.episciences.org/5864/pdf","pdf_url":"https://jdmdh.episciences.org/5864/pdf","is_open_access":true,"published_at":"","score":64},{"id":"doaj_Mapping+the+Bentham+Corpus%3A+Concept-based+Navigation","title":"Mapping the Bentham Corpus: Concept-based Navigation","authors":[{"name":"Pablo Ruiz Fabo"},{"name":"Thierry Poibeau"}],"abstract":"International audience British philosopher and reformer Jeremy Bentham (1748-1832) left over 60,000 folios of unpublished manuscripts. The Bentham Project, at University College London, is creating a TEI version of the manuscripts, via crowdsourced transcription verified by experts. We present here an interface to navigate these largely unedited manuscripts, and the language technologies the corpus was enriched with to facilitate navigation, i.e Entity Linking against the DBpedia knowledge base and keyphrase extraction. The challenges of tagging a historical domain-specific corpus with a contemporary knowledge base are discussed. The concepts extracted were used to create interactive co-occurrence networks, that serve as a map for the corpus and help navigate it, along with a search index. These corpus representations were integrated in a user interface. The interface was evaluated by domain experts with satisfactory results , e.g. they found the distributional semantics methods exploited here applicable in order to assist in retrieving related passages for scholarly editing of the corpus.","source":"DOAJ","year":2019,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. 
Information resources"],"url":"https://jdmdh.episciences.org/5257/pdf","pdf_url":"https://jdmdh.episciences.org/5257/pdf","is_open_access":true,"published_at":"","score":63},{"id":"doaj_A+Hackathon+for+Classical+Tibetan","title":"A Hackathon for Classical Tibetan","authors":[{"name":"Orna Almogi"},{"name":"Lena Dankin"},{"name":"Nachum Dershowitz"},{"name":"Lior Wolf"}],"abstract":"We describe the course of a hackathon dedicated to the development of linguistic tools for Tibetan Buddhist studies. Over a period of five days, a group of seventeen scholars, scientists, and students developed and compared algorithms for intertextual alignment and text classification, along with some basic language tools, including a stemmer and word segmenter.","source":"DOAJ","year":2019,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. Information resources"],"url":"https://jdmdh.episciences.org/5058/pdf","pdf_url":"https://jdmdh.episciences.org/5058/pdf","is_open_access":true,"published_at":"","score":63},{"id":"doaj_10.46298/jdmdh.5044","title":"Mapping the Bentham Corpus: Concept-based Navigation","authors":[{"name":"Pablo Ruiz"},{"name":"Thierry Poibeau"}],"abstract":"British philosopher and reformer Jeremy Bentham (1748-1832) left over 60,000 folios of unpublished manuscripts. The Bentham Project, at University College London, is creating a TEI version of the manuscripts, via crowdsourced transcription verified by experts. We present here an interface to navigate these largely unedited manuscripts, and the language technologies the corpus was enriched with to facilitate navigation, i.e Entity Linking against the DBpedia knowledge base and keyphrase extraction. The challenges of tagging a historical domain-specific corpus with a contemporary knowledge base are discussed. 
The concepts extracted were used to create interactive co-occurrence networks, which serve as a map for the corpus and help navigate it, along with a search index. These corpus representations were integrated in a user interface. The interface was evaluated by domain experts with satisfactory results, e.g. they found the distributional semantics methods exploited here applicable in order to assist in retrieving related passages for scholarly editing of the corpus.","source":"DOAJ","year":2019,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. Information resources"],"doi":"10.46298/jdmdh.5044","url":"http://jdmdh.episciences.org/5044/pdf","pdf_url":"http://jdmdh.episciences.org/5044/pdf","is_open_access":true,"published_at":"","score":63},{"id":"doaj_10.46298/jdmdh.2047","title":"A Hackathon for Classical Tibetan","authors":[{"name":"Orna Almogi"},{"name":"Lena Dankin"},{"name":"Nachum Dershowitz"},{"name":"Lior Wolf"}],"abstract":"We describe the course of a hackathon dedicated to the development of linguistic tools for Tibetan Buddhist studies. Over a period of five days, a group of seventeen scholars, scientists, and students developed and compared algorithms for intertextual alignment and text classification, along with some basic language tools, including a stemmer and word segmenter.","source":"DOAJ","year":2019,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. Information resources"],"doi":"10.46298/jdmdh.2047","url":"https://jdmdh.episciences.org/2047/pdf","pdf_url":"https://jdmdh.episciences.org/2047/pdf","is_open_access":true,"published_at":"","score":63},{"id":"doaj_10.46298/jdmdh.4184","title":"Processing Tools for Greek and Other Languages of the Christian Middle East","authors":[{"name":"Bastien Kindt"}],"abstract":"This paper presents some computer tools and linguistic resources of the GREgORI project. 
These developments allow automated processing of texts written in the main languages of the Christian Middle East, such as Greek, Arabic, Syriac, Armenian and Georgian. The main goal is to provide scholars with tools (lemmatized indexes and concordances) making corpus-based linguistic information available. It focuses on the questions of text processing, lemmatization, information retrieval, and bitext alignment.","source":"DOAJ","year":2018,"language":"","subjects":["History of scholarship and learning. The humanities","Bibliography. Library science. Information resources"],"doi":"10.46298/jdmdh.4184","url":"https://jdmdh.episciences.org/4184/pdf","pdf_url":"https://jdmdh.episciences.org/4184/pdf","is_open_access":true,"published_at":"","score":62},{"id":"crossref_10.1134/s0036024413050324","title":"Prediction of phase equilibrium for the aqueous systems Rb+, Cs+/Cl−, SO₄²⁻-H2O and K+, Cs+/Cl−, SO₄²⁻-H2O at 25°C","authors":[{"name":"B. Hu"}],"abstract":"","source":"CrossRef","year":2013,"language":"en","subjects":null,"doi":"10.1134/s0036024413050324","url":"https://doi.org/10.1134/s0036024413050324","pdf_url":"https://link.springer.com/content/pdf/10.1134/S0036024413050324.pdf","is_open_access":true,"citations":5,"published_at":"","score":57.15},{"id":"doaj_10.46298/dmtcs.637","title":"A generic method for the enumeration of various classes of directed polycubes","authors":[{"name":"Jean-Marc Champarnaud"},{"name":"Jean-Philippe Dubernard"},{"name":"Hadrien Jeanne"}],"abstract":"Combinatorics","source":"DOAJ","year":2013,"language":"","subjects":["Mathematics"],"doi":"10.46298/dmtcs.637","url":"https://dmtcs.episciences.org/637/pdf","pdf_url":"https://dmtcs.episciences.org/637/pdf","is_open_access":true,"published_at":"","score":57}],"total":154619,"page":1,"page_size":20,"sources":["CrossRef","DOAJ"],"query":"cs.CL"}