The article identifies a deficit in media pedagogy, which, in its reflections on media literacy, neglects the ability that underlies all engagement with media messages: the ability to decode and understand media texts. It is shown how this deficit conflicts with the insight, established in media didactics within media pedagogy, that media sign literacy should be conceptualized as a fundamental media competence and investigated empirically. From the perspective of German-language didactics, however, it is striking that the area of advanced media sign literacy (Advanced Media Sign Literacy), for which the didactics of German and German lessons can claim particular expertise, is left out. Following a position from literary studies, it is shown how the specific contribution of the literary didactics of German to media education can be grounded precisely in the teaching of Advanced Media Sign Literacy. Examples of teaching Advanced Media Sign Literacy in German lessons are presented, and it is shown to what extent such literacy is also indispensable for a critical reception of the media-pedagogical discourse itself.
Abstract (English)
The article points out a deficit in the discourse of media education, which consists in neglecting the ability to decode and understand media texts as a basic skill needed in processing all media messages. It is shown how this deficit conflicts with the insight from educational media psychology that media sign literacy should be conceptualized as a fundamental component of media literacy and made the object of empirical research. From the perspective of German didactics, however, it is noticeable that the area of advanced, rather than rudimentary, media sign literacy, for which German didactics can claim specific expertise, is left out. Following an argument found in German literary theory, it is shown how the specific contribution of German literary didactics to media education can be seen precisely in fostering advanced media sign literacy. Examples of such an approach to fostering advanced media sign literacy in German lessons are presented. The article ends by pointing out the extent to which advanced media sign literacy is indispensable even for the critical reception of the media education discourse itself.
Modeling interoperability between programs in different languages is a key problem in modeling verified and secure compilation, and one that has been successfully addressed using multi-language semantics. Unfortunately, existing models of compilation using multi-language semantics define two variants of each compiler pass: a syntactic translation on open terms to model compilation, and a run-time translation of closed terms at multi-language boundaries to model interoperability. In this talk, I discuss a work-in-progress approach that uniformly models a compiler entirely as a reduction system on open terms in a multi-language semantics, rather than as a syntactic translation. This simultaneously defines the compiler and the interoperability semantics, reducing duplication. It also provides interesting semantic insights. Normalization of the cross-language redexes performs ahead-of-time (AOT) compilation. Evaluation in the multi-language semantics models just-in-time (JIT) compilation. Confluence of multi-language reduction implies compiler correctness and part of the secure-compilation proof (full abstraction), enabling focus on the difficult part of the proof. Subject reduction of the multi-language reduction implies type preservation of the compiler.
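As a toy illustration of the central idea, that compilation itself can be a set of reduction rules on mixed terms, the following sketch compiles a tiny arithmetic language to stack code by normalizing cross-language boundary redexes. All names, rules, and the languages involved are invented for this example; they are not the talk's calculus.

```python
from dataclasses import dataclass
from typing import Any

# Mixed-term syntax (illustrative).
@dataclass
class Num:            # source language: integer literal
    n: int

@dataclass
class Plus:           # source language: addition
    l: Any
    r: Any

@dataclass
class Code:           # target language: stack-machine code
    instrs: list

@dataclass
class Seq:            # mixed term: sequencing of code-producing terms
    l: Any
    r: Any

@dataclass
class Bnd:            # boundary: "this source term crosses into the target"
    src: Any

def step(t):
    """One multi-language reduction step; None at a normal form.
    Cross-language redexes (Bnd ...) reduce by emitting target code."""
    if isinstance(t, Bnd):
        if isinstance(t.src, Num):
            return Code([("push", t.src.n)])
        if isinstance(t.src, Plus):
            return Seq(Bnd(t.src.l), Seq(Bnd(t.src.r), Code([("add",)])))
    if isinstance(t, Seq):
        if isinstance(t.l, Code) and isinstance(t.r, Code):
            return Code(t.l.instrs + t.r.instrs)  # merge adjacent target code
        s = step(t.l)
        if s is not None:
            return Seq(s, t.r)
        s = step(t.r)
        if s is not None:
            return Seq(t.l, s)
    return None

def compile_aot(src):
    """Normalizing every cross-language redex is ahead-of-time compilation."""
    t = Bnd(src)
    while (s := step(t)) is not None:
        t = s
    return t

def run(code):
    """Target-language evaluator: a tiny stack machine."""
    stack = []
    for ins in code.instrs:
        if ins[0] == "push":
            stack.append(ins[1])
        elif ins[0] == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]
```

Normalizing every `Bnd` redex before running corresponds to the AOT reading; interleaving `step` with evaluation of already-emitted `Code` fragments would correspond to the JIT reading.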
Aditya Thimmaiah, Jiyang Zhang, Jayanth Srinivasa et al.
As large language models (LLMs) excel at code reasoning, a natural question arises: can an LLM execute programs (i.e., act as an interpreter) purely based on a programming language's formal semantics? If so, this would enable rapid prototyping of new programming languages and language features. We study this question using the imperative language IMP (a subset of C), formalized via small-step operational semantics (SOS) and rewriting-based operational semantics (K-semantics). We introduce three evaluation sets (Human-Written, LLM-Translated, and Fuzzer-Generated) whose difficulty is controlled by code-complexity metrics spanning the size, control-flow, and data-flow axes. Given a program and its semantics formalized with SOS/K-semantics, models are evaluated on three tasks ranging from coarse to fine: (1) final-state prediction, (2) semantic rule prediction, and (3) execution trace prediction. To distinguish pretraining memorization from semantic competence, we define two nonstandard semantics obtained through systematic mutations of the standard rules. Across strong code/reasoning LLMs, performance drops under the nonstandard semantics despite high performance under the standard one. We further find that (i) there are patterns to different model failures, (ii) most reasoning models perform exceptionally well on coarse-grained tasks involving reasoning about highly complex programs, often containing nested loops of depth beyond five, and, surprisingly, (iii) providing formal semantics helps on simple programs but often hurts on more complex ones. Overall, the results show promise that LLMs could serve as programming language interpreters, but point to their lack of robust semantic understanding. We release the benchmark and the supporting code at https://github.com/EngineeringSoftware/PLSemanticsBench.
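For concreteness, here is a minimal small-step interpreter for an IMP-like fragment. This is an illustrative sketch, not the paper's formalization or benchmark code; the final-state and trace-prediction tasks correspond to running, or recording, the chain of configurations produced by `step`.

```python
# Statements and expressions are tagged tuples, e.g. ("assign", "x", ("num", 1)).

def aeval(e, st):
    """Big-step evaluation of arithmetic expressions (kept atomic for brevity)."""
    tag = e[0]
    if tag == "num": return e[1]
    if tag == "var": return st[e[1]]
    if tag == "add": return aeval(e[1], st) + aeval(e[2], st)

def beval(b, st):
    """Boolean expressions: only <= is needed for the demo."""
    if b[0] == "le":
        return aeval(b[1], st) <= aeval(b[2], st)

def step(s, st):
    """One SOS step on a configuration <s, st>; None when s is skip."""
    tag = s[0]
    if tag == "skip":
        return None
    if tag == "assign":                    # <x := a, st> -> <skip, st[x := a]>
        st2 = dict(st)
        st2[s[1]] = aeval(s[2], st)
        return (("skip",), st2)
    if tag == "seq":
        if s[1] == ("skip",):              # <skip; s2, st> -> <s2, st>
            return (s[2], st)
        s1p, st2 = step(s[1], st)          # step inside the left statement
        return (("seq", s1p, s[2]), st2)
    if tag == "if":
        return ((s[2] if beval(s[1], st) else s[3]), st)
    if tag == "while":                     # unfold into a conditional
        return (("if", s[1], ("seq", s[2], s), ("skip",)), st)

def run(s, st):
    """Final-state prediction: iterate small steps until skip."""
    cfg = (s, st)
    while cfg is not None:
        final = cfg
        cfg = step(*cfg)
    return final[1]
```

For example, a loop summing 1..5 reaches the final state `{"s": 15, "i": 6}`.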
Perceptions of Developing Reading and Writing Skills in Swedish in an Online Context. This paper focuses on BA students' reading and writing skills in Swedish as a foreign language in a Scandinavian context. In addition, the study aims to discuss the difficulties students encountered when studying Swedish as a foreign language in an online academic context amid the Covid-19 pandemic. Survey research comprising closed-ended and open-ended questions was conducted, using a questionnaire as the main instrument for collecting data. The respondents were BA students in Norwegian language and literature, enrolled at the Faculty of Letters at Babeș-Bolyai University, who had already studied Norwegian for four semesters within this programme and who took the one-semester optional course in Swedish. The language distance between Norwegian and Swedish is relatively small, because both languages belong to the North Germanic branch. We considered it relevant to explore the manner in which students tackle these similarities and differences and the cross-linguistic transfer between the two languages, and whether their reading and writing practices in Norwegian have influenced the acquisition of Swedish in any way. Nowadays, new technological advances provide additional support for foreign language learning and develop learners' digital literacy. Therefore, the paper also aimed at understanding which types of authentic resources students use in order to develop their linguistic and sociolinguistic competence in Swedish. The results showed that students are willing to improve their language skills, as they believe that mastering another Scandinavian language could increase their academic and professional opportunities and would constitute an advantage in terms of the ease of developing writing and reading skills in Swedish.
ABSTRACT. Students' perceptions of developing reading and writing skills in Swedish in an online teaching context. This study aims to provide details on the self-assessment of reading and writing skills by BA students enrolled in the course Swedish Language and Culture in a Scandinavian Context. In addition, the study discusses the difficulties of learning Swedish in the context of online teaching, against the background of the Covid-19 pandemic. The data-collection instrument used in this study was a questionnaire comprising open-ended and closed-ended questions. The respondents are BA students in Norwegian language and literature at the Faculty of Letters, Babeș-Bolyai University, who had already studied Norwegian for four semesters within this programme and took the optional Swedish course in the fifth semester. Since both belong to the same North Germanic group of the Germanic languages, the linguistic distance between Norwegian and Swedish is relatively small. We therefore considered it relevant to explore how students deal with these similarities and differences, as well as how they relate to the cross-linguistic transfer between the two languages. The emphasis was placed on reading and writing skills, because we wanted to see whether these competences, already acquired in Norwegian, influenced the corresponding skills in Swedish in any way. Nowadays, new technological advances provide additional support for foreign language learning and develop learners' digital competences. The paper therefore set out to observe which types of authentic resources students use to develop their linguistic and sociolinguistic competence in Swedish.
The results showed that students are willing to improve their language skills, since they believe that mastering another Scandinavian language can help them increase their academic and professional opportunities and constitutes an advantage in terms of the ease of developing writing and reading skills in Swedish.
Keywords: Norwegian, Swedish, linguistic similarities and differences, linguistic interference, foreign language learning, BA level
Preposition choice is among the most prominent phenomena in which German in Austria differs from other varieties of German. Many instances are explained by language contact with Slavic languages in general and Czech in particular. This contribution sheds light on diamedial and diatopic variation in preposition choice in directive arguments (of the verbs Germ. gehen / Cz. jít/chodit) with the translation equivalents of school as the PPs' inner objects. It takes a contrastive, data-based approach, analysing spoken and written corpora of various German varieties and of Czech. We find evidence for diamedial variation in all analysed varieties and languages, with the prepositions Germ. in and Cz. do 'into' being relatively more frequent in spoken than in written language. We also identify two larger areal patterns with gradual transitions in Central Europe: first, a north-western one in the Hamburg/Hannover region, in which the preposition Germ. zu 'to' prevails; second, a south-eastern one in Austria, Bavaria and the Czech Republic, with the dominant preposition Germ. in / Cz. do 'into'.
We define a new quantitative measure for an arbitrary factorial language: the entropy of a random walk in the prefix tree associated with the language; we call it Markov entropy. We relate Markov entropy to the growth rate of the language and to the parameters of branching of its prefix tree. We show how to compute Markov entropy for a regular language. Finally, we develop a framework for experimental study of Markov entropy by modelling random walks and present the results of experiments with power-free and Abelian-power-free languages.
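The experimental side can be pictured with a small sketch of random walks in a prefix tree. The estimator below (average log2 of the branching degree per step of a uniform walk) is only a Monte-Carlo illustration of the idea under simplifying assumptions, not the paper's definition of Markov entropy.

```python
import math
import random

def prefix_tree(words):
    """Trie (prefix tree) of a finite set of words, e.g. all factors up to length n."""
    root = {}
    for w in words:
        node = root
        for c in w:
            node = node.setdefault(c, {})
    return root

def estimate_markov_entropy(root, depth, walks=200, seed=0):
    """Average log2(branching) per step over uniform random walks in the trie.
    Illustrative estimator only; walks stop early at leaves."""
    rng = random.Random(seed)
    total, steps = 0.0, 0
    for _ in range(walks):
        node = root
        for _ in range(depth):
            if not node:                     # reached a leaf: the walk stops
                break
            total += math.log2(len(node))    # entropy of the uniform choice here
            node = node[rng.choice(sorted(node))]
            steps += 1
    return total / steps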
This paper advocates the convergence of terminology and lexicography, and illustrates this view by presenting some of the steps taken to incorporate terminological resources and ideas in an online dictionary portal that is being constructed at the University of Valladolid (Spain). This dictionary portal contains several dictionary types, was designed by the same team and is being constructed from the same theoretical perspective, regardless of whether some of the lexical items included are judged "lexicographic", i.e. related to general language expressions, or "terminological", i.e. connected with terms. In addition to dealing with certain basic tenets of dictionary portals, the paper describes an ad hoc typology of definitions that has been created for two main reasons. Firstly, it makes the process of compilation easier, more uniform, and more readily systematised, thus facilitating the efforts of different people in different places at different times. Secondly, these definitions will feed the Spanish–English Write Assistant, a commercially driven language tool that uses a language module based on statistics and is in the process of adopting Artificial Intelligence (AI) technologies, e.g. machine learning and neural networks, for creating patterns. We have found that providing precise definitions, similar to terminological (i.e. encyclopaedic) definitions, for most lemmas enhances the tool's functionality. Such definitions also offer a picture very different from that of current monolingual Spanish and bilingual Spanish–English dictionaries.
Philology. Linguistics, Languages and literature of Eastern Asia, Africa, Oceania
The treatment of multiword expressions (MWEs) in dictionaries has not received much attention in metalexicography, although the significant role of phraseology has been stressed since the advent of corpus linguistics. The paper aims to analyse the lexicographic representation of semantically related MWEs containing body part names. The study focuses on access routes to these MWEs in the 'Big Five' monolingual English learners' dictionaries online (MELDs). It investigates the presence and positions of hyperlinked MWEs on the page of the body part headword in order to find out whether they depend on a given MWE or are dictionary-specific. Double or multiple hyperlinks to the same MWE are frequently found within a single body part entry, and the variety of access routes is evaluated with a view to offering a more homogeneous presentation of hyperlinked related MWEs.
We say that a language $L$ is \emph{constantly growing} if there is a constant $c$ such that for every word $u\in L$ there is a word $v\in L$ with $\vert u\vert<\vert v\vert\leq c+\vert u\vert$. We say that a language $L$ is \emph{geometrically growing} if there is a constant $c$ such that for every word $u\in L$ there is a word $v\in L$ with $\vert u\vert<\vert v\vert\leq c\vert u\vert$. Given two infinite languages $L_1,L_2$, we say that $L_1$ \emph{dissects} $L_2$ if $\vert L_2\setminus L_1\vert=\infty$ and $\vert L_1\cap L_2\vert=\infty$. In 2013, it was shown that for every constantly growing language $L$ there is a regular language $R$ such that $R$ dissects $L$. In the current article we show how to dissect a geometrically growing language by a homomorphic image of the intersection of two context-free languages. Consider three alphabets $Γ$, $Σ$, and $Θ$ such that $\vert Σ\vert=1$ and $\vert Θ\vert=4$. We prove that there are context-free languages $M_1,M_2\subseteq Θ^*$, an erasing alphabetical homomorphism $π:Θ^*\rightarrow Σ^*$, and a nonerasing alphabetical homomorphism $\varphi : Γ^*\rightarrow Σ^*$ such that: if $L\subseteq Γ^*$ is a geometrically growing language then there is a regular language $R\subseteq Θ^*$ such that $\varphi^{-1}\left(π\left(R\cap M_1\cap M_2\right)\right)$ dissects the language $L$.
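The definitions can be made concrete on a small example (chosen here purely for illustration; it is not from the article): L = {aⁿ : n ≥ 1} is constantly growing with c = 1, and the regular language R = (aa)* dissects it, since both L ∩ R and L \ R (even vs. odd lengths) are infinite.

```python
def in_L(w):
    """L = {a^n : n >= 1}: nonempty unary words."""
    return len(w) >= 1 and set(w) <= {"a"}

def in_R(w):
    """R = (aa)*: unary words of even length (a regular language)."""
    return set(w) <= {"a"} and len(w) % 2 == 0

def dissection_counts(n):
    """|L ∩ R| and |L \\ R| restricted to the words a^1 .. a^n.
    Both counts grow without bound as n grows, witnessing dissection."""
    inter = diff = 0
    for k in range(1, n + 1):
        w = "a" * k
        if in_L(w):
            if in_R(w):
                inter += 1
            else:
                diff += 1
    return inter, diff
```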
The row projection (resp., column projection) of a two-dimensional language $L$ is the one-dimensional language consisting of all first rows (resp., first columns) of each two-dimensional word in $L$. The operation of row projection has previously been studied under the name "frontier language", and previous work has focused on one- and two-dimensional language classes. In this paper, we study projections of languages recognized by various two-dimensional automaton classes. We show that both the row and column projections of languages recognized by (four-way) two-dimensional automata are exactly context-sensitive. We also show that the column projections of languages recognized by unary three-way two-dimensional automata can be recognized using nondeterministic logspace. Finally, we study the state complexity of projection languages for two-way two-dimensional automata, focusing on the language operations of union and diagonal concatenation.
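In code, the two projection operations are straightforward; encoding a two-dimensional word as a tuple of equal-length row strings is an illustrative choice, not the paper's formalism.

```python
def row_projection(L2d):
    """First row of each 2D word; a 2D word is a tuple of equal-length row strings."""
    return {w[0] for w in L2d}

def col_projection(L2d):
    """First column of each 2D word, read top to bottom."""
    return {"".join(row[0] for row in w) for w in L2d}
```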
The article addresses some issues connected with the disciplinary status of lexicography. Drawing on the views of scholars such as L. Zgusta, R. Ilson, H. Wiegand, R. Gouws, H. Bergenholtz, S. Tarp, R. Lew and others, the author argues in favour of the viewpoint that lexicography is a science and that working on a dictionary is a scientific activity. The main issues tackled in the paper include understanding the complex nature of word meaning, the role of dictionaries in the description of word meaning and the development of lexical semantics. Attention is also paid to the definitional method of the study of word meaning, which is based on the analysis of dictionary definitions, the components of the theory of lexicography, the relation between lexicographic theory and practice, and the teaching of lexicography as an academic discipline at universities. The author argues that the right approach to lexicography and its disciplinary status is particularly important in our era of globalisation. Only state-of-the-art lexicographic and corpus resources will secure the future of many languages, particularly lesser-used languages, and such resources will not be created until lexicography receives proper recognition as a science with a "big interdisciplinary vocation" (Tarp 2017) and is turned into an academic discipline through an advanced theory of lexicography, through the teaching of lexicography at universities, and so on.
This paper is about pregroup models of natural languages, and how they relate to the explicitly categorical use of pregroups in Compositional Distributional Semantics and Natural Language Processing. These categorical interpretations make certain assumptions about the nature of natural languages that, when stated formally, may be seen to impose strong restrictions on pregroup grammars for natural languages. We formalize this as a hypothesis about the form that pregroup models of natural languages must take, and demonstrate by an artificial language example that these restrictions are not imposed by the pregroup axioms themselves. We compare and contrast the artificial language examples with natural languages (using Welsh, a language where the 'noun' type cannot be taken as primitive, as an illustrative example). The hypothesis is simply that there must exist a causal connection, or information flow, between the words of a sentence in a language whose purpose is to communicate information. This is not necessarily the case with formal languages that are simply generated by a series of 'meaning-free' rules. This imposes restrictions on the types of pregroup grammars that we expect to find in natural languages; we formalize this in algebraic, categorical, and graphical terms. We take some preliminary steps in providing conditions that ensure pregroup models satisfy these conjectured properties, and discuss the more general forms this hypothesis may take.
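A naive sketch of pregroup reductions may help fix notation: a type is a list of (basic type, adjoint order) pairs, and the generalized contraction b⁽ᵏ⁾ · b⁽ᵏ⁺¹⁾ → 1 is applied greedily. This greedy strategy is adequate for the toy lexicon below but is not a complete pregroup parser in general; the lexicon entries are standard textbook-style examples, not taken from the paper.

```python
def contract(types):
    """Apply pregroup contractions  b^(k) · b^(k+1) -> 1  until none applies.
    A type is a list of (basic_type, adjoint_order) pairs; order 0 is the
    type itself, -1 its left adjoint, +1 its right adjoint, and so on."""
    ts = list(types)
    changed = True
    while changed:
        changed = False
        for i in range(len(ts) - 1):
            (b1, k1), (b2, k2) = ts[i], ts[i + 1]
            if b1 == b2 and k2 == k1 + 1:
                del ts[i:i + 2]     # the adjacent pair cancels to the unit
                changed = True
                break
    return ts

def parses_as_sentence(word_types):
    """A word sequence is a sentence if its concatenated types reduce to s."""
    seq = [t for w in word_types for t in w]
    return contract(seq) == [("s", 0)]
```

For instance, with John : n, sleeps : nʳ·s, and likes : nʳ·s·nˡ, both "John sleeps" and "John likes Mary" reduce to s, while "sleeps John" does not.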
Charles J. Colbourn, Ryan E. Dougherty, Thomas F. Lidbetter et al.
Let $x$ and $y$ be words. We consider the languages whose words $z$ are those for which the numbers of occurrences of $x$ and $y$, as subwords of $z$, are the same (resp., the number of $x$'s is less than the number of $y$'s, resp., is less than or equal). We give a necessary and sufficient condition on $x$ and $y$ for these languages to be regular, and we show how to check this condition efficiently.
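A brute-force check of a small instance may help. Reading "subword" as a contiguous (possibly overlapping) factor — an assumption made here purely for illustration — the equality language for x = ab, y = ba over {a, b} is regular: it consists exactly of the words that are empty or begin and end with the same letter.

```python
from itertools import product

def occ(x, z):
    """Occurrences of x as a contiguous factor of z (overlaps allowed)."""
    return sum(z[i:i + len(x)] == x for i in range(len(z) - len(x) + 1))

def equality_language(x, y, alphabet, max_len):
    """Finite slice of { z : occ(x, z) == occ(y, z) }, for brute-force checks."""
    return {"".join(w)
            for n in range(max_len + 1)
            for w in product(alphabet, repeat=n)
            if occ(x, "".join(w)) == occ(y, "".join(w))}
```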
A regular language is almost fully characterized by its right congruence relation. Indeed, a regular language can always be recognized by a DFA isomorphic to the automaton corresponding to its right congruence, henceforth the rightcon automaton. The same does not hold for regular omega-languages. The right congruence of a regular omega-language is not informative enough; many regular omega-languages have a trivial right congruence, and in general it is not always possible to define an omega-automaton recognizing a given language that is isomorphic to the rightcon automaton. The class of weak regular omega-languages does have an informative right congruence: any weak regular omega-language can be recognized by a deterministic Büchi automaton that is isomorphic to the rightcon automaton. Weak regular omega-languages reside in the lower levels of the expressiveness hierarchy of regular omega-languages. Are there more expressive subclasses of regular omega-languages that have an informative right congruence? Can we fully characterize the class of languages with a trivial right congruence? In this paper we try to place some additional pieces of this big puzzle.
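The finite-word notion can be illustrated with a small sketch that groups sample words into approximate right-congruence (Myhill–Nerode) classes using a membership oracle and bounded-length extensions. This is an illustrative approximation only, not the paper's construction, and it says nothing about the omega-word case that the paper studies.

```python
from itertools import product

def extensions(alphabet, k):
    """All words over `alphabet` of length at most k."""
    return ["".join(p) for n in range(k + 1) for p in product(alphabet, repeat=n)]

def right_equiv(u, v, member, tests):
    """u ~ v  iff  u·z and v·z agree on membership for every tested extension z."""
    return all(member(u + z) == member(v + z) for z in tests)

def rightcon_classes(words, member, alphabet, k=4):
    """Group sample words into approximate right-congruence classes."""
    tests = extensions(alphabet, k)
    classes = []
    for w in words:
        for c in classes:
            if right_equiv(w, c[0], member, tests):
                c.append(w)
                break
        else:
            classes.append([w])
    return classes
```

For the language "even number of a's", the sample collapses into two classes, matching the two states of the minimal DFA, i.e. of the rightcon automaton.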
We will look at a treatment of the semantics of taste predicates using TTR (Type Theory with Records). The central idea is that we take the notion of judgement from type theory as basic and derive a notion of truth from that, rather than starting from a semantics based on a notion of truth and trying to modify it to include a notion of judgement. Our analysis involves two types of propositions: Austinian propositions, whose components include a situation and a type, and a subtype of Austinian propositions called subjective Austinian propositions, whose components in addition include an agent who makes the judgement that the situation is of the type. We will argue that attitude verbs can select either for propositions in general (subjective or objective) or for subjective propositions, but that there is no type of objective propositions which can be selected for. We will discuss some apparent counterexamples to this from Germanic languages and argue that there is a phenomenon akin to switch reference in certain attitude predicates when their complement involves a subjective proposition.
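One crude way to picture the two kinds of proposition (a toy encoding, far from full TTR, with types modelled merely as predicates on situations) is as record-like classes, with subjective propositions as a subtype that adds the judging agent.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class AustinianProp:
    """An Austinian proposition: a situation paired with a type it is claimed to be of."""
    sit: Any
    typ: Callable[[Any], bool]   # a type, modelled crudely as a predicate on situations

@dataclass
class SubjAustinianProp(AustinianProp):
    """Subjective Austinian proposition: additionally records the judging agent."""
    agent: str

def true_prop(p: AustinianProp) -> bool:
    """Truth derived from judgement: p is true iff its situation is of its type."""
    return p.typ(p.sit)
```

The inheritance mirrors the subtype relation: every subjective proposition is a proposition, so anything selecting for propositions in general accepts both, while nothing in the encoding carves out an "objective-only" class — echoing the claim that no type of objective propositions can be selected for.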
Liu et al. (2017) provide a comprehensive account of research on dependency distance in human languages. While the article is a very rich and useful report on this complex subject, here I expand on a few specific issues where research in computational linguistics (specifically natural language processing) can inform research on dependency distance minimization (DDM), and vice versa. These aspects have not been explored much in the article by Liu et al. or elsewhere, probably due to the limited overlap between the two research communities, but they may provide interesting insights for improving our understanding of the evolution of human languages and of the mechanisms by which the brain processes and understands language, as well as for the construction of effective computer systems to achieve this goal.
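For readers outside the DDM community, the basic quantity is simple to compute. The sketch below uses the standard head-index encoding of a dependency tree (each token stores the index of its head, with `None` for the root); the example sentence is my own.

```python
def dependency_distances(heads):
    """heads[i] is the index of token i's head (None for the root).
    Returns the absolute head-dependent distances of all non-root tokens."""
    return [abs(i - h) for i, h in enumerate(heads) if h is not None]

def mean_dependency_distance(heads):
    """Mean dependency distance (MDD) of one sentence."""
    d = dependency_distances(heads)
    return sum(d) / len(d)
```

For "She quickly read the book", with "read" (index 2) as root heading "She", "quickly", and "book", and "the" depending on "book", `heads = [2, 2, None, 4, 2]` and the mean dependency distance is 1.5.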