Search (122 results, page 2 of 7)

  • year_i:[2020 TO 2030}
  1. Engel, B.: Corona-Gesundheitszertifikat als Exitstrategie (2020) 0.01
    0.013916564 = product of:
      0.06958282 = sum of:
        0.06958282 = weight(_text_:22 in 5906) [ClassicSimilarity], result of:
          0.06958282 = score(doc=5906,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.38690117 = fieldWeight in 5906, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=5906)
      0.2 = coord(1/5)
    
    Date
    4. 5.2020 17:22:28
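
The indented breakdown under each result is Lucene's ClassicSimilarity (TF-IDF) explain output: idf is derived from the document frequency, tf is the square root of the term frequency, and the clause score is scaled by queryNorm, fieldNorm and a coordination factor for the fraction of query clauses that matched. A minimal sketch, assuming nothing beyond the constants printed in the explain tree of result 1, that re-derives its score:

```python
# Re-derive the ClassicSimilarity score of result 1 (doc 5906) from the
# components reported in its explain tree. All constants are copied from
# the listing above; only the arithmetic is illustrated.
import math

max_docs   = 44218
doc_freq   = 3622
freq       = 2.0          # termFreq of "22" in the matched field
query_norm = 0.051357865  # queryNorm
field_norm = 0.078125     # fieldNorm(doc=5906)
coord      = 1 / 5        # 1 of 5 query clauses matched

idf          = 1 + math.log(max_docs / (doc_freq + 1))  # ~3.5018296
tf           = math.sqrt(freq)                          # ~1.4142135
query_weight = idf * query_norm                         # ~0.1798465
field_weight = tf * idf * field_norm                    # ~0.38690117
score        = query_weight * field_weight * coord      # ~0.013916564

print(f"{score:.9f}")  # matches the listed 0.013916564 up to float rounding
```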
  2. Arndt, O.: Totale Telematik (2020) 0.01
    0.013916564 = product of:
      0.06958282 = sum of:
        0.06958282 = weight(_text_:22 in 5907) [ClassicSimilarity], result of:
          0.06958282 = score(doc=5907,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.38690117 = fieldWeight in 5907, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=5907)
      0.2 = coord(1/5)
    
    Date
    22. 6.2020 19:11:24
  3. Arndt, O.: Erosion der bürgerlichen Freiheiten (2020) 0.01
    0.013916564 = product of:
      0.06958282 = sum of:
        0.06958282 = weight(_text_:22 in 82) [ClassicSimilarity], result of:
          0.06958282 = score(doc=82,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.38690117 = fieldWeight in 82, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=82)
      0.2 = coord(1/5)
    
    Date
    22. 6.2020 19:16:24
  4. Baecker, D.: ¬Der Frosch, die Fliege und der Mensch : zum Tod von Humberto Maturana (2021) 0.01
    0.013916564 = product of:
      0.06958282 = sum of:
        0.06958282 = weight(_text_:22 in 236) [ClassicSimilarity], result of:
          0.06958282 = score(doc=236,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.38690117 = fieldWeight in 236, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=236)
      0.2 = coord(1/5)
    
    Date
    7. 5.2021 22:10:24
  5. Eyert, F.: Mathematische Wissenschaftskommunikation in der digitalen Gesellschaft (2023) 0.01
    0.013916564 = product of:
      0.06958282 = sum of:
        0.06958282 = weight(_text_:22 in 1001) [ClassicSimilarity], result of:
          0.06958282 = score(doc=1001,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.38690117 = fieldWeight in 1001, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=1001)
      0.2 = coord(1/5)
    
    Source
    Mitteilungen der Deutschen Mathematiker-Vereinigung. 2023, H.1, S.22-25
  6. Wüllner, J.: Obsidian - das Organisationstalent : Ideen, Notizen, Planungen, Projekte - diese App kann für Schule, Uni und Beruf hilfreich sein (2023) 0.01
    0.013916564 = product of:
      0.06958282 = sum of:
        0.06958282 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
          0.06958282 = score(doc=1080,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.38690117 = fieldWeight in 1080, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=1080)
      0.2 = coord(1/5)
    
    Date
    27. 1.2023 16:22:55
  7. Fugmann, R.: What is information? : an information veteran looks back (2022) 0.01
    0.013916564 = product of:
      0.06958282 = sum of:
        0.06958282 = weight(_text_:22 in 1085) [ClassicSimilarity], result of:
          0.06958282 = score(doc=1085,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.38690117 = fieldWeight in 1085, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=1085)
      0.2 = coord(1/5)
    
    Date
    18. 8.2022 19:22:57
  8. Zilm, G.: "Kl ist ein glorifizierter Taschenrechner" (2023) 0.01
    0.013916564 = product of:
      0.06958282 = sum of:
        0.06958282 = weight(_text_:22 in 1129) [ClassicSimilarity], result of:
          0.06958282 = score(doc=1129,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.38690117 = fieldWeight in 1129, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=1129)
      0.2 = coord(1/5)
    
    Date
    27. 1.2023 16:22:55
  9. Sokolow, A.: Es menschelt in der KI-Welt (2023) 0.01
    0.013916564 = product of:
      0.06958282 = sum of:
        0.06958282 = weight(_text_:22 in 1169) [ClassicSimilarity], result of:
          0.06958282 = score(doc=1169,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.38690117 = fieldWeight in 1169, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=1169)
      0.2 = coord(1/5)
    
    Date
    27. 1.2023 16:22:55
  10. Sokolow, A.: Chaostage bei ChatGPT (2023) 0.01
    0.013916564 = product of:
      0.06958282 = sum of:
        0.06958282 = weight(_text_:22 in 1170) [ClassicSimilarity], result of:
          0.06958282 = score(doc=1170,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.38690117 = fieldWeight in 1170, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=1170)
      0.2 = coord(1/5)
    
    Date
    27. 1.2023 16:22:55
  11. Amirhosseini, M.: ¬A novel method for ranking knowledge organization systems (KOSs) based on cognition states (2022) 0.01
    0.013708933 = product of:
      0.06854466 = sum of:
        0.06854466 = weight(_text_:thesaurus in 1105) [ClassicSimilarity], result of:
          0.06854466 = score(doc=1105,freq=4.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.2888174 = fieldWeight in 1105, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.03125 = fieldNorm(doc=1105)
      0.2 = coord(1/5)
    
    Abstract
    The purpose of this article is to delineate the process of evolution of knowledge organization systems (KOSs) through identification of principles of unity such as internal and external unity in organizing the structure of KOSs to achieve content storage and retrieval purposes and to explain a novel method used in ranking of KOSs by proposing the principle of rank unity. Different types of KOSs which are addressed in this article include dictionaries, Roget's thesaurus, thesauri, micro, macro, and meta-thesaurus, ontologies, and lower, middle, and upper-level ontologies. This article relied on dialectic models to clarify the ideas in Kant's knowledge theory. This is done by identifying logical relationships between categories (i.e., thesis, antithesis, and synthesis) in the creation of data, information, and knowledge in the human mind. The analysis has adapted a historical methodology, more specifically a documentary method, as its reasoning process to propose a conceptual model for ranking KOSs. The study endeavors to explain the main elements of data, information, and knowledge along with engineering mechanisms such as data, information, and knowledge engineering in developing the structure of KOSs and also aims to clarify their influence on content storage and retrieval performance. KOSs have followed related principles of order to achieve an internal order, which could be examined by analyzing the principle of internal unity in knowledge organizations. The principle of external unity leads us to the necessity of compatibility and interoperability between different types of KOSs to achieve semantic harmonization in increasing the performance of content storage and retrieval. Upon introduction of the principle of rank unity, a ranking method of KOSs utilizing cognition states as criteria could be considered to determine the position of each knowledge organization with respect to others. The related criteria of the principle of rank unity, cognition states, are derived from Immanuel Kant's epistemology. The research results showed that KOSs, while having defined positions in cognition states, specific principles of order, related operational mechanisms, and related principles of unity in achieving their specific purposes, have benefited from the developmental experiences of previous KOSs, and further, their developmental processes owe to the experiences and methods of their previous generations.
  12. Peponakis, M.; Mastora, A.; Kapidakis, S.; Doerr, M.: Expressiveness and machine processability of Knowledge Organization Systems (KOS) : an analysis of concepts and relations (2020) 0.01
    0.012117098 = product of:
      0.06058549 = sum of:
        0.06058549 = weight(_text_:thesaurus in 5787) [ClassicSimilarity], result of:
          0.06058549 = score(doc=5787,freq=2.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.2552809 = fieldWeight in 5787, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5787)
      0.2 = coord(1/5)
    
    Abstract
    This study considers the expressiveness (that is the expressive power or expressivity) of different types of Knowledge Organization Systems (KOS) and discusses its potential to be machine-processable in the context of the Semantic Web. For this purpose, the theoretical foundations of KOS are reviewed based on conceptualizations introduced by the Functional Requirements for Subject Authority Data (FRSAD) and the Simple Knowledge Organization System (SKOS); natural language processing techniques are also implemented. Applying a comparative analysis, the dataset comprises a thesaurus (Eurovoc), a subject headings system (LCSH) and a classification scheme (DDC). These are compared with an ontology (CIDOC-CRM) by focusing on how they define and handle concepts and relations. It was observed that LCSH and DDC focus on the formalism of character strings (nomens) rather than on the modelling of semantics; their definition of what constitutes a concept is quite fuzzy, and they comprise a large number of complex concepts. By contrast, thesauri have a coherent definition of what constitutes a concept, and apply a systematic approach to the modelling of relations. Ontologies explicitly define diverse types of relations, and are by their nature machine-processable. The paper concludes that the potential of both the expressiveness and machine processability of each KOS is extensively regulated by its structural rules. It is harder to represent subject headings and classification schemes as semantic networks with nodes and arcs, while thesauri are more suitable for such a representation. In addition, a paradigm shift is revealed which focuses on the modelling of relations between concepts, rather than the concepts themselves.
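
The contrast entry 12 draws, that thesauri define concepts coherently and model their relations systematically and are therefore easy to represent as node-and-arc semantic networks, is essentially what SKOS captures. A minimal sketch using the rdflib library (an assumption; the concept URIs and labels below are invented rather than taken from Eurovoc):

```python
# Minimal SKOS sketch: a thesaurus concept with hierarchical and associative
# links, the node-and-arc structure the paper says thesauri map onto easily.
# rdflib and the example namespace/concepts are assumptions for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/thesaurus/")
g = Graph()
g.bind("skos", SKOS)

g.add((EX.fisheries, SKOS.prefLabel, Literal("fisheries", lang="en")))
g.add((EX.fisheries, SKOS.broader, EX.agriculture))       # hierarchical (BT)
g.add((EX.fisheries, SKOS.narrower, EX.aquaculture))      # hierarchical (NT)
g.add((EX.fisheries, SKOS.related, EX.fishing_industry))  # associative (RT)

print(g.serialize(format="turtle"))
```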
  13. Fagundes, P.B.; Freund, G.P.; Vital, L.P.; Monteiro de Barros, C.; Macedo, D.D.J.de: Taxonomias, ontologias e tesauros : possibilidades de contribuição para o processo de Engenharia de Requisitos (2020) 0.01
    0.012117098 = product of:
      0.06058549 = sum of:
        0.06058549 = weight(_text_:thesaurus in 5828) [ClassicSimilarity], result of:
          0.06058549 = score(doc=5828,freq=2.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.2552809 = fieldWeight in 5828, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5828)
      0.2 = coord(1/5)
    
    Theme
    Konzeption und Anwendung des Prinzips Thesaurus
  14. Villaespesa, E.; Crider, S.: ¬A critical comparison analysis between human and machine-generated tags for the Metropolitan Museum of Art's collection (2021) 0.01
    0.012117098 = product of:
      0.06058549 = sum of:
        0.06058549 = weight(_text_:thesaurus in 341) [ClassicSimilarity], result of:
          0.06058549 = score(doc=341,freq=2.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.2552809 = fieldWeight in 341, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.0390625 = fieldNorm(doc=341)
      0.2 = coord(1/5)
    
    Abstract
    Purpose: Based on the highlights of The Metropolitan Museum of Art's collection, the purpose of this paper is to examine the similarities and differences between the subject keyword tags assigned by the museum and those produced by three computer vision systems. Design/methodology/approach: This paper uses computer vision tools to generate the data and the Getty Research Institute's Art and Architecture Thesaurus (AAT) to compare the subject keyword tags. Findings: This paper finds that there are clear opportunities to use computer vision technologies to automatically generate tags that expand the terms used by the museum. This brings a new perspective to the collection that is different from the traditional art historical one. However, the study also surfaces challenges about the accuracy and lack of context within the computer vision results. Practical implications: This finding has important implications for how these machine-generated tags complement the current taxonomies and vocabularies inputted in the collection database. In consequence, the museum needs to consider the selection process for choosing which computer vision system to apply to their collection. Furthermore, they also need to think critically about the kind of tags they wish to use, such as colors, materials or objects. Originality/value: The study results add to the rapidly evolving field of computer vision within the art information context and provide recommendations of aspects to consider before selecting and implementing these technologies.
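
One straightforward way to operationalize the comparison described in entry 14 is to normalize both tag sets against a shared vocabulary such as the AAT and then measure their overlap. A hedged sketch with invented tag lists; the AAT mapping step itself is out of scope here:

```python
# Sketch: compare human-assigned and machine-generated tags for one artwork.
# The tag lists are invented; real data would come from the museum's records
# and a computer-vision API, both mapped to AAT terms beforehand.
def jaccard(a: set[str], b: set[str]) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

human_tags   = {"portrait", "oil paint", "canvas", "woman"}
machine_tags = {"portrait", "painting", "woman", "dress", "art"}

print("shared:      ", sorted(human_tags & machine_tags))
print("machine-only:", sorted(machine_tags - human_tags))
print("jaccard:     ", round(jaccard(human_tags, machine_tags), 2))
```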
  15. Xiang, R.; Chersoni, E.; Lu, Q.; Huang, C.-R.; Li, W.; Long, Y.: Lexical data augmentation for sentiment analysis (2021) 0.01
    0.012117098 = product of:
      0.06058549 = sum of:
        0.06058549 = weight(_text_:thesaurus in 392) [ClassicSimilarity], result of:
          0.06058549 = score(doc=392,freq=2.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.2552809 = fieldWeight in 392, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.0390625 = fieldNorm(doc=392)
      0.2 = coord(1/5)
    
    Abstract
    Machine learning methods, especially deep learning models, have achieved impressive performance in various natural language processing tasks including sentiment analysis. However, deep learning models are more demanding in terms of training data. Data augmentation techniques are widely used to generate new instances based on modifications to existing data or by relying on external knowledge bases, addressing the annotated data scarcity that otherwise hinders the full potential of machine learning techniques. This paper presents our work using part-of-speech (POS) focused lexical substitution for data augmentation (PLSDA) to enhance the performance of machine learning algorithms in sentiment analysis. We exploit POS information to identify words to be replaced and investigate different augmentation strategies to find semantically related substitutions when generating new instances. The choice of POS tags as well as a variety of strategies such as semantic-based substitution methods and sampling methods are discussed in detail. Performance evaluation focuses on the comparison between PLSDA and two previous lexical substitution-based data augmentation methods, one thesaurus-based and the other based on lexicon manipulation. Our approach is tested on five English sentiment analysis benchmarks: SST-2, MR, IMDB, Twitter, and AirRecord. Hyperparameters such as the candidate similarity threshold and the number of newly generated instances are optimized. Results show that six classifiers (SVM, LSTM, BiLSTM-AT, bidirectional encoder representations from transformers [BERT], XLNet, and RoBERTa) trained with PLSDA achieve an accuracy improvement of more than 0.6% compared with the two previous lexical substitution methods, averaged over the five benchmarks. Introducing a POS constraint and well-designed augmentation strategies can improve the reliability of lexical data augmentation methods. Consequently, PLSDA significantly improves the performance of sentiment analysis algorithms.
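
The augmentation idea in entry 15, POS-focused lexical substitution, amounts to replacing content words of selected parts of speech with semantically related words to create new labelled instances. A simplified stand-in using NLTK and WordNet, not the authors' PLSDA implementation; similarity thresholds and sampling strategies are omitted:

```python
# Simplified POS-focused lexical substitution for data augmentation:
# replace nouns and adjectives with WordNet synonyms to create new instances.
# Illustrative stand-in only; not the PLSDA method from the paper.
import random
import nltk
from nltk.corpus import wordnet as wn

# Requires: nltk.download("punkt"), nltk.download("averaged_perceptron_tagger"),
#           nltk.download("wordnet")

POS_MAP = {"NN": wn.NOUN, "JJ": wn.ADJ}  # which POS classes get substituted

def augment(sentence: str) -> str:
    out = []
    for word, tag in nltk.pos_tag(nltk.word_tokenize(sentence)):
        wn_pos = POS_MAP.get(tag[:2])
        synonyms = set()
        if wn_pos:
            for syn in wn.synsets(word, pos=wn_pos):
                synonyms.update(l.name().replace("_", " ") for l in syn.lemmas())
            synonyms.discard(word)
        out.append(random.choice(sorted(synonyms)) if synonyms else word)
    return " ".join(out)

print(augment("The movie was a great surprise with a wonderful cast"))
```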
  16. Lima, G.A. de; Castro, I.R.: Uso da classificacao decimal universal para a recuperacao da informacao em ambientes digitas : uma revisao sistematica da literatura (2021) 0.01
    0.012117098 = product of:
      0.06058549 = sum of:
        0.06058549 = weight(_text_:thesaurus in 760) [ClassicSimilarity], result of:
          0.06058549 = score(doc=760,freq=2.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.2552809 = fieldWeight in 760, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.0390625 = fieldNorm(doc=760)
      0.2 = coord(1/5)
    
    Abstract
    Knowledge Organization Systems, even traditional ones such as the Universal Decimal Classification, have been studied to improve the retrieval of information online, although the potential of using knowledge structures in the user interface has not yet become widespread. Objective: This study presents a mapping of scientific production on information retrieval methodologies that make use of the Universal Decimal Classification. Methodology: A systematic literature review, conducted in two stages, with a selection of 44 publications covering the period from 1964 to 2017; the categories analyzed were: most productive authors, languages of publications, types of document, year of publication, most cited work, major impact journal, and thematic categories covered in the publications. Results: A total of nine most productive authors and co-authors were found; a predominance of the English language (42 publications); works published mostly as journal articles (33); and the year 2007 standing out (eight publications). In addition, it was identified that the most cited work was by Mcilwaine (1997), with 61 citations, and that the journal Extensions & Corrections to the UDC had the largest number of publications, in addition to the incidence of the theme Universal Automation linked to a thesaurus for information retrieval, present in 19 works. Conclusions: There is a shortage of studies that explore the potential of the Decimal Classification, especially in the Brazilian literature, which highlights the need for further study on the topic, involving research at the national and international levels.
  17. Ahmed, M.: Automatic indexing for agriculture : designing a framework by deploying Agrovoc, Agris and Annif (2023) 0.01
    0.012117098 = product of:
      0.06058549 = sum of:
        0.06058549 = weight(_text_:thesaurus in 1024) [ClassicSimilarity], result of:
          0.06058549 = score(doc=1024,freq=2.0), product of:
            0.23732872 = queryWeight, product of:
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.051357865 = queryNorm
            0.2552809 = fieldWeight in 1024, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.6210785 = idf(docFreq=1182, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1024)
      0.2 = coord(1/5)
    
    Abstract
    There are several ways to employ machine learning for automating subject indexing. One popular strategy is to utilize a supervised learning algorithm to train a model on a set of documents that have been manually indexed by subject matter using a standard vocabulary. The resulting model can then predict the subject of new and previously unseen documents by identifying patterns learned from the training data. To do this, the first step is to gather a large dataset of documents and manually assign each document a set of subject keywords/descriptors from a controlled vocabulary (e.g., from Agrovoc). Next, the dataset (obtained from Agris) can be divided into (i) a training dataset and (ii) a test dataset. The training dataset is used to train the model, while the test dataset is used to evaluate the model's performance. Machine learning can be a powerful tool for automating the process of subject indexing. This research is an attempt to apply Annif (http://annif.org/), an open-source AI/ML framework, to autogenerate subject keywords/descriptors for documentary resources in the domain of agriculture. The training dataset is obtained from Agris, which applies the Agrovoc thesaurus as a vocabulary tool (https://www.fao.org/agris/download).
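
The workflow in entry 17 (train on records that were already indexed with a controlled vocabulary, then predict descriptors for unseen documents) is a standard multi-label text classification setup. A minimal sketch with scikit-learn and toy records; the documents and descriptors are invented, and Annif's own backends work differently:

```python
# Generic sketch of supervised subject indexing: learn to assign controlled
# descriptors (e.g. Agrovoc-style terms) from manually indexed records, then
# predict descriptors for unseen text. Toy data; not Annif's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

docs = [
    "Irrigation scheduling for maize under drought stress",
    "Soil nitrogen dynamics in wheat cropping systems",
    "Maize yield response to nitrogen fertilizer application",
    "Drip irrigation and water use efficiency in vegetable crops",
]
labels = [["irrigation", "maize"], ["soil", "wheat"],
          ["maize", "fertilizers"], ["irrigation", "vegetables"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # one binary column per descriptor

model = make_pipeline(TfidfVectorizer(),
                      OneVsRestClassifier(LogisticRegression(max_iter=1000)))
model.fit(docs, Y)

pred = model.predict(["Nitrogen fertilizer trials in irrigated maize fields"])
print([term for term, flag in zip(mlb.classes_, pred[0]) if flag])
```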
  18. Morris, V.: Automated language identification of bibliographic resources (2020) 0.01
    0.011133251 = product of:
      0.055666253 = sum of:
        0.055666253 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
          0.055666253 = score(doc=5749,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.30952093 = fieldWeight in 5749, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=5749)
      0.2 = coord(1/5)
    
    Date
    2. 3.2020 19:04:22
  19. Heisig, P.: Informationswissenschaft für Wissensmanager : Was Wissensmanager von der informationswissenschaftlichen Forschung lernen können (2021) 0.01
    0.011133251 = product of:
      0.055666253 = sum of:
        0.055666253 = weight(_text_:22 in 223) [ClassicSimilarity], result of:
          0.055666253 = score(doc=223,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.30952093 = fieldWeight in 223, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=223)
      0.2 = coord(1/5)
    
    Date
    22. 1.2021 14:38:21
  20. Hauff-Hartig, S.: Automatische Transkription von Videos : Fernsehen 3.0: Automatisierte Sentimentanalyse und Zusammenstellung von Kurzvideos mit hohem Aufregungslevel KI-generierte Metadaten: Von der Technologiebeobachtung bis zum produktiven Einsatz (2021) 0.01
    0.011133251 = product of:
      0.055666253 = sum of:
        0.055666253 = weight(_text_:22 in 251) [ClassicSimilarity], result of:
          0.055666253 = score(doc=251,freq=2.0), product of:
            0.1798465 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.051357865 = queryNorm
            0.30952093 = fieldWeight in 251, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0625 = fieldNorm(doc=251)
      0.2 = coord(1/5)
    
    Date
    22. 5.2021 12:43:05

Languages

  • e 86
  • d 33
  • pt 3

Types

  • a 113
  • el 21
  • p 3
  • m 2
  • x 2