Search (479 results, page 24 of 24)

  • theme_ss:"Computerlinguistik"
  1. Lu, K.; Cai, X.; Ajiferuke, I.; Wolfram, D.: Vocabulary size and its effect on topic representation (2017) 0.00
    Score detail (Lucene ClassicSimilarity; term "1" in doc 3414):
      tf = sqrt(termFreq 2.0) = 1.4142135
      idf(docFreq=10304, maxDocs=44218) = 2.4565027
      queryWeight = idf × queryNorm(0.023567878) = 0.057894554
      fieldWeight = tf × idf × fieldNorm(0.046875) = 0.16284466
      score = queryWeight × fieldWeight × coord(1/2) × coord(1/23) = 2.0495258E-4
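    The figures above are Lucene's ClassicSimilarity "explain" output behind the rounded 0.00 relevance score. A minimal Python sketch reproduces the arithmetic, assuming Lucene's classic formulas tf = sqrt(freq) and idf = ln(maxDocs/(docFreq+1)) + 1; all statistics are copied from the explain output rather than recomputed:

      import math

      # Statistics copied from the explain output for entry 1.
      doc_freq, max_docs = 10304, 44218
      term_freq = 2.0
      field_norm = 0.046875      # stored length norm for this field
      query_norm = 0.023567878   # query normalization factor

      tf = math.sqrt(term_freq)                      # 1.4142135
      idf = math.log(max_docs / (doc_freq + 1)) + 1  # ~2.4565
      query_weight = idf * query_norm                # ~0.0578946
      field_weight = tf * idf * field_norm           # ~0.1628447

      # coord(1/2) and coord(1/23) scale by the fraction of matching clauses.
      score = query_weight * field_weight * 0.5 * (1 / 23)
      print(f"{score:.7E}")                          # ~2.0495E-04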
    
    Abstract
    This study investigates how computational overhead for topic model training may be reduced by selectively removing terms from the vocabulary of text corpora being modeled. We compare the impact of removing singly occurring terms, the top 0.5%, 1% and 5% most frequently occurring terms and both top 0.5% most frequent and singly occurring terms, along with changes in the number of topics modeled (10, 20, 30, 40, 50, 100) using three datasets. Four outcome measures are compared. The removal of singly occurring terms has little impact on outcomes for all of the measures tested. Document discriminative capacity, as measured by the document space density, is reduced by the removal of frequently occurring terms, but increases with higher numbers of topics. Vocabulary size does not greatly influence entropy, but entropy is affected by the number of topics. Finally, topic similarity, as measured by pairwise topic similarity and Jensen-Shannon divergence, decreases with the removal of frequent terms. The findings have implications for information science research in information retrieval and informetrics that makes use of topic modeling.
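    The pruning conditions compared above reduce to a simple frequency filter. A minimal sketch (the whitespace tokenizer and toy corpus are assumptions for illustration, not the study's setup):

      from collections import Counter

      def prune_vocabulary(docs, top_frac=0.005, drop_singletons=True):
          # Drop the top_frac most frequent terms and/or terms occurring once,
          # mirroring the study's conditions (top 0.5%, 1%, 5%, singletons).
          freq = Counter(tok for doc in docs for tok in doc.split())
          ranked = [t for t, _ in freq.most_common()]
          removed = set(ranked[: int(len(ranked) * top_frac)])
          if drop_singletons:
              removed |= {t for t, c in freq.items() if c == 1}
          return [" ".join(t for t in doc.split() if t not in removed)
                  for doc in docs]

      corpus = ["the cat sat on the mat", "the dog sat", "a rare aardvark appeared"]
      # Keep singletons here so the toy corpus is not emptied entirely.
      print(prune_vocabulary(corpus, top_frac=0.2, drop_singletons=False))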
  2. Corbara, S.; Moreo, A.; Sebastiani, F.: Syllabic quantity patterns as rhythmic features for Latin authorship attribution (2023) 0.00
    Source
    Journal of the Association for Information Science and Technology. 74(2023) no.1, S.128-141
  3. Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.: Improving language understanding by Generative Pre-Training (2018) 0.00
    Object
    GPT-1
  4. Conceptual structures : logical, linguistic, and computational issues. 8th International Conference on Conceptual Structures, ICCS 2000, Darmstadt, Germany, August 14-18, 2000 (2000) 0.00
  5. Ferber, R.; Wettler, M.; Rapp, R.: ¬An associative model of word selection in the generation of search queries (1995) 0.00
    Abstract
    To generate a search query based on an end user request, a database searcher has to select appropriate search terms. These terms can either be taken from the request, or they can be added by the searcher. This selection process is simulated by an associative lexical net; the nodes of the net are the terms used in 94 records of written requests to a psychological information agency and the respective online searches. The weights connecting the nodes are calculated from the co-occurrences of these terms in the abstracts of the database PsycLit. To simulate the term selection process of a query, the nodes of all terms used in the written requests are activated, and one or more spreading activation cycles are performed. The result of the simulation is a ranking of the terms according to the activities of their nodes. Simulations for all 94 records show a low mean activity rank for the terms selected from the request; the mean activity rank for new terms added by the searcher is lower than the mean activity rank for those terms of the request that were not used in the query.
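    The simulation described above is a spreading-activation pass over a weighted term graph. A minimal sketch, with toy terms and weights standing in for the co-occurrence statistics derived from PsycLit abstracts:

      # Toy association weights; w[src][dst] is the link strength src -> dst.
      weights = {
          "anxiety": {"stress": 0.6, "therapy": 0.3},
          "stress":  {"anxiety": 0.5, "coping": 0.4},
          "therapy": {"coping": 0.2},
          "coping":  {},
      }

      def spread(activity, weights):
          # One spreading-activation cycle: every node passes a share of its
          # activity to its neighbours along the weighted links.
          new = dict(activity)
          for src, links in weights.items():
              for dst, w in links.items():
                  new[dst] += activity[src] * w
          return new

      # Activate the nodes of the terms appearing in the written request.
      activity = {t: 0.0 for t in weights}
      for term in ("anxiety", "stress"):
          activity[term] = 1.0

      activity = spread(activity, weights)
      # Rank all terms by activation: high-ranking terms not yet in the
      # request are candidates for addition to the search query.
      for term in sorted(activity, key=activity.get, reverse=True):
          print(term, round(activity[term], 2))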
  6. Ahlgren, P.; Kekäläinen, J.: Indexing strategies for Swedish full text retrieval under different user scenarios (2007) 0.00
    Source
    Information processing and management. 43(2007) no.1, S.81-102
  7. Vilares, J.; Alonso, M.A.; Vilares, M.: Extraction of complex index terms in non-English IR : a shallow parsing based approach (2008) 0.00
    Date
    1. 8.2008 12:35:48
  8. Yang, Y.; Lu, Q.; Zhao, T.: ¬A delimiter-based general approach for Chinese term extraction (2009) 0.00
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.1, S.111-125
  9. Jacquemin, C.: Spotting and discovering terms through natural language processing (2001) 0.00
    Isbn
    0-262-10085-1
  10. Nissim, M.; Zaninello, A.: Modeling the internal variability of multiword expressions through a pattern-based method (2013) 0.00
    Source
    ACM Transactions on Speech and Language Processing. 10(2013) no.2, Article 7, S.1-26
  11. Fagan, J.L.: ¬The effectiveness of a nonsyntactic approach to automatic phrase indexing for document retrieval (1989) 0.00
    Abstract
    It may be possible to improve the quality of automatic indexing systems by using complex descriptors, for example, phrases, in addition to the simple descriptors (words or word stems) that are normally used in automatically constructed representations of document content. This study is directed toward the goal of developing effective methods of identifying phrases in natural language text from which good quality phrase descriptors can be constructed. The effectiveness of one method, a simple nonsyntactic phrase indexing procedure, has been tested on five experimental document collections. The results have been analyzed in order to identify the inadequacies of the procedure, and to determine what kinds of information about text structure are needed in order to construct phrase descriptors that are good indicators of document content. Two primary conclusions have been reached: (1) In the retrieval experiments, the nonsyntactic phrase construction procedure did not consistently yield substantial improvements in effectiveness. It is therefore not likely that phrase indexing of this kind will prove to be an important method of enhancing the performance of automatic document indexing and retrieval systems in operational environments. (2) Many of the shortcomings of the nonsyntactic approach can be overcome by incorporating syntactic information into the phrase construction process. However, a general syntactic analysis facility may be required, since many useful sources of phrases cannot be exploited if only a limited inventory of syntactic patterns can be recognized. Further research should be conducted into methods of incorporating automatic syntactic analysis into content analysis for document retrieval.
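    For illustration, a nonsyntactic phrase construction step can be as simple as pairing adjacent content words; Fagan's actual procedure applied corpus-wide frequency and proximity criteria, so the stopword list and pairing rule below are simplifying assumptions:

      STOPWORDS = {"the", "of", "a", "an", "to", "in", "for"}  # tiny illustrative list

      def nonsyntactic_phrases(text):
          # Pair adjacent content words after stopword removal; no syntactic
          # analysis is involved, which is the defining trait of the approach.
          words = [w.strip(".,").lower() for w in text.split()
                   if w.lower() not in STOPWORDS]
          return list(zip(words, words[1:]))

      print(nonsyntactic_phrases(
          "The effectiveness of a nonsyntactic approach to automatic phrase indexing"))
      # [('effectiveness', 'nonsyntactic'), ('nonsyntactic', 'approach'), ...]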
  12. Brychcín, T.; Konopík, M.: HPS: High precision stemmer (2015) 0.00
    Source
    Information processing and management. 51(2015) no.1, S.68-91
  13. Agarwal, B.; Ramampiaro, H.; Langseth, H.; Ruocco, M.: ¬A deep network model for paraphrase detection in short text messages (2018) 0.00
    Abstract
    This paper is concerned with paraphrase detection, i.e., identifying sentences that are semantically identical. The ability to detect similar sentences written in natural language is crucial for several applications, such as text mining, text summarization, plagiarism detection, authorship authentication and question answering. Recognizing this importance, we study in particular how to address the challenges of detecting paraphrases in user-generated short texts, such as those on Twitter, which often contain language irregularity and noise and do not necessarily contain as much semantic information as longer clean texts. We propose a novel deep neural network-based approach that relies on coarse-grained sentence modelling using a convolutional neural network (CNN) and a recurrent neural network (RNN) model, combined with a specific fine-grained word-level similarity matching model. More specifically, we develop a new architecture, called DeepParaphrase, which creates an informative semantic representation of each sentence by (1) using the CNN to extract local region information in the form of important n-grams from the sentence, and (2) applying the RNN to capture long-term dependency information. In addition, we perform a comparative study of state-of-the-art approaches to paraphrase detection. An important insight from this study is that existing paraphrase approaches perform well when applied to clean texts, but they do not necessarily deliver good performance against noisy texts, and vice versa. In contrast, our evaluation has shown that the proposed DeepParaphrase-based approach achieves good results on both types of text, thus making it more robust and generic than the existing approaches.
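    A minimal sketch of the coarse-grained CNN-plus-RNN sentence encoding described above; the layer sizes, the GRU variant, and cosine matching (in place of the paper's fine-grained word-level matching model) are illustrative assumptions, not the published DeepParaphrase configuration:

      import torch
      import torch.nn as nn

      class SentenceEncoder(nn.Module):
          # CNN extracts local n-gram features; an RNN over the feature
          # sequence captures long-term dependencies.
          def __init__(self, vocab_size=10000, emb_dim=100, n_filters=64, hidden=128):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, emb_dim)
              self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
              self.rnn = nn.GRU(n_filters, hidden, batch_first=True)

          def forward(self, token_ids):                      # (batch, seq_len)
              x = self.embed(token_ids)                      # (batch, seq, emb)
              x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, filters, seq)
              _, h = self.rnn(x.transpose(1, 2))             # h: (1, batch, hidden)
              return h.squeeze(0)                            # (batch, hidden)

      encoder = SentenceEncoder()
      s1 = torch.randint(0, 10000, (2, 12))   # two toy token-id sentences
      s2 = torch.randint(0, 10000, (2, 12))   # their candidate paraphrases
      similarity = torch.cosine_similarity(encoder(s1), encoder(s2))
      print(similarity)                       # one paraphrase score per pair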
  14. Ali, C.B.; Haddad, H.; Slimani, Y.: Multi-word terms selection for information retrieval (2022) 0.00
    Source
    Information discovery and delivery. 51(2022) no.1, S.xx-xx
  15. Zaitseva, E.M.: Developing linguistic tools of thematic search in library information systems (2023) 0.00
    Source
    Scientific and technical libraries. 1(2023) no.11, S.66-83
  16. Humphrey, S.M.; Rogers, W.J.; Kilicoglu, H.; Demner-Fushman, D.; Rindflesch, T.C.: Word sense disambiguation by selecting the best semantic type based on journal descriptor indexing : preliminary experiment (2006) 0.00
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.1, S.96-113
  17. Kajanan, S.; Bao, Y.; Datta, A.; VanderMeer, D.; Dutta, K.: Efficient automatic search query formulation using phrase-level analysis (2014) 0.00
    Date
    1. 5.2014 18:10:13
  18. Aydin, Ö.; Karaarslan, E.: OpenAI ChatGPT generated literature review : digital twin in healthcare (2022) 0.00
    Abstract
    Literature review articles are essential to summarize the related work in a selected field. However, covering all related studies takes too much time and effort. This study asks how artificial intelligence can be used in this process. We used ChatGPT to create a literature review article in order to show the current stage of the OpenAI ChatGPT artificial intelligence application. As the subject, the applications of Digital Twin in the health field were chosen. Abstracts of papers from the last three years (2020, 2021 and 2022) were obtained from the keyword "Digital twin in healthcare" search results on Google Scholar and paraphrased by ChatGPT. Later on, we asked ChatGPT questions. The results are promising; however, the paraphrased parts had significant matches when checked with the iThenticate tool. This article is a first attempt to show that the compilation and expression of knowledge will be accelerated with the help of artificial intelligence. We are still at the beginning of such advances. The future academic publishing process will require less human effort, which in turn will allow academics to focus on their studies. In future studies, we will monitor citations to this study to evaluate the academic validity of the content produced by ChatGPT.
    1. Introduction
    OpenAI ChatGPT (ChatGPT, 2022) is a chatbot based on the OpenAI GPT-3 language model. It is designed to generate human-like text responses to user input in a conversational context. Trained on a large dataset of human conversations, it can understand and respond to a wide range of topics and prompts, and it can be used for customer service, content creation, and language translation tasks, creating replies in multiple languages. OpenAI ChatGPT is available through the OpenAI API, which allows developers to access and integrate the chatbot into their applications and systems. It is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI and can be used in various applications, such as chatbots, customer service agents, and language translation systems. As a state-of-the-art language model, it is able to generate coherent and natural text that can be indistinguishable from text written by a human. As an artificial intelligence, ChatGPT may need help to change academic writing practices. However, it can provide information and guidance on ways to improve people's academic writing skills.
  19. Nagy T., I.: Detecting multiword expressions and named entities in natural language texts (2014) 0.00
    Content
    Cf.: http://doktori.bibl.u-szeged.hu/2434/1/main.pdf.

Languages

  • e 253
  • d 211
  • ru 9
  • m 4
  • f 2

Types

  • a 362
  • m 74
  • el 50
  • s 27
  • x 12
  • p 3
  • d 2
  • pat 2
