Search (58 results, page 1 of 3)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.17
    0.17360047 = product of:
      0.2893341 = sum of:
        0.067984015 = product of:
          0.20395203 = sum of:
            0.20395203 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
              0.20395203 = score(doc=562,freq=2.0), product of:
                0.36289233 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042803947 = queryNorm
                0.56201804 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.33333334 = coord(1/3)
        0.20395203 = weight(_text_:2f in 562) [ClassicSimilarity], result of:
          0.20395203 = score(doc=562,freq=2.0), product of:
            0.36289233 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042803947 = queryNorm
            0.56201804 = fieldWeight in 562, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=562)
        0.017398031 = product of:
          0.034796063 = sum of:
            0.034796063 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
              0.034796063 = score(doc=562,freq=2.0), product of:
                0.14989214 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042803947 = queryNorm
                0.23214069 = fieldWeight in 562, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=562)
          0.5 = coord(1/2)
      0.6 = coord(3/5)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf.
    Date
    8. 1.2013 10:22:32
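The score breakdowns in this dump are Lucene ClassicSimilarity (TF-IDF) explain trees: tf = sqrt(freq), idf = 1 + ln(maxDocs / (docFreq + 1)), a term's contribution is queryWeight × fieldWeight, and coord factors down-weight partially matched clauses. A minimal sketch reproducing entry 1's numbers (queryNorm and fieldNorm are copied from the dump; tiny deviations stem from Lucene's 32-bit floats):

```python
import math

def classic_tfidf(freq, doc_freq, max_docs, query_norm, field_norm):
    """Reproduce one weight(_text_:term) node of a ClassicSimilarity explain tree."""
    tf = math.sqrt(freq)                             # 1.4142135 for freq=2
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 8.478011 for docFreq=24
    query_weight = idf * query_norm                  # 0.36289233
    field_weight = tf * idf * field_norm             # 0.56201804
    return query_weight * field_weight               # 0.20395203

# Entry 1 (doc 562): terms "3a" and "2f" share the same statistics; "22" is commoner.
rare = classic_tfidf(freq=2, doc_freq=24, max_docs=44218,
                     query_norm=0.042803947, field_norm=0.046875)
common = classic_tfidf(freq=2, doc_freq=3622, max_docs=44218,
                       query_norm=0.042803947, field_norm=0.046875)
# Combine as in the explain output: inner coord factors, then coord(3/5).
total = (rare * (1 / 3) + rare + common * (1 / 2)) * (3 / 5)
print(round(total, 4))  # ≈ 0.1736
```

The matched "terms" 3a, 2f, and 22 are apparently tokenization artifacts of percent-encoded URLs (%3A, %2F) and of timestamps in the indexed records, which is why such rare tokens dominate the ranking.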
  2. Noever, D.; Ciolino, M.: ¬The Turing deception (2022) 0.11
    0.10877442 = coord(2/5) * (0.20395203 weight(3a) * coord(1/3) + 0.20395203 weight(2f))
    
    Source
    https://arxiv.org/abs/2212.06721
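Source and link fields in dumps like this often arrive percent-encoded (%3A for ":", %2F for "/"); Python's standard library recovers the plain URL:

```python
from urllib.parse import unquote

# A percent-encoded source field as it appears in this dump (entry 2).
encoded = "https%3A%2F%2Farxiv.org%2Fabs%2F2212.06721"
print(unquote(encoded))  # https://arxiv.org/abs/2212.06721
```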
  3. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.09
    0.088540025 = coord(2/5) * (0.20395203 weight(2f) + 0.034796063 weight(22) * coord(1/2))
    
    Content
    A Thesis presented to The University of Guelph in partial fulfilment of requirements for the degree of Master of Science in Computer Science. Cf.: http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
  4. Melucci, M.; Orio, N.: Design, implementation, and evaluation of a methodology for automatic stemmer generation (2007) 0.02
    0.020992003 = coord(1/5) * 0.10496002 weight(great)
    
    Abstract
    The authors describe a statistical approach based on hidden Markov models (HMMs) for generating stemmers automatically. The proposed approach requires little effort to insert new languages into the system, even if minimal linguistic knowledge is available. This is a key advantage, especially for digital libraries, which are often developed for a specific institution or government, because the system can manage a great number of documents written in local languages. The evaluation described in the article shows that the stemmers implemented by means of HMMs are as effective as those based on linguistic rules.
  5. Laparra, E.; Binford-Walsh, A.; Emerson, K.; Miller, M.L.; López-Hoffman, L.; Currim, F.; Bethard, S.: Addressing structural hurdles for metadata extraction from environmental impact statements (2023) 0.02
    0.019227838 = coord(1/5) * 0.096139185 weight(policy, freq=4)
    
    Abstract
    Natural language processing techniques can be used to analyze the linguistic content of a document to extract missing pieces of metadata. However, accurate metadata extraction may not depend solely on the linguistics, but also on structural problems such as extremely large documents, unordered multi-file documents, and inconsistency in manually labeled metadata. In this work, we start from two standard machine learning solutions to extract pieces of metadata from Environmental Impact Statements, environmental policy documents that are regularly produced under the US National Environmental Policy Act of 1969. We present a series of experiments where we evaluate how these standard approaches are affected by different issues derived from real-world data. We find that metadata extraction can be strongly influenced by nonlinguistic factors such as document length and volume ordering and that the standard machine learning solutions often do not scale well to long documents. We demonstrate how such solutions can be better adapted to these scenarios, and conclude with suggestions for other NLP practitioners cataloging large document collections.
  6. Peis, E.; Herrera-Viedma, E.; Herrera, J.C.: On the evaluation of XML documents using Fuzzy linguistic techniques (2003) 0.02
    0.017993147 = coord(1/5) * 0.08996573 weight(great)
    
    Abstract
    Recommender systems evaluate and filter the great amount of information available on the Web to assist people in their search processes. A fuzzy evaluation method of XML documents based on computing with words is presented. Given an XML document type (e.g. scientific article), we consider that its elements are not equally informative. This is indicated by the use of a DTD and by defining linguistic importance attributes for the more meaningful elements of the DTD designed. Then, the evaluation method generates linguistic recommendations from linguistic evaluation judgements provided by different recommenders on meaningful elements of the DTD.
  7. Rajasurya, S.; Muralidharan, T.; Devi, S.; Swamynathan, S.: Semantic information retrieval using ontology in university domain (2012) 0.01
    0.014994288 = coord(1/5) * 0.07497144 weight(great)
    
    Abstract
    Today's conventional search engines hardly provide content relevant to the user's search query, because the context and semantics of the user's request are not analyzed to the full extent. Hence the need for semantic web search (SWS), an emerging area of web search that combines Natural Language Processing and Artificial Intelligence. The objective of the work described here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not a mere keyword search: it works one layer above what Google or any other search engine retrieves by analyzing just the keywords. Here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion, and the level of accuracy is enhanced since the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the relevant links; for ranking, an algorithm has been applied which fetches more apt results for the user query.
  8. Rozinajová, V.; Macko, P.: Using natural language to search linked data (2017) 0.01
    0.014994288 = coord(1/5) * 0.07497144 weight(great)
    
    Abstract
    There are many endeavors aiming to offer users more effective ways of getting relevant information from the web. One of them is the concept of Linked Data, which provides interconnected data sources. But querying these types of data is difficult not only for conventional web users but also for experts in this field. Therefore, a more comfortable way of querying would be of great value, and one direction is to allow the user to use natural language. To make this task easier we have proposed a method for translating a natural language query into a SPARQL query. It is based on sentence structure, utilizing dependencies between the words in user queries. The dependencies are used to map the query to the semantic web structure, which is in the next step translated into a SPARQL query. According to our first experiments, we are able to answer a significant group of user queries.
  9. Ali, C.B.; Haddad, H.; Slimani, Y.: Multi-word terms selection for information retrieval (2022) 0.01
    0.014994288 = coord(1/5) * 0.07497144 weight(great)
    
    Abstract
    Purpose: A number of approaches and algorithms have been proposed over the years as a basis for automatic indexing. Many of these approaches suffer from precision inefficiency at low recall. The choice of indexing units has a great impact on search system effectiveness. The authors dive beyond simple term indexing to propose a framework for multi-word term (MWT) filtering and indexing. Design/methodology/approach: The authors rely on ranking MWT to filter them, keeping the most effective ones for the indexing process. The proposed model is based on filtering MWT according to their ability to capture the document topic and to distinguish between different documents from the same collection. The authors rely on the hypothesis that the best MWT are those that achieve the greatest association degree. The experiments are carried out with English and French language data sets. Findings: The results indicate that this approach achieves precision enhancements at low recall and performs better than more advanced models based on term dependencies. Originality/value: Different association measures are used and tested to select the MWT that best describe the documents, enhancing precision in the first retrieved documents.
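The abstract above ranks multi-word terms by their "association degree" without naming the measure in this dump; pointwise mutual information (PMI) is one common association measure and serves here as a hypothetical illustration:

```python
import math

def pmi(count_xy: int, count_x: int, count_y: int, n: int) -> float:
    """Pointwise mutual information of a word pair (x, y).

    count_xy: corpus frequency of the bigram; count_x, count_y: frequencies
    of the individual words; n: total number of bigram observations.
    """
    p_xy = count_xy / n
    p_x = count_x / n
    p_y = count_y / n
    return math.log2(p_xy / (p_x * p_y))

# Hypothetical counts: a bigram like "information retrieval" occurring far
# more often than its parts would suggest under independence.
score = pmi(count_xy=20, count_x=50, count_y=40, n=10_000)
print(round(score, 4))  # log2(20*10000 / (50*40)) = log2(100) ≈ 6.6439
```

Word pairs with high PMI co-occur far more often than chance, which is what makes them plausible candidates for topical index terms.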
  10. Warner, A.J.: Natural language processing (1987) 0.01
    0.009278951 = coord(1/5) * 0.09278951 weight(22) * coord(1/2)
    
    Source
    Annual review of information science and technology. 22(1987), S.79-108
  11. McMahon, J.G.; Smith, F.J.: Improved statistical language model performance with automatic generated word hierarchies (1996) 0.01
    0.008119082 = coord(1/5) * 0.08119082 weight(22) * coord(1/2)
    
    Source
    Computational linguistics. 22(1996) no.2, S.217-248
  12. Ruge, G.: ¬A spreading activation network for automatic generation of thesaurus relationships (1991) 0.01
    0.008119082 = coord(1/5) * 0.08119082 weight(22) * coord(1/2)
    
    Date
    8.10.2000 11:52:22
  13. Somers, H.: Example-based machine translation : Review article (1999) 0.01
    0.008119082 = coord(1/5) * 0.08119082 weight(22) * coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  14. New tools for human translators (1997) 0.01
    0.008119082 = coord(1/5) * 0.08119082 weight(22) * coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  15. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.01
    0.008119082 = coord(1/5) * 0.08119082 weight(22) * coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  16. ¬Der Student aus dem Computer (2023) 0.01
    0.008119082 = coord(1/5) * 0.08119082 weight(22) * coord(1/2)
    
    Date
    27. 1.2023 16:22:55
  17. Byrne, C.C.; McCracken, S.A.: ¬An adaptive thesaurus employing semantic distance, relational inheritance and nominal compound interpretation for linguistic support of information retrieval (1999) 0.01
    0.0069592125 = coord(1/5) * 0.069592126 weight(22) * coord(1/2)
    
    Date
    15. 3.2000 10:22:37
  18. Boleda, G.; Evert, S.: Multiword expressions : a pain in the neck of lexical semantics (2009) 0.01
    0.0069592125 = coord(1/5) * 0.069592126 weight(22) * coord(1/2)
    
    Date
    1. 3.2013 14:56:22
  19. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.01
    0.0069592125 = coord(1/5) * 0.069592126 weight(22) * coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  20. Hutchins, J.: From first conception to first demonstration : the nascent years of machine translation, 1947-1954. A chronology (1997) 0.01
    0.0057993443 = coord(1/5) * 0.05799344 weight(22) * coord(1/2)
    
    Date
    31. 7.1996 9:22:19

Languages

  • e 42
  • d 16

Types

  • a 46
  • el 6
  • m 5
  • s 3
  • p 2
  • x 2
  • d 1