Search (7 results, page 1 of 1)

  • theme_ss:"Computerlinguistik"
  • type_ss:"a"
  • year_i:[2020 TO 2030}
  1. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.11
    0.10877442 = product of:
      0.27193606 = sum of:
        0.067984015 = product of:
          0.20395203 = sum of:
            0.20395203 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.20395203 = score(doc=862,freq=2.0), product of:
                0.36289233 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.042803947 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.33333334 = coord(1/3)
        0.20395203 = weight(_text_:2f in 862) [ClassicSimilarity], result of:
          0.20395203 = score(doc=862,freq=2.0), product of:
            0.36289233 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.042803947 = queryNorm
            0.56201804 = fieldWeight in 862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=862)
      0.4 = coord(2/5)
    
    Source
    https://arxiv.org/abs/2212.06721
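The explain tree for this first hit can be recomputed from its leaf values: under Lucene's ClassicSimilarity, a term's score is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(freq) × idf × fieldNorm, and coord factors then scale each branch by the fraction of query clauses matched. A minimal sketch, with all constants copied from the tree above:

```python
import math

# Leaf values copied from the explain tree for doc 862 above.
idf = 8.478011            # idf(docFreq=24, maxDocs=44218), same for both terms
query_norm = 0.042803947  # queryNorm
freq = 2.0                # termFreq
field_norm = 0.046875     # fieldNorm(doc=862)

tf = math.sqrt(freq)                      # 1.4142135 = tf(freq=2.0)
query_weight = idf * query_norm           # 0.36289233
field_weight = tf * idf * field_norm      # 0.56201804
term_score = query_weight * field_weight  # 0.20395203, identical for _text_:3a and _text_:2f

# The _text_:3a branch is scaled by coord(1/3); the sum of both branches by coord(2/5).
total = (term_score * (1 / 3) + term_score) * (2 / 5)
print(round(total, 8))  # ≈ 0.10877442, the headline score of result 1
```

Every intermediate value reproduces the corresponding line of the explain output, which is a useful sanity check when debugging relevance scores.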
  2. Laparra, E.; Binford-Walsh, A.; Emerson, K.; Miller, M.L.; López-Hoffman, L.; Currim, F.; Bethard, S.: Addressing structural hurdles for metadata extraction from environmental impact statements (2023) 0.02
    0.019227838 = product of:
      0.096139185 = sum of:
        0.096139185 = weight(_text_:policy in 1042) [ClassicSimilarity], result of:
          0.096139185 = score(doc=1042,freq=4.0), product of:
            0.22950763 = queryWeight, product of:
              5.361833 = idf(docFreq=563, maxDocs=44218)
              0.042803947 = queryNorm
            0.41889322 = fieldWeight in 1042, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.361833 = idf(docFreq=563, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1042)
      0.2 = coord(1/5)
    
    Abstract
    Natural language processing techniques can be used to analyze the linguistic content of a document to extract missing pieces of metadata. However, accurate metadata extraction may not depend solely on the linguistics, but also on structural problems such as extremely large documents, unordered multi-file documents, and inconsistency in manually labeled metadata. In this work, we start from two standard machine learning solutions to extract pieces of metadata from Environmental Impact Statements, environmental policy documents that are regularly produced under the US National Environmental Policy Act of 1969. We present a series of experiments where we evaluate how these standard approaches are affected by different issues derived from real-world data. We find that metadata extraction can be strongly influenced by nonlinguistic factors such as document length and volume ordering and that the standard machine learning solutions often do not scale well to long documents. We demonstrate how such solutions can be better adapted to these scenarios, and conclude with suggestions for other NLP practitioners cataloging large document collections.
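The idf figures that recur in these explain trees (8.478011, 5.361833, 3.5018296) follow ClassicSimilarity's formula idf = 1 + ln(maxDocs / (docFreq + 1)); a quick sanity check against the quoted values:

```python
import math

def classic_idf(doc_freq: int, max_docs: int) -> float:
    """Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

# Reproduce the idf values quoted in the explain trees on this page.
print(classic_idf(563, 44218))   # ~5.361833  ("policy", result 2)
print(classic_idf(24, 44218))    # ~8.478011  (rare terms in result 1)
print(classic_idf(3622, 44218))  # ~3.5018296 ("22", results 4-7)
```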
  3. Ali, C.B.; Haddad, H.; Slimani, Y.: Multi-word terms selection for information retrieval (2022) 0.01
    0.014994288 = product of:
      0.07497144 = sum of:
        0.07497144 = weight(_text_:great in 900) [ClassicSimilarity], result of:
          0.07497144 = score(doc=900,freq=2.0), product of:
            0.24101958 = queryWeight, product of:
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.042803947 = queryNorm
            0.31105953 = fieldWeight in 900, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.6307793 = idf(docFreq=430, maxDocs=44218)
              0.0390625 = fieldNorm(doc=900)
      0.2 = coord(1/5)
    
    Abstract
    Purpose: A number of approaches and algorithms have been proposed over the years as a basis for automatic indexing. Many of them suffer from poor precision at low recall, and the choice of indexing units has a great impact on search system effectiveness. The authors go beyond simple term indexing and propose a framework for filtering and indexing multi-word terms (MWTs).
    Design/methodology/approach: The authors rank MWTs in order to filter them, keeping the most effective ones for the indexing process. The proposed model filters MWTs according to their ability to capture the document topic and to distinguish between different documents of the same collection, relying on the hypothesis that the best MWTs are those that achieve the greatest association degree. Experiments are carried out on English- and French-language data sets.
    Findings: The results indicate that the approach improves precision at low recall and performs better than more advanced models based on term dependencies.
    Originality/value: Different association measures are used and tested to select the MWTs that best describe the documents, enhancing precision over the first retrieved documents.
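The "greatest association degree" idea in this abstract can be illustrated by ranking two-word candidates with pointwise mutual information (PMI), one common association measure; this is a generic sketch on an invented toy corpus, not the authors' actual model or measure:

```python
import math
from collections import Counter

def pmi_bigrams(tokens, min_count=2):
    """Rank candidate two-word terms by pointwise mutual information.
    Illustrative only: the paper compares several association measures."""
    n = len(tokens)
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    scores = {}
    for (w1, w2), count in bigrams.items():
        if count < min_count:
            continue  # drop rare candidates before scoring
        p_pair = count / (n - 1)
        p_indep = (unigrams[w1] / n) * (unigrams[w2] / n)
        scores[(w1, w2)] = math.log(p_pair / p_indep)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical toy corpus, invented for this example.
tokens = ("information retrieval systems rank documents ; "
          "information retrieval models score documents").split()
print(pmi_bigrams(tokens)[0][0])  # -> ('information', 'retrieval')
```

A pair that co-occurs more often than its parts would by chance gets a high score and survives the filtering step; one-off combinations are discarded.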
  4. Der Student aus dem Computer (2023) 0.01
    0.008119082 = product of:
      0.04059541 = sum of:
        0.04059541 = product of:
          0.08119082 = sum of:
            0.08119082 = weight(_text_:22 in 1079) [ClassicSimilarity], result of:
              0.08119082 = score(doc=1079,freq=2.0), product of:
                0.14989214 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042803947 = queryNorm
                0.5416616 = fieldWeight in 1079, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1079)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    27. 1.2023 16:22:55
  5. Morris, V.: Automated language identification of bibliographic resources (2020) 0.00
    0.0046394756 = product of:
      0.023197377 = sum of:
        0.023197377 = product of:
          0.046394754 = sum of:
            0.046394754 = weight(_text_:22 in 5749) [ClassicSimilarity], result of:
              0.046394754 = score(doc=5749,freq=2.0), product of:
                0.14989214 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042803947 = queryNorm
                0.30952093 = fieldWeight in 5749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5749)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    2. 3.2020 19:04:22
  6. Bager, J.: Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.00
    0.0046394756 = product of:
      0.023197377 = sum of:
        0.023197377 = product of:
          0.046394754 = sum of:
            0.046394754 = weight(_text_:22 in 835) [ClassicSimilarity], result of:
              0.046394754 = score(doc=835,freq=2.0), product of:
                0.14989214 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042803947 = queryNorm
                0.30952093 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=835)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    29.12.2022 18:22:55
  7. Rieger, F.: Lügende Computer (2023) 0.00
    0.0046394756 = product of:
      0.023197377 = sum of:
        0.023197377 = product of:
          0.046394754 = sum of:
            0.046394754 = weight(_text_:22 in 912) [ClassicSimilarity], result of:
              0.046394754 = score(doc=912,freq=2.0), product of:
                0.14989214 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042803947 = queryNorm
                0.30952093 = fieldWeight in 912, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=912)
          0.5 = coord(1/2)
      0.2 = coord(1/5)
    
    Date
    16. 3.2023 19:22:55