Search (282 results, page 1 of 15)

  • theme_ss:"Computerlinguistik"
  1. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.08
    0.082663804 = sum of:
      0.06734439 = product of:
        0.26937756 = sum of:
          0.26937756 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.26937756 = score(doc=562,freq=2.0), product of:
              0.4793041 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.05653497 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.25 = coord(1/4)
      0.015319416 = product of:
        0.045958247 = sum of:
          0.045958247 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.045958247 = score(doc=562,freq=2.0), product of:
              0.19797583 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05653497 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
    
    Content
    Cf.: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.91.4940&rep=rep1&type=pdf
    Date
    8. 1.2013 10:22:32
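    Example
    The score breakdowns in this listing are Lucene "explain" output for ClassicSimilarity (TF-IDF). As a minimal sketch of where the numbers come from, the following Python reproduces entry 1's score of 0.08 from the quantities in its tree, assuming Lucene's classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)); the function name is ours.

      import math

      def classic_term_score(freq, doc_freq, max_docs, query_norm, field_norm):
          idf = 1.0 + math.log(max_docs / (doc_freq + 1))
          tf = math.sqrt(freq)
          query_weight = idf * query_norm       # 0.4793041 for _text_:3a
          field_weight = tf * idf * field_norm  # 0.56201804 in doc 562
          return query_weight * field_weight

      QUERY_NORM = 0.05653497  # queryNorm is shared by every clause of the query

      s_3a = classic_term_score(2.0, 24, 44218, QUERY_NORM, 0.046875)
      s_22 = classic_term_score(2.0, 3622, 44218, QUERY_NORM, 0.046875)
      total = s_3a * 0.25 + s_22 * 0.33333334   # coord(1/4) and coord(1/3)
      print(round(total, 9))                    # ~0.082663804, as shown above

    The same template explains every tree below; only docFreq, freq, fieldNorm, and the coord factors change from entry to entry.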
  2. Schwarz, C.: THESYS: Thesaurus Syntax System : a fully automatic thesaurus building aid (1988) 0.07
    0.06631558 = product of:
      0.13263115 = sum of:
        0.13263115 = sum of:
          0.052024584 = weight(_text_:c in 1361) [ClassicSimilarity], result of:
            0.052024584 = score(doc=1361,freq=2.0), product of:
              0.19501202 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.05653497 = queryNorm
              0.2667763 = fieldWeight in 1361, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
          0.026988605 = weight(_text_:h in 1361) [ClassicSimilarity], result of:
            0.026988605 = score(doc=1361,freq=2.0), product of:
              0.14045826 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.05653497 = queryNorm
              0.19214681 = fieldWeight in 1361, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
          0.053617954 = weight(_text_:22 in 1361) [ClassicSimilarity], result of:
            0.053617954 = score(doc=1361,freq=2.0), product of:
              0.19797583 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05653497 = queryNorm
              0.2708308 = fieldWeight in 1361, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1361)
      0.5 = coord(1/2)
    
    Date
    6. 1.1999 10:22:07
    Source
    Wissensorganisation im Wandel: Dezimalklassifikation - Thesaurusfragen - Warenklassifikation. Proc. 11. Jahrestagung der Gesellschaft für Klassifikation, Aachen, 29.6.-1.7.1987. Hrsg.: H.-J. Hermes u. J. Hölzl
  3. Somers, H.: Example-based machine translation : Review article (1999) 0.05
    0.053737707 = product of:
      0.107475415 = sum of:
        0.107475415 = product of:
          0.16121311 = sum of:
            0.05397721 = weight(_text_:h in 6672) [ClassicSimilarity], result of:
              0.05397721 = score(doc=6672,freq=2.0), product of:
                0.14045826 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05653497 = queryNorm
                0.38429362 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
            0.10723591 = weight(_text_:22 in 6672) [ClassicSimilarity], result of:
              0.10723591 = score(doc=6672,freq=2.0), product of:
                0.19797583 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05653497 = queryNorm
                0.5416616 = fieldWeight in 6672, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6672)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    31. 7.1996 9:22:19
  4. Baayen, R.H.; Lieber, H.: Word frequency distributions and lexical semantics (1997) 0.05
    0.053737707 = product of:
      0.107475415 = sum of:
        0.107475415 = product of:
          0.16121311 = sum of:
            0.05397721 = weight(_text_:h in 3117) [ClassicSimilarity], result of:
              0.05397721 = score(doc=3117,freq=2.0), product of:
                0.14045826 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05653497 = queryNorm
                0.38429362 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
            0.10723591 = weight(_text_:22 in 3117) [ClassicSimilarity], result of:
              0.10723591 = score(doc=3117,freq=2.0), product of:
                0.19797583 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05653497 = queryNorm
                0.5416616 = fieldWeight in 3117, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3117)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    28. 2.1999 10:48:22
  5. Monnerjahn, P.: Vorsprung ohne Technik : Übersetzen: Computer und Qualität (2000) 0.05
    0.04606089 = product of:
      0.09212178 = sum of:
        0.09212178 = product of:
          0.13818267 = sum of:
            0.04626618 = weight(_text_:h in 5429) [ClassicSimilarity], result of:
              0.04626618 = score(doc=5429,freq=2.0), product of:
                0.14045826 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05653497 = queryNorm
                0.32939452 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
            0.091916494 = weight(_text_:22 in 5429) [ClassicSimilarity], result of:
              0.091916494 = score(doc=5429,freq=2.0), product of:
                0.19797583 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05653497 = queryNorm
                0.46428138 = fieldWeight in 5429, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5429)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.230-231
  6. Schwarz, C.; Thurmair, G.: REALIST: eine Retrievalhilfe mit informationslinguistischen Komponenten (1986) 0.05
    0.045150395 = product of:
      0.09030079 = sum of:
        0.09030079 = product of:
          0.13545118 = sum of:
            0.08918501 = weight(_text_:c in 493) [ClassicSimilarity], result of:
              0.08918501 = score(doc=493,freq=2.0), product of:
                0.19501202 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05653497 = queryNorm
                0.45733082 = fieldWeight in 493, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.09375 = fieldNorm(doc=493)
            0.04626618 = weight(_text_:h in 493) [ClassicSimilarity], result of:
              0.04626618 = score(doc=493,freq=2.0), product of:
                0.14045826 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05653497 = queryNorm
                0.32939452 = fieldWeight in 493, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.09375 = fieldNorm(doc=493)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Deutscher Dokumentartag 1986, Freiburg, 8.-10.10.1986: Bedarfsorientierte Fachinformation: Methoden und Techniken am Arbeitsplatz. Bearb.: H. Strohl-Goebel
  7. Lu, C.; Bu, Y.; Wang, J.; Ding, Y.; Torvik, V.; Schnaars, M.; Zhang, C.: Examining scientific writing styles from the perspective of linguistic complexity : a cross-level moderation model (2019) 0.04
    0.040493406 = sum of:
      0.0194723 = product of:
        0.0778892 = sum of:
          0.0778892 = weight(_text_:authors in 5219) [ClassicSimilarity], result of:
            0.0778892 = score(doc=5219,freq=2.0), product of:
              0.25773242 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05653497 = queryNorm
              0.30220953 = fieldWeight in 5219, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.046875 = fieldNorm(doc=5219)
        0.25 = coord(1/4)
      0.02102111 = product of:
        0.06306332 = sum of:
          0.06306332 = weight(_text_:c in 5219) [ClassicSimilarity], result of:
            0.06306332 = score(doc=5219,freq=4.0), product of:
              0.19501202 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.05653497 = queryNorm
              0.32338172 = fieldWeight in 5219, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.046875 = fieldNorm(doc=5219)
        0.33333334 = coord(1/3)
    
    Abstract
    Publishing articles in high-impact English journals is difficult for scholars around the world, especially for non-native English-speaking scholars (NNESs), most of whom struggle with proficiency in English. To uncover the differences in English scientific writing between native English-speaking scholars (NESs) and NNESs, we collected a large-scale data set containing more than 150,000 full-text articles published in PLoS between 2006 and 2015. We divided these articles into three groups according to the ethnic backgrounds of the first and corresponding authors, as identified by Ethnea, and examined their scientific writing styles in English from a two-fold perspective of linguistic complexity: (a) syntactic complexity, including measurements of sentence length and sentence complexity; and (b) lexical complexity, including measurements of lexical diversity, lexical density, and lexical sophistication. The observations suggest marginal differences between the groups in syntactic and lexical complexity.
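    Example
    The complexity measures named above are standard corpus statistics. A minimal sketch (our tokenization and stopword list, not the authors' implementation): mean sentence length as a syntactic-complexity proxy, type-token ratio for lexical diversity, and a stopword-based approximation of lexical density.

      import re

      STOPWORDS = {"the", "a", "an", "of", "in", "and", "to", "is", "are",
                   "for", "on", "with", "that", "this", "be", "by", "as"}

      def complexity(text):
          sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
          tokens = re.findall(r"[a-z']+", text.lower())
          mean_sent_len = len(tokens) / len(sentences)  # syntactic proxy
          ttr = len(set(tokens)) / len(tokens)          # lexical diversity
          density = sum(t not in STOPWORDS for t in tokens) / len(tokens)
          return mean_sent_len, ttr, density

      print(complexity("Publishing articles in English journals is difficult. "
                       "Non-native speakers struggle with proficiency."))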
  8. Chou, C.; Chu, T.: An analysis of BERT (NLP) for assisted subject indexing for Project Gutenberg (2022) 0.04
    0.04005921 = sum of:
      0.02271768 = product of:
        0.09087072 = sum of:
          0.09087072 = weight(_text_:authors in 1139) [ClassicSimilarity], result of:
            0.09087072 = score(doc=1139,freq=2.0), product of:
              0.25773242 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05653497 = queryNorm
              0.35257778 = fieldWeight in 1139, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1139)
        0.25 = coord(1/4)
      0.017341528 = product of:
        0.052024584 = sum of:
          0.052024584 = weight(_text_:c in 1139) [ClassicSimilarity], result of:
            0.052024584 = score(doc=1139,freq=2.0), product of:
              0.19501202 = queryWeight, product of:
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.05653497 = queryNorm
              0.2667763 = fieldWeight in 1139, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.4494052 = idf(docFreq=3817, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1139)
        0.33333334 = coord(1/3)
    
    Abstract
    In light of AI (Artificial Intelligence) and NLP (Natural Language Processing) technologies, this article examines the feasibility of using AI/NLP models to enhance the subject indexing of digital resources. While BERT (Bidirectional Encoder Representations from Transformers) models are widely used in scholarly communities, the authors assess whether BERT models can be used for machine-assisted indexing of the Project Gutenberg collection by suggesting Library of Congress Subject Headings filtered by certain Library of Congress Classification subclass labels. The findings of this study are informative for further research on BERT models to assist with automatic subject indexing for digital library collections.
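    Example
    The abstract does not spell out the pipeline, but a common pattern for BERT-assisted indexing is to embed the text and a candidate vocabulary and rank the candidates by similarity. A hedged sketch using the sentence-transformers library; the model name, headings, and ranking scheme are illustrative, not the authors' setup.

      from sentence_transformers import SentenceTransformer, util

      model = SentenceTransformer("all-MiniLM-L6-v2")  # any BERT-style encoder

      candidate_headings = [                # illustrative LCSH strings,
          "Science fiction",                # pre-filtered, e.g., by an LCC
          "Detective and mystery stories",  # subclass of interest
          "Natural history",
      ]

      def suggest(text, top_k=2):
          doc_vec = model.encode(text, convert_to_tensor=True)
          head_vecs = model.encode(candidate_headings, convert_to_tensor=True)
          scores = util.cos_sim(doc_vec, head_vecs)[0]
          ranked = sorted(zip(candidate_headings, scores.tolist()),
                          key=lambda p: p[1], reverse=True)
          return ranked[:top_k]

      print(suggest("A consulting detective solves crimes in London."))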
  9. Deventer, J.P. van; Kruger, C.J.; Johnson, R.D.: Delineating knowledge management through lexical analysis : a retrospective (2015) 0.04
    0.038988993 = sum of:
      0.030052667 = product of:
        0.12021067 = sum of:
          0.12021067 = weight(_text_:authors in 3807) [ClassicSimilarity], result of:
            0.12021067 = score(doc=3807,freq=14.0), product of:
              0.25773242 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05653497 = queryNorm
              0.46641657 = fieldWeight in 3807, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.25 = coord(1/4)
      0.008936326 = product of:
        0.026808977 = sum of:
          0.026808977 = weight(_text_:22 in 3807) [ClassicSimilarity], result of:
            0.026808977 = score(doc=3807,freq=2.0), product of:
              0.19797583 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05653497 = queryNorm
              0.1354154 = fieldWeight in 3807, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=3807)
        0.33333334 = coord(1/3)
    
    Abstract
    Purpose Academic authors tend to define terms so that they meet their own needs. Knowledge Management (KM) is one such term and is examined in this study. Lexicographical research identified the KM terms used by authors in academic outlets from 1996 to 2006 to define KM. Data were collected under strict criteria, including that definitions had to be unique instances. From 2006 onwards, no new unique definitions could be identified; existing definitions were only repeated. Analysis revealed that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, and Process) and Contextualised Content (Information). The paper aims to discuss these issues. Design/methodology/approach The aim of this paper is to add to the body of knowledge in the KM discipline and to supply KM practitioners and scholars with insight into what is commonly regarded as KM, so as to reignite the debate on what one could consider KM. The lexicon used by KM scholars was evaluated through the application of lexicographical research methods, extended through Knowledge Discovery and Text Analysis methods. Findings By simplifying term relationships through lexicographical research methods, extended through Knowledge Discovery and Text Analysis methods, it was found that KM is directly defined by People (Person and Organisation), Processes (Codify, Share, Leverage, Process) and Contextualised Content (Information). From an academic point of view, KM therefore refers to people processing contextualised content.
    Research limitations/implications In total, 42 definitions were identified, spanning a period of 11 years from the first use of KM to the estimated apex of term usage. From 2006 onwards definitions were used only in repetition, and all repeating definitions were therefore excluded as not being unique instances. The definitions listed are by no means complete or exhaustive. They are viewed outside the scope and context in which they were originally formulated and are used to review the key concepts in the definitions themselves. Social implications The discussion of KM content and the method presented in this paper carry several implications for future KM research. First, the research validates ideas pertaining to KM presented by the OECD in 2005. It also shows that, through the evolution of KM, the field has arrived at a description of KM that may be seen as standardised. If academics and practitioners refer to KM as the same construct and idea, this standardised description can help distinguish what KM may or may not be. Originality/value By simplifying the terms used to define KM and focusing on the most common definitions, the paper assists in refocusing KM by reconsidering the dimensions that are most common in how it has been defined over time. This should help reignite discussions about KM and how it may be used to the benefit of an organisation.
    Date
    20. 1.2015 18:30:22
  10. Kuhlmann, U.; Monnerjahn, P.: Sprache auf Knopfdruck : Sieben automatische Übersetzungsprogramme im Test (2000) 0.04
    0.03838408 = product of:
      0.07676816 = sum of:
        0.07676816 = product of:
          0.11515223 = sum of:
            0.038555153 = weight(_text_:h in 5428) [ClassicSimilarity], result of:
              0.038555153 = score(doc=5428,freq=2.0), product of:
                0.14045826 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05653497 = queryNorm
                0.27449545 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
            0.07659708 = weight(_text_:22 in 5428) [ClassicSimilarity], result of:
              0.07659708 = score(doc=5428,freq=2.0), product of:
                0.19797583 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05653497 = queryNorm
                0.38690117 = fieldWeight in 5428, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=5428)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    c't. 2000, H.22, S.220-229
  11. Ali, C.B.; Haddad, H.; Slimani, Y.: Multi-word terms selection for information retrieval (2022) 0.03
    0.034531705 = sum of:
      0.028105846 = product of:
        0.11242338 = sum of:
          0.11242338 = weight(_text_:authors in 900) [ClassicSimilarity], result of:
            0.11242338 = score(doc=900,freq=6.0), product of:
              0.25773242 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05653497 = queryNorm
              0.43620193 = fieldWeight in 900, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0390625 = fieldNorm(doc=900)
        0.25 = coord(1/4)
      0.006425859 = product of:
        0.019277576 = sum of:
          0.019277576 = weight(_text_:h in 900) [ClassicSimilarity], result of:
            0.019277576 = score(doc=900,freq=2.0), product of:
              0.14045826 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.05653497 = queryNorm
              0.13724773 = fieldWeight in 900, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0390625 = fieldNorm(doc=900)
        0.33333334 = coord(1/3)
    
    Abstract
    Purpose A number of approaches and algorithms have been proposed over the years as a basis for automatic indexing. Many of these approaches suffer from poor precision at low recall. The choice of indexing units has a great impact on search-system effectiveness. The authors go beyond simple term indexing to propose a framework for multi-word term (MWT) filtering and indexing. Design/methodology/approach In this paper, the authors rely on ranking MWT to filter them, keeping the most effective ones for the indexing process. The proposed model filters MWT according to their ability to capture the document topic and to distinguish between different documents from the same collection. The authors rely on the hypothesis that the best MWT are those that achieve the greatest association degree. The experiments are carried out on English and French data sets. Findings The results indicate that this approach achieves precision enhancements at low recall and performs better than more advanced models based on term dependencies. Originality/value The paper uses and tests different association measures to select the MWT that best describe the documents, enhancing precision among the first retrieved documents.
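    Example
    The abstract leaves the association measure open; pointwise mutual information (PMI) is one common choice for scoring how strongly the words of a candidate multi-word term attract each other. A minimal sketch (PMI is our illustrative pick, not necessarily the authors' measure):

      import math
      from collections import Counter

      def pmi_bigrams(tokens):
          unigrams = Counter(tokens)
          bigrams = Counter(zip(tokens, tokens[1:]))
          n = len(tokens)
          scored = {}
          for (w1, w2), f in bigrams.items():
              p_xy = f / (n - 1)
              p_x, p_y = unigrams[w1] / n, unigrams[w2] / n
              scored[(w1, w2)] = math.log2(p_xy / (p_x * p_y))
          return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

      tokens = ("information retrieval systems rank "
                "information retrieval results").split()
      print(pmi_bigrams(tokens)[:3])  # most strongly associated pairs first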
  12. Ladewig, C.: 'Information Retrieval ohne Linguistik?' : Erwiderung zu dem Artikel von Gerda Ruge und Sebastian Goeser, Nfd 49(1998) H.6, S.361-369 (1998) 0.03
    0.03435895 = product of:
      0.0687179 = sum of:
        0.0687179 = product of:
          0.103076845 = sum of:
            0.05945667 = weight(_text_:c in 2513) [ClassicSimilarity], result of:
              0.05945667 = score(doc=2513,freq=2.0), product of:
                0.19501202 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05653497 = queryNorm
                0.3048872 = fieldWeight in 2513, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2513)
            0.043620173 = weight(_text_:h in 2513) [ClassicSimilarity], result of:
              0.043620173 = score(doc=2513,freq=4.0), product of:
                0.14045826 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05653497 = queryNorm
                0.31055614 = fieldWeight in 2513, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2513)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    nfd Information - Wissenschaft und Praxis. 49(1998) H.8, S.476-478
  13. Noever, D.; Ciolino, M.: The Turing deception (2022) 0.03
    0.033672195 = product of:
      0.06734439 = sum of:
        0.06734439 = product of:
          0.26937756 = sum of:
            0.26937756 = weight(_text_:3a in 862) [ClassicSimilarity], result of:
              0.26937756 = score(doc=862,freq=2.0), product of:
                0.4793041 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05653497 = queryNorm
                0.56201804 = fieldWeight in 862, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.046875 = fieldNorm(doc=862)
          0.25 = coord(1/4)
      0.5 = coord(1/2)
    
    Source
    https://arxiv.org/abs/2212.06721
  14. Wu, H.; He, J.; Pei, Y.: Scientific impact at the topic level : a case study in computational linguistics (2010) 0.03
    0.03171388 = sum of:
      0.02271768 = product of:
        0.09087072 = sum of:
          0.09087072 = weight(_text_:authors in 4103) [ClassicSimilarity], result of:
            0.09087072 = score(doc=4103,freq=2.0), product of:
              0.25773242 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05653497 = queryNorm
              0.35257778 = fieldWeight in 4103, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4103)
        0.25 = coord(1/4)
      0.008996202 = product of:
        0.026988605 = sum of:
          0.026988605 = weight(_text_:h in 4103) [ClassicSimilarity], result of:
            0.026988605 = score(doc=4103,freq=2.0), product of:
              0.14045826 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.05653497 = queryNorm
              0.19214681 = fieldWeight in 4103, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0546875 = fieldNorm(doc=4103)
        0.33333334 = coord(1/3)
    
    Abstract
    In this article, we propose to apply a topic model and the topic-level eigenfactor (TEF) algorithm to assess the relative importance of academic entities, including articles, authors, journals, and conferences. Scientific impact is measured by a PageRank score biased toward the topics created by the latent topic model. The TEF metric considers the impact of an academic entity in multiple granular views as well as in a global view. Experiments on a computational linguistics corpus show that the method is a useful and promising measure of scientific impact.
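    Example
    Biasing PageRank toward a topic is typically done through the personalization (teleport) vector. A sketch with networkx; the toy citation graph and topic weights are invented for illustration, and the paper's TEF algorithm adds machinery beyond this.

      import networkx as nx

      # toy citation graph: an edge u -> v means paper u cites paper v
      G = nx.DiGraph([("p1", "p2"), ("p1", "p3"), ("p2", "p3"),
                      ("p4", "p3"), ("p4", "p1")])

      # topic-model weights: how strongly each paper belongs to the topic
      topic_weight = {"p1": 0.7, "p2": 0.2, "p3": 0.9, "p4": 0.1}

      # random jumps land preferentially on papers central to the topic
      scores = nx.pagerank(G, alpha=0.85, personalization=topic_weight)
      print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))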
  15. Radev, D.R.; Joseph, M.T.; Gibson, B.; Muthukrishnan, P.: A bibliometric and network analysis of the field of computational linguistics (2016) 0.03
    0.03171388 = sum of:
      0.02271768 = product of:
        0.09087072 = sum of:
          0.09087072 = weight(_text_:authors in 2764) [ClassicSimilarity], result of:
            0.09087072 = score(doc=2764,freq=2.0), product of:
              0.25773242 = queryWeight, product of:
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.05653497 = queryNorm
              0.35257778 = fieldWeight in 2764, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.558814 = idf(docFreq=1258, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2764)
        0.25 = coord(1/4)
      0.008996202 = product of:
        0.026988605 = sum of:
          0.026988605 = weight(_text_:h in 2764) [ClassicSimilarity], result of:
            0.026988605 = score(doc=2764,freq=2.0), product of:
              0.14045826 = queryWeight, product of:
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.05653497 = queryNorm
              0.19214681 = fieldWeight in 2764, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                2.4844491 = idf(docFreq=10020, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2764)
        0.33333334 = coord(1/3)
    
    Abstract
    The ACL Anthology is a large collection of research papers in computational linguistics. Citation data were obtained by text extraction from a collection of PDF files, with significant manual postprocessing to clean up the results. Manual annotation of the references was then performed to complete the citation network. We analyzed the networks of paper citations, author citations, and author collaborations in an attempt to identify the most central papers and authors. The analysis includes general network statistics, PageRank, metrics across publication years and venues, the impact factor and h-index, as well as other measures.
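    Example
    Of the measures named, the h-index has a crisp definition: the largest h such that h items have at least h citations each. A minimal sketch:

      def h_index(citation_counts):
          counts = sorted(citation_counts, reverse=True)
          h = 0
          while h < len(counts) and counts[h] >= h + 1:
              h += 1
          return h

      print(h_index([10, 8, 5, 4, 3]))  # 4: four papers cited >= 4 times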
  16. Bager, J.: Die Text-KI ChatGPT schreibt Fachtexte, Prosa, Gedichte und Programmcode (2023) 0.03
    0.03070726 = product of:
      0.06141452 = sum of:
        0.06141452 = product of:
          0.09212178 = sum of:
            0.03084412 = weight(_text_:h in 835) [ClassicSimilarity], result of:
              0.03084412 = score(doc=835,freq=2.0), product of:
                0.14045826 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05653497 = queryNorm
                0.21959636 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0625 = fieldNorm(doc=835)
            0.06127766 = weight(_text_:22 in 835) [ClassicSimilarity], result of:
              0.06127766 = score(doc=835,freq=2.0), product of:
                0.19797583 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05653497 = queryNorm
                0.30952093 = fieldWeight in 835, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=835)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    29.12.2022 18:22:55
    Source
    c't. 2023, H.1, S.46- [https://www.heise.de/select/ct/2023/1/2233908274346530870]
  17. Kunze, C.: Lexikalisch-semantische Wortnetze in Sprachwissenschaft und Sprachtechnologie (2006) 0.03
    0.030100264 = product of:
      0.060200527 = sum of:
        0.060200527 = product of:
          0.09030079 = sum of:
            0.05945667 = weight(_text_:c in 6023) [ClassicSimilarity], result of:
              0.05945667 = score(doc=6023,freq=2.0), product of:
                0.19501202 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05653497 = queryNorm
                0.3048872 = fieldWeight in 6023, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6023)
            0.03084412 = weight(_text_:h in 6023) [ClassicSimilarity], result of:
              0.03084412 = score(doc=6023,freq=2.0), product of:
                0.14045826 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05653497 = queryNorm
                0.21959636 = fieldWeight in 6023, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0625 = fieldNorm(doc=6023)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Information - Wissenschaft und Praxis. 57(2006) H.6/7, S.309-314
  18. Nhongkai, S.N.; Bentz, H.-J.: Bilinguale Suche mittels Konzeptnetzen (2006) 0.03
    0.030100264 = product of:
      0.060200527 = sum of:
        0.060200527 = product of:
          0.09030079 = sum of:
            0.05945667 = weight(_text_:c in 3914) [ClassicSimilarity], result of:
              0.05945667 = score(doc=3914,freq=2.0), product of:
                0.19501202 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05653497 = queryNorm
                0.3048872 = fieldWeight in 3914, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3914)
            0.03084412 = weight(_text_:h in 3914) [ClassicSimilarity], result of:
              0.03084412 = score(doc=3914,freq=2.0), product of:
                0.14045826 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05653497 = queryNorm
                0.21959636 = fieldWeight in 3914, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3914)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
  19. Schwarz, C.: Probleme der syntaktischen Indexierung (1986) 0.03
    0.028028145 = product of:
      0.05605629 = sum of:
        0.05605629 = product of:
          0.16816887 = sum of:
            0.16816887 = weight(_text_:c in 8180) [ClassicSimilarity], result of:
              0.16816887 = score(doc=8180,freq=4.0), product of:
                0.19501202 = queryWeight, product of:
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.05653497 = queryNorm
                0.8623513 = fieldWeight in 8180, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.4494052 = idf(docFreq=3817, maxDocs=44218)
                  0.125 = fieldNorm(doc=8180)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Source
    Informationslinguistische Texterschließung. Hrsg.: C. Schwarz u. G. Thurmair
  20. Bian, G.-W.; Chen, H.-H.: Cross-language information access to multilingual collections on the Internet (2000) 0.03
    0.02622446 = product of:
      0.05244892 = sum of:
        0.05244892 = product of:
          0.07867338 = sum of:
            0.03271513 = weight(_text_:h in 4436) [ClassicSimilarity], result of:
              0.03271513 = score(doc=4436,freq=4.0), product of:
                0.14045826 = queryWeight, product of:
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.05653497 = queryNorm
                0.2329171 = fieldWeight in 4436, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  2.4844491 = idf(docFreq=10020, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
            0.045958247 = weight(_text_:22 in 4436) [ClassicSimilarity], result of:
              0.045958247 = score(doc=4436,freq=2.0), product of:
                0.19797583 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05653497 = queryNorm
                0.23214069 = fieldWeight in 4436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4436)
          0.6666667 = coord(2/3)
      0.5 = coord(1/2)
    
    Date
    16. 2.2000 14:22:39

Languages

  • e 156
  • d 122
  • m 3
  • chi 1

Types

  • a 233
  • m 29
  • el 20
  • s 19
  • x 5
  • d 2
  • p 2
