Search (66 results, page 1 of 4)

  • × year_i:[2010 TO 2020}
  • × theme_ss:"Computerlinguistik"
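The two filter chips above use standard Solr/Lucene range syntax; note the mixed brackets in `year_i:[2010 TO 2020}`, where `[` makes the lower bound inclusive and `}` makes the upper bound exclusive, so the filter covers publication years 2010 through 2019. A minimal sketch of an equivalent request, assuming a local Solr instance and a core named `biblio` (host, port, and core name are assumptions; the field names and filter values are taken from the chips above):

```python
import requests  # endpoint, host, and core name below are assumptions

params = {
    "q": "*:*",
    # '[' is inclusive, '}' exclusive: years 2010-2019
    "fq": ['year_i:[2010 TO 2020}', 'theme_ss:"Computerlinguistik"'],
    "rows": 20,   # one page of the 66 reported hits
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/biblio/select", params=params)
print(resp.json()["response"]["numFound"])
```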
  1. Huo, W.: Automatic multi-word term extraction and its application to Web-page summarization (2012) 0.20
    0.20397303 = product of:
      0.27196404 = sum of:
        0.030387878 = weight(_text_:science in 563) [ClassicSimilarity], result of:
          0.030387878 = score(doc=563,freq=4.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.24694869 = fieldWeight in 563, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.22258835 = weight(_text_:2f in 563) [ClassicSimilarity], result of:
          0.22258835 = score(doc=563,freq=2.0), product of:
            0.39605197 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0467152 = queryNorm
            0.56201804 = fieldWeight in 563, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.046875 = fieldNorm(doc=563)
        0.018987793 = product of:
          0.037975587 = sum of:
            0.037975587 = weight(_text_:22 in 563) [ClassicSimilarity], result of:
              0.037975587 = score(doc=563,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.23214069 = fieldWeight in 563, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=563)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
A thesis presented to the University of Guelph in partial fulfilment of the requirements for the degree of Master of Science in Computer Science. Cf. http://www.inf.ufrgs.br/~ceramisch/download_files/publications/2009/p01.pdf.
    Date
    10. 1.2013 19:22:47
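The score breakdown above is Lucene ClassicSimilarity explain output: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = √tf × idf × fieldNorm, and the sum is scaled by a coordination factor for the fraction of query clauses matched. A minimal sketch recomputing entry 1's score from the constants printed in the tree (the high-idf `2f` term matches the percent-encoded slashes, `%2F`, that the thesis URL carries in the index, which is why this entry outscores the rest by an order of magnitude):

```python
import math

def term_weight(tf, idf, field_norm, query_norm):
    """ClassicSimilarity term weight: queryWeight * fieldWeight."""
    query_weight = idf * query_norm                  # e.g. 2.6341193 * 0.0467152 ~= 0.12305341
    field_weight = math.sqrt(tf) * idf * field_norm  # e.g. 2.0 * 2.6341193 * 0.046875 ~= 0.24694869
    return query_weight * field_weight

QUERY_NORM = 0.0467152  # all constants are copied from the explain tree above

w_science = term_weight(4.0, 2.6341193, 0.046875, QUERY_NORM)        # ~0.030387878
w_2f      = term_weight(2.0, 8.478011,  0.046875, QUERY_NORM)        # ~0.22258835
w_22      = term_weight(2.0, 3.5018296, 0.046875, QUERY_NORM) * 0.5  # inner coord(1/2) ~0.018987793

print((w_science + w_2f + w_22) * 0.75)  # outer coord(3/4) -> ~0.20397303
```

The same composition applies to every score tree on this page; only the term statistics differ.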
2. Lawrie, D.; Mayfield, J.; McNamee, P.; Oard, D.W.: Cross-language person-entity linking from 20 languages (2015) 0.02
    0.020237632 = product of:
      0.040475264 = sum of:
        0.021487473 = weight(_text_:science in 1848) [ClassicSimilarity], result of:
          0.021487473 = score(doc=1848,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.17461908 = fieldWeight in 1848, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=1848)
        0.018987793 = product of:
          0.037975587 = sum of:
            0.037975587 = weight(_text_:22 in 1848) [ClassicSimilarity], result of:
              0.037975587 = score(doc=1848,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.23214069 = fieldWeight in 1848, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1848)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The goal of entity linking is to associate references to an entity that is found in unstructured natural language content to an authoritative inventory of known entities. This article describes the construction of 6 test collections for cross-language person-entity linking that together span 22 languages. Fully automated components were used together with 2 crowdsourced validation stages to affordably generate ground-truth annotations with an accuracy comparable to that of a completely manual process. The resulting test collections each contain between 642 (Arabic) and 2,361 (Romanian) person references in non-English texts for which the correct resolution in English Wikipedia is known, plus a similar number of references for which no correct resolution into English Wikipedia is believed to exist. Fully automated cross-language person-name linking experiments with 20 non-English languages yielded a resolution accuracy of between 0.84 (Serbian) and 0.98 (Romanian), which compares favorably with previously reported cross-language entity linking results for Spanish.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.6, S.1106-1123
  3. Engerer, V.: Exploring interdisciplinary relationships between linguistics and information retrieval from the 1960s to today (2017) 0.02
    0.01698734 = product of:
      0.06794936 = sum of:
        0.06794936 = weight(_text_:science in 3434) [ClassicSimilarity], result of:
          0.06794936 = score(doc=3434,freq=20.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.55219406 = fieldWeight in 3434, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=3434)
      0.25 = coord(1/4)
    
    Abstract
This article explores how linguistics has influenced information retrieval (IR) and attempts to explain the impact of linguistics through an analysis of internal developments in information science generally, and IR in particular. It notes that information science/IR has been evolving from a case science into a fully fledged, "disciplined"/disciplinary science. The article establishes correspondences between linguistics and information science/IR using the three established IR paradigms (physical, cognitive, and computational) as a frame of reference. The current relationship between information science/IR and linguistics is elucidated through discussion of some recent information science publications dealing with linguistic topics and a novel technique, "keyword collocation analysis," is introduced. Insights from interdisciplinarity research and case theory are also discussed. It is demonstrated that the three stages of interdisciplinarity, namely multidisciplinarity, interdisciplinarity (in the narrow sense), and transdisciplinarity, can be linked to different phases of the information science/IR-linguistics relationship and connected to different ways of using linguistic theory in information science and IR.
    Source
    Journal of the Association for Information Science and Technology. 68(2017) no.3, S.660-680
  4. Kocijan, K.: Visualizing natural language resources (2015) 0.01
    0.012661615 = product of:
      0.05064646 = sum of:
        0.05064646 = weight(_text_:science in 2995) [ClassicSimilarity], result of:
          0.05064646 = score(doc=2995,freq=4.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.41158113 = fieldWeight in 2995, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.078125 = fieldNorm(doc=2995)
      0.25 = coord(1/4)
    
    Source
Re:inventing information science in the networked society: Proceedings of the 14th International Symposium on Information Science, Zadar/Croatia, 19th-21st May 2015. Eds.: F. Pehar, C. Schloegl and C. Wolff
5. Perovsek, M.; Kranjc, J.; Erjavec, T.; Cestnik, B.; Lavrac, N.: TextFlows : a visual programming platform for text mining and natural language processing (2016) 0.01
    0.00930435 = product of:
      0.0372174 = sum of:
        0.0372174 = weight(_text_:science in 2697) [ClassicSimilarity], result of:
          0.0372174 = score(doc=2697,freq=6.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.30244917 = fieldWeight in 2697, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=2697)
      0.25 = coord(1/4)
    
    Abstract
    Text mining and natural language processing are fast growing areas of research, with numerous applications in business, science and creative industries. This paper presents TextFlows, a web-based text mining and natural language processing platform supporting workflow construction, sharing and execution. The platform enables visual construction of text mining workflows through a web browser, and the execution of the constructed workflows on a processing cloud. This makes TextFlows an adaptable infrastructure for the construction and sharing of text processing workflows, which can be reused in various applications. The paper presents the implemented text mining and language processing modules, and describes some precomposed workflows. Their features are demonstrated on three use cases: comparison of document classifiers and of different part-of-speech taggers on a text categorization problem, and outlier detection in document corpora.
    Content
Cf. http://www.sciencedirect.com/science/article/pii/S0167642316000113. See also: http://textflows.org.
    Source
Science of Computer Programming. In press, 2016
6. Clark, M.; Kim, Y.; Kruschwitz, U.; Song, D.; Albakour, D.; Dignum, S.; Beresi, U.C.; Fasli, M.; De Roeck, A.: Automatically structuring domain knowledge from text : an overview of current research (2012) 0.01
    0.0083772335 = product of:
      0.033508934 = sum of:
        0.033508934 = product of:
          0.06701787 = sum of:
            0.06701787 = weight(_text_:history in 2738) [ClassicSimilarity], result of:
              0.06701787 = score(doc=2738,freq=2.0), product of:
                0.21731828 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0467152 = queryNorm
                0.3083858 = fieldWeight in 2738, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2738)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This paper presents an overview of automatic methods for building domain knowledge structures (domain models) from text collections. Applications of domain models have a long history within knowledge engineering and artificial intelligence. In the last couple of decades they have surfaced noticeably as a useful tool within natural language processing, information retrieval and semantic web technology. Inspired by the ubiquitous propagation of domain model structures that are emerging in several research disciplines, we give an overview of the current research landscape and some techniques and approaches. We will also discuss trade-offs between different approaches and point to some recent trends.
  7. Bowker, L.; Ciro, J.B.: Machine translation and global research : towards improved machine translation literacy in the scholarly community (2019) 0.01
    0.007898131 = product of:
      0.031592526 = sum of:
        0.031592526 = product of:
          0.06318505 = sum of:
            0.06318505 = weight(_text_:history in 5970) [ClassicSimilarity], result of:
              0.06318505 = score(doc=5970,freq=4.0), product of:
                0.21731828 = queryWeight, product of:
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.0467152 = queryNorm
                0.2907489 = fieldWeight in 5970, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.6519823 = idf(docFreq=1146, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5970)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    LCSH
    Literature / Translations / History and criticism
    Subject
    Literature / Translations / History and criticism
  8. Smalheiser, N.R.: Literature-based discovery : Beyond the ABCs (2012) 0.01
    0.0075969696 = product of:
      0.030387878 = sum of:
        0.030387878 = weight(_text_:science in 4967) [ClassicSimilarity], result of:
          0.030387878 = score(doc=4967,freq=4.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.24694869 = fieldWeight in 4967, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=4967)
      0.25 = coord(1/4)
    
    Series
    Advances in information science
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.2, S.218-224
  9. Moohebat, M.; Raj, R.G.; Kareem, S.B.A.; Thorleuchter, D.: Identifying ISI-indexed articles by their lexical usage : a text analysis approach (2015) 0.01
    0.0075969696 = product of:
      0.030387878 = sum of:
        0.030387878 = weight(_text_:science in 1664) [ClassicSimilarity], result of:
          0.030387878 = score(doc=1664,freq=4.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.24694869 = fieldWeight in 1664, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=1664)
      0.25 = coord(1/4)
    
    Abstract
    This research creates an architecture for investigating the existence of probable lexical divergences between articles, categorized as Institute for Scientific Information (ISI) and non-ISI, and consequently, if such a difference is discovered, to propose the best available classification method. Based on a collection of ISI- and non-ISI-indexed articles in the areas of business and computer science, three classification models are trained. A sensitivity analysis is applied to demonstrate the impact of words in different syntactical forms on the classification decision. The results demonstrate that the lexical domains of ISI and non-ISI articles are distinguishable by machine learning techniques. Our findings indicate that the support vector machine identifies ISI-indexed articles in both disciplines with higher precision than do the Naïve Bayesian and K-Nearest Neighbors techniques.
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.3, S.501-511
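The abstract above compares a support vector machine, naïve Bayes, and k-nearest-neighbors on lexical features of article text. A minimal scikit-learn sketch of that comparison setup, with placeholder texts and labels (the corpus, feature choices, and parameters here are illustrative assumptions, not the authors' pipeline):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder corpus: stand-in texts and labels, not the study's data.
texts = ["method results experiment analysis evaluation"] * 6 \
      + ["opinion note commentary reply miscellany"] * 6
labels = [1] * 6 + [0] * 6   # 1 = ISI-indexed, 0 = non-ISI

for clf in (LinearSVC(), MultinomialNB(), KNeighborsClassifier(n_neighbors=3)):
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipe, texts, labels, cv=2, scoring="precision")
    print(type(clf).__name__, scores.mean())
```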
  10. Lu, K.; Cai, X.; Ajiferuke, I.; Wolfram, D.: Vocabulary size and its effect on topic representation (2017) 0.01
    0.0075969696 = product of:
      0.030387878 = sum of:
        0.030387878 = weight(_text_:science in 3414) [ClassicSimilarity], result of:
          0.030387878 = score(doc=3414,freq=4.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.24694869 = fieldWeight in 3414, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=3414)
      0.25 = coord(1/4)
    
    Abstract
    This study investigates how computational overhead for topic model training may be reduced by selectively removing terms from the vocabulary of text corpora being modeled. We compare the impact of removing singly occurring terms, the top 0.5%, 1% and 5% most frequently occurring terms and both top 0.5% most frequent and singly occurring terms, along with changes in the number of topics modeled (10, 20, 30, 40, 50, 100) using three datasets. Four outcome measures are compared. The removal of singly occurring terms has little impact on outcomes for all of the measures tested. Document discriminative capacity, as measured by the document space density, is reduced by the removal of frequently occurring terms, but increases with higher numbers of topics. Vocabulary size does not greatly influence entropy, but entropy is affected by the number of topics. Finally, topic similarity, as measured by pairwise topic similarity and Jensen-Shannon divergence, decreases with the removal of frequent terms. The findings have implications for information science research in information retrieval and informetrics that makes use of topic modeling.
    Content
Cf. http://www.sciencedirect.com/science/article/pii/S0306457317300298.
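A minimal pure-Python sketch of the vocabulary-pruning step the abstract above describes, dropping singly occurring terms and a top fraction of the most frequent terms (the tokenized corpus and the cutoff below are placeholders; the study's exact tooling is not specified in this record):

```python
from collections import Counter

def prune_vocabulary(docs, top_fraction=0.005, drop_singletons=True):
    """Drop singly occurring terms and the top `top_fraction` most
    frequent terms from a tokenized corpus."""
    freq = Counter(term for doc in docs for term in doc)
    ranked = [term for term, _ in freq.most_common()]
    drop = set(ranked[: int(len(ranked) * top_fraction)])
    if drop_singletons:
        drop |= {term for term, count in freq.items() if count == 1}
    return [[term for term in doc if term not in drop] for doc in docs]

# Toy tokenized corpus (placeholder data, not the study's):
docs = [["topic", "model", "vocabulary"],
        ["topic", "entropy"],
        ["model", "density"]]
print(prune_vocabulary(docs, top_fraction=0.2))
```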
  11. Computerlinguistik und Sprachtechnologie : Eine Einführung (2010) 0.01
    0.0071624913 = product of:
      0.028649965 = sum of:
        0.028649965 = weight(_text_:science in 1735) [ClassicSimilarity], result of:
          0.028649965 = score(doc=1735,freq=8.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.23282544 = fieldWeight in 1735, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=1735)
      0.25 = coord(1/4)
    
    LCSH
    Computer science
    Computer science
    Subject
    Computer science
    Computer science
  12. Soo, J.; Frieder, O.: On searching misspelled collections (2015) 0.01
    0.0071624913 = product of:
      0.028649965 = sum of:
        0.028649965 = weight(_text_:science in 1862) [ClassicSimilarity], result of:
          0.028649965 = score(doc=1862,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.23282544 = fieldWeight in 1862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0625 = fieldNorm(doc=1862)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 66(2015) no.6, S.1294-1298
  13. Lezius, W.: Morphy - Morphologie und Tagging für das Deutsche (2013) 0.01
    0.006329265 = product of:
      0.02531706 = sum of:
        0.02531706 = product of:
          0.05063412 = sum of:
            0.05063412 = weight(_text_:22 in 1490) [ClassicSimilarity], result of:
              0.05063412 = score(doc=1490,freq=2.0), product of:
                0.16358867 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0467152 = queryNorm
                0.30952093 = fieldWeight in 1490, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1490)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    22. 3.2015 9:30:24
  14. Wu, H.; He, J.; Pei, Y.: Scientific impact at the topic level : a case study in computational linguistics (2010) 0.01
    0.0062671797 = product of:
      0.025068719 = sum of:
        0.025068719 = weight(_text_:science in 4103) [ClassicSimilarity], result of:
          0.025068719 = score(doc=4103,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.20372227 = fieldWeight in 4103, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4103)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.11, S.2274-2287
  15. Stoykova, V.; Petkova, E.: Automatic extraction of mathematical terms for precalculus (2012) 0.01
    0.0062671797 = product of:
      0.025068719 = sum of:
        0.025068719 = weight(_text_:science in 156) [ClassicSimilarity], result of:
          0.025068719 = score(doc=156,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.20372227 = fieldWeight in 156, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=156)
      0.25 = coord(1/4)
    
    Content
Contribution to: First World Conference on Innovation and Computer Sciences (INSODE 2011). Cf. http://www.sciencedirect.com/science/article/pii/S221201731200103X.
  16. Shen, M.; Liu, D.-R.; Huang, Y.-S.: Extracting semantic relations to enrich domain ontologies (2012) 0.01
    0.0062671797 = product of:
      0.025068719 = sum of:
        0.025068719 = weight(_text_:science in 267) [ClassicSimilarity], result of:
          0.025068719 = score(doc=267,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.20372227 = fieldWeight in 267, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=267)
      0.25 = coord(1/4)
    
    Abstract
Domain ontologies facilitate the organization, sharing and reuse of domain knowledge, and enable various vertical domain applications to operate successfully. Most methods for automatically constructing ontologies focus on taxonomic relations, such as is-kind-of and is-part-of relations. However, much of the domain-specific semantics is ignored. This work proposes a semi-unsupervised approach for extracting semantic relations from domain-specific text documents. The approach effectively utilizes text mining and existing taxonomic relations in domain ontologies to discover candidate keywords that can represent semantic relations. A preliminary experiment on the natural science domain (Taiwan K9 education) indicates that the proposed method yields valuable recommendations. This work enriches domain ontologies by adding distilled semantics.
17. Radev, D.R.; Joseph, M.T.; Gibson, B.; Muthukrishnan, P.: A bibliometric and network analysis of the field of computational linguistics (2016) 0.01
    0.0062671797 = product of:
      0.025068719 = sum of:
        0.025068719 = weight(_text_:science in 2764) [ClassicSimilarity], result of:
          0.025068719 = score(doc=2764,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.20372227 = fieldWeight in 2764, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2764)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.3, S.683-706
  18. Babik, W.: Keywords as linguistic tools in information and knowledge organization (2017) 0.01
    0.0062671797 = product of:
      0.025068719 = sum of:
        0.025068719 = weight(_text_:science in 3510) [ClassicSimilarity], result of:
          0.025068719 = score(doc=3510,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.20372227 = fieldWeight in 3510, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3510)
      0.25 = coord(1/4)
    
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  19. Budin, G.: Zum Entwicklungsstand der Terminologiewissenschaft (2019) 0.01
    0.0062671797 = product of:
      0.025068719 = sum of:
        0.025068719 = weight(_text_:science in 5604) [ClassicSimilarity], result of:
          0.025068719 = score(doc=5604,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.20372227 = fieldWeight in 5604, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5604)
      0.25 = coord(1/4)
    
    Series
    Kommunikation und Medienmanagement - Springer eBooks. Computer Science and Engineering
20. Al-Shawakfa, E.; Al-Badarneh, A.; Shatnawi, S.; Al-Rabab'ah, K.; Bani-Ismail, B.: A comparison study of some Arabic root finding algorithms (2010) 0.01
    0.005371868 = product of:
      0.021487473 = sum of:
        0.021487473 = weight(_text_:science in 3457) [ClassicSimilarity], result of:
          0.021487473 = score(doc=3457,freq=2.0), product of:
            0.12305341 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0467152 = queryNorm
            0.17461908 = fieldWeight in 3457, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.046875 = fieldNorm(doc=3457)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.5, S.1015-1024

Languages

  • e 58
  • d 8

Types

  • a 56
  • el 6
  • m 4
  • x 4
  • s 1