Search (6975 results, page 1 of 349)

  • year_i:[2000 TO 2010}
  1. Burrell, Q.L.: "Type/Token-Taken" informetrics : Some comments and further examples (2003) 0.18
    0.18477368 = product of:
      0.27716053 = sum of:
        0.022990782 = weight(_text_:of in 2116) [ClassicSimilarity], result of:
          0.022990782 = score(doc=2116,freq=6.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2992506 = fieldWeight in 2116, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.078125 = fieldNorm(doc=2116)
        0.25416973 = product of:
          0.50833946 = sum of:
            0.50833946 = weight(_text_:informetrics in 2116) [ClassicSimilarity], result of:
              0.50833946 = score(doc=2116,freq=6.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                1.4071327 = fieldWeight in 2116, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.078125 = fieldNorm(doc=2116)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
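The indented tree above is standard Lucene "explain" output for ClassicSimilarity (tf-idf) scoring. The top-level score can be reproduced from the printed factors; a minimal sketch, with the idf, queryNorm and fieldNorm values copied verbatim from the tree:

```python
import math

# Factors copied verbatim from the explain tree for doc 2116.
QUERY_NORM = 0.049130294
FIELD_NORM = 0.078125

def term_score(freq, idf):
    """ClassicSimilarity per-term score: queryWeight * fieldWeight,
    with queryWeight = idf * queryNorm,
    fieldWeight = tf * idf * fieldNorm, and tf = sqrt(freq)."""
    query_weight = idf * QUERY_NORM
    field_weight = math.sqrt(freq) * idf * FIELD_NORM
    return query_weight * field_weight

s_of = term_score(6.0, idf=1.5637573)    # weight(_text_:of ...)
s_info = term_score(6.0, idf=7.3530817)  # weight(_text_:informetrics ...)

# coord(1/2) halves the nested sum; coord(2/3) scales the outer sum.
score = (s_of + 0.5 * s_info) * (2.0 / 3.0)
print(f"{score:.8f}")  # close to the printed 0.18477368
```

The same recipe reproduces every other hit's score; only the freq, idf, and fieldNorm values change.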
    
    Abstract
    Egghe has propounded the notion of Type/Token-Taken (T/TT) informetrics. In this note we show how his ideas relate to ones that are already well known in informetrics and resolve some of the specific problems posed.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.13, S.1260-1263
  2. Egghe, L.: Expansion of the field of informetrics : the second special issue (2006) 0.18
    0.1810405 = product of:
      0.27156073 = sum of:
        0.022526272 = weight(_text_:of in 7119) [ClassicSimilarity], result of:
          0.022526272 = score(doc=7119,freq=4.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2932045 = fieldWeight in 7119, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=7119)
        0.24903446 = product of:
          0.49806893 = sum of:
            0.49806893 = weight(_text_:informetrics in 7119) [ClassicSimilarity], result of:
              0.49806893 = score(doc=7119,freq=4.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                1.3787029 = fieldWeight in 7119, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.09375 = fieldNorm(doc=7119)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Footnote
    Introduction to a "Special Issue on Informetrics"
  3. Egghe, L.: Type/Token-Taken informetrics (2003) 0.18
    0.17690733 = product of:
      0.26536098 = sum of:
        0.022011995 = weight(_text_:of in 1608) [ClassicSimilarity], result of:
          0.022011995 = score(doc=1608,freq=22.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.28651062 = fieldWeight in 1608, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1608)
        0.24334899 = product of:
          0.48669797 = sum of:
            0.48669797 = weight(_text_:informetrics in 1608) [ClassicSimilarity], result of:
              0.48669797 = score(doc=1608,freq=22.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                1.347227 = fieldWeight in 1608, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1608)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Type/Token-Taken informetrics is a new part of informetrics that studies the use of items rather than the items themselves. Here, items are the objects that are produced by the sources (e.g., journals producing articles, authors producing papers, etc.). In linguistics a source is also called a type (e.g., a word), and an item a token (e.g., the use of words in texts). In informetrics, types that occur often, for example, in a database will also be requested often, for example, in information retrieval. The relative use of these occurrences will be higher than their relative occurrences themselves; hence the name Type/Token-Taken informetrics. This article studies the frequency distribution of Type/Token-Taken informetrics, starting from that of Type/Token informetrics (i.e., source-item relationships). We also study the average number μ* of item uses in Type/Token-Taken informetrics and compare it with the classical average number μ in Type/Token informetrics. We show that μ* >= μ always, and that μ* is an increasing function of μ. A method is presented to actually calculate μ* from μ and a given α, which is the exponent in Lotka's frequency distribution of Type/Token informetrics. We leave open the problem of developing non-Lotkaian Type/Token-Taken informetrics.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.7, S.603-610
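One way to see the abstract's inequality μ* >= μ is through size-biased sampling, which Burrell's comment (item 1 above) connects to Type/Token-Taken informetrics: if N is the number of items per source, μ = E[N], while the use-weighted average is E[N²]/E[N], and E[N²] >= E[N]² forces μ* >= μ. A numerical sketch under an assumed truncated Lotka distribution (the exponent α = 2.5 and the cutoff are illustrative choices, not values from the paper):

```python
# Assumed truncated Lotka-type size distribution: P(N = j) proportional to 1 / j**ALPHA.
ALPHA = 2.5      # illustrative Lotka exponent
J_MAX = 10_000   # truncation point for the demo

js = range(1, J_MAX + 1)
weights = [j ** -ALPHA for j in js]
total = sum(weights)
probs = [w / total for w in weights]

mu = sum(j * p for j, p in zip(js, probs))                # E[N]: Type/Token mean
mu_star = sum(j * j * p for j, p in zip(js, probs)) / mu  # E[N^2]/E[N]: size-biased mean

print(mu, mu_star)  # mu_star is markedly larger than mu
```

Raising α concentrates the distribution and pulls μ* back toward μ, consistent with μ* being an increasing function of μ.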
  4. Mingers, J.; Burrell, Q.L.: Modeling citation behavior in Management Science journals (2006) 0.16
    0.15806948 = product of:
      0.2371042 = sum of:
        0.021071399 = weight(_text_:of in 994) [ClassicSimilarity], result of:
          0.021071399 = score(doc=994,freq=14.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2742677 = fieldWeight in 994, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=994)
        0.2160328 = sum of:
          0.17609395 = weight(_text_:informetrics in 994) [ClassicSimilarity], result of:
            0.17609395 = score(doc=994,freq=2.0), product of:
              0.36125907 = queryWeight, product of:
                7.3530817 = idf(docFreq=76, maxDocs=44218)
                0.049130294 = queryNorm
              0.48744506 = fieldWeight in 994, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.3530817 = idf(docFreq=76, maxDocs=44218)
                0.046875 = fieldNorm(doc=994)
          0.039938856 = weight(_text_:22 in 994) [ClassicSimilarity], result of:
            0.039938856 = score(doc=994,freq=2.0), product of:
              0.17204592 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049130294 = queryNorm
              0.23214069 = fieldWeight in 994, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=994)
      0.6666667 = coord(2/3)
    
    Abstract
    Citation rates are becoming increasingly important in judging the research quality of journals, institutions and departments, and individual faculty. This paper looks at the pattern of citations across different management science journals and over time. A stochastic model is proposed which views the generating mechanism of citations as a gamma mixture of Poisson processes, generating overall a negative binomial distribution. This is tested empirically with a large sample of papers published in 1990 from six management science journals and found to fit well. The model is extended to include obsolescence, i.e., that the citation rate for a paper varies over its cited lifetime. This leads to the additional citations distribution, which shows that future citations are a linear function of past citations with a time-dependent and decreasing slope. This is also verified empirically in a way that allows different obsolescence functions to be fitted to the data. Conclusions concerning the predictability of future citations and future research in this area are discussed.
    Date
    26.12.2007 19:22:05
    Footnote
    Contribution in a "Special Issue on Informetrics"
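The gamma-mixture-of-Poissons claim in the abstract can be checked numerically: integrating a Poisson pmf over a gamma mixing density reproduces the closed-form negative binomial pmf. A sketch (shape k = 2 and scale θ = 3 are arbitrary demo values, not the paper's fitted parameters):

```python
import math

K, THETA = 2.0, 3.0   # gamma shape and scale (demo values only)

def poisson_pmf(n, lam):
    return math.exp(-lam) * lam ** n / math.factorial(n)

def gamma_pdf(lam):
    return lam ** (K - 1) * math.exp(-lam / THETA) / (math.gamma(K) * THETA ** K)

def mixed_pmf(n, hi=80.0, steps=8000):
    """P(n) = integral of Poisson(n | lam) against the gamma mixing density,
    via the composite trapezoidal rule on [0, hi]."""
    h = hi / steps
    vals = [poisson_pmf(n, i * h) * gamma_pdf(i * h) for i in range(steps + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def neg_binomial_pmf(n):
    """Closed-form marginal: negative binomial with r = K, p = THETA/(1+THETA)."""
    p = THETA / (1.0 + THETA)
    return (math.gamma(n + K) / (math.factorial(n) * math.gamma(K))
            * p ** n * (1.0 - p) ** K)

max_err = max(abs(mixed_pmf(n) - neg_binomial_pmf(n)) for n in range(8))
print(max_err)  # agreement to within numerical integration error
```

The resulting negative binomial has mean kθ and variance kθ(1 + θ), i.e. it is overdispersed relative to a single Poisson, which is exactly why it suits heterogeneous citation rates.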
  5. Bar-Ilan, J.: Informetrics (2009) 0.14
    0.14425823 = product of:
      0.21638735 = sum of:
        0.019508323 = weight(_text_:of in 3822) [ClassicSimilarity], result of:
          0.019508323 = score(doc=3822,freq=12.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.25392252 = fieldWeight in 3822, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3822)
        0.19687903 = product of:
          0.39375806 = sum of:
            0.39375806 = weight(_text_:informetrics in 3822) [ClassicSimilarity], result of:
              0.39375806 = score(doc=3822,freq=10.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                1.0899603 = fieldWeight in 3822, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3822)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Informetrics is a subfield of information science that encompasses bibliometrics, scientometrics, cybermetrics, and webometrics. This encyclopedia entry provides an overview of informetrics and its subfields. In general, informetrics deals with quantitative aspects of information: its production, dissemination, evaluation, and use. Bibliometrics and scientometrics study scientific literature: papers, journals, patents, and citations; in webometric studies the sources studied are Web pages and Web sites, and citations are replaced by hypertext links. The entry introduces major topics in informetrics: citation analysis and citation-related studies, the journal impact factor, the recently defined h-index, citation databases, co-citation analysis, open access publications and their implications, informetric laws, techniques for mapping and visualization of informetric phenomena, the emerging subfields of webometrics, cybermetrics and link analysis, and research evaluation.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
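Among the topics the entry lists, the h-index has a compact operational definition: a set of papers has index h if h of them have at least h citations each. A minimal sketch (the citation counts are invented):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i       # the i-th ranked paper still has >= i citations
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3]))  # 3: three papers with >= 3 citations each
```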
  6. Stock, W.G.; Weber, S.: Facets of informetrics : Preface (2006) 0.14
    0.1403487 = product of:
      0.21052304 = sum of:
        0.024903733 = weight(_text_:of in 76) [ClassicSimilarity], result of:
          0.024903733 = score(doc=76,freq=44.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.3241498 = fieldWeight in 76, product of:
              6.6332498 = tf(freq=44.0), with freq of:
                44.0 = termFreq=44.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.03125 = fieldNorm(doc=76)
        0.18561931 = product of:
          0.37123862 = sum of:
            0.37123862 = weight(_text_:informetrics in 76) [ClassicSimilarity], result of:
              0.37123862 = score(doc=76,freq=20.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                1.0276244 = fieldWeight in 76, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.03125 = fieldNorm(doc=76)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    According to Jean M. Tague-Sutcliffe "informetrics" is "the study of the quantitative aspects of information in any form, not just records or bibliographies, and in any social group, not just scientists" (Tague-Sutcliffe, 1992, 1). Leo Egghe also defines "informetrics" in a very broad sense. "(W)e will use the term 'informetrics' as the broad term comprising all-metrics studies related to information science, including bibliometrics (bibliographies, libraries,...), scientometrics (science policy, citation analysis, research evaluation,...), webometrics (metrics of the web, the Internet or other social networks such as citation or collaboration networks), ..." (Egghe, 2005b, 1311). According to Concepcion S. Wilson "informetrics" is "the quantitative study of collections of moderate-sized units of potentially informative text, directed to the scientific understanding of information processes at the social level" (Wilson, 1999, 211). We should add to Wilson's units of text also digital collections of images, videos, spoken documents and music. Dietmar Wolfram divides "informetrics" into two aspects, "system-based characteristics that arise from the documentary content of IR systems and how they are indexed, and usage-based characteristics that arise from how users interact with system content and the system interfaces that provide access to the content" (Wolfram, 2003, 6). We would like to follow Tague-Sutcliffe, Egghe, Wilson and Wolfram (and others, for example Björneborn & Ingwersen, 2004) and call this broad research of empirical information science "informetrics". Informetrics therefore includes all quantitative studies in information science. If a scientist performs scientific investigations empirically, e.g.
    on information users' behavior, on scientific impact of academic journals, on the development of the patent application activity of a company, on links of Web pages, on the temporal distribution of blog postings discussing a given topic, on availability, recall and precision of retrieval systems, on usability of Web sites, and so on, he or she contributes to informetrics. We see three subject areas in information science in which such quantitative research takes place: information users and information usage; evaluation of information systems; and information itself. Following Wolfram's article, we divide his system-based characteristics into the "information itself" category and the "information system" category. Figure 1 is a simplistic graph of subjects and research areas of informetrics as an empirical information science.
  7. Egghe, L.: Expansion of the field of informetrics : origins and consequences (2005) 0.13
    0.13241349 = product of:
      0.19862023 = sum of:
        0.022526272 = weight(_text_:of in 1910) [ClassicSimilarity], result of:
          0.022526272 = score(doc=1910,freq=4.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2932045 = fieldWeight in 1910, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=1910)
        0.17609395 = product of:
          0.3521879 = sum of:
            0.3521879 = weight(_text_:informetrics in 1910) [ClassicSimilarity], result of:
              0.3521879 = score(doc=1910,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.9748901 = fieldWeight in 1910, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.09375 = fieldNorm(doc=1910)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
  8. Wolfram, D.: Applied informetrics for information retrieval research (2003) 0.13
    0.13241349 = product of:
      0.19862023 = sum of:
        0.022526272 = weight(_text_:of in 4589) [ClassicSimilarity], result of:
          0.022526272 = score(doc=4589,freq=4.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2932045 = fieldWeight in 4589, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.09375 = fieldNorm(doc=4589)
        0.17609395 = product of:
          0.3521879 = sum of:
            0.3521879 = weight(_text_:informetrics in 4589) [ClassicSimilarity], result of:
              0.3521879 = score(doc=4589,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.9748901 = fieldWeight in 4589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4589)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The author demonstrates how informetric analysis of information retrieval system content and use provides valuable insights that have applications for the modelling, design, and evaluation of information retrieval systems.
  9. Niemi, T.; Hirvonen, L.; Järvelin, K.: Multidimensional data model and query language for informetrics (2003) 0.13
    0.13040152 = product of:
      0.19560227 = sum of:
        0.019508323 = weight(_text_:of in 1753) [ClassicSimilarity], result of:
          0.019508323 = score(doc=1753,freq=12.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.25392252 = fieldWeight in 1753, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=1753)
        0.17609395 = product of:
          0.3521879 = sum of:
            0.3521879 = weight(_text_:informetrics in 1753) [ClassicSimilarity], result of:
              0.3521879 = score(doc=1753,freq=8.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.9748901 = fieldWeight in 1753, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1753)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Multidimensional data analysis, or On-line Analytical Processing (OLAP), offers a single subject-oriented source for analyzing summary data based on various dimensions. We demonstrate that the OLAP approach gives a promising starting point for advanced analysis and comparison among summary data in informetrics applications. At the moment there is no single precise, commonly accepted logical/conceptual model for multidimensional analysis, because the requirements of applications vary considerably. We develop a conceptual/logical multidimensional model for supporting the complex and unpredictable needs of informetrics. Summary data are considered with respect to some dimensions, and by changing dimensions the user may construct other views on the same summary data. We develop a multidimensional query language whose basic idea is to support the definition of views in a way that is natural and intuitive for lay users in the informetrics area. We show that this view-oriented query language has great expressive power and that its degree of declarativity is greater than in contemporary operation-oriented or SQL (Structured Query Language)-like OLAP query languages.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.10, S.939-951
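The core OLAP idea in the abstract, one body of summary data viewed along interchangeable dimensions, can be sketched with a plain group-by. The mini fact table and roll_up helper below are invented for illustration; they are not the authors' data model or query language:

```python
from collections import defaultdict

# Tiny invented fact table: (author, journal, year, paper_count)
facts = [
    ("Egghe",   "JASIST", 2003, 2),
    ("Egghe",   "JASIST", 2005, 1),
    ("Burrell", "JASIST", 2003, 1),
    ("Burrell", "JDoc",   2005, 1),
]

def roll_up(rows, *dims):
    """Summarize the same facts along any chosen dimensions."""
    idx = {"author": 0, "journal": 1, "year": 2}
    out = defaultdict(int)
    for row in rows:
        key = tuple(row[idx[d]] for d in dims)
        out[key] += row[3]          # aggregate the paper_count measure
    return dict(out)

by_year = roll_up(facts, "year")                 # {(2003,): 3, (2005,): 2}
by_author_year = roll_up(facts, "author", "year")
```

Changing the `dims` argument yields a different view of the same summary data, which is the behaviour the paper's view-oriented query language makes declarative.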
  10. Näppilä, T.; Järvelin, K.; Niemi, T.: ¬A tool for data cube construction from structurally heterogeneous XML documents (2008) 0.13
    0.12991188 = product of:
      0.19486782 = sum of:
        0.014840485 = weight(_text_:of in 1369) [ClassicSimilarity], result of:
          0.014840485 = score(doc=1369,freq=10.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.19316542 = fieldWeight in 1369, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1369)
        0.18002734 = sum of:
          0.14674495 = weight(_text_:informetrics in 1369) [ClassicSimilarity], result of:
            0.14674495 = score(doc=1369,freq=2.0), product of:
              0.36125907 = queryWeight, product of:
                7.3530817 = idf(docFreq=76, maxDocs=44218)
                0.049130294 = queryNorm
              0.4062042 = fieldWeight in 1369, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                7.3530817 = idf(docFreq=76, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1369)
          0.033282384 = weight(_text_:22 in 1369) [ClassicSimilarity], result of:
            0.033282384 = score(doc=1369,freq=2.0), product of:
              0.17204592 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049130294 = queryNorm
              0.19345059 = fieldWeight in 1369, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1369)
      0.6666667 = coord(2/3)
    
    Abstract
    Data cubes for OLAP (On-Line Analytical Processing) often need to be constructed from data located in several distributed and autonomous information sources. Such a data integration process is challenging due to semantic, syntactic, and structural heterogeneity among the data. While XML (Extensible Markup Language) is the de facto standard for data exchange, the three types of heterogeneity remain. Moreover, popular path-oriented XML query languages, such as XQuery, require the user to know the structure of the documents to be processed in considerable detail and are thus effectively impractical in many real-world data integration tasks. Several Lowest Common Ancestor (LCA)-based XML query evaluation strategies have recently been introduced to provide a more structure-independent way to access XML documents. We shall, however, show that this approach leads to undesirable results in the context of certain, not uncommon, types of XML documents. This article introduces a novel high-level data extraction primitive that utilizes the purpose-built Smallest Possible Context (SPC) query evaluation strategy. We demonstrate, through a system prototype for OLAP data cube construction and a sample application in informetrics, that our approach has real advantages in data integration.
    Date
    9. 2.2008 17:22:42
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.3, S.435-449
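The LCA-style evaluation the article discusses can be caricatured in a few lines: return the smallest subtree whose text contains all query terms, with no path expressions at all. The toy document and smallest_match helper below are illustrative assumptions; the authors' SPC strategy itself is not reproduced here:

```python
import xml.etree.ElementTree as ET

# Toy document; real collections would be structurally heterogeneous.
DOC = """<articles>
  <article><title>Informetrics survey</title><year>2003</year></article>
  <article><title>OLAP cubes</title><year>2008</year></article>
</articles>"""

def smallest_match(xml_text, *terms):
    """Smallest element whose subtree text contains every query term --
    a naive stand-in for LCA-style, structure-independent retrieval."""
    root = ET.fromstring(xml_text)
    candidates = [node for node in root.iter()
                  if all(t in " ".join(node.itertext()) for t in terms)]
    # "Smallest" here means fewest elements in the subtree.
    return min(candidates, key=lambda n: sum(1 for _ in n.iter()), default=None)

hit = smallest_match(DOC, "OLAP", "2008")
print(hit.tag)  # the single matching <article>, not the whole document
```

The article's point is that on some document shapes this "smallest containing subtree" heuristic picks the wrong context, which is what the SPC primitive is designed to repair.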
  11. Rousseau, R.: Robert Fairthorne and the empirical power laws (2005) 0.12
    0.11739126 = product of:
      0.17608689 = sum of:
        0.030816795 = weight(_text_:of in 4398) [ClassicSimilarity], result of:
          0.030816795 = score(doc=4398,freq=22.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.40111488 = fieldWeight in 4398, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4398)
        0.1452701 = product of:
          0.2905402 = sum of:
            0.2905402 = weight(_text_:informetrics in 4398) [ClassicSimilarity], result of:
              0.2905402 = score(doc=4398,freq=4.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.8042433 = fieldWeight in 4398, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4398)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Purpose - Aims to review Fairthorne's classic article "Empirical hyperbolic distributions (Bradford-Zipf-Mandelbrot) for bibliometric description and prediction" (Journal of Documentation, Vol. 25, pp. 319-343, 1969), as part of a series marking the Journal of Documentation's 60th anniversary. Design/methodology/approach - Analysis of article content, qualitative evaluation of its subsequent impact, citation analysis, and diffusion analysis. Findings - The content of this landmark paper, its further developments, and its influence on the field of informetrics are explained. Originality/value - A review is given of the contents of Fairthorne's original article and its influence on the field of informetrics. Its transdisciplinary reception is measured through a diffusion analysis.
    Source
    Journal of documentation. 61(2005) no.2, S.194-202
  12. Hotho, A.; Bloehdorn, S.: Data Mining 2004 : Text classification by boosting weak learners based on terms and concepts (2004) 0.12
    0.11580994 = sum of:
      0.07803193 = product of:
        0.23409578 = sum of:
          0.23409578 = weight(_text_:3a in 562) [ClassicSimilarity], result of:
            0.23409578 = score(doc=562,freq=2.0), product of:
              0.41652718 = queryWeight, product of:
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.049130294 = queryNorm
              0.56201804 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                8.478011 = idf(docFreq=24, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.33333334 = coord(1/3)
      0.017808583 = weight(_text_:of in 562) [ClassicSimilarity], result of:
        0.017808583 = score(doc=562,freq=10.0), product of:
          0.076827854 = queryWeight, product of:
            1.5637573 = idf(docFreq=25162, maxDocs=44218)
            0.049130294 = queryNorm
          0.23179851 = fieldWeight in 562, product of:
            3.1622777 = tf(freq=10.0), with freq of:
              10.0 = termFreq=10.0
            1.5637573 = idf(docFreq=25162, maxDocs=44218)
            0.046875 = fieldNorm(doc=562)
      0.019969428 = product of:
        0.039938856 = sum of:
          0.039938856 = weight(_text_:22 in 562) [ClassicSimilarity], result of:
            0.039938856 = score(doc=562,freq=2.0), product of:
              0.17204592 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049130294 = queryNorm
              0.23214069 = fieldWeight in 562, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=562)
        0.5 = coord(1/2)
    
    Abstract
    Document representations for text classification are typically based on the classical Bag-Of-Words paradigm. This approach comes with deficiencies that motivate the integration of features on a higher semantic level than single words. In this paper we propose an enhancement of the classical document representation through concepts extracted from background knowledge. Boosting is used for actual classification. Experimental evaluations on two well known text corpora support our approach through consistent improvement of the results.
    Content
    Cf.: http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CEAQFjAA&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.91.4940%26rep%3Drep1%26type%3Dpdf&ei=dOXrUMeIDYHDtQahsIGACg&usg=AFQjCNHFWVh6gNPvnOrOS9R3rkrXCNVD-A&sig2=5I2F5evRfMnsttSgFF9g7Q&bvm=bv.1357316858,d.Yms.
    Date
    8. 1.2013 10:22:32
    Source
    Proceedings of the 4th IEEE International Conference on Data Mining (ICDM 2004), 1-4 November 2004, Brighton, UK
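The classical Bag-Of-Words representation that the paper takes as its starting point is simply a term-frequency vector over tokens, with word order discarded; the concept features the authors propose are additional dimensions on top of it. A minimal sketch of the BoW part (deliberately naive whitespace tokenization):

```python
from collections import Counter

def bag_of_words(text):
    """Term-frequency vector over lowercased whitespace tokens."""
    return Counter(text.lower().split())

vec = bag_of_words("Boosting weak learners based on terms and concepts and terms")
print(vec["terms"], vec["and"])  # each occurs twice; word order is gone
```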
  13. Burrell, Q.L.: Extending Lotkaian informetrics (2008) 0.12
    0.115715496 = product of:
      0.17357324 = sum of:
        0.021071399 = weight(_text_:of in 2126) [ClassicSimilarity], result of:
          0.021071399 = score(doc=2126,freq=14.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2742677 = fieldWeight in 2126, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=2126)
        0.15250184 = product of:
          0.30500367 = sum of:
            0.30500367 = weight(_text_:informetrics in 2126) [ClassicSimilarity], result of:
              0.30500367 = score(doc=2126,freq=6.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.84427965 = fieldWeight in 2126, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2126)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The continuous version of the Lotka distribution, more generally referred to outside of informetrics as the Pareto distribution, has long enjoyed a central position in the theoretical development of informetrics despite several reported drawbacks in modelling empirical data distributions, most particularly that the inverse power form seems mainly to be evident only in the upper tails. We give a number of published examples graphically illustrating this shortcoming. In seeking to overcome this, we here draw attention to an intuitively reasonable generalization of the Pareto distribution, namely the Pareto type II distribution, of which we consider two versions. We describe its basic properties and some statistical features together with concentration aspects and argue that, at least in qualitative terms, it is better able to describe many observed informetric phenomena over the full range of the distribution. Suggestions for further investigations, including truncated and time-dependent versions, are also given.
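The abstract's observation that the inverse power form appears mainly in the upper tail is visible directly from the survival functions: Pareto I has S(x) = (m/x)^α on its whole support, while Pareto type II (Lomax) has S(x) = (1 + x/λ)^(-α), which is flat near the origin and only asymptotically power-like. A numerical sketch (parameter values are arbitrary demo choices):

```python
ALPHA = 2.0   # tail exponent (demo value)
LAM = 1.0     # Lomax scale; also used as the Pareto I minimum m

def pareto1_survival(x):
    """Pareto I: pure power law on x >= LAM."""
    return (LAM / x) ** ALPHA

def lomax_survival(x):
    """Pareto type II (Lomax): power-like only for x >> LAM."""
    return (1.0 + x / LAM) ** -ALPHA

for x in (1.0, 10.0, 1000.0):
    print(x, lomax_survival(x) / pareto1_survival(x))  # ratio tends to 1
```

The ratio of the two survival functions is (x/(x+λ))^α, far below 1 near the origin but approaching 1 in the upper tail, which is the qualitative behaviour the abstract says empirical informetric data exhibit.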
  14. Bar-Ilan, J.; Peritz, B.C.: Evolution, continuity, and disappearance of documents on a specific topic on the Web : a longitudinal study of "informetrics" (2004) 0.11
    0.11436717 = product of:
      0.17155075 = sum of:
        0.02628065 = weight(_text_:of in 2886) [ClassicSimilarity], result of:
          0.02628065 = score(doc=2886,freq=16.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.34207192 = fieldWeight in 2886, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2886)
        0.1452701 = product of:
          0.2905402 = sum of:
            0.2905402 = weight(_text_:informetrics in 2886) [ClassicSimilarity], result of:
              0.2905402 = score(doc=2886,freq=4.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.8042433 = fieldWeight in 2886, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2886)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The present paper analyzes the changes that occurred to a set of Web pages related to "informetrics" over a period of 5 years between June 1998 and June 2003. Four times during this time span, in 1998, 1999, 2002, and 2003, we monitored previously located pages and searched for new ones related to the topic. Thus, we were able to study the growth of the topic, while analyzing the rates of change and disappearance. The results indicate that modification, disappearance, and resurfacing cannot be ignored when studying the structure and development of the Web.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.11, S.980-990
  15. Åström, F.: Changes in the LIS research front : time-sliced cocitation analyses of LIS journal articles, 1990-2004 (2007) 0.11
    0.109536305 = product of:
      0.16430445 = sum of:
        0.017559499 = weight(_text_:of in 329) [ClassicSimilarity], result of:
          0.017559499 = score(doc=329,freq=14.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.22855641 = fieldWeight in 329, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0390625 = fieldNorm(doc=329)
        0.14674495 = product of:
          0.2934899 = sum of:
            0.2934899 = weight(_text_:informetrics in 329) [ClassicSimilarity], result of:
              0.2934899 = score(doc=329,freq=8.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.8124084 = fieldWeight in 329, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=329)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Based on articles published in 1990-2004 in 21 library and information science (LIS) journals, a set of cocitation analyses was performed to study changes in research fronts over the last 15 years, to determine where LIS is now, and to discuss where it is heading. To study research fronts, here defined as current and influential cocited articles, a citations-among-documents methodology was applied; and to study changes, the analyses were time-sliced into three 5-year periods. The results show a stable structure of two distinct research fields: informetrics and information seeking and retrieval (ISR). However, experimental retrieval research and user-oriented research have merged into one ISR field; and IR and informetrics also show signs of coming closer together, sharing research interests and methodologies, making informetrics research more visible in mainstream LIS research. Furthermore, the focus on the Internet, both in ISR research and in informetrics, where webometrics has quickly become a dominant research area, is an important change. The future is discussed in terms of LIS dependency on technology, how integration of research areas as well as technical systems can be expected to continue to characterize LIS research, and how webometrics will continue to develop and find applications.
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.7, S.947-957
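The cocitation counting underlying such time-sliced analyses is straightforward to sketch (document identifiers below are invented; real studies work from citation-database reference lists):

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists):
    """Count, over a set of citing papers, how often each pair of
    cited documents appears together in one reference list."""
    counts = Counter()
    for refs in reference_lists:
        # sort so each unordered pair gets one canonical key
        for a, b in combinations(sorted(set(refs)), 2):
            counts[(a, b)] += 1
    return counts
```

Time-slicing then amounts to running the count separately on the citing papers of each period and comparing the resulting cocitation networks.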
  16. Bar-Ilan, J.: ¬The Web as an information source on informetrics? : A content analysis (2000) 0.10
    0.10215514 = product of:
      0.15323271 = sum of:
        0.028715475 = weight(_text_:of in 4587) [ClassicSimilarity], result of:
          0.028715475 = score(doc=4587,freq=26.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.37376386 = fieldWeight in 4587, product of:
              5.0990195 = tf(freq=26.0), with freq of:
                26.0 = termFreq=26.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=4587)
        0.12451723 = product of:
          0.24903446 = sum of:
            0.24903446 = weight(_text_:informetrics in 4587) [ClassicSimilarity], result of:
              0.24903446 = score(doc=4587,freq=4.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.68935144 = fieldWeight in 4587, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4587)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    This article addresses the question of whether the Web can serve as an information source for research. Specifically, it analyzes by way of content analysis the Web pages retrieved by the major search engines on a particular date (June 7, 1998), as a result of the query 'informetrics OR informetric'. In 807 out of the 942 retrieved pages, the search terms were mentioned in the context of information science. Over 70% of the pages contained only indirect information on the topic, in the form of hypertext links and bibliographical references without annotation. The bibliographical references extracted from the Web pages were analyzed, and lists of most productive authors, most cited authors, works, and sources were compiled. The list of references obtained from the Web was also compared to data retrieved from commercial databases. In most cases, the list of references extracted from the Web outperformed the commercial bibliographic databases. The results of these comparisons indicate that valuable, freely available data are hidden on the Web, waiting to be extracted from the millions of Web pages.
    Source
    Journal of the American Society for Information Science. 51(2000) no.5, S.432-443
  17. Hertzel, D.H.: Bibliometric research: history (2009) 0.10
    0.09950195 = product of:
      0.14925292 = sum of:
        0.031856958 = weight(_text_:of in 3807) [ClassicSimilarity], result of:
          0.031856958 = score(doc=3807,freq=18.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.41465375 = fieldWeight in 3807, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=3807)
        0.11739596 = product of:
          0.23479192 = sum of:
            0.23479192 = weight(_text_:informetrics in 3807) [ClassicSimilarity], result of:
              0.23479192 = score(doc=3807,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.6499267 = fieldWeight in 3807, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3807)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Hertzel marshals a vast amount of information on the origins and development of one of the core areas of information science research: bibliometrics or, as it is also known, informetrics. The study of the statistical properties of the domain of recorded information is a large field with an extensive body of research results.
    Source
    Encyclopedia of library and information sciences. 3rd ed. Ed.: M.J. Bates
  18. Egghe, L.: Relations between the continuous and the discrete Lotka power function (2005) 0.10
    0.098029 = product of:
      0.1470435 = sum of:
        0.022526272 = weight(_text_:of in 3464) [ClassicSimilarity], result of:
          0.022526272 = score(doc=3464,freq=16.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.2932045 = fieldWeight in 3464, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.046875 = fieldNorm(doc=3464)
        0.12451723 = product of:
          0.24903446 = sum of:
            0.24903446 = weight(_text_:informetrics in 3464) [ClassicSimilarity], result of:
              0.24903446 = score(doc=3464,freq=4.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.68935144 = fieldWeight in 3464, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3464)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The discrete Lotka power function describes the number of sources (e.g., authors) with n = 1, 2, 3, ... items (e.g., publications). As in econometrics, informetrics theory requires functions of a continuous variable j, replacing the discrete variable n. Now j represents item densities instead of numbers of items. The continuous Lotka power function describes the density of sources with item density j. The discrete Lotka function is the one obtained empirically from data; the continuous Lotka function is the one needed when one wants to apply Lotkaian informetrics, i.e., to determine properties that can be derived from the (continuous) model. It is, hence, important to know the relations between the two models. We show that the exponents of the discrete Lotka function (if not too high, i.e., within limits encountered in practice) and of the continuous Lotka function are approximately the same. This is important to know when applying theoretical results (derived from the continuous model) to practical data.
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.7, S.664-668
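A small sketch (synthetic data, pure Python; not the paper's method) of why the discrete exponent is recoverable in practice: for an exact discrete Lotka law f(n) = C / n**a, a least-squares fit on log-log scale returns the exponent exactly, since log f(n) is linear in log n.

```python
import math

def discrete_lotka(c, a, n_max):
    """Discrete Lotka law: f(n) = c / n**a sources with exactly n items."""
    return [c / n ** a for n in range(1, n_max + 1)]

def loglog_slope(freqs):
    """Least-squares slope of log f(n) against log n, n = 1..len(freqs);
    the negated slope estimates the Lotka exponent a."""
    xs = [math.log(n) for n in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

On empirical data the fit is only approximate, which is where the paper's comparison of discrete and continuous exponents comes in.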
  19. Huber, J.C.; Wagner-Döbler, R.: Using the Mann-Whitney test on informetric data (2003) 0.10
    0.0969941 = product of:
      0.14549115 = sum of:
        0.028095199 = weight(_text_:of in 1686) [ClassicSimilarity], result of:
          0.028095199 = score(doc=1686,freq=14.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.36569026 = fieldWeight in 1686, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=1686)
        0.11739596 = product of:
          0.23479192 = sum of:
            0.23479192 = weight(_text_:informetrics in 1686) [ClassicSimilarity], result of:
              0.23479192 = score(doc=1686,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.6499267 = fieldWeight in 1686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1686)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    The fields of informetrics and scientometrics have suffered from the lack of a powerful test to detect the differences between two samples. We show that the Mann-Whitney test is a good test on the publication productivity of journals and of authors. Its main limitation is a lack of power on small samples that have small differences. This is not the fault of the test, but rather reflects the fact that small, similar samples have little to distinguish between them.
    Source
    Journal of the American Society for Information Science and Technology. 54(2003) no.8, S.798-801
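The U statistic itself is simple to compute; a brute-force sketch (O(n*m), which is fine for the small samples the authors discuss):

```python
def mann_whitney_u(xs, ys):
    """Mann-Whitney U for sample xs against ys: the number of pairs
    (x, y) with x > y, counting each tie as one half."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

The two one-sided statistics always satisfy U1 + U2 = n*m; judging significance additionally requires the null distribution of U (in practice one would use a library routine such as scipy.stats.mannwhitneyu rather than tabulating it by hand).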
  20. Bookstein, A.; Moed, H.; Yitzahki, M.: Measures of international collaboration in scientific literature : part I (2006) 0.10
    0.0969941 = product of:
      0.14549115 = sum of:
        0.028095199 = weight(_text_:of in 985) [ClassicSimilarity], result of:
          0.028095199 = score(doc=985,freq=14.0), product of:
            0.076827854 = queryWeight, product of:
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.049130294 = queryNorm
            0.36569026 = fieldWeight in 985, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.5637573 = idf(docFreq=25162, maxDocs=44218)
              0.0625 = fieldNorm(doc=985)
        0.11739596 = product of:
          0.23479192 = sum of:
            0.23479192 = weight(_text_:informetrics in 985) [ClassicSimilarity], result of:
              0.23479192 = score(doc=985,freq=2.0), product of:
                0.36125907 = queryWeight, product of:
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.049130294 = queryNorm
                0.6499267 = fieldWeight in 985, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  7.3530817 = idf(docFreq=76, maxDocs=44218)
                  0.0625 = fieldNorm(doc=985)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Research evaluating models of scientific productivity requires coherent metrics that quantify various key relations among papers as revealed by patterns of citation. This paper focuses on the various conceptual problems inherent in measuring the degree to which papers tend to cite other papers written by authors of the same nationality. We suggest that measures can be given a degree of assurance of coherence by being based on mathematical models describing the citation process. A number of such models are developed.
    Footnote
    Contribution in a "Special Issue on Informetrics"
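The explain trees printed with each record above can be checked by hand. A sketch of Lucene's ClassicSimilarity fieldWeight leg (fieldWeight = tf x idf x fieldNorm), using the docFreq/maxDocs figures that appear throughout the listing:

```python
import math

def idf(doc_freq, max_docs):
    """Lucene ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1.0))

def field_weight(freq, doc_freq, max_docs, field_norm):
    """fieldWeight = tf * idf * fieldNorm, with tf = sqrt(term frequency)."""
    return math.sqrt(freq) * idf(doc_freq, max_docs) * field_norm

# e.g. the 'informetrics' leg of record 17 above:
# freq=2, docFreq=76, maxDocs=44218, fieldNorm=0.0625 -> ~0.6499267
```

This reproduces only the fieldWeight legs; the full scores also fold in queryWeight (idf x queryNorm) and the coord factors shown in the trees.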

Types

  • a 5970
  • m 602
  • el 481
  • s 214
  • x 44
  • b 40
  • i 27
  • r 26
  • n 18
  • p 16
