Search (127 results, page 1 of 7)

  • theme_ss:"Informetrie"
  1. Ridenour, L.: Boundary objects : measuring gaps and overlap between research areas (2016) 0.11
    0.10963168 = product of:
      0.21926336 = sum of:
        0.1677682 = weight(_text_:objects in 2835) [ClassicSimilarity], result of:
          0.1677682 = score(doc=2835,freq=4.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.49828792 = fieldWeight in 2835, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=2835)
        0.05149517 = weight(_text_:22 in 2835) [ClassicSimilarity], result of:
          0.05149517 = score(doc=2835,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.23214069 = fieldWeight in 2835, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.046875 = fieldNorm(doc=2835)
      0.5 = coord(2/4)
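The explain tree above is Lucene's ClassicSimilarity (TF-IDF) breakdown: each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(freq) × idf × fieldNorm, and coord(2/4) scales the sum because two of four query terms matched. A minimal sketch reproducing the arithmetic of this first hit (the helper name is illustrative, not a Lucene API):

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One term's contribution under Lucene ClassicSimilarity:
    queryWeight * fieldWeight, with queryWeight = idf * queryNorm
    and fieldWeight = sqrt(freq) * idf * fieldNorm."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

QUERY_NORM = 0.06334615
# _text_:objects -> freq=4, idf=5.315071, fieldNorm=0.046875
s_objects = term_score(4.0, 5.315071, QUERY_NORM, 0.046875)
# _text_:22 -> freq=2, idf=3.5018296, fieldNorm=0.046875
s_22 = term_score(2.0, 3.5018296, QUERY_NORM, 0.046875)
# coord(2/4): only two of the four query terms matched this document
total = (s_objects + s_22) * (2 / 4)
print(round(total, 6))  # ≈ 0.109632, the 0.10963168 shown above
```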
    
    Abstract
    The aim of this paper is to develop a methodology for determining conceptual overlap between research areas. It investigates patterns of terminology usage in scientific abstracts as boundary objects between research specialties. Research specialties were determined by high-level classifications assigned by Thomson Reuters in their Essential Science Indicators file, which provided a strictly hierarchical classification of journals into 22 categories. Results for the query "network theory" were downloaded from the Web of Science. From this file, two top-level groups, economics and social sciences, were selected and topically analyzed to provide a baseline of similarity on which to run an informetric analysis. The Places & Spaces Map of Science (Klavans and Boyack 2007) was used to determine the proximity of disciplines to one another in order to select the two disciplines used in the analysis. The groups analyzed share common theories and goals; however, they used different language to describe their research. It was found that 61% of term words were shared between the two groups.
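The 61% figure above is, in essence, a shared-vocabulary computation over the two groups' term sets; a minimal sketch of such an overlap measure, assuming a Jaccard-style ratio (the toy vocabularies are illustrative, not the paper's data):

```python
def shared_term_fraction(terms_a, terms_b):
    """Fraction of the combined vocabulary used by both groups
    (Jaccard overlap of the two term sets)."""
    a, b = set(terms_a), set(terms_b)
    return len(a & b) / len(a | b)

# Toy vocabularies standing in for the economics and social-science groups
econ = {"network", "theory", "centrality", "market", "equilibrium"}
soc = {"network", "theory", "centrality", "community", "actor"}
print(round(shared_term_fraction(econ, soc), 2))  # 3 of 7 distinct terms -> 0.43
```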
  2. Koehler, W.: Web page change and persistence : a four-year longitudinal study (2002) 0.06
    0.05931501 = product of:
      0.23726004 = sum of:
        0.23726004 = weight(_text_:objects in 203) [ClassicSimilarity], result of:
          0.23726004 = score(doc=203,freq=8.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.7046855 = fieldWeight in 203, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=203)
      0.25 = coord(1/4)
    
    Abstract
    Changes in the topography of the Web can be expressed in at least four ways: (1) more sites on more servers in more places, (2) more pages and objects added to existing sites and pages, (3) changes in traffic, and (4) modifications to existing text, graphic, and other Web objects. This article does not address the first three factors (more sites, more pages, more traffic) in the growth of the Web. It focuses instead on changes to an existing set of Web documents. The article documents changes to an aging set of Web pages, first identified and "collected" in December 1996 and followed weekly thereafter. Results are reported through February 2001. The article addresses two related phenomena: (1) the life cycle of Web objects, and (2) changes to Web objects. These data reaffirm that the half-life of a Web page is approximately 2 years. There is variation among Web pages by top-level domain and by page type (navigation, content). Web page content appears to stabilize over time; aging pages change less often than they once did.
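The two-year half-life reaffirmed above implies exponential decay of a page cohort; a minimal sketch of that implication (the exponential model is an assumption, not Koehler's estimator):

```python
def surviving_fraction(years, half_life=2.0):
    """Fraction of a page cohort still resolvable after `years`,
    assuming exponential decay with the given half-life."""
    return 0.5 ** (years / half_life)

# A 2-year half-life leaves ~25% of the December 1996 cohort alive
# over the roughly four-year observation window reported above.
print(surviving_fraction(4.0))  # -> 0.25
```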
  3. Haustein, S.; Sugimoto, C.; Larivière, V.: Social media in scholarly communication : Guest editorial (2015) 0.04
    0.042531297 = product of:
      0.08506259 = sum of:
        0.05931501 = weight(_text_:objects in 3809) [ClassicSimilarity], result of:
          0.05931501 = score(doc=3809,freq=2.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.17617138 = fieldWeight in 3809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
        0.025747584 = weight(_text_:22 in 3809) [ClassicSimilarity], result of:
          0.025747584 = score(doc=3809,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.116070345 = fieldWeight in 3809, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.0234375 = fieldNorm(doc=3809)
      0.5 = coord(2/4)
    
    Abstract
    There will soon be a critical mass of web-based digital objects and usage statistics on which to model scholars' communication behaviors - publishing, posting, blogging, scanning, reading, downloading, glossing, linking, citing, recommending, acknowledging - and with which to track their scholarly influence and impact, broadly conceived and broadly felt (Cronin, 2005, p. 196). A decade after Cronin's prediction and five years after the coining of altmetrics, the time seems ripe to reflect upon the role of social media in scholarly communication. This Special Issue does so by providing an overview of current research on the indicators and metrics grouped under the umbrella term of altmetrics, on their relationships with traditional indicators of scientific activity, and on the uses that are made of the various social media platforms - on which these indicators are based - by scientists of various disciplines.
    Date
    20. 1.2015 18:30:22
  4. Xu, C.; Ma, B.; Chen, X.; Ma, F.: Social tagging in the scholarly world (2013) 0.03
    0.034951705 = product of:
      0.13980682 = sum of:
        0.13980682 = weight(_text_:objects in 1091) [ClassicSimilarity], result of:
          0.13980682 = score(doc=1091,freq=4.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.41523993 = fieldWeight in 1091, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1091)
      0.25 = coord(1/4)
    
    Abstract
    The number of research studies on social tagging has increased rapidly in the past years, but few of them highlight the characteristics and research trends in social tagging. A set of 862 academic documents relating to social tagging and published from 2005 to 2011 was thus examined using bibliometric analysis as well as the social network analysis technique. The results show that social tagging, as a research area, is developing rapidly and attracting an increasing number of new entrants. There are no key authors, publication sources, or research groups that dominate the research domain of social tagging. Research on social tagging appears to focus mainly on the following three aspects: (a) components and functions of social tagging (e.g., tags, tagging objects, and tagging network), (b) taggers' behaviors and interface design, and (c) tags' organization and usage in social tagging. The trends suggest that more researchers are turning to the latter two aspects, integrated with human-computer interfaces and information retrieval, although the first aspect is the fundamental one in social tagging. Also, more studies relating to social tagging pay attention to multimedia tagging objects rather than text tagging alone. Previous research on social tagging was limited to a few subject domains such as information science and computer science. As an interdisciplinary research area, social tagging is anticipated to attract more researchers from different disciplines. More practical applications, especially in high-tech companies, are an encouraging research trend in social tagging.
  5. Nicholls, P.T.: Empirical validation of Lotka's law (1986) 0.03
    0.034330115 = product of:
      0.13732046 = sum of:
        0.13732046 = weight(_text_:22 in 5509) [ClassicSimilarity], result of:
          0.13732046 = score(doc=5509,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.61904186 = fieldWeight in 5509, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=5509)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 22(1986), S.417-419
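Lotka's law, whose empirical validity the entry above examines, predicts that the number of authors producing n papers falls off as 1/n^2; a minimal sketch (the author counts are illustrative):

```python
def lotka_expected(n, authors_with_one_paper):
    """Lotka's inverse-square law: the number of authors producing
    exactly n papers is f(n) = f(1) / n**2."""
    return authors_with_one_paper / n ** 2

# If 100 authors wrote exactly one paper each, the law predicts:
print([round(lotka_expected(n, 100), 1) for n in (1, 2, 3, 4)])
# -> [100.0, 25.0, 11.1, 6.2]
```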
  6. Nicolaisen, J.: Citation analysis (2007) 0.03
    0.034330115 = product of:
      0.13732046 = sum of:
        0.13732046 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
          0.13732046 = score(doc=6091,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.61904186 = fieldWeight in 6091, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=6091)
      0.25 = coord(1/4)
    
    Date
    13. 7.2008 19:53:22
  7. Fiala, J.: Information flood : fiction and reality (1987) 0.03
    0.034330115 = product of:
      0.13732046 = sum of:
        0.13732046 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
          0.13732046 = score(doc=1080,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.61904186 = fieldWeight in 1080, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.125 = fieldNorm(doc=1080)
      0.25 = coord(1/4)
    
    Source
    Thermochimica acta. 110(1987), S.11-22
  8. Schneider, J.W.; Borlund, P.: Matrix comparison, part 1 : motivation and important issues for measuring the resemblance between proximity measures or ordination results (2007) 0.03
    0.03424554 = product of:
      0.13698216 = sum of:
        0.13698216 = weight(_text_:objects in 584) [ClassicSimilarity], result of:
          0.13698216 = score(doc=584,freq=6.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.4068504 = fieldWeight in 584, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=584)
      0.25 = coord(1/4)
    
    Abstract
    The present two-part article introduces matrix comparison as a formal means of evaluation in informetric studies such as cocitation analysis. In this first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such comparisons, are introduced and discussed. The motivation is spurred by the recent debate on the choice of proximity measures and their potential influence upon clustering and ordination results. The two important issues discussed here are matrix generation and the composition of proximity measures. The approach to matrix generation is demonstrated for the same data set: how data are represented and transformed in a matrix evidently determines the behavior of proximity measures. Two different matrix generation approaches will, in all probability, lead to different proximity rankings of objects, which in turn lead to different ordination and clustering results for the same set of objects. Further, a resemblance in the composition of formulas indicates whether two proximity measures may produce similar ordination and clustering results. However, as shown in the case of the angular correlation and cosine measures, a small deviation in otherwise similar formulas can lead to different rankings depending on the contour of the data matrix transformed. Eventually, whether proximity measures produce similar rankings of objects is more or less data-specific. Consequently, the authors recommend the use of empirical matrix comparison techniques in individual studies to investigate the degree of resemblance between proximity measures or their ordination results. In part two of the article, the authors introduce and demonstrate two related statistical matrix comparison techniques, the Mantel test and Procrustes analysis. These techniques can compare and evaluate the degree of monotonicity between different proximity measures or their ordination results. As such, the Mantel test and Procrustes analysis can be used as statistical validation tools in informetric studies and can thus help in choosing suitable proximity measures.
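The Mantel test recommended above correlates the upper triangles of two proximity matrices and obtains a p-value by permuting object labels; a minimal sketch with NumPy (the toy distance matrices are illustrative, not the article's data):

```python
import numpy as np

def mantel(d1, d2, permutations=999, rng=None):
    """Mantel test: Pearson correlation between the upper triangles of two
    symmetric proximity matrices; the p-value comes from permuting the
    object labels of the second matrix."""
    rng = rng or np.random.default_rng(0)
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    hits = 0
    for _ in range(permutations):
        p = rng.permutation(d1.shape[0])
        r = np.corrcoef(d1[iu], d2[p][:, p][iu])[0, 1]
        if abs(r) >= abs(r_obs):
            hits += 1
    return r_obs, (hits + 1) / (permutations + 1)

# Distance matrix over six "objects" placed at irregular positions
x = np.array([0.0, 1.0, 3.0, 6.0, 10.0, 15.0])
d_a = np.abs(x[:, None] - x[None, :])
d_b = 2.0 * d_a + 0.1        # a second measure, linearly related to the first
r, p = mantel(d_a, d_b)
print(round(r, 3), p < 0.05)  # perfect correlation, significant resemblance
```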
  9. Su, Y.; Han, L.-F.: ¬A new literature growth model : variable exponential growth law of literature (1998) 0.03
    0.030343821 = product of:
      0.121375285 = sum of:
        0.121375285 = weight(_text_:22 in 3690) [ClassicSimilarity], result of:
          0.121375285 = score(doc=3690,freq=4.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.54716086 = fieldWeight in 3690, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3690)
      0.25 = coord(1/4)
    
    Date
    22. 5.1999 19:22:35
  10. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.03
    0.030343821 = product of:
      0.121375285 = sum of:
        0.121375285 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
          0.121375285 = score(doc=3925,freq=4.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.54716086 = fieldWeight in 3925, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.078125 = fieldNorm(doc=3925)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 15:22:28
  11. Diodato, V.: Dictionary of bibliometrics (1994) 0.03
    0.03003885 = product of:
      0.1201554 = sum of:
        0.1201554 = weight(_text_:22 in 5666) [ClassicSimilarity], result of:
          0.1201554 = score(doc=5666,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.5416616 = fieldWeight in 5666, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=5666)
      0.25 = coord(1/4)
    
    Footnote
    Rez. in: Journal of library and information science 22(1996) no.2, S.116-117 (L.C. Smith)
  12. Bookstein, A.: Informetric distributions : I. Unified overview (1990) 0.03
    0.03003885 = product of:
      0.1201554 = sum of:
        0.1201554 = weight(_text_:22 in 6902) [ClassicSimilarity], result of:
          0.1201554 = score(doc=6902,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.5416616 = fieldWeight in 6902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=6902)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 18:55:29
  13. Bookstein, A.: Informetric distributions : II. Resilience to ambiguity (1990) 0.03
    0.03003885 = product of:
      0.1201554 = sum of:
        0.1201554 = weight(_text_:22 in 4689) [ClassicSimilarity], result of:
          0.1201554 = score(doc=4689,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.5416616 = fieldWeight in 4689, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.109375 = fieldNorm(doc=4689)
      0.25 = coord(1/4)
    
    Date
    22. 7.2006 18:55:55
  14. Esler, S.L.; Nelson, M.L.: Evolution of scientific and technical information distribution (1998) 0.03
    0.029657505 = product of:
      0.11863002 = sum of:
        0.11863002 = weight(_text_:objects in 332) [ClassicSimilarity], result of:
          0.11863002 = score(doc=332,freq=2.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.35234275 = fieldWeight in 332, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=332)
      0.25 = coord(1/4)
    
    Abstract
    The WWW and related information technologies are transforming the distribution of scientific and technical information (STI). We examine 11 recent, functioning digital libraries focusing on the distribution of STI publications, including journal articles, conference papers, and technical reports. We introduce four main categories of digital library projects, based on the architecture (distributed vs. centralized) and the contributor (traditional publisher vs. authoring individual or organization). Many digital library prototypes merely automate existing publishing practices or focus solely on digitizing the output of the publishing cycle, without sampling and capturing elements of its input. Still others do not consider the large body of 'gray literature' for distribution. We address these deficiencies in the current model of STI exchange by suggesting methods for expanding the scope and target of digital libraries: focusing on a greater range of technical publications and using 'buckets', an object-oriented construct for grouping logically related information objects, to include holdings other than technical publications.
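A 'bucket' as described above is an object-oriented container that groups logically related information objects under one identifier; a minimal sketch of such a container (the class design and the identifier are illustrative, not the authors' implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Bucket:
    """Aggregates logically related information objects (a report plus its
    data, software, etc.) under a single archive identifier."""
    bucket_id: str
    elements: dict = field(default_factory=dict)

    def add(self, name, obj):
        """Store an information object under an element name."""
        self.elements[name] = obj

    def holdings(self):
        """List the names of all objects held in this bucket."""
        return sorted(self.elements)

# Hypothetical report bucket: identifier and element names are made up
b = Bucket("report-1998-0001")
b.add("report.pdf", "the technical report itself")
b.add("data.csv", "supporting dataset")
print(b.holdings())  # ['data.csv', 'report.pdf']
```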
  15. Marshakova-Shaikevich, I.: Bibliometric maps of field of science (2005) 0.03
    0.029657505 = product of:
      0.11863002 = sum of:
        0.11863002 = weight(_text_:objects in 1069) [ClassicSimilarity], result of:
          0.11863002 = score(doc=1069,freq=2.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.35234275 = fieldWeight in 1069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.046875 = fieldNorm(doc=1069)
      0.25 = coord(1/4)
    
    Abstract
    The present paper is devoted to two directions in algorithmic classificatory procedures: journal co-citation analysis as an example of citation networks, and lexical analysis of keywords in titles and texts. Common to both approaches is the general idea of normalizing deviations of the observed data from the mathematical expectation. Application of the same formula leads to the discovery of statistically significant links between objects (journals in one case, keywords in the other). The results of the journal co-citation analysis are presented in tables and a map for the fields "Women's Studies" and "Information Science and Library Science". An experimental attempt at establishing textual links between words was carried out on two samples from the SSCI database: (1) EDUCATION and (2) ETHICS. The EDUCATION file included 2180 documents (of which 751 had abstracts); the ETHICS file included 807 documents (289 abstracts). Some examples of the results of this pilot study are given in tabular form. The binary links between words discovered in this way may form triplets or other groups with more than two member words.
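The 'normalization of deviations of the observed data from the mathematical expectation' described above amounts to a standardized residual; a minimal sketch, assuming a Poisson-style variance (the counts and the 1.96 threshold are illustrative):

```python
import math

def link_z_score(observed, expected):
    """Standardized deviation of an observed co-occurrence count from its
    expectation under independence (Poisson-style variance ~ expectation)."""
    return (observed - expected) / math.sqrt(expected)

# Hypothetical counts: two keywords co-occur 25 times, independence predicts 9
z = link_z_score(25, 9)
print(round(z, 2))  # (25 - 9) / 3 -> 5.33, well above a 1.96 cutoff
```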
  16. Schneider, J.W.; Borlund, P.: Matrix comparison, part 2 : measuring the resemblance between proximity measures or ordination results by use of the mantel and procrustes statistics (2007) 0.03
    0.027961366 = product of:
      0.11184546 = sum of:
        0.11184546 = weight(_text_:objects in 582) [ClassicSimilarity], result of:
          0.11184546 = score(doc=582,freq=4.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.33219194 = fieldWeight in 582, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.03125 = fieldNorm(doc=582)
      0.25 = coord(1/4)
    
    Abstract
    The present two-part article introduces matrix comparison as a formal means for evaluation purposes in informetric studies such as cocitation analysis. In the first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such comparisons, matrix generation and the composition of proximity measures, are introduced and discussed. In this second part, the authors introduce and thoroughly demonstrate two related matrix comparison techniques, the Mantel test and Procrustes analysis. Common to these techniques is the application of permutation procedures to test hypotheses about matrix resemblance. The choice of technique is related to the validation at hand. In the case of the Mantel test, the degree of resemblance between two measures forecasts their potentially different effect upon ordination and clustering results. In principle, two proximity measures with a very strong resemblance most likely produce identical results, and the choice between the two thus becomes less important. Alternatively, or as a supplement, Procrustes analysis compares the actual ordination results without investigating the underlying proximity measures, by matching two configurations of the same objects in a multidimensional space. An advantage of Procrustes analysis, though, is the graphical solution provided by the superimposition plot and the resulting decomposition of variance components. Accordingly, Procrustes analysis provides not only a measure of general fit between configurations but also values for individual objects, enabling more elaborate validations. As such, the Mantel test and Procrustes analysis can be used as statistical validation tools in informetric studies and can thus help in choosing suitable proximity measures.
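Procrustes analysis, as applied above, superimposes two configurations of the same objects by translation, uniform scaling, and rotation before measuring residual disparity; a minimal sketch with NumPy (the toy configurations are illustrative, not the article's ordinations):

```python
import numpy as np

def procrustes_disparity(x, y):
    """Residual sum of squares after superimposing configuration y onto x
    by translation, uniform scaling, and an orthogonal rotation/reflection."""
    x0 = x - x.mean(axis=0)
    y0 = y - y.mean(axis=0)
    x0 = x0 / np.linalg.norm(x0)          # remove location and scale
    y0 = y0 / np.linalg.norm(y0)
    u, s, vt = np.linalg.svd(y0.T @ x0)   # optimal rotation via SVD
    q = u @ vt
    return float(np.sum((x0 - s.sum() * (y0 @ q)) ** 2))

# Five objects in 2-D; the second configuration is the first one rotated
# 90 degrees, doubled in size, and shifted, so the fit should be perfect.
conf1 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [0.5, 0.2]])
rot = np.array([[0.0, -1.0], [1.0, 0.0]])
conf2 = 2.0 * conf1 @ rot.T + 5.0
print(procrustes_disparity(conf1, conf2) < 1e-12)  # True: near-zero disparity
```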
  17. Lewison, G.: ¬The work of the Bibliometrics Research Group (City University) and associates (2005) 0.03
    0.025747584 = product of:
      0.10299034 = sum of:
        0.10299034 = weight(_text_:22 in 4890) [ClassicSimilarity], result of:
          0.10299034 = score(doc=4890,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.46428138 = fieldWeight in 4890, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=4890)
      0.25 = coord(1/4)
    
    Date
    20. 1.2007 17:02:22
  18. Marx, W.; Bornmann, L.: On the problems of dealing with bibliometric data (2014) 0.03
    0.025747584 = product of:
      0.10299034 = sum of:
        0.10299034 = weight(_text_:22 in 1239) [ClassicSimilarity], result of:
          0.10299034 = score(doc=1239,freq=2.0), product of:
            0.22182742 = queryWeight, product of:
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.06334615 = queryNorm
            0.46428138 = fieldWeight in 1239, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5018296 = idf(docFreq=3622, maxDocs=44218)
              0.09375 = fieldNorm(doc=1239)
      0.25 = coord(1/4)
    
    Date
    18. 3.2014 19:13:22
  19. Egghe, L.: Type/Token-Taken informetrics (2003) 0.02
    0.02471459 = product of:
      0.09885836 = sum of:
        0.09885836 = weight(_text_:objects in 1608) [ClassicSimilarity], result of:
          0.09885836 = score(doc=1608,freq=2.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.29361898 = fieldWeight in 1608, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1608)
      0.25 = coord(1/4)
    
    Abstract
    Type/Token-Taken informetrics is a new part of informetrics that studies the use of items rather than the items themselves. Here, items are the objects that are produced by the sources (e.g., journals producing articles, authors producing papers, etc.). In linguistics a source is also called a type (e.g., a word), and an item a token (e.g., the use of a word in texts). In informetrics, types that occur often, for example in a database, will also be requested often, for example in information retrieval. The relative use of these occurrences will be higher than their relative occurrences themselves; hence the name Type/Token-Taken informetrics. This article studies the frequency distribution of Type/Token-Taken informetrics, starting from that of Type/Token informetrics (i.e., source-item relationships). We also study the average number μ* of item uses in Type/Token-Taken informetrics and compare it with the classical average number μ in Type/Token informetrics. We show that μ* >= μ always, and that μ* is an increasing function of μ. A method is presented to actually calculate μ* from μ and a given α, the exponent in Lotka's frequency distribution of Type/Token informetrics. We leave open the problem of developing non-Lotkaian Type/Token-Taken informetrics.
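The inequality μ* >= μ discussed above is the size-biasing effect: averaging over item uses rather than over sources weights prolific sources by their item counts, so under the simplest proportional-use assumption μ* = E[n^2]/E[n]; a minimal sketch (the toy source-item counts are illustrative, not Egghe's method):

```python
def mu_and_mu_star(items_per_source):
    """mu: plain average of items per source.
    mu*: use-weighted average, E[n^2]/E[n] under proportional use."""
    mu = sum(items_per_source) / len(items_per_source)
    mu_star = sum(k * k for k in items_per_source) / sum(items_per_source)
    return mu, mu_star

# Lotka-like toy data: many one-item sources, a few prolific ones
counts = [1] * 100 + [2] * 25 + [3] * 11 + [10] * 1
mu, mu_star = mu_and_mu_star(counts)
print(round(mu, 2), round(mu_star, 2))  # -> 1.41 2.07, so mu* > mu
```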
  20. Tüür-Fröhlich, T.: Blackbox SSCI : Datenerfassung und Datenverarbeitung bei der kommerziellen Indexierung von Zitaten (2019) 0.02
    0.02471459 = product of:
      0.09885836 = sum of:
        0.09885836 = weight(_text_:objects in 5779) [ClassicSimilarity], result of:
          0.09885836 = score(doc=5779,freq=2.0), product of:
            0.33668926 = queryWeight, product of:
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.06334615 = queryNorm
            0.29361898 = fieldWeight in 5779, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.315071 = idf(docFreq=590, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5779)
      0.25 = coord(1/4)
    
    Abstract
    Numerous authors and critical initiatives (e.g., DORA) criticize the excessive and harmful influence of the quantitative data that academic bodies draw on for evaluation purposes. Because of the great influence of the global citation databases of Thomson Reuters (now Clarivate Analytics) on the assessment of researchers' scientific performance, I conducted extensive qualitative and quantitative case studies on the data quality of the Social Sciences Citation Index (SSCI), i.e., I compared the original entries with the SSCI records. These case studies revealed severe errors never mentioned in the literature: mutilations, phantom authors, phantom works (error rate in the case study on Beebe 2010, Harvard Law Review: 99 percent). Little is known about the data capture and indexing procedures used by TR and Clarivate Analytics. One finding of my investigations: when indexing references in footnotes (as prescribed in law, particularly in the USA), the text-analysis applications and algorithms employed appear to be completely overwhelmed. Quality control appears not to take place. This calls into question the SSCI's claim to be a multidisciplinary database. Correct citations in the footnotes of the original can degenerate into phantom authors, phantom works, and phantom references. This means that all journals and disciplines whose journals and books use this or a similar citation style (Oxford style) risk being misjudged, i.e., undervalued, owing to heavy citation losses. How many UBOs (Unidentifiable Bibliographic Objects) the SCI, SSCI, and AHCI databases contain could be determined only with very elaborate procedures. Regardless, as with almost all of the fatal errors found in my investigations, these are clearly endogenous errors in the databases, which cannot, as is often claimed, be attributed to authors citing incorrectly; they arise only in the course of data entry and processing.

Languages

  • e 117
  • d 9
  • ro 1

Types

  • a 125
  • m 2
  • el 1
  • s 1