Search (401 results, page 1 of 21)

  • theme_ss:"Informetrie"
  1. Kopcsa, A.; Schiebel, E.: Science and technology mapping : a new iteration model for representing multidimensional relationships (1998) 0.05
    0.047684394 = product of:
      0.14305317 = sum of:
        0.11347422 = weight(_text_:graphic in 326) [ClassicSimilarity], result of:
          0.11347422 = score(doc=326,freq=2.0), product of:
            0.25850594 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.03903913 = queryNorm
            0.43896174 = fieldWeight in 326, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.046875 = fieldNorm(doc=326)
        0.029578956 = product of:
          0.05915791 = sum of:
            0.05915791 = weight(_text_:methods in 326) [ClassicSimilarity], result of:
              0.05915791 = score(doc=326,freq=4.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.37691376 = fieldWeight in 326, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=326)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
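For orientation, the explain tree above follows Lucene's ClassicSimilarity arithmetic: tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), each term's weight = queryWeight × fieldWeight, with coord factors scaling the sums. A minimal sketch that reproduces entry 1's numbers from the quantities shown in the tree (queryNorm and fieldNorm are taken directly from the tree):

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity idf: 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm                      # queryWeight
    field_weight = math.sqrt(freq) * i * field_norm    # tf * idf * fieldNorm
    return query_weight * field_weight

QUERY_NORM = 0.03903913
MAX_DOCS = 44218

# entry 1 (doc 326): "graphic" plus "methods" (the latter scaled by coord(1/2))
graphic = term_score(2.0, 159, MAX_DOCS, QUERY_NORM, 0.046875)
methods = term_score(4.0, 2156, MAX_DOCS, QUERY_NORM, 0.046875) * 0.5
total = (graphic + methods) * (2.0 / 6.0)              # coord(2/6)

print(graphic, methods, total)
```

Running this recovers the tree's 0.11347422, 0.029578956, and the final 0.047684394 to floating-point precision.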
    Abstract
    Much effort has been devoted to developing more objective quantitative methods for analyzing and integrating survey information in order to understand research trends and research structures. Co-word analysis is one class of techniques that exploits co-occurrences of items in written information. However, there are some bottlenecks in using statistical methods to produce mappings of reduced information in a convenient manner. On the one hand, commonly used statistical software for PCs restricts the amount of data that can be processed; on the other hand, the results of multidimensional scaling routines are not quite satisfying. This article therefore introduces a new iteration model for the calculation of co-word maps that eases the problem. The iteration model positions the words in the two-dimensional plane according to their connections to each other, and it consists of a quick and stable algorithm that has been implemented in software for personal computers. A graphic module represents the data in well-known 'technology maps'.
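The authors' actual iteration model is not reproduced in this result set; as a generic illustration of the idea of iteratively positioning words in the plane by their connection strengths, here is a toy sketch (the co-occurrence weights, step size, and repulsion term are illustrative assumptions, not the paper's algorithm):

```python
import random

def coword_layout(words, weights, iters=300, step=0.05, seed=42):
    """Toy iterative co-word layout: each word is pulled toward words it
    co-occurs with (pull proportional to link weight) and mildly pushed
    away from every other word so the map cannot collapse to a point."""
    rng = random.Random(seed)
    pos = {w: [rng.uniform(-1, 1), rng.uniform(-1, 1)] for w in words}
    for _ in range(iters):
        for w in words:
            fx = fy = 0.0
            for (a, b), wt in weights.items():     # weighted attraction
                if w in (a, b):
                    other = b if w == a else a
                    fx += wt * (pos[other][0] - pos[w][0])
                    fy += wt * (pos[other][1] - pos[w][1])
            for other in words:                    # soft 1/d repulsion
                if other != w:
                    dx = pos[w][0] - pos[other][0]
                    dy = pos[w][1] - pos[other][1]
                    d2 = dx * dx + dy * dy + 1e-9
                    fx += dx / d2
                    fy += dy / d2
            pos[w][0] += step * fx / len(words)
            pos[w][1] += step * fy / len(words)
    return pos

# strongly linked words end up closer together than weakly linked ones
layout = coword_layout(["fuel", "cell", "solar"],
                       {("fuel", "cell"): 5.0, ("fuel", "solar"): 1.0})
```

The attraction/repulsion balance settles each pair near a distance inversely related to its link weight, which is the qualitative behavior a co-word map needs.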
  2. Herb, U.; Beucke, D.: ¬Die Zukunft der Impact-Messung : Social Media, Nutzung und Zitate im World Wide Web (2013) 0.04
    0.041336328 = product of:
      0.24801797 = sum of:
        0.24801797 = weight(_text_:2f in 2188) [ClassicSimilarity], result of:
          0.24801797 = score(doc=2188,freq=2.0), product of:
            0.3309742 = queryWeight, product of:
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.03903913 = queryNorm
            0.7493574 = fieldWeight in 2188, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              8.478011 = idf(docFreq=24, maxDocs=44218)
              0.0625 = fieldNorm(doc=2188)
      0.16666667 = coord(1/6)
    
    Content
    Cf.: https://www.leibniz-science20.de%2Fforschung%2Fprojekte%2Faltmetrics-in-verschiedenen-wissenschaftsdisziplinen%2F&ei=2jTgVaaXGcK4Udj1qdgB&usg=AFQjCNFOPdONj4RKBDf9YDJOLuz3lkGYlg&sig2=5YI3KWIGxBmk5_kv0P_8iQ.
  3. Burrell, Q.L.: Predicting future citation behavior (2003) 0.04
    0.036027804 = product of:
      0.108083405 = sum of:
        0.089570984 = sum of:
          0.052210055 = weight(_text_:theory in 3837) [ClassicSimilarity], result of:
            0.052210055 = score(doc=3837,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.32160926 = fieldWeight in 3837, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3837)
          0.03736093 = weight(_text_:29 in 3837) [ClassicSimilarity], result of:
            0.03736093 = score(doc=3837,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.27205724 = fieldWeight in 3837, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3837)
        0.018512422 = product of:
          0.037024844 = sum of:
            0.037024844 = weight(_text_:22 in 3837) [ClassicSimilarity], result of:
              0.037024844 = score(doc=3837,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.2708308 = fieldWeight in 3837, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3837)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    In this article we further develop the theory of a stochastic model of the citation process in the presence of obsolescence to predict the future citation pattern of individual papers in a collection. More precisely, we investigate the conditional distribution (and its mean) of the number of citations to a paper after time t, given the number of citations it has received up to time t. In an important parametric case it is shown that the expected number of future citations is a linear function of the current number, this being interpretable as an example of a success-breeds-success phenomenon.
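The linear success-breeds-success relation stated in the abstract can be written compactly; the symbols below are shorthand of my choosing, not the paper's notation:

```latex
\mathbb{E}\bigl[\,C(t,\infty) \mid C(0,t)=n\,\bigr] \;=\; \alpha(t) + \beta(t)\,n
```

where C(0,t) is the number of citations received up to time t and C(t,\infty) the number still to come.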
    Date
    29. 3.2003 19:22:48
  4. Egghe, L.; Rousseau, R.: Introduction to informetrics : quantitative methods in library, documentation and information science (1990) 0.03
    0.027746826 = product of:
      0.08324048 = sum of:
        0.018680464 = product of:
          0.03736093 = sum of:
            0.03736093 = weight(_text_:29 in 1515) [ClassicSimilarity], result of:
              0.03736093 = score(doc=1515,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.27205724 = fieldWeight in 1515, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1515)
          0.5 = coord(1/2)
        0.06456002 = product of:
          0.12912004 = sum of:
            0.12912004 = weight(_text_:methods in 1515) [ClassicSimilarity], result of:
              0.12912004 = score(doc=1515,freq=14.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.8226646 = fieldWeight in 1515, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1515)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    29. 2.2008 19:02:46
    LCSH
    Information science / Statistical methods
    Documentation / Statistical methods
    Library science / Statistical methods
    Subject
    Information science / Statistical methods
    Documentation / Statistical methods
    Library science / Statistical methods
  5. Bensman, S.J.; Smolinsky, L.J.: Lotka's inverse square law of scientific productivity : its methods and statistics (2017) 0.03
    0.026574671 = product of:
      0.079724014 = sum of:
        0.045215234 = product of:
          0.09043047 = sum of:
            0.09043047 = weight(_text_:theory in 3698) [ClassicSimilarity], result of:
              0.09043047 = score(doc=3698,freq=6.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.55704355 = fieldWeight in 3698, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3698)
          0.5 = coord(1/2)
        0.034508783 = product of:
          0.06901757 = sum of:
            0.06901757 = weight(_text_:methods in 3698) [ClassicSimilarity], result of:
              0.06901757 = score(doc=3698,freq=4.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.43973273 = fieldWeight in 3698, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3698)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    This brief communication analyzes the statistics and methods Lotka used to derive his inverse square law of scientific productivity from the standpoint of modern theory. It finds that he violated the norms of this theory by severely truncating his data on the right. It also shows that, by basing the derivation of his law on this very method, Lotka himself played an important role in establishing the commonly used method of identifying power-law behavior by the R^2 fit of a regression line on a log-log plot, a method that modern theory considers unreliable.
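The criticized fitting method (a regression line on a log-log plot, judged by its R^2) is easy to demonstrate; a minimal sketch on synthetic inverse-square data (the data and constants are illustrative, not Lotka's):

```python
import math

def loglog_fit(xs, ys):
    """Least-squares fit of log(y) = log(C) - a*log(x), returning the
    power-law exponent (slope) and the R^2 of the regression line:
    the fitting method the article says modern theory distrusts."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    sxx = sum((a - mx) ** 2 for a in lx)
    sxy = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(lx, ly))
    ss_tot = sum((b - my) ** 2 for b in ly)
    return slope, 1.0 - ss_res / ss_tot

# Ideal Lotka data: number of authors with n papers proportional to 1/n^2
xs = list(range(1, 21))
ys = [1000.0 / n ** 2 for n in xs]
slope, r2 = loglog_fit(xs, ys)
print(slope, r2)
```

On perfect inverse-square data the slope is -2 and R^2 is 1; the article's point is that a high R^2 on a log-log plot is easy to obtain even when the power-law model is a poor description of the data.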
  6. Bookstein, A.: Informetric distributions : I. Unified overview (1990) 0.02
    0.02479526 = product of:
      0.07438578 = sum of:
        0.03736093 = product of:
          0.07472186 = sum of:
            0.07472186 = weight(_text_:29 in 6902) [ClassicSimilarity], result of:
              0.07472186 = score(doc=6902,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.5441145 = fieldWeight in 6902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6902)
          0.5 = coord(1/2)
        0.037024844 = product of:
          0.07404969 = sum of:
            0.07404969 = weight(_text_:22 in 6902) [ClassicSimilarity], result of:
              0.07404969 = score(doc=6902,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.5416616 = fieldWeight in 6902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6902)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Date
    22. 7.2006 18:55:29
  7. Bar-Ilan, J.; Peritz, B.C.: Informetric theories and methods for exploring the Internet : an analytical survey of recent research literature (2002) 0.02
    0.024491679 = product of:
      0.07347503 = sum of:
        0.031644072 = product of:
          0.063288145 = sum of:
            0.063288145 = weight(_text_:theory in 813) [ClassicSimilarity], result of:
              0.063288145 = score(doc=813,freq=4.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.3898493 = fieldWeight in 813, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.046875 = fieldNorm(doc=813)
          0.5 = coord(1/2)
        0.041830957 = product of:
          0.083661914 = sum of:
            0.083661914 = weight(_text_:methods in 813) [ClassicSimilarity], result of:
              0.083661914 = score(doc=813,freq=8.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.53303653 = fieldWeight in 813, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=813)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The Internet, and more specifically the World Wide Web, is quickly becoming one of our main information sources. Systematic evaluation and analysis can help us understand how this medium works, grows, and changes, and how it influences our lives and research. New approaches in informetrics can provide an appropriate means towards achieving the above goals, and towards establishing a sound theory. This paper presents a selective review of research based on the Internet, using bibliometric and informetric methods and tools. Some of these studies clearly show the applicability of bibliometric laws to the Internet, while others establish new definitions and methods based on the respective definitions for printed sources. Both informetrics and Internet research can gain from these additional methods.
    Footnote
    Artikel in einem Themenheft "Current theory in library and information science"
  8. Alimohammadi, D.: Webliometrics : a new horizon in information research (2006) 0.02
    0.019534139 = product of:
      0.058602415 = sum of:
        0.022375738 = product of:
          0.044751476 = sum of:
            0.044751476 = weight(_text_:theory in 621) [ClassicSimilarity], result of:
              0.044751476 = score(doc=621,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.27566507 = fieldWeight in 621, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.046875 = fieldNorm(doc=621)
          0.5 = coord(1/2)
        0.036226675 = product of:
          0.07245335 = sum of:
            0.07245335 = weight(_text_:methods in 621) [ClassicSimilarity], result of:
              0.07245335 = score(doc=621,freq=6.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.4616232 = fieldWeight in 621, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.046875 = fieldNorm(doc=621)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Purpose - During the second half of the past century, the field of library and information science (LIS) has frequently used the research methods of the social sciences. In particular, quantitative assessment research methodologies, together with one of their associated concepts, quantitative assessment metrics, have also been used in the information field, out of which the more specific bibliometric, scientometric, informetric and webometric research instruments have been developed. This brief communication uses this system of metrics to coin a new concept in metrical studies of information science, namely webliometrics. Design/methodology/approach - An overview of the webliography is presented, and webliometrics is defined as a type of research method in LIS. Webliometrics' functions are enumerated and webliometric research methods are sketched out. Findings - Webliometrics is worthy of further clarification and development, both in theory and practice. Research limitations/implications - Webliometrics potentially offers a powerful and rigorous new research tool for LIS researchers. Practical implications - The research outputs of webliometrics, although theoretically and statistically rigorous, are of immediate practical value. Originality/value - This paper aims to increase knowledge of an original, as yet under-utilised approach to research methods.
  9. Koehler, W.: Web page change and persistence : a four-year longitudinal study (2002) 0.02
    0.018912371 = product of:
      0.11347422 = sum of:
        0.11347422 = weight(_text_:graphic in 203) [ClassicSimilarity], result of:
          0.11347422 = score(doc=203,freq=2.0), product of:
            0.25850594 = queryWeight, product of:
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.03903913 = queryNorm
            0.43896174 = fieldWeight in 203, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.6217136 = idf(docFreq=159, maxDocs=44218)
              0.046875 = fieldNorm(doc=203)
      0.16666667 = coord(1/6)
    
    Abstract
    Changes in the topography of the Web can be expressed in at least four ways: (1) more sites on more servers in more places, (2) more pages and objects added to existing sites and pages, (3) changes in traffic, and (4) modifications to existing text, graphic, and other Web objects. This article does not address the first three factors (more sites, more pages, more traffic) in the growth of the Web. It focuses instead on changes to an existing set of Web documents. The article documents changes to an aging set of Web pages, first identified and "collected" in December 1996 and followed weekly thereafter. Results are reported through February 2001. The article addresses two related phenomena: (1) the life cycle of Web objects, and (2) changes to Web objects. These data reaffirm that the half-life of a Web page is approximately 2 years. There is variation among Web pages by top-level domain and by page type (navigation, content). Web page content appears to stabilize over time; aging pages change less often than they once did.
  10. Avramescu, A.: Teoria difuziei informatiei stiintifice (1997) 0.02
    0.018476836 = product of:
      0.05543051 = sum of:
        0.036918085 = product of:
          0.07383617 = sum of:
            0.07383617 = weight(_text_:theory in 3030) [ClassicSimilarity], result of:
              0.07383617 = score(doc=3030,freq=4.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.45482418 = fieldWeight in 3030, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3030)
          0.5 = coord(1/2)
        0.018512422 = product of:
          0.037024844 = sum of:
            0.037024844 = weight(_text_:22 in 3030) [ClassicSimilarity], result of:
              0.037024844 = score(doc=3030,freq=2.0), product of:
                0.1367084 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03903913 = queryNorm
                0.2708308 = fieldWeight in 3030, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3030)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The theory of diffusion can be successfully applied to the dissemination of scientific information by identifying space with a series of successive authors, and potential (temperature) with the interest of new authors in earlier published papers, measured by the number of citations. As the total number of citations equals the number of references, the conservation law is fulfilled and Fourier's parabolic differential equation can be applied.
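Fourier's parabolic equation invoked in the abstract is the one-dimensional diffusion (heat) equation; under the abstract's mapping, u plays the role of interest (potential/temperature) and x runs along the series of successive authors (the symbols here are my paraphrase, not the paper's formulation):

```latex
\frac{\partial u}{\partial t} \;=\; D\,\frac{\partial^{2} u}{\partial x^{2}}
```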
    Date
    22. 2.1999 16:16:11
    Footnote
    Translated title: Scientific information diffusion theory
  11. Debackere, K.; Clarysse, B.: Advanced bibliometric methods to model the relationship between entry behavior and networking in emerging technological communities (1998) 0.02
    0.016835473 = product of:
      0.05050642 = sum of:
        0.026105028 = product of:
          0.052210055 = sum of:
            0.052210055 = weight(_text_:theory in 330) [ClassicSimilarity], result of:
              0.052210055 = score(doc=330,freq=2.0), product of:
                0.16234003 = queryWeight, product of:
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.03903913 = queryNorm
                0.32160926 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.1583924 = idf(docFreq=1878, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=330)
          0.5 = coord(1/2)
        0.024401393 = product of:
          0.048802786 = sum of:
            0.048802786 = weight(_text_:methods in 330) [ClassicSimilarity], result of:
              0.048802786 = score(doc=330,freq=2.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.31093797 = fieldWeight in 330, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=330)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    Organizational ecology and social network theory are used to explain entries in technological communities. Using bibliometric data on 411 organizations in the field of plant biotechnology, we test several hypotheses that entry is not only influenced by the density of the field, but also by the structure of the R&D network within the community. The empirical findings point to the usefulness of bibliometric data in mapping change and evolution in technological communities, as well as to the effects of networking on entry behavior
  12. Schneider, J.W.; Costas, R.: Identifying potential "breakthrough" publications using refined citation analyses : three related explorative approaches (2017) 0.02
    0.016067442 = product of:
      0.04820232 = sum of:
        0.01334319 = product of:
          0.02668638 = sum of:
            0.02668638 = weight(_text_:29 in 3436) [ClassicSimilarity], result of:
              0.02668638 = score(doc=3436,freq=2.0), product of:
                0.13732746 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03903913 = queryNorm
                0.19432661 = fieldWeight in 3436, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3436)
          0.5 = coord(1/2)
        0.034859132 = product of:
          0.069718264 = sum of:
            0.069718264 = weight(_text_:methods in 3436) [ClassicSimilarity], result of:
              0.069718264 = score(doc=3436,freq=8.0), product of:
                0.15695344 = queryWeight, product of:
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.03903913 = queryNorm
                0.4441971 = fieldWeight in 3436, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.0204134 = idf(docFreq=2156, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3436)
          0.5 = coord(1/2)
      0.33333334 = coord(2/6)
    
    Abstract
    The article presents three advanced citation-based methods used to detect potential breakthrough articles among very highly cited articles. We approach the detection of such articles from three different perspectives in order to provide different typologies of breakthrough articles. In all three cases we use the hierarchical classification of scientific publications developed at CWTS based on direct citation relationships. We assume that such contextualized articles focus on similar research interests. We utilize the characteristics scores and scales (CSS) approach to partition citation distributions and implement a specific filtering algorithm to sort out potential highly-cited "followers," articles not considered breakthroughs. After invoking thresholds and filtering, three methods are explored: a very exclusive one where only the highest cited article in a micro-cluster is considered a potential breakthrough article (M1), as well as two conceptually different methods, one that detects potential breakthrough articles among the 2% highest cited articles according to CSS (M2a), and finally a more restrictive version where, in addition to the CSS 2% filter, knowledge diffusion is also considered (M2b). The advanced citation-based methods are explored and evaluated using validated publication sets linked to different Danish funding instruments, including centers of excellence.
    Date
    16.11.2017 13:29:02
  13. Riechert, M.; Schmitz, J.: Qualitätssicherung von Forschungsinformationen durch visuelle Repräsentation : das Fallbeispiel des "Informationssystems Promotionsnoten" (2017) 0.01
    0.014928497 = product of:
      0.089570984 = sum of:
        0.089570984 = sum of:
          0.052210055 = weight(_text_:theory in 3724) [ClassicSimilarity], result of:
            0.052210055 = score(doc=3724,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.32160926 = fieldWeight in 3724, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3724)
          0.03736093 = weight(_text_:29 in 3724) [ClassicSimilarity], result of:
            0.03736093 = score(doc=3724,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.27205724 = fieldWeight in 3724, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3724)
      0.16666667 = coord(1/6)
    
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  14. Tunger, D.: Bibliometrie : quo vadis? (2017) 0.01
    0.014928497 = product of:
      0.089570984 = sum of:
        0.089570984 = sum of:
          0.052210055 = weight(_text_:theory in 3519) [ClassicSimilarity], result of:
            0.052210055 = score(doc=3519,freq=2.0), product of:
              0.16234003 = queryWeight, product of:
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.03903913 = queryNorm
              0.32160926 = fieldWeight in 3519, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.1583924 = idf(docFreq=1878, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3519)
          0.03736093 = weight(_text_:29 in 3519) [ClassicSimilarity], result of:
            0.03736093 = score(doc=3519,freq=2.0), product of:
              0.13732746 = queryWeight, product of:
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.03903913 = queryNorm
              0.27205724 = fieldWeight in 3519, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5176873 = idf(docFreq=3565, maxDocs=44218)
                0.0546875 = fieldNorm(doc=3519)
      0.16666667 = coord(1/6)
    
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  15. Möller, T.: Woher stammt das Wissen über die Halbwertzeiten des Wissens? (2017) 0.01
    Source
    Theorie, Semantik und Organisation von Wissen: Proceedings der 13. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und dem 13. Internationalen Symposium der Informationswissenschaft der Higher Education Association for Information Science (HI) Potsdam (19.-20.03.2013): 'Theory, Information and Organization of Knowledge' / Proceedings der 14. Tagung der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) und Natural Language & Information Systems (NLDB) Passau (16.06.2015): 'Lexical Resources for Knowledge Organization' / Proceedings des Workshops der Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) auf der SEMANTICS Leipzig (1.09.2014): 'Knowledge Organization and Semantic Web' / Proceedings des Workshops der Polnischen und Deutschen Sektion der Internationalen Gesellschaft für Wissensorganisation (ISKO) Cottbus (29.-30.09.2011): 'Economics of Knowledge Production and Organization'. Hrsg. von W. Babik, H.P. Ohly u. K. Weber
  16. Simkin, M.V.; Roychowdhury, V.P.: ¬A mathematical theory of citing (2007) 0.01
    Abstract
    Recently we proposed a model in which, when a scientist writes a manuscript, he picks up several random papers, cites them, and also copies a fraction of their references. The model was motivated by our finding that a majority of scientific citations are copied from the reference lists of other papers. It accounted quantitatively for several properties of the empirically observed distribution of citations; however, it did not account for important features such as the power-law distribution of citations to papers published during the same year and the fact that the average rate of citing decreases as a paper ages. Here, we propose a modified model: when a scientist writes a manuscript, he picks up several random recent papers, cites them, and also copies some of their references. The difference from the original model is the word recent. We solve the model using methods from the theory of branching processes and find that it can explain the aforementioned features of the citation distribution, which our original model could not account for. The model can also explain "sleeping beauties" in science, that is, papers that are little cited for a decade or so and then awaken and attract many citations. Although much can be understood from purely random models, we find that to obtain good quantitative agreement with empirical citation data, one must introduce a Darwinian fitness parameter for the papers.
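    The copying model sketched in this abstract is concrete enough to simulate. Below is a minimal sketch of the recency-based variant; the window size, copy probability, and other parameter values are illustrative assumptions, not values from the paper:

    ```python
    import random

    def simulate_citations(n_papers=2000, cites_per_paper=5, copy_prob=0.5,
                           recent_window=50, seed=42):
        """Toy simulation of the citation-copying model with recency:
        each new paper cites a few random *recent* papers and, with some
        probability, also copies entries from their reference lists.
        All parameter values are illustrative assumptions."""
        rng = random.Random(seed)
        refs = [[]]              # refs[i] = list of papers cited by paper i
        counts = [0] * n_papers  # counts[i] = citations received by paper i
        for i in range(1, n_papers):
            cited = set()
            lo = max(0, i - recent_window)
            for _ in range(cites_per_paper):
                j = rng.randrange(lo, i)   # pick a random recent paper
                cited.add(j)
                for r in refs[j]:          # copy some of its references
                    if rng.random() < copy_prob:
                        cited.add(r)
            refs.append(sorted(cited))
            for c in cited:
                counts[c] += 1
        return counts

    counts = simulate_citations()
    ```

    Even this toy version produces a highly skewed citation distribution, since copied references preferentially re-cite already-cited papers.
    
    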
  17. Xu, L.: Research synthesis methods and library and information science : shared problems, limited diffusion (2016) 0.01
    Abstract
    Interests of researchers who engage with research synthesis methods (RSM) intersect with library and information science (LIS) research and practice. This intersection is described through a summary of conceptualizations of research synthesis in a diverse set of research fields and in the context of Swanson's (1986) discussion of undiscovered public knowledge. Through a selective literature review, research topics that intersect with LIS and RSM are outlined; they include open access, information retrieval, bias and research information ethics, referencing practices, citation patterns, and data science. Subsequently, bibliometrics and topic modeling are used to present a systematic overview of the visibility of RSM in LIS. This analysis indicates that RSM became visible in LIS in the 1980s. Overall, LIS research has drawn substantially from general and internal medicine, the field's own literature, and business, and is drawn on by the health and medical sciences, computing, and business. The analytical overview confirms that research synthesis is more visible in the health and medical literature within LIS, but suggests that LIS, as a meta-science, has the potential to make substantive contributions to a broader variety of fields on topics related to research synthesis methods.
    Date
    21. 7.2016 19:23:29
  18. Liu, D.-R.; Shih, M.-J.: Hybrid-patent classification based on patent-network analysis (2011) 0.01
    Abstract
    Effective patent management is essential for organizations to maintain their competitive advantage. The classification of patents is a critical part of patent management and industrial analysis. This study proposes a hybrid patent-classification approach that combines a novel patent-network-based classification method with three conventional classification methods to analyze query patents and predict their classes. The novel patent network contains various types of nodes that represent different features extracted from patent documents. The nodes are connected based on relationship metrics derived from the patent metadata. The proposed classification method predicts a query patent's class by analyzing all reachable nodes in the patent network and calculating their relevance to the query patent. It then classifies the query patent with a modified k-nearest-neighbor classifier. To further improve the approach, we combine it with content-based, citation-based, and metadata-based classification methods to develop a hybrid classification approach. We evaluate the performance of the hybrid approach on a test dataset of patent documents obtained from the U.S. Patent and Trademark Office and compare its performance with that of the three conventional methods. The results demonstrate that the proposed hybrid approach yields more accurate class predictions than the conventional methods alone.
    Date
    22. 1.2011 13:04:21
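    The two key steps in the abstract above, a modified k-nearest-neighbor vote over relevance-ranked patents and a weighted combination of several methods, can be sketched as follows. The relevance scores, class labels, and weights are illustrative stand-ins; the paper's actual network-derived relevance metrics are not given here:

    ```python
    from collections import Counter

    def knn_predict(query_scores, labeled_patents, k=3):
        """Rank labeled patents by a precomputed relevance score to the
        query patent and vote among the top k (a simple k-NN stand-in
        for the network-based relevance calculation)."""
        ranked = sorted(labeled_patents,
                        key=lambda p: query_scores[p["id"]], reverse=True)
        votes = Counter(p["class"] for p in ranked[:k])
        return votes.most_common(1)[0][0]

    def hybrid_predict(query, predictors, weights):
        """Combine class predictions from several methods (e.g. network-,
        content-, citation-, and metadata-based) by weighted voting."""
        tally = Counter()
        for method, weight in zip(predictors, weights):
            tally[method(query)] += weight
        return tally.most_common(1)[0][0]

    # Toy data: relevance of each labeled patent to one query patent.
    scores = {"p1": 0.9, "p2": 0.8, "p3": 0.2, "p4": 0.1}
    patents = [{"id": "p1", "class": "H04"}, {"id": "p2", "class": "H04"},
               {"id": "p3", "class": "G06"}, {"id": "p4", "class": "G06"}]
    network_vote = knn_predict(scores, patents)
    ```

    A hybrid call then weights several such predictors, e.g. `hybrid_predict(q, [network, content, citation], [0.5, 0.3, 0.2])`.
    
    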
  19. Chang, Y.-W.; Huang, M.-H.: ¬A study of the evolution of interdisciplinarity in library and information science : using three bibliometric methods (2012) 0.01
    Abstract
    This study uses three bibliometric methods (direct citation, bibliographic coupling, and co-authorship analysis) to investigate interdisciplinary changes in library and information science (LIS) from 1978 to 2007. The results reveal that LIS researchers most frequently cite publications in their own discipline. In addition, half of all co-authors of LIS articles are affiliated with LIS-related institutes. The results confirm that the degree of interdisciplinarity within LIS has increased, particularly in co-authorship. However, the study found that sources of direct citations in LIS articles are widely distributed across 30 disciplines, whereas co-authors of LIS articles are distributed across only 25 disciplines. The degree of interdisciplinarity ranges from 0.61 to 0.82, with citations to references in all articles being the highest and co-authorship being the lowest. Percentages of contribution attributable to LIS show a decreasing tendency based on the results of direct citation and co-authorship analysis, but an increasing tendency based on those of bibliographic coupling analysis. Such differences indicate that each of the three bibliometric methods has its own strengths and offers insight into different aspects of interdisciplinarity, suggesting that no single bibliometric method can reveal all aspects of interdisciplinarity, given its multifaceted nature.
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.1, S.22-33
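    As a rough illustration of how a "degree of interdisciplinarity" can fall in a 0-1 range like the 0.61-0.82 reported above, one simple proxy is the share of citations (or co-authors) that leave the home discipline. The abstract does not specify the measure the authors actually used, so this is only an assumed example:

    ```python
    def interdisciplinarity_degree(counts_by_discipline, home="LIS"):
        """Share of citations (or co-authors) outside the home discipline.
        An illustrative proxy, not necessarily the authors' measure."""
        total = sum(counts_by_discipline.values())
        return 1 - counts_by_discipline.get(home, 0) / total

    # Hypothetical citation counts for one set of LIS articles.
    sample = {"LIS": 120, "Computer Science": 50, "Medicine": 20, "Business": 10}
    degree = interdisciplinarity_degree(sample)  # 1 - 120/200 = 0.4
    ```
    
    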
  20. Leydesdorff, L.; Vaughan, L.: Co-occurrence matrices and their applications in information science : extending ACA to the Web environment (2006) 0.01
    Abstract
    Co-occurrence matrices, such as cocitation, coword, and colink matrices, have been used widely in the information sciences. However, confusion and controversy have hindered the proper statistical analysis of these data. The underlying problem, in our opinion, involved understanding the nature of various types of matrices. This article discusses the difference between a symmetrical cocitation matrix and an asymmetrical citation matrix as well as the appropriate statistical techniques that can be applied to each of these matrices, respectively. Similarity measures (such as the Pearson correlation coefficient or the cosine) should not be applied to the symmetrical cocitation matrix but can be applied to the asymmetrical citation matrix to derive the proximity matrix. The argument is illustrated with examples. The study then extends the application of co-occurrence matrices to the Web environment, in which the nature of the available data and thus data collection methods are different from those of traditional databases such as the Science Citation Index. A set of data collected with the Google Scholar search engine is analyzed by using both the traditional methods of multivariate analysis and the new visualization software Pajek, which is based on social network analysis and graph theory.
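    The abstract's recommendation, to apply similarity measures such as the cosine to the asymmetrical citation matrix in order to derive a proximity matrix, can be sketched directly. The small matrix below is invented for illustration; rows are citing documents and columns are cited authors:

    ```python
    import math

    def cosine(u, v):
        """Cosine similarity between two citation-profile vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0

    # Asymmetrical citation matrix: rows = citing documents,
    # columns = cited authors (illustrative counts).
    citation_matrix = [
        [2, 0, 1],
        [1, 1, 0],
        [0, 3, 1],
    ]

    # Derive the author-by-author proximity matrix from the columns,
    # as the abstract recommends (rather than from a symmetrical
    # cocitation matrix).
    cols = list(zip(*citation_matrix))
    proximity = [[cosine(a, b) for b in cols] for a in cols]
    ```

    The resulting proximity matrix is symmetric with ones on the diagonal, which is what similarity-based mapping routines expect as input.
    
    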

Languages

  • e 376
  • d 22
  • sp 2
  • ro 1

Types

  • a 388
  • m 9
  • el 5
  • r 3
  • s 3
  • b 1