Search (83 results, page 1 of 5)

  • author_ss:"Leydesdorff, L."
  1. Leydesdorff, L.; Ivanova, I.A.: Mutual redundancies in interhuman communication systems : steps toward a calculus of processing meaning (2014) 0.00
    0.0042066295 = product of:
      0.016826518 = sum of:
        0.016826518 = weight(_text_:information in 1211) [ClassicSimilarity], result of:
          0.016826518 = score(doc=1211,freq=16.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27429342 = fieldWeight in 1211, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1211)
      0.25 = coord(1/4)
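    The tree above is Lucene's ClassicSimilarity explain() output; the same arithmetic recurs for every record below. A minimal sketch reproducing this record's score from the factors reported in the tree (the function name is ours; the numbers are Lucene's):
      import math

      def classic_similarity(freq, idf, query_norm, field_norm, coord):
          """Recompute a Lucene ClassicSimilarity score from an explain() tree."""
          tf = math.sqrt(freq)                  # 4.0 = tf(freq=16.0)
          query_weight = idf * query_norm       # 0.06134496 = queryWeight
          field_weight = tf * idf * field_norm  # 0.27429342 = fieldWeight
          return coord * query_weight * field_weight

      print(classic_similarity(freq=16.0, idf=1.7554779, query_norm=0.034944877,
                               field_norm=0.0390625, coord=0.25))
      # -> 0.0042066295, the score shown for record no. 1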
    
    Abstract
    The study of interhuman communication requires a more complex framework than Claude E. Shannon's (1948) mathematical theory of communication because "information" is defined in the latter case as meaningless uncertainty. Assuming that meaning cannot be communicated, we extend Shannon's theory by defining mutual redundancy as a positional counterpart of the relational communication of information. Mutual redundancy indicates the surplus of meanings that can be provided to the exchanges in reflexive communications. The information is redundant because it is based on "pure sets" (i.e., without subtraction of mutual information in the overlaps). We show that in the three-dimensional case (e.g., of a triple helix of university-industry-government relations), mutual redundancy is equal to mutual information (Rxyz = Txyz); but when the dimensionality is even, the sign is different. We generalize to the measurement in N dimensions and proceed to the interpretation. Using Niklas Luhmann's (1984-1995) social systems theory and/or Anthony Giddens's (1979, 1984) structuration theory, mutual redundancy can be provided with an interpretation in the sociological case: Different meaning-processing structures code and decode with other algorithms. A surplus of ("absent") options can then be generated that add to the redundancy. Luhmann's "functional (sub)systems" of expectations or Giddens's "rule-resource sets" are positioned mutually, but coupled operationally in events or "instantiated" in actions. Shannon-type information is generated by the mediation, but the "structures" are (re-)positioned toward one another as sets of (potentially counterfactual) expectations. The structural differences among the coding and decoding algorithms provide a source of additional options in reflexive and anticipatory communications.
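    The equality Rxyz = Txyz above concerns the three-dimensional mutual information Txyz = Hx + Hy + Hz - Hxy - Hxz - Hyz + Hxyz. A minimal sketch of this measure on an empirical distribution; the toy triples are illustrative, not the paper's data:
      import math
      from collections import Counter

      def H(counter):
          """Shannon entropy (bits) of an empirical frequency table."""
          n = sum(counter.values())
          return -sum(c / n * math.log2(c / n) for c in counter.values())

      def T_xyz(triples):
          """Txyz = Hx + Hy + Hz - Hxy - Hxz - Hyz + Hxyz; for odd N (here 3),
          mutual redundancy Rxyz equals Txyz and can be negative."""
          return (H(Counter(x for x, y, z in triples))
                  + H(Counter(y for x, y, z in triples))
                  + H(Counter(z for x, y, z in triples))
                  - H(Counter((x, y) for x, y, z in triples))
                  - H(Counter((x, z) for x, y, z in triples))
                  - H(Counter((y, z) for x, y, z in triples))
                  + H(Counter(triples)))

      # Toy events coded on three dimensions (e.g., university-industry-government)
      events = [(0, 0, 0), (1, 1, 1), (0, 1, 0), (1, 0, 1), (0, 0, 1), (1, 1, 0)]
      print(T_xyz(events))  # negative here: a surplus of options (redundancy)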
    Source
    Journal of the Association for Information Science and Technology. 65(2014) no.2, S.386-399
    Theme
    Information
  2. Leydesdorff, L.: Similarity measures, author cocitation analysis, and information theory (2005) 0.00
    0.004164351 = product of:
      0.016657405 = sum of:
        0.016657405 = weight(_text_:information in 3471) [ClassicSimilarity], result of:
          0.016657405 = score(doc=3471,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.27153665 = fieldWeight in 3471, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3471)
      0.25 = coord(1/4)
    
    Abstract
    The use of Pearson's correlation coefficient in Author Cocitation Analysis was compared with Salton's cosine measure in a number of recent contributions. Unlike the Pearson correlation, the cosine is insensitive to the number of zeros. However, one has the option of applying a logarithmic transformation in correlation analysis. Information calculus is based on the logarithmic transformation and provides non-parametric statistics. Using this methodology, one can cluster a document set in a precise way and express the differences in terms of bits of information. The algorithm is explained and applied to the data set that was made the subject of this discussion.
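    The zero-insensitivity claim is easy to verify: appending zeros to both vectors leaves the cosine unchanged but shifts the Pearson correlation. A minimal sketch with illustrative cocitation profiles, not the paper's data:
      import math

      def cosine(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

      def pearson(a, b):
          n = len(a)
          ma, mb = sum(a) / n, sum(b) / n
          cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
          sa = math.sqrt(sum((x - ma) ** 2 for x in a))
          sb = math.sqrt(sum((y - mb) ** 2 for y in b))
          return cov / (sa * sb)

      a, b = [5, 3, 0, 1], [4, 2, 1, 0]
      print(cosine(a, b), pearson(a, b))
      # Padding both profiles with zeros (authors cited by neither) changes
      # only the Pearson correlation:
      print(cosine(a + [0] * 8, b + [0] * 8), pearson(a + [0] * 8, b + [0] * 8))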
    Source
    Journal of the American Society for Information Science and Technology. 56(2005) no.7, S.769-772
  3. Leydesdorff, L.: Should co-occurrence data be normalized : a rejoinder (2007) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 627) [ClassicSimilarity], result of:
          0.014277775 = score(doc=627,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 627, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=627)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.14, S.2411-2413
  4. Leydesdorff, L.: The communication of meaning and the structuration of expectations : Giddens' "structuration theory" and Luhmann's "self-organization" (2010) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 4004) [ClassicSimilarity], result of:
          0.014277775 = score(doc=4004,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 4004, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=4004)
      0.25 = coord(1/4)
    
    Abstract
    The communication of meaning as distinct from (Shannon-type) information is central to Luhmann's social systems theory and Giddens' structuration theory of action. These theories share an emphasis on reflexivity, but focus on meaning along a divide between interhuman communication and intentful action as two different systems of reference. By recombining these two theories into a theory about the structuration of expectations, the interactions, organization, and self-organization of intentional communications can be simulated based on algorithms from the computation of anticipatory systems. The self-organizing and organizing layers remain rooted in the double contingency of the human encounter, which provides the variation. Organization and self-organization of communication are reflexive upon and therefore reconstructive of each other. Using mutual information in three dimensions, the imprint of meaning processing in the modeling system on the historical organization of uncertainty in the modeled system can be measured. This is shown empirically in the case of intellectual organization as "structurating" structure in the textual domain of scientific articles.
    Source
    Journal of the American Society for Information Science and Technology. 61(2010) no.10, S.2138-2150
    Theme
    Information
  5. Leydesdorff, L.: Accounting for the uncertainty in the evaluation of percentile ranks (2012) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 447) [ClassicSimilarity], result of:
          0.014277775 = score(doc=447,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 447, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=447)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.11, S.2349-2350
  6. Bornmann, L.; Leydesdorff, L.: Statistical tests and research assessments : a comment on Schneider (2012) (2013) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 752) [ClassicSimilarity], result of:
          0.014277775 = score(doc=752,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 752, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=752)
      0.25 = coord(1/4)
    
    Source
    Journal of the American Society for Information Science and Technology. 64(2013) no.6, S.1306-1308
  7. Leydesdorff, L.; Bornmann, L.: The operationalization of "fields" as WoS subject categories (WCs) in evaluative bibliometrics : the cases of "library and information science" and "science & technology studies" (2016) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 2779) [ClassicSimilarity], result of:
          0.014277775 = score(doc=2779,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 2779, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2779)
      0.25 = coord(1/4)
    
    Abstract
    Normalization of citation scores using reference sets based on Web of Science subject categories (WCs) has become an established ("best") practice in evaluative bibliometrics. For example, the Times Higher Education World University Rankings are, among other things, based on this operationalization. However, WCs were developed decades ago for the purpose of information retrieval and evolved incrementally with the database; the classification is machine-based and partially manually corrected. Using the WC "information science & library science" and the WCs attributed to journals in the field of "science and technology studies," we show that WCs do not provide sufficient analytical clarity to carry bibliometric normalization in evaluation practices because of "indexer effects." Can the compliance with "best practices" be replaced with an ambition to develop "best possible practices"? New research questions can then be envisaged.
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.3, S.707-714
  8. Leydesdorff, L.; Wagner, C.; Bornmann, L.: Replicability and the public/private divide (2016) 0.00
    0.0035694437 = product of:
      0.014277775 = sum of:
        0.014277775 = weight(_text_:information in 3023) [ClassicSimilarity], result of:
          0.014277775 = score(doc=3023,freq=2.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.23274569 = fieldWeight in 3023, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=3023)
      0.25 = coord(1/4)
    
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.7, S.1777-1778
  9. Leydesdorff, L.; Bornmann, L.: Integrated impact indicators compared with impact factors : an alternative research design with policy implications (2011) 0.00
    0.0033256328 = product of:
      0.013302531 = sum of:
        0.013302531 = weight(_text_:information in 4919) [ClassicSimilarity], result of:
          0.013302531 = score(doc=4919,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.21684799 = fieldWeight in 4919, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4919)
      0.25 = coord(1/4)
    
    Abstract
    In bibliometrics, the association of "impact" with central-tendency statistics is mistaken. Impacts add up, and citation curves therefore should be integrated instead of averaged. For example, the journals MIS Quarterly and Journal of the American Society for Information Science and Technology differ by a factor of 2 in terms of their respective impact factors (IF), but the journal with the lower IF has the higher impact. Using percentile ranks (e.g., top-1%, top-10%, etc.), an Integrated Impact Indicator (I3) can be based on integration of the citation curves, but after normalization of the citation curves to the same scale. The results across document sets can be compared as percentages of the total impact of a reference set. Total number of citations, however, should not be used instead because the shape of the citation curves is then not appreciated. I3 can be applied to any document set and any citation window. The results of the integration (summation) are fully decomposable in terms of journals or institutional units such as nations, universities, and so on because percentile ranks are determined at the paper level. In this study, we first compare I3 with IFs for the journals in two Institute for Scientific Information subject categories ("Information Science & Library Science" and "Multidisciplinary Sciences"). The library and information science set is additionally decomposed in terms of nations. Policy implications of this possible paradigm shift in citation impact analysis are specified.
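    A minimal sketch of the integration step: percentile ranks are determined at the paper level within a reference set and then summed, not averaged. The citation counts below are illustrative, not the study's data:
      from bisect import bisect_left

      def percentile_rank(c, reference):
          """Share (0-100) of the sorted reference set below a citation count c."""
          return 100.0 * bisect_left(reference, c) / len(reference)

      def i3(citations, reference):
          """Integrated Impact Indicator: summing (integrating) percentile ranks
          keeps the indicator decomposable at the paper level."""
          ref = sorted(reference)
          return sum(percentile_rank(c, ref) for c in citations)

      reference = [0, 0, 1, 1, 2, 3, 3, 5, 8, 13, 21, 55]  # pooled reference set
      journal_a = [0, 1, 1, 2, 3, 55]   # more papers, mixed impact
      journal_b = [21, 55]              # fewer papers, high impact
      print(i3(journal_a, reference), i3(journal_b, reference))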
    Source
    Journal of the American Society for Information Science and Technology. 62(2011) no.11, S.2133-2146
  10. Leydesdorff, L.; Johnson, M.W.; Ivanova, I.: Toward a calculus of redundancy : signification, codification, and anticipation in cultural evolution (2018) 0.00
    0.0033256328 = product of:
      0.013302531 = sum of:
        0.013302531 = weight(_text_:information in 4463) [ClassicSimilarity], result of:
          0.013302531 = score(doc=4463,freq=10.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.21684799 = fieldWeight in 4463, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4463)
      0.25 = coord(1/4)
    
    Abstract
    This article considers the relationships among meaning generation, selection, and the dynamics of discourse from a variety of perspectives ranging from information theory and biology to sociology. Following Husserl's idea of a horizon of meanings in intersubjective communication, we propose a way in which, using Shannon's equations, the generation and selection of meanings from a horizon of possibilities can be considered probabilistically. The information-theoretical dynamics we articulate considers a process of meaning generation within cultural evolution: information is imbued with meaning, and through this process, the number of options for the selection of meaning in discourse proliferates. The redundancy of possible meanings contributes to a codification of expectations within the discourse. Unlike hardwired DNA, the codes of nonbiological systems can coevolve with the variations. Spanning horizons of meaning, the codes structure the communications as selection environments that shape discourses. Discursive knowledge can be considered as meta-coded communication that enables us to translate among differently coded communications. The dynamics of discursive knowledge production can thus infuse the historical dynamics with a cultural evolution by adding options, that is, by increasing redundancy. A calculus of redundancy is presented as an indicator whereby these dynamics of discourse and meaning may be explored empirically.
    Source
    Journal of the Association for Information Science and Technology. 69(2018) no.10, S.1181-1192
    Theme
    Information
  11. Leydesdorff, L.; Bornmann, L.: Mapping (USPTO) patent data using overlays to Google Maps (2012) 0.00
    0.003091229 = product of:
      0.012364916 = sum of:
        0.012364916 = weight(_text_:information in 288) [ClassicSimilarity], result of:
          0.012364916 = score(doc=288,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.20156369 = fieldWeight in 288, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=288)
      0.25 = coord(1/4)
    
    Abstract
    A technique is developed using patent information available online (at the U.S. Patent and Trademark Office) for the generation of Google Maps. The overlays indicate both the quantity and the quality of patents at the city level. This information is relevant for research questions in technology analysis, innovation studies, and evolutionary economics, as well as economic geography. The resulting maps can also be relevant for technological innovation policies and research and development management, because the U.S. market can be considered the leading market for patenting and patent competition. In addition to the maps, the routines provide quantitative data about the patents for statistical analysis. The cities on the map are colored according to the results of significance tests. The overlays are explored for the Netherlands as a "national system of innovations" and further elaborated in two cases of emerging technologies: ribonucleic acid interference (RNAi) and nanotechnology.
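    A minimal sketch of the coloring logic described above, with toy counts rather than USPTO data; the actual routines retrieve the patents online and write the results as Google Maps overlays, and the thresholds and colors here are illustrative assumptions:
      import math

      def z_score(cited, total, p0):
          """z-test of a city's share of highly cited patents against baseline p0."""
          return (cited / total - p0) / math.sqrt(p0 * (1 - p0) / total)

      cities = {"Eindhoven": (120, 14), "Delft": (60, 3), "Groningen": (20, 1)}
      n_all = sum(total for total, _ in cities.values())
      c_all = sum(cited for _, cited in cities.values())
      p0 = c_all / n_all  # national baseline rate
      for city, (total, cited) in cities.items():
          z = z_score(cited, total, p0)
          color = "green" if z > 1.96 else ("red" if z < -1.96 else "yellow")
          print(f"{city}: n={total}, z={z:+.2f} -> {color} marker")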
    Source
    Journal of the American Society for Information Science and Technology. 63(2012) no.7, S.1442-1458
  12. Bauer, J.; Leydesdorff, L.; Bornmann, L.: Highly cited papers in Library and Information Science (LIS) : authors, institutions, and network structures (2016) 0.00
    0.0029745363 = product of:
      0.011898145 = sum of:
        0.011898145 = weight(_text_:information in 3231) [ClassicSimilarity], result of:
          0.011898145 = score(doc=3231,freq=8.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.19395474 = fieldWeight in 3231, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3231)
      0.25 = coord(1/4)
    
    Abstract
    As a follow-up to the highly cited authors list published by Thomson Reuters in June 2014, we analyzed the top 1% most frequently cited papers published between 2002 and 2012 included in the Web of Science (WoS) subject category "Information Science & Library Science." In all, 798 authors contributed to 305 top 1% publications; these authors were employed at 275 institutions. The authors at Harvard University contributed the largest number of papers, when the addresses are whole-number counted. However, Leiden University leads the ranking if fractional counting is used. Twenty-three of the 798 authors were also listed as most highly cited authors by Thomson Reuters in June 2014 (http://highlycited.com/). Twelve of these 23 authors were involved in publishing 4 or more of the 305 papers under study. Analysis of coauthorship relations among the 798 highly cited scientists shows that coauthorships are based on common interests in a specific topic. Three topics were important between 2002 and 2012: (a) collection and exploitation of information in clinical practices; (b) use of the Internet in public communication and commerce; and (c) scientometrics.
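    The two address-counting rules mentioned here can be made precise with a small sketch; the papers and institutions are placeholders, not the study's data:
      from collections import Counter

      def count_addresses(papers, fractional=False):
          """Whole-number counting credits every address with 1 per paper;
          fractional counting divides each paper's single credit among its addresses."""
          tally = Counter()
          for addresses in papers:
              credit = 1 / len(addresses) if fractional else 1
              for institution in addresses:
                  tally[institution] += credit
          return tally

      papers = [["Harvard", "Leiden"], ["Harvard"], ["Leiden", "Amsterdam", "Harvard"]]
      print(count_addresses(papers))                   # whole-number counts
      print(count_addresses(papers, fractional=True))  # fractional counts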
    Source
    Journal of the Association for Information Science and Technology. 67(2016) no.12, S.3095-3100
  13. Leydesdorff, L.: Dynamic and evolutionary updates of classificatory schemes in scientific journal structures (2002) 0.00
    0.0029446408 = product of:
      0.011778563 = sum of:
        0.011778563 = weight(_text_:information in 1249) [ClassicSimilarity], result of:
          0.011778563 = score(doc=1249,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1920054 = fieldWeight in 1249, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1249)
      0.25 = coord(1/4)
    
    Abstract
    Can the inclusion of new journals in the Science Citation Index be used for the indication of structural change in the database, and how can this change be compared with reorganizations of relations among previously included journals? Change in the number of journals (n) is distinguished from change in the number of journal categories (m). Although the number of journals can be considered as a given at each moment in time, the number of journal categories is based on a reconstruction that is time-stamped ex post. The reflexive reconstruction is in need of an update when new information becomes available in a next year. Implications of this shift towards an evolutionary perspective are specified.
    Source
    Journal of the American Society for Information Science and Technology. 53(2002) no.12, S.987-994
  14. Leydesdorff, L.; Zhou, P.: Co-word analysis using the Chinese character set (2008) 0.00
    0.0029446408 = product of:
      0.011778563 = sum of:
        0.011778563 = weight(_text_:information in 1970) [ClassicSimilarity], result of:
          0.011778563 = score(doc=1970,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.1920054 = fieldWeight in 1970, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1970)
      0.25 = coord(1/4)
    
    Abstract
    Until recently, Chinese texts could not be studied using co-word analysis because the words are not separated by spaces in Chinese (and Japanese). A word can be composed of one or more characters. The online availability of programs that separate Chinese texts makes it possible to analyze them using semantic maps. Chinese characters contain not only information but also meaning. This may enhance the readability of semantic maps. In this study, we analyze 58 words which occur 10 or more times in the 1,652 journal titles of the China Scientific and Technical Papers and Citations Database. The word-occurrence matrix is visualized and factor-analyzed.
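    A minimal sketch of the matrix construction after word segmentation; the pre-segmented titles below are placeholders for the 1,652 Chinese journal titles:
      from collections import Counter
      from itertools import combinations

      titles = [["science", "citation", "database"],
                ["chinese", "science", "journal"],
                ["citation", "database", "journal"]]

      vocab = sorted({w for title in titles for w in title})
      # Word-occurrence matrix: one row per title, one column per word
      occurrence = [[title.count(w) for w in vocab] for title in titles]
      # Co-word counts: how often two words appear in the same title
      cowords = Counter(pair for title in titles
                        for pair in combinations(sorted(set(title)), 2))
      print(vocab)
      print(occurrence)
      print(cowords.most_common(3))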
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.9, S.1528-1530
  15. Leydesdorff, L.; Vaughan, L.: Co-occurrence matrices and their applications in information science : extending ACA to the Web environment (2006) 0.00
    0.0025760243 = product of:
      0.010304097 = sum of:
        0.010304097 = weight(_text_:information in 6113) [ClassicSimilarity], result of:
          0.010304097 = score(doc=6113,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16796975 = fieldWeight in 6113, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6113)
      0.25 = coord(1/4)
    
    Abstract
    Co-occurrence matrices, such as cocitation, coword, and colink matrices, have been used widely in the information sciences. However, confusion and controversy have hindered the proper statistical analysis of these data. The underlying problem, in our opinion, involved understanding the nature of various types of matrices. This article discusses the difference between a symmetrical cocitation matrix and an asymmetrical citation matrix as well as the appropriate statistical techniques that can be applied to each of these matrices, respectively. Similarity measures (such as the Pearson correlation coefficient or the cosine) should not be applied to the symmetrical cocitation matrix but can be applied to the asymmetrical citation matrix to derive the proximity matrix. The argument is illustrated with examples. The study then extends the application of co-occurrence matrices to the Web environment, in which the nature of the available data and thus data collection methods are different from those of traditional databases such as the Science Citation Index. A set of data collected with the Google Scholar search engine is analyzed by using both the traditional methods of multivariate analysis and the new visualization software Pajek, which is based on social network analysis and graph theory.
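    A minimal sketch of the recommended procedure: the similarity measure is applied to the asymmetrical citation matrix, yielding a proximity matrix. The toy matrix is illustrative; rows are citing documents, columns cited authors:
      import math

      def cosine(u, v):
          dot = sum(a * b for a, b in zip(u, v))
          return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

      C = [[1, 0, 1],   # asymmetrical occurrence matrix:
           [1, 1, 0],   # rows = citing documents,
           [0, 1, 1],   # columns = cited authors
           [1, 0, 1]]
      profiles = list(zip(*C))  # each author's profile over the citing documents
      proximity = [[cosine(p, q) for q in profiles] for p in profiles]
      for row in proximity:
          print([round(value, 2) for value in row])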
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.12, S.1616-1628
  16. Lucio-Arias, D.; Leydesdorff, L.: Main-path analysis and path-dependent transitions in HistCite(TM)-based historiograms (2008) 0.00
    0.0025760243 = product of:
      0.010304097 = sum of:
        0.010304097 = weight(_text_:information in 2373) [ClassicSimilarity], result of:
          0.010304097 = score(doc=2373,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16796975 = fieldWeight in 2373, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2373)
      0.25 = coord(1/4)
    
    Abstract
    With the program HistCite(TM) it is possible to generate and visualize the most relevant papers in a set of documents retrieved from the Science Citation Index. Historical reconstructions of scientific developments can be represented chronologically as developments in networks of citation relations extracted from scientific literature. This study aims to go beyond the historical reconstruction of scientific knowledge, enriching the output of HistCite(TM) with algorithms from social-network analysis and information theory. Using main-path analysis, it is possible to highlight the structural backbone in the development of a scientific field. The expected information value of the message can be used to indicate whether change in the distribution (of citations) has occurred to such an extent that a path-dependency is generated. This provides us with a measure of evolutionary change between subsequent documents. The forgetting and rewriting of historically prior events at the research front can thus be indicated. These three methods - HistCite, main path and path dependent transitions - are applied to a set of documents related to fullerenes and the fullerene-like structures known as nanotubes.
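    The "expected information value of the message" used here is, in Theil's formulation, I(q:p) = SUM q_i * log2(q_i / p_i), the information of the posterior distribution q given the prior p. A minimal sketch on toy citation shares, not the fullerene data:
      import math

      def expected_information(q, p):
          """I(q:p) = sum of q_i * log2(q_i / p_i) over classes with q_i > 0."""
          return sum(qi * math.log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

      prior = [0.50, 0.30, 0.20]      # citation shares over documents in year t
      posterior = [0.20, 0.30, 0.50]  # citation shares in year t + 1
      print(expected_information(posterior, prior))
      # Large values indicate a reorganized distribution, i.e., a possible
      # path-dependent transition at the research front.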
    Source
    Journal of the American Society for Information Science and Technology. 59(2008) no.12, S.1948-1962
  17. Leydesdorff, L.; Ivanova, I.: The measurement of "interdisciplinarity" and "synergy" in scientific and extra-scientific collaborations (2021) 0.00
    0.0025760243 = product of:
      0.010304097 = sum of:
        0.010304097 = weight(_text_:information in 208) [ClassicSimilarity], result of:
          0.010304097 = score(doc=208,freq=6.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16796975 = fieldWeight in 208, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=208)
      0.25 = coord(1/4)
    
    Abstract
    Problem solving often requires crossing boundaries, such as those between disciplines. When policy-makers call for "interdisciplinarity," however, they often mean "synergy." Synergy is generated when the whole offers more possibilities than the sum of its parts. An increase in the number of options above the sum of the options in subsets can be measured as redundancy; that is, the number of not-yet-realized options. The number of options available to an innovation system for realization can be as decisive for the system's survival as the historically already-realized innovations. Unlike "interdisciplinarity," "synergy" can also be generated in sectorial or geographical collaborations. The measurement of "synergy," however, requires a methodology different from the measurement of "interdisciplinarity." In this study, we discuss recent advances in the operationalization and measurement of "interdisciplinarity," and propose a methodology for measuring "synergy" based on information theory. The sharing of meanings attributed to information from different perspectives can increase redundancy. Increasing redundancy reduces the relative uncertainty, for example, in niches. The operationalization of the two concepts-"interdisciplinarity" and "synergy"-as different and partly overlapping indicators allows for distinguishing between the effects and the effectiveness of science-policy interventions in research priorities.
    Source
    Journal of the Association for Information Science and Technology. 72(2021) no.4, S.387-402
  18. Leydesdorff, L.: The university-industry knowledge relationship : analyzing patents and the science base of technologies (2004) 0.00
    0.0025239778 = product of:
      0.010095911 = sum of:
        0.010095911 = weight(_text_:information in 2887) [ClassicSimilarity], result of:
          0.010095911 = score(doc=2887,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16457605 = fieldWeight in 2887, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=2887)
      0.25 = coord(1/4)
    
    Abstract
    Via the Internet, information scientists can obtain cost-free access to large databases in the "hidden" or "deep Web." These databases are often structured far more than the Internet domains themselves. The patent database of the U.S. Patent and Trademark Office is used in this study to examine the science base of patents in terms of the literature references in these patents. University-based patents at the global level are compared with results when using the national economy of the Netherlands as a system of reference. Methods for accessing the online databases and for the visualization of the results are specified. The conclusion is that "biotechnology" has historically generated a model for theorizing about university-industry relations that cannot easily be generalized to other sectors and disciplines.
    Source
    Journal of the American Society for Information Science and Technology. 55(2004) no.11, S.991-1001
  19. Leydesdorff, L.; Bensman, S.: Classification and powerlaws : the logarithmic transformation (2006) 0.00
    0.0025239778 = product of:
      0.010095911 = sum of:
        0.010095911 = weight(_text_:information in 6007) [ClassicSimilarity], result of:
          0.010095911 = score(doc=6007,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16457605 = fieldWeight in 6007, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=6007)
      0.25 = coord(1/4)
    
    Abstract
    Logarithmic transformation of the data has been recommended by the literature in the case of highly skewed distributions such as those commonly found in information science. The purpose of the transformation is to make the data conform to the lognormal law of error for inferential purposes. How does this transformation affect the analysis? We factor analyze and visualize the citation environment of the Journal of the American Chemical Society (JACS) before and after a logarithmic transformation. The transformation strongly reduces the variance necessary for classificatory purposes and therefore is counterproductive to the purposes of the descriptive statistics. We recommend against the logarithmic transformation when sets cannot be defined unambiguously. The intellectual organization of the sciences is reflected in the curvilinear parts of the citation distributions while negative powerlaws fit excellently to the tails of the distributions.
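    The reported compression of the variance is easy to reproduce; a minimal sketch on synthetic skewed counts, not the JACS citation data:
      import math
      from statistics import pvariance

      citations = [1, 1, 2, 2, 3, 5, 8, 20, 150, 900]  # highly skewed counts
      logged = [math.log10(c) for c in citations]      # logarithmic transformation
      print(pvariance(citations))  # dominated by the tail
      print(pvariance(logged))     # strongly reduced, flattening the classification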
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.11, S.1470-1486
  20. Leydesdorff, L.: Betweenness centrality as an indicator of the interdisciplinarity of scientific journals (2007) 0.00
    0.0025239778 = product of:
      0.010095911 = sum of:
        0.010095911 = weight(_text_:information in 453) [ClassicSimilarity], result of:
          0.010095911 = score(doc=453,freq=4.0), product of:
            0.06134496 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.034944877 = queryNorm
            0.16457605 = fieldWeight in 453, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=453)
      0.25 = coord(1/4)
    
    Abstract
    In addition to science citation indicators of journals like impact and immediacy, social network analysis provides a set of centrality measures like degree, betweenness, and closeness centrality. These measures are first analyzed for the entire set of 7,379 journals included in the Journal Citation Reports of the Science Citation Index and the Social Sciences Citation Index 2004 (Thomson ISI, Philadelphia, PA), and then also in relation to local citation environments that can be considered as proxies of specialties and disciplines. Betweenness centrality is shown to be an indicator of the interdisciplinarity of journals, but only in local citation environments and after normalization; otherwise, the influence of degree centrality (size) overshadows the betweenness-centrality measure. The indicator is applied to a variety of citation environments, including policy-relevant ones like biotechnology and nanotechnology. The values of the indicator remain sensitive to the delineations of the set because of the indicator's local character. Maps showing interdisciplinarity of journals in terms of betweenness centrality can be drawn using information about journal citation environments, which is available online.
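    A minimal sketch of the indicator on a toy citation environment, using the networkx implementation of betweenness centrality; the journals and links are placeholders, not JCR data:
      import networkx as nx

      # Toy local citation environment: nodes are journals, edges aggregate citations
      G = nx.Graph([("JASIST", "Scientometrics"),
                    ("JASIST", "Inf. Process. Manag."),
                    ("Scientometrics", "Res. Policy"),
                    ("JASIST", "Soc. Netw."),
                    ("Res. Policy", "Soc. Netw.")])
      # Normalized betweenness centrality: high values flag journals that broker
      # between otherwise weakly connected neighborhoods (interdisciplinarity)
      for journal, value in sorted(nx.betweenness_centrality(G, normalized=True).items(),
                                   key=lambda item: -item[1]):
          print(f"{journal}: {value:.2f}")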
    Source
    Journal of the American Society for Information Science and Technology. 58(2007) no.9, S.1303-1319