Search (145 results, page 1 of 8)

  • theme_ss:"Informetrie"
  1. Zitt, M.; Lelu, A.; Bassecoulard, E.: Hybrid citation-word representations in science mapping : Portolan charts of research fields? (2011) 0.08
    0.07588121 = product of:
      0.15176243 = sum of:
        0.15176243 = sum of:
          0.11537294 = weight(_text_:word in 4130) [ClassicSimilarity], result of:
            0.11537294 = score(doc=4130,freq=4.0), product of:
              0.28165168 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.05371688 = queryNorm
              0.40962988 = fieldWeight in 4130, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4130)
          0.036389478 = weight(_text_:22 in 4130) [ClassicSimilarity], result of:
            0.036389478 = score(doc=4130,freq=2.0), product of:
              0.18810736 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05371688 = queryNorm
              0.19345059 = fieldWeight in 4130, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4130)
      0.5 = coord(1/2)
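    The nested "product of / sum of" block above is Lucene's ClassicSimilarity (TF-IDF) explain output. As a rough cross-check, its arithmetic can be reproduced in a few lines of Python; the function name below is ours, and the coord(1/2) factor is simply copied from the tree rather than derived from the query structure.

      import math

      def classic_term_score(freq, idf, query_norm, field_norm):
          # ClassicSimilarity per-term score:
          #   queryWeight = idf * queryNorm
          #   fieldWeight = sqrt(freq) * idf * fieldNorm
          #   score       = queryWeight * fieldWeight
          return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm)

      # values taken from the explain tree of result 1 (doc 4130)
      word_part = classic_term_score(4.0, 5.2432623, 0.05371688, 0.0390625)
      num_part  = classic_term_score(2.0, 3.5018296, 0.05371688, 0.0390625)
      total = (word_part + num_part) * 0.5     # coord(1/2)
      print(word_part, num_part, total)        # ~0.11537, ~0.03639, ~0.07588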
    
    Abstract
    The mapping of scientific fields, based on principles established in the seventies, has recently shown remarkable development, and applications are now booming with progress in computing efficiency. We examine here the convergence of two thematic mapping approaches, citation-based and word-based, which rely on quite different sociological backgrounds. A corpus in the nanoscience field was broken down into research themes, using the same clustering technique on the two networks separately. The tool for comparison is the table of intersections of the M clusters (here M=50) built on either side. A classical visual exploitation of such contingency tables is based on correspondence analysis. We investigate a rearrangement of the intersection table (block modeling), resulting in a pseudo-map. The interest of this representation for confronting the two breakdowns is discussed. The amount of convergence found is, in our view, a strong argument in favor of the reliability of bibliometric mapping. However, the outcomes are not convergent to the degree that they could be substituted for each other. Differences highlight the complementarity between approaches based on different networks. In contrast with the strong informetric posture found in recent literature, where lexical and citation markers are considered as miscible tokens, the framework proposed here does not mix the two elements at an early stage, in compliance with their contrasted logic.
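    A minimal sketch of the intersection table mentioned in this abstract (toy labels, not the paper's data): given each document's cluster under the citation-based and the word-based breakdown, the M x M contingency table simply counts how often each pair of clusters co-occurs.

      import numpy as np

      # toy cluster labels for the same eight documents under the two breakdowns
      citation_clusters = [0, 0, 1, 1, 2, 2, 2, 0]
      word_clusters     = [0, 1, 1, 1, 2, 2, 0, 0]

      M = 3
      table = np.zeros((M, M), dtype=int)
      for c, w in zip(citation_clusters, word_clusters):
          table[c, w] += 1
      print(table)   # rows: citation clusters, columns: word clusters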
    Date
    8. 1.2011 18:22:50
  2. Zhu, Q.; Kong, X.; Hong, S.; Li, J.; He, Z.: Global ontology research progress : a bibliometric analysis (2015) 0.07
    0.066521734 = product of:
      0.13304347 = sum of:
        0.13304347 = sum of:
          0.08158098 = weight(_text_:word in 2590) [ClassicSimilarity], result of:
            0.08158098 = score(doc=2590,freq=2.0), product of:
              0.28165168 = queryWeight, product of:
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.05371688 = queryNorm
              0.28965205 = fieldWeight in 2590, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.2432623 = idf(docFreq=634, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2590)
          0.05146249 = weight(_text_:22 in 2590) [ClassicSimilarity], result of:
            0.05146249 = score(doc=2590,freq=4.0), product of:
              0.18810736 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05371688 = queryNorm
              0.27358043 = fieldWeight in 2590, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2590)
      0.5 = coord(1/2)
    
    Abstract
    Purpose - The purpose of this paper is to analyse the global scientific outputs of ontology research, an important emerging discipline that has huge potential to improve information understanding, organization, and management. Design/methodology/approach - This study collected literature published during 1900-2012 from the Web of Science database. The bibliometric analysis was performed from authorial, institutional, national, spatiotemporal, and topical aspects. Basic statistical analysis, visualization of geographic distribution, co-word analysis, and a new index were applied to the selected data. Findings - Characteristics of publication outputs suggested that ontology research has entered a stage of rapid growth, along with increased participation and collaboration. The authors identified the leading authors, institutions, nations, and articles in ontology research. Authors came mostly from North America, Europe, and East Asia. The USA took the lead, while China grew fastest. Four major categories of frequently used keywords were identified: applications in Semantic Web, applications in bioinformatics, philosophy theories, and common supporting technology. Semantic Web research played a core role, and gene ontology study was well-developed. The study focus of ontology has shifted from philosophy to information science. Originality/value - This is the first study to quantify global research patterns and trends in ontology, which may provide a guide for future research. The new index provides an alternative way to evaluate the multidisciplinary influence of researchers.
    Date
    20. 1.2015 18:30:22
    17. 9.2018 18:22:23
  3. Egghe, L.: On the law of Zipf-Mandelbrot for multi-word phrases (1999) 0.07
    0.06526479 = product of:
      0.13052958 = sum of:
        0.13052958 = product of:
          0.26105917 = sum of:
            0.26105917 = weight(_text_:word in 3058) [ClassicSimilarity], result of:
              0.26105917 = score(doc=3058,freq=8.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.92688656 = fieldWeight in 3058, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3058)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article studies the probabilities of the occurrence of multi-word (m-word) phrases (m=2,3,...) in relation to the probabilities of occurrence of the single words. It is well known that, in the latter case, the law of Zipf is valid (i.e., a power law). We prove that in the case of m-word phrases (m>=2), this is not the case. We present two independent proofs of this.
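    For reference, the two rank-frequency laws this abstract refers to are usually written as follows (standard textbook forms, not the paper's own notation):

      % Zipf's law: a pure power law in the rank r
      P(r) = \frac{C}{r^{\beta}}, \qquad \beta > 0
      % Zipf-Mandelbrot generalization: a shifted power law
      P(r) = \frac{C}{(r+q)^{\beta}}, \qquad q \ge 0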
  4. Leginus, M.; Zhai, C.X.; Dolog, P.: Personalized generation of word clouds from tweets (2016) 0.06
    0.06475291 = product of:
      0.12950581 = sum of:
        0.12950581 = product of:
          0.25901163 = sum of:
            0.25901163 = weight(_text_:word in 2886) [ClassicSimilarity], result of:
              0.25901163 = score(doc=2886,freq=14.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.9196168 = fieldWeight in 2886, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2886)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Active users of Twitter are often overwhelmed with the vast amount of tweets. In this work we attempt to help users browse a large number of accumulated posts. We propose personalized word cloud generation as a means of navigation. Various past user activities, such as published tweets, retweets, and tweets seen but not retweeted, are leveraged for enhanced personalization of word clouds. The best personalization results are attained with users' past retweets. However, users' own past tweets are not as useful as retweets for personalization. Negative preferences derived from tweets seen but not retweeted further enhance personalized word cloud generation. The ranking combination method outperforms the preranking approach and provides a general framework for combining rankings of various past user information for enhanced word cloud generation. To better capture subtle differences between generated word clouds, we propose evaluating word clouds with a mean average precision measure.
  5. He, Q.: Knowledge discovery through co-word analysis (1999) 0.06
    0.05710669 = product of:
      0.11421338 = sum of:
        0.11421338 = product of:
          0.22842675 = sum of:
            0.22842675 = weight(_text_:word in 6082) [ClassicSimilarity], result of:
              0.22842675 = score(doc=6082,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.81102574 = fieldWeight in 6082, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6082)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  6. Ahonen, H.: Knowledge discovery in documents by extracting frequent word sequences (1999) 0.06
    0.05710669 = product of:
      0.11421338 = sum of:
        0.11421338 = product of:
          0.22842675 = sum of:
            0.22842675 = weight(_text_:word in 6088) [ClassicSimilarity], result of:
              0.22842675 = score(doc=6088,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.81102574 = fieldWeight in 6088, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6088)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  7. Leydesdorff, L.; Nerghes, A.: Co-word maps and topic modeling : a comparison using small and medium-sized corpora (N < 1,000) (2017) 0.05
    0.049957946 = product of:
      0.09991589 = sum of:
        0.09991589 = product of:
          0.19983178 = sum of:
            0.19983178 = weight(_text_:word in 3538) [ClassicSimilarity], result of:
              0.19983178 = score(doc=3538,freq=12.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.7094997 = fieldWeight in 3538, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3538)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Induced by "big data," "topic modeling" has become an attractive alternative to mapping co-words in terms of co-occurrences and co-absences using network techniques. Does topic modeling provide an alternative for co-word mapping in research practices using moderately sized document collections? We return to the word/document matrix using first a single text with a strong argument ("The Leiden Manifesto") and then upscale to a sample of moderate size (n = 687) to study the pros and cons of the two approaches in terms of the resulting possibilities for making semantic maps that can serve an argument. The results from co-word mapping (using two different routines) versus topic modeling are significantly uncorrelated. Whereas components in the co-word maps can easily be designated, the topic models provide sets of words that are very differently organized. In these samples, the topic models seem to reveal similarities other than semantic ones (e.g., linguistic ones). In other words, topic modeling does not replace co-word mapping in small and medium-sized sets; but the paper leaves open the possibility that topic modeling would work well for the semantic mapping of large sets.
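    A minimal illustration of the word/document matrix this abstract returns to (toy counts, our own variable names): the word-word co-occurrence matrix used for co-word mapping is the product of that matrix with its transpose.

      import numpy as np

      # toy word/document matrix X: rows = words, columns = documents
      words = ["citation", "impact", "topic", "model"]
      X = np.array([[2, 0, 1],
                    [1, 1, 0],
                    [0, 3, 1],
                    [0, 2, 2]])

      C = X @ X.T                  # shared-document weight between word pairs
      np.fill_diagonal(C, 0)       # drop self co-occurrence
      print(C)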
  8. Rotto, E.; Morgan, R.P.: ¬An exploration of expert based text analysis techniques for assessing industrial relevance in US engineering dissertation abstracts (1997) 0.05
    0.04945585 = product of:
      0.0989117 = sum of:
        0.0989117 = product of:
          0.1978234 = sum of:
            0.1978234 = weight(_text_:word in 465) [ClassicSimilarity], result of:
              0.1978234 = score(doc=465,freq=6.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.702369 = fieldWeight in 465, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=465)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Describes exploratory research into the application of computerized text analysis techniques to all US engineering doctoral dissertation abstracts dated 1981, 1986 and 1991. Experts categorized abstracts by industrial relevance and identified appropriate non-technology-specific word indicators within the abstracts. Word frequency and cluster analysis techniques were also explored for their potential utility in identifying technology-related word indicators of industrial relevance. Results suggest that text analysis of engineering dissertation abstracts holds potential utility for identifying industrially relevant, university-based engineering research when used in conjunction with expert input and feedback.
  9. Ding, Y.; Chowdhury, G.C.; Foo, S.: Bibliometric cartography of information retrieval research by using co-word analysis (2001) 0.05
    0.04894859 = product of:
      0.09789718 = sum of:
        0.09789718 = product of:
          0.19579436 = sum of:
            0.19579436 = weight(_text_:word in 6487) [ClassicSimilarity], result of:
              0.19579436 = score(doc=6487,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.6951649 = fieldWeight in 6487, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6487)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  10. He, Q.: ¬A study of the strength indexes in co-word analysis (2000) 0.05
    0.04894859 = product of:
      0.09789718 = sum of:
        0.09789718 = product of:
          0.19579436 = sum of:
            0.19579436 = weight(_text_:word in 111) [ClassicSimilarity], result of:
              0.19579436 = score(doc=111,freq=8.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.6951649 = fieldWeight in 111, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=111)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Co-word analysis is a technique for detecting the knowledge structure of scientific literature and mapping the dynamics in a research field. It is used to count the co-occurrences of term pairs, compute the strength between term pairs, and map the research field by inserting terms and their linkages into a graphical structure according to the strength values. In previous co-word studies, two indexes have been used to measure the strength between term pairs in order to identify the major areas in a research field - the inclusion index (I) and the equivalence index (E). This study conducts two co-word analysis experiments using the two indexes, respectively, and compares the results from the two experiments. The results show that, due to the difference in their computation, index I is more likely to identify general subject areas in a research field, while index E is more likely to identify subject areas at more specific levels.
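    The abstract does not spell the two indexes out; as usually defined in the co-word literature, for terms i and j occurring in c_i and c_j documents and co-occurring in c_ij documents, a sketch looks like this (toy counts, our own function names):

      def inclusion_index(c_ij, c_i, c_j):
          # I = c_ij / min(c_i, c_j): how far the rarer term is "included" in the other
          return c_ij / min(c_i, c_j)

      def equivalence_index(c_ij, c_i, c_j):
          # E = c_ij**2 / (c_i * c_j): symmetric association strength in [0, 1]
          return c_ij ** 2 / (c_i * c_j)

      print(inclusion_index(8, 10, 40))    # 0.8  - strong link at the general level
      print(equivalence_index(8, 10, 40))  # 0.16 - weaker, more specific signal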
  11. Ferrer-i-Cancho, R.; Vitevitch, M.S.: ¬The origins of Zipf's meaning-frequency law (2018) 0.04
    0.042390727 = product of:
      0.08478145 = sum of:
        0.08478145 = product of:
          0.1695629 = sum of:
            0.1695629 = weight(_text_:word in 4546) [ClassicSimilarity], result of:
              0.1695629 = score(doc=4546,freq=6.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.6020305 = fieldWeight in 4546, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4546)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    In his pioneering research, G.K. Zipf observed that more frequent words tend to have more meanings, and showed that the number of meanings of a word grows as the square root of its frequency. He derived this relationship from two assumptions: that words follow Zipf's law for word frequencies (a power law dependency between frequency and rank) and Zipf's law of meaning distribution (a power law dependency between number of meanings and rank). Here we show that a single assumption on the joint probability of a word and a meaning suffices to infer Zipf's meaning-frequency law or relaxed versions. Interestingly, this assumption can be justified as the outcome of a biased random walk in the process of mental exploration.
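    The derivation summarized above can be written out in one step (generic exponents, not the authors' notation): combine the two rank laws and eliminate the rank r.

      f(r) \propto r^{-\alpha}, \quad \mu(r) \propto r^{-\delta}
      \;\Rightarrow\; \mu \propto f^{\delta/\alpha},
      \quad \text{and with Zipf's classical values } \alpha \approx 1,\ \delta \approx \tfrac{1}{2}:\ \mu \propto \sqrt{f}.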
  12. Lu, K.; Wolfram, D.: Measuring author research relatedness : a comparison of word-based, topic-based, and author cocitation approaches (2012) 0.04
    0.04079049 = product of:
      0.08158098 = sum of:
        0.08158098 = product of:
          0.16316196 = sum of:
            0.16316196 = weight(_text_:word in 453) [ClassicSimilarity], result of:
              0.16316196 = score(doc=453,freq=8.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.5793041 = fieldWeight in 453, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=453)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Relationships between authors based on characteristics of published literature have been studied for decades. Author cocitation analysis using mapping techniques has been most frequently used to study how closely two authors are thought to be in intellectual space based on how members of the research community co-cite their works. Other approaches exist to study author relatedness based more directly on the text of their published works. In this study we present static and dynamic word-based approaches using vector space modeling, as well as a topic-based approach based on latent Dirichlet allocation for mapping author research relatedness. Vector space modeling is used to define an author space consisting of works by a given author. Outcomes for the two word-based approaches and a topic-based approach for 50 prolific authors in library and information science are compared with more traditional author cocitation analysis using multidimensional scaling and hierarchical cluster analysis. The two word-based approaches produced similar outcomes except where two authors were frequent co-authors for the majority of their articles. The topic-based approach produced the most distinctive map.
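    A minimal reading of the static word-based approach described above (toy vectors, not the authors' data): represent each author by a term vector aggregated over his or her works and compare authors by cosine similarity.

      import numpy as np

      def cosine(u, v):
          return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

      # toy term-frequency vectors over a shared vocabulary, one per author
      author_a = np.array([12, 3, 0, 5], dtype=float)
      author_b = np.array([10, 1, 1, 7], dtype=float)
      author_c = np.array([0, 8, 9, 0], dtype=float)

      print(cosine(author_a, author_b))   # high: overlapping research vocabulary
      print(cosine(author_a, author_c))   # low: little vocabulary overlap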
  13. Kopcsa, A.; Schiebel, E.: Science and technology mapping : a new iteration model for representing multidimensional relationships (1998) 0.03
    0.03461188 = product of:
      0.06922376 = sum of:
        0.06922376 = product of:
          0.13844752 = sum of:
            0.13844752 = weight(_text_:word in 326) [ClassicSimilarity], result of:
              0.13844752 = score(doc=326,freq=4.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.49155584 = fieldWeight in 326, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=326)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Much effort has been devoted to developing more objective quantitative methods to analyze and integrate survey information for understanding research trends and research structures. Co-word analysis is one class of techniques that exploits co-occurrences of items in written information. However, there are some bottlenecks in using statistical methods to produce mappings of reduced information in a comfortable manner. On the one hand, commonly used statistical software for PCs restricts the amount of data that can be processed; on the other hand, the results of the multidimensional scaling routines are not quite satisfying. Therefore, this article introduces a new iteration model for the calculation of co-word maps that eases the problem. The iteration model positions the words in the two-dimensional plane according to their connections to each other, and it consists of a quick and stable algorithm that has been implemented in software for personal computers. A graphic module represents the data in well-known 'technology maps'.
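    The paper's own algorithm is not reproduced here; the sketch below (toy weights, our own parameter names) only illustrates the general idea of iteratively positioning words in the plane according to their link strengths, each word repeatedly moving part of the way towards the weighted centroid of its neighbours.

      import numpy as np

      def iterate_layout(weights, steps=20, alpha=0.5, seed=0):
          rng = np.random.default_rng(seed)
          n = weights.shape[0]
          pos = rng.random((n, 2))                    # random start positions
          row_sums = weights.sum(axis=1, keepdims=True)
          for _ in range(steps):
              centroids = (weights @ pos) / row_sums  # weighted neighbour centroids
              pos = (1 - alpha) * pos + alpha * centroids
              pos -= pos.mean(axis=0)                 # re-centre ...
              pos /= np.abs(pos).max() + 1e-12        # ... and re-scale the map
          return pos

      # symmetric co-word weights for four terms
      W = np.array([[0, 3, 1, 0],
                    [3, 0, 2, 0],
                    [1, 2, 0, 4],
                    [0, 0, 4, 0]], dtype=float)
      print(iterate_layout(W))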
  14. Chang, Y.-W.: Influence of human behavior and the principle of least effort on library and information science research (2016) 0.03
    0.03461188 = product of:
      0.06922376 = sum of:
        0.06922376 = product of:
          0.13844752 = sum of:
            0.13844752 = weight(_text_:word in 2973) [ClassicSimilarity], result of:
              0.13844752 = score(doc=2973,freq=4.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.49155584 = fieldWeight in 2973, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2973)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    General graph random walks have been successfully applied to multi-document summarization, but they have limitations when processing documents in this way. In this paper, we propose a novel hypergraph-based vertex-reinforced random walk framework for multi-document summarization. The framework first exploits the Hierarchical Dirichlet Process (HDP) topic model to learn a word-topic probability distribution in sentences. Then the hypergraph is used to capture both cluster relationships based on the word-topic probability distribution and pairwise similarity among sentences. Finally, a time-variant random walk algorithm for hypergraphs is developed to rank sentences, which ensures sentence diversity in summaries through vertex reinforcement. Experimental results on a publicly available dataset demonstrate the effectiveness of our framework.
  15. Coulter, N.; Monarch, I.; Konda, S.: Software engineering as seen through its research literature : a study in co-word analysis (1998) 0.03
    0.032632396 = product of:
      0.06526479 = sum of:
        0.06526479 = product of:
          0.13052958 = sum of:
            0.13052958 = weight(_text_:word in 2161) [ClassicSimilarity], result of:
              0.13052958 = score(doc=2161,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.46344328 = fieldWeight in 2161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2161)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  16. Lindsay, R.K.; Gordon, M.D.: Literature-based discovery by lexical statistics (1999) 0.03
    0.032632396 = product of:
      0.06526479 = sum of:
        0.06526479 = product of:
          0.13052958 = sum of:
            0.13052958 = weight(_text_:word in 3544) [ClassicSimilarity], result of:
              0.13052958 = score(doc=3544,freq=2.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.46344328 = fieldWeight in 3544, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3544)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    We report experiments that use lexical statistics, such as word frequency counts, to discover hidden connections in the medical literature. Hidden connections are those that are unlikely to be found by examination of bibliographic citations or the use of standard indexing methods, and yet establish a relationship between topics that might profitably be explored by scientific research. Our experiments were conducted with the MEDLINE medical literature database and follow and extend the work of Swanson.
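    A minimal sketch of the kind of lexical statistics described above, in the spirit of Swanson's Raynaud/fish-oil example (the counts are invented): terms that are frequent in the literature about topic A and in the literature about topic C, while A and C themselves are never discussed together, are candidate hidden connections.

      from collections import Counter

      # toy word-frequency counts from two otherwise disconnected literatures
      lit_a = Counter({"raynaud": 40, "blood": 25, "viscosity": 18, "platelet": 12})
      lit_c = Counter({"fish": 30, "oil": 28, "viscosity": 15, "platelet": 11})

      min_freq = 10
      bridges = ({w for w, n in lit_a.items() if n >= min_freq}
                 & {w for w, n in lit_c.items() if n >= min_freq})
      print(bridges)   # {'viscosity', 'platelet'}: candidate hidden links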
  17. Nicholls, P.T.: Empirical validation of Lotka's law (1986) 0.03
    0.029111583 = product of:
      0.058223166 = sum of:
        0.058223166 = product of:
          0.11644633 = sum of:
            0.11644633 = weight(_text_:22 in 5509) [ClassicSimilarity], result of:
              0.11644633 = score(doc=5509,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.61904186 = fieldWeight in 5509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5509)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 22(1986), S.417-419
  18. Nicolaisen, J.: Citation analysis (2007) 0.03
    0.029111583 = product of:
      0.058223166 = sum of:
        0.058223166 = product of:
          0.11644633 = sum of:
            0.11644633 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.11644633 = score(doc=6091,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 19:53:22
  19. Fiala, J.: Information flood : fiction and reality (1987) 0.03
    0.029111583 = product of:
      0.058223166 = sum of:
        0.058223166 = product of:
          0.11644633 = sum of:
            0.11644633 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
              0.11644633 = score(doc=1080,freq=2.0), product of:
                0.18810736 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05371688 = queryNorm
                0.61904186 = fieldWeight in 1080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=1080)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Thermochimica acta. 110(1987), S.11-22
  20. Milojevic, S.; Sugimoto, C.R.; Yan, E.; Ding, Y.: ¬The cognitive structure of Library and Information Science : analysis of article title words (2011) 0.03
    0.028843235 = product of:
      0.05768647 = sum of:
        0.05768647 = product of:
          0.11537294 = sum of:
            0.11537294 = weight(_text_:word in 4608) [ClassicSimilarity], result of:
              0.11537294 = score(doc=4608,freq=4.0), product of:
                0.28165168 = queryWeight, product of:
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.05371688 = queryNorm
                0.40962988 = fieldWeight in 4608, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.2432623 = idf(docFreq=634, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4608)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This study comprises a suite of analyses of words in article titles in order to reveal the cognitive structure of Library and Information Science (LIS). The use of title words to elucidate the cognitive structure of LIS has been relatively neglected. The present study addresses this gap by performing (a) co-word analysis and hierarchical clustering, (b) multidimensional scaling, and (c) determination of trends in usage of terms. The study is based on 10,344 articles published between 1988 and 2007 in 16 LIS journals. Methodologically, novel aspects of this study are: (a) its large scale, (b) removal of non-specific title words based on the "word concentration" measure, (c) identification of the most frequent terms that include both single words and phrases, and (d) presentation of the relative frequencies of terms using "heatmaps". Conceptually, our analysis reveals that LIS consists of three main branches: the traditionally recognized library-related and information-related branches, plus an equally distinct bibliometrics/scientometrics branch. The three branches focus on: libraries, information, and science, respectively. In addition, our study identifies substructures within each branch. We also tentatively identify "information seeking behavior" as a branch that is establishing itself separate from the three main branches. Furthermore, we find that cognitive concepts in LIS evolve continuously, with no stasis since 1992. The most rapid development occurred between 1998 and 2001, influenced by the increased focus on the Internet. The change in the cognitive landscape is found to be driven by the emergence of new information technologies, and the retirement of old ones.
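    As a small illustration of step (a) above (toy similarities, not the study's data): a co-word similarity matrix between title words can be converted to distances and fed to an off-the-shelf hierarchical clustering routine to recover the branches.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      # toy co-word similarity between five title words
      S = np.array([[1.0, 0.8, 0.1, 0.0, 0.1],
                    [0.8, 1.0, 0.2, 0.1, 0.0],
                    [0.1, 0.2, 1.0, 0.7, 0.6],
                    [0.0, 0.1, 0.7, 1.0, 0.5],
                    [0.1, 0.0, 0.6, 0.5, 1.0]])
      D = squareform(1.0 - S, checks=False)   # condensed distance matrix
      Z = linkage(D, method="average")
      print(fcluster(Z, t=2, criterion="maxclust"))   # e.g. [1 1 2 2 2]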

Languages

  • e 136
  • d 8
  • ro 1

Types

  • a 143
  • m 2
  • el 1
  • s 1