Search (7 results, page 1 of 1)

  • Filter: year_i:[1980 TO 1990}
  • Filter: theme_ss:"Informetrie"
  1. Rees-Potter, L.K.: Dynamic thesaural systems : a bibliometric study of terminological and conceptual change in sociology and economics with application to the design of dynamic thesaural systems (1989) 0.02
    0.01556018 = product of:
      0.04668054 = sum of:
        0.034289423 = weight(_text_:retrieval in 5059) [ClassicSimilarity], result of:
          0.034289423 = score(doc=5059,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.26736724 = fieldWeight in 5059, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=5059)
        0.012391115 = product of:
          0.037173342 = sum of:
            0.037173342 = weight(_text_:system in 5059) [ClassicSimilarity], result of:
              0.037173342 = score(doc=5059,freq=2.0), product of:
                0.13353272 = queryWeight, product of:
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.042397358 = queryNorm
                0.27838376 = fieldWeight in 5059, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.1495528 = idf(docFreq=5152, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5059)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
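The tree above is Lucene's ClassicSimilarity (TF-IDF) explain output, read bottom-up: each clause score is queryWeight × fieldWeight, and the document score is the clause sum scaled by the coord factor. A minimal Python sketch reproducing result 1's score from the constants reported in the tree (the function names `idf` and `term_score` are our own, not Lucene API):

```python
import math

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    tf = math.sqrt(freq)                  # tf = sqrt(termFreq)
    w = idf(doc_freq, max_docs)
    query_weight = w * query_norm         # "queryWeight" in the explain tree
    field_weight = tf * w * field_norm    # "fieldWeight" in the explain tree
    return query_weight * field_weight

QN = 0.042397358  # queryNorm reported above

# clause scores for result 1 (doc 5059)
s_retrieval = term_score(2.0, 5836, 44218, QN, 0.0625)
s_system = term_score(2.0, 5152, 44218, QN, 0.0625) * (1 / 3)  # nested coord(1/3)

# document score = sum of matching clauses * coord(2 of 6 clauses matched)
doc_score = (s_retrieval + s_system) * (2 / 6)
```

Evaluating this reproduces the reported values: `s_retrieval` ≈ 0.034289 and `doc_score` ≈ 0.0155602, matching the 0.02 shown (rounded) next to the title.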
    
    Abstract
    Thesauri have been used in the library and information science field to provide a standard descriptor language for indexers and searchers in an information storage and retrieval system. One difficulty has been the maintenance and updating of thesauri, since the terms used to describe concepts change over time and vary between users. This study investigates a mechanism by which thesauri can be updated and maintained using citation analysis, co-citation analysis, and citation context analysis.
  2. Henzler, R.G.: Informetrische Auswertungen bei der Online-Retrieval-Praxis [Informetric analyses in online retrieval practice] (1982) 0.01
    0.010001082 = product of:
      0.06000649 = sum of:
        0.06000649 = weight(_text_:retrieval in 3054) [ClassicSimilarity], result of:
          0.06000649 = score(doc=3054,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.46789268 = fieldWeight in 3054, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=3054)
      0.16666667 = coord(1/6)
    
  3. Davies, R.: Q-analysis : a methodology for librarianship and information science (1985) 0.01
    0.009196021 = product of:
      0.055176124 = sum of:
        0.055176124 = weight(_text_:wide in 589) [ClassicSimilarity], result of:
          0.055176124 = score(doc=589,freq=2.0), product of:
            0.18785246 = queryWeight, product of:
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.042397358 = queryNorm
            0.29372054 = fieldWeight in 589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4307585 = idf(docFreq=1430, maxDocs=44218)
              0.046875 = fieldNorm(doc=589)
      0.16666667 = coord(1/6)
    
    Abstract
    Q-analysis is a methodology for investigating a wide range of structural phenomena. Structures are defined in terms of relations between members of sets, and their salient features are revealed using techniques of algebraic topology; the basic method, however, can be mastered by non-mathematicians. Q-analysis has been applied to problems as diverse as discovering the rules for the diagnosis of a rare disease and the study of tactics in a football match. Other applications include some of interest to librarians and information scientists. In bibliometrics, Q-analysis has proved capable of emulating techniques such as bibliographic coupling, co-citation analysis and co-word analysis. It has also been used to produce a classification scheme for television programmes based on different principles from most bibliographic classifications. This paper introduces the basic ideas of Q-analysis; applications relevant to librarianship and information science are reviewed, and the present limitations of the approach are described. New theoretical advances, including some in other fields such as planning and design theory and artificial intelligence, may lead to a still more powerful method of investigating structure.
  4. Vasiljev, A.: The law of requisite variety as applied to subject indexing and retrieval (1989) 0.01
    0.008572357 = product of:
      0.051434137 = sum of:
        0.051434137 = weight(_text_:retrieval in 5069) [ClassicSimilarity], result of:
          0.051434137 = score(doc=5069,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.40105087 = fieldWeight in 5069, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.09375 = fieldNorm(doc=5069)
      0.16666667 = coord(1/6)
    
  5. Nicholls, P.T.: Empirical validation of Lotka's law (1986) 0.01
    0.005106006 = product of:
      0.030636035 = sum of:
        0.030636035 = product of:
          0.091908105 = sum of:
            0.091908105 = weight(_text_:22 in 5509) [ClassicSimilarity], result of:
              0.091908105 = score(doc=5509,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.61904186 = fieldWeight in 5509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5509)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Source
    Information processing and management. 22(1986), pp.417-419
  6. Fiala, J.: Information flood : fiction and reality (1987) 0.01
    0.005106006 = product of:
      0.030636035 = sum of:
        0.030636035 = product of:
          0.091908105 = sum of:
            0.091908105 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
              0.091908105 = score(doc=1080,freq=2.0), product of:
                0.14846832 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.042397358 = queryNorm
                0.61904186 = fieldWeight in 1080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=1080)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Source
    Thermochimica acta. 110(1987), pp.11-22
  7. Stock, W.G.: Die Wichtigkeit wissenschaftlicher Dokumente relativ zu gegebenen Thematiken [The importance of scientific documents relative to given subjects] (1981) 0.01
    0.005000541 = product of:
      0.030003246 = sum of:
        0.030003246 = weight(_text_:retrieval in 13) [ClassicSimilarity], result of:
          0.030003246 = score(doc=13,freq=2.0), product of:
            0.12824841 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.042397358 = queryNorm
            0.23394634 = fieldWeight in 13, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=13)
      0.16666667 = coord(1/6)
    
    Abstract
    Scientific documents are more or less important in relation to given subjects, and this importance can be measured. An empirical investigation of philosophical information was carried out using a weighting algorithm developed by N. Henrichs, which yields a distribution of document weights for an average philosophical subject. With the aid of statistical methods, a threshold value can be obtained that separates the important from the unimportant documents on a subject. Knowledge of this threshold value matters for various practical and theoretical questions: it opens new possibilities for search strategies in information retrieval; it allows the 'titleworthiness' of subjects to be evaluated by comparing document titles with the themes for which a document is important; and it makes available data on thematic trends in scientific results.
