Search (139 results, page 1 of 7)

  • theme_ss:"Informetrie"
  1. Stock, W.G.: Die Wichtigkeit wissenschaftlicher Dokumente relativ zu gegebenen Thematiken (1981) 0.12
    0.1218436 = sum of:
      0.062763214 = product of:
        0.18828964 = sum of:
          0.18828964 = weight(_text_:themes in 13) [ClassicSimilarity], result of:
            0.18828964 = score(doc=13,freq=2.0), product of:
              0.37868488 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.058902346 = queryNorm
              0.49721986 = fieldWeight in 13, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0546875 = fieldNorm(doc=13)
        0.33333334 = coord(1/3)
      0.059080385 = product of:
        0.11816077 = sum of:
          0.11816077 = weight(_text_:dokumente in 13) [ClassicSimilarity], result of:
            0.11816077 = score(doc=13,freq=2.0), product of:
              0.2999863 = queryWeight, product of:
                5.092943 = idf(docFreq=737, maxDocs=44218)
                0.058902346 = queryNorm
              0.39388722 = fieldWeight in 13, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.092943 = idf(docFreq=737, maxDocs=44218)
                0.0546875 = fieldNorm(doc=13)
        0.5 = coord(1/2)
    
    Abstract
    Scientific documents are more or less important in relation to given subjects, and this importance can be measured. An empirical investigation of philosophical information was carried out using a weighting algorithm developed by N. Henrichs, which yields a distribution of documents by weight for an average philosophical subject. With the aid of statistical methods, a threshold value can be obtained that separates the important from the unimportant documents on a subject. Knowledge of this threshold value is important for various practical and theoretical questions: it provides new possibilities for search strategies in information retrieval; allows evaluation of the 'titleworthiness' of subjects by comparing document titles with the themes for which the document at hand is important; and makes available data on thematic trends for scientific results.
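    Each entry above carries a Lucene "explain" tree for its relevance score. The arithmetic is ClassicSimilarity TF-IDF; a minimal sketch that reproduces the `themes` node of this first record, using the figures shown in the tree:

    ```python
    import math

    def classic_similarity(freq, idf, query_norm, field_norm):
        """One weight(...) node of a Lucene ClassicSimilarity explain tree."""
        query_weight = idf * query_norm        # queryWeight = idf * queryNorm
        tf = math.sqrt(freq)                   # tf = sqrt(termFreq)
        field_weight = tf * idf * field_norm   # fieldWeight = tf * idf * fieldNorm
        return query_weight * field_weight

    # idf as ClassicSimilarity defines it: 1 + ln(maxDocs / (docFreq + 1))
    idf = 1 + math.log(44218 / (193 + 1))      # ~6.429029 for "themes"

    # The weight(_text_:themes in 13) node; multiplying by coord(1/3)
    # gives the 0.0627... summand of the total document score.
    score = classic_similarity(freq=2.0, idf=idf,
                               query_norm=0.058902346, field_norm=0.0546875)
    # score is close to the 0.18828964 shown in the explain tree
    ```

    The same four inputs (freq, idf, queryNorm, fieldNorm) explain every weight node in the result list; only coord and the field boosts differ between entries.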
  2. Zitt, M.; Lelu, A.; Bassecoulard, E.: Hybrid citation-word representations in science mapping : Portolan charts of research fields? (2011) 0.06
    0.06478201 = sum of:
      0.044830866 = product of:
        0.13449259 = sum of:
          0.13449259 = weight(_text_:themes in 4130) [ClassicSimilarity], result of:
            0.13449259 = score(doc=4130,freq=2.0), product of:
              0.37868488 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.058902346 = queryNorm
              0.35515702 = fieldWeight in 4130, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4130)
        0.33333334 = coord(1/3)
      0.019951139 = product of:
        0.039902277 = sum of:
          0.039902277 = weight(_text_:22 in 4130) [ClassicSimilarity], result of:
            0.039902277 = score(doc=4130,freq=2.0), product of:
              0.20626599 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058902346 = queryNorm
              0.19345059 = fieldWeight in 4130, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4130)
        0.5 = coord(1/2)
    
    Abstract
    The mapping of scientific fields, based on principles established in the seventies, has recently shown remarkable development, and applications are now booming with progress in computing efficiency. We examine here the convergence of two thematic mapping approaches, citation-based and word-based, which rely on quite different sociological backgrounds. A corpus in the nanoscience field was broken down into research themes, using the same clustering technique on the two networks separately. The tool for comparison is the table of intersections of the M clusters (here M=50) built on either side. A classical visual exploitation of such contingency tables is based on correspondence analysis. We investigate a rearrangement of the intersection table (block modeling), resulting in a pseudo-map. The interest of this representation for confronting the two breakdowns is discussed. The amount of convergence found is, in our view, a strong argument in favor of the reliability of bibliometric mapping. However, the outcomes are not convergent to the degree that they could be substituted for each other. The differences highlight the complementarity between approaches based on different networks. In contrast with the strong informetric posture found in recent literature, where lexical and citation markers are treated as miscible tokens, the framework proposed here does not mix the two elements at an early stage, in compliance with their contrasting logics.
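    The cluster-by-cluster comparison described here reduces to a contingency table between two clusterings of the same corpus. A minimal sketch with hypothetical labels (the paper itself uses M=50 clusters per side):

    ```python
    from collections import Counter

    def intersection_table(cit_clusters, word_clusters):
        """Contingency table of two clusterings of the same corpus:
        cell (i, j) = number of documents in citation-cluster i
        that also fall in word-cluster j."""
        pairs = Counter(zip(cit_clusters, word_clusters))
        rows = sorted(set(cit_clusters))
        cols = sorted(set(word_clusters))
        return [[pairs[(r, c)] for c in cols] for r in rows]

    # Four documents, clustered two ways (hypothetical labels):
    table = intersection_table(["A", "A", "B", "B"], ["x", "y", "x", "x"])
    # rows A, B x cols x, y  ->  [[1, 1], [2, 0]]
    ```

    A strongly diagonal table indicates convergent breakdowns; off-diagonal mass is what the block-modeling rearrangement makes visible.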
    Date
    8. 1.2011 18:22:50
  3. Norris, M.; Oppenheim, C.: The h-index : a broad review of a new bibliometric indicator (2010) 0.06
    0.06478201 = sum of:
      0.044830866 = product of:
        0.13449259 = sum of:
          0.13449259 = weight(_text_:themes in 4147) [ClassicSimilarity], result of:
            0.13449259 = score(doc=4147,freq=2.0), product of:
              0.37868488 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.058902346 = queryNorm
              0.35515702 = fieldWeight in 4147, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4147)
        0.33333334 = coord(1/3)
      0.019951139 = product of:
        0.039902277 = sum of:
          0.039902277 = weight(_text_:22 in 4147) [ClassicSimilarity], result of:
            0.039902277 = score(doc=4147,freq=2.0), product of:
              0.20626599 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058902346 = queryNorm
              0.19345059 = fieldWeight in 4147, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4147)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - This review aims to show, broadly, how the h-index has become a subject of widespread debate, how it has spawned many variants and diverse applications since it was first introduced in 2005, and some of the issues in its use. Design/methodology/approach - The review drew on a range of material from some 1,990 sources published since 2005. From these sources, a number of themes were identified and discussed, ranging from the h-index's advantages to which citation database might be selected for its calculation. Findings - The analysis shows how the h-index has quickly established itself as a major subject of interest in the field of bibliometrics. Study of the index ranges from its mathematical underpinning to a range of variants perceived to address the index's shortcomings. The review illustrates how widely the index has been applied, but also how much care must be taken in its application. Originality/value - The use of bibliometric indicators to measure research performance continues, with the h-index as its latest addition. The use of the h-index, its variants and the many applications to which it has been put are still at the exploratory stage. The review shows the breadth and diversity of this research and the need to verify the validity of the h-index through further studies.
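    For reference, the indicator under review is simple to compute: a set of papers has index h when h of them have at least h citations each (Hirsch's 2005 definition). A minimal sketch:

    ```python
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        h = 0
        # Rank papers by citations, descending; h grows while the paper
        # at rank r still has >= r citations.
        for rank, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    h_index([10, 8, 5, 4, 3])  # 4: four papers have at least 4 citations each
    ```

    The variants surveyed in the review (g-index, h(2), contemporary h, etc.) mostly alter this ranking rule or weight the citation counts.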
    Date
    8. 1.2011 19:22:13
  4. Castanha, R.C.G.; Wolfram, D.: The domain of knowledge organization : a bibliometric analysis of prolific authors and their intellectual space (2018) 0.06
    0.06478201 = sum of:
      0.044830866 = product of:
        0.13449259 = sum of:
          0.13449259 = weight(_text_:themes in 4150) [ClassicSimilarity], result of:
            0.13449259 = score(doc=4150,freq=2.0), product of:
              0.37868488 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.058902346 = queryNorm
              0.35515702 = fieldWeight in 4150, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4150)
        0.33333334 = coord(1/3)
      0.019951139 = product of:
        0.039902277 = sum of:
          0.039902277 = weight(_text_:22 in 4150) [ClassicSimilarity], result of:
            0.039902277 = score(doc=4150,freq=2.0), product of:
              0.20626599 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058902346 = queryNorm
              0.19345059 = fieldWeight in 4150, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=4150)
        0.5 = coord(1/2)
    
    Abstract
    The domain of knowledge organization (KO) represents a foundational area of information science. One way to better understand the intellectual structure of the KO domain is to apply bibliometric methods to key contributors to its literature. This study analyzes the most prolific contributing authors to the journal Knowledge Organization, the sources they cite, and the citations they receive for the period 1993 to 2016. The analyses were conducted using visualization outcomes of citation, co-citation, and author bibliographic coupling analysis to reveal theoretical points of reference among authors and the most prominent research themes that constitute this scientific community. Birger Hjørland was the most cited author and was situated at or near the middle of each of the maps based on the different citation relationships. The proximities between authors resulting from the different citation relationships demonstrate how authors situate themselves intellectually through the citations they give and how other authors situate them through the citations they receive. There is also a consistent core of theoretical references among the most productive authors. We observed a close network of scholarly communication between the authors cited in this core, which indicates the role of the journal Knowledge Organization as a space for knowledge construction in the area of knowledge organization.
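    The two citation relationships used in the study can be sketched as small counts over reference lists (hypothetical data; the study itself works on the journal's 1993-2016 citation records):

    ```python
    def cocitation(citing_refs, a, b):
        """Co-citation: number of documents whose reference lists
        cite both a and b."""
        return sum(1 for refs in citing_refs if a in refs and b in refs)

    def bibliographic_coupling(refs_a, refs_b):
        """Bibliographic coupling: number of references two documents
        (or two authors' oeuvres) share."""
        return len(set(refs_a) & set(refs_b))

    # Hypothetical reference lists of three citing papers:
    papers = [{"Hjorland", "Smiraglia"}, {"Hjorland"},
              {"Hjorland", "Smiraglia", "Tennis"}]
    cocitation(papers, "Hjorland", "Smiraglia")          # 2
    bibliographic_coupling(["P1", "P2"], ["P2", "P3"])   # 1
    ```

    Co-citation proximity reflects how others situate two authors; coupling proximity reflects how the authors situate themselves through the citations they give.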
    Source
    Knowledge organization. 45(2018) no.1, S.13-22
  5. Thelwall, M.; Thelwall, S.: A thematic analysis of highly retweeted early COVID-19 tweets : consensus, information, dissent and lockdown life (2020) 0.06
    0.06478201 = sum of:
      0.044830866 = product of:
        0.13449259 = sum of:
          0.13449259 = weight(_text_:themes in 178) [ClassicSimilarity], result of:
            0.13449259 = score(doc=178,freq=2.0), product of:
              0.37868488 = queryWeight, product of:
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.058902346 = queryNorm
              0.35515702 = fieldWeight in 178, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.429029 = idf(docFreq=193, maxDocs=44218)
                0.0390625 = fieldNorm(doc=178)
        0.33333334 = coord(1/3)
      0.019951139 = product of:
        0.039902277 = sum of:
          0.039902277 = weight(_text_:22 in 178) [ClassicSimilarity], result of:
            0.039902277 = score(doc=178,freq=2.0), product of:
              0.20626599 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.058902346 = queryNorm
              0.19345059 = fieldWeight in 178, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=178)
        0.5 = coord(1/2)
    
    Abstract
    Purpose - Public attitudes towards COVID-19 and social distancing are critical in reducing its spread. It is therefore important to understand public reactions and information dissemination in all major forms, including on social media. This article investigates important issues reflected on Twitter in the early stages of the public reaction to COVID-19. Design/methodology/approach - A thematic analysis of the most retweeted English-language tweets mentioning COVID-19 during March 10-29, 2020. Findings - The main themes identified for the 87 qualifying tweets accounting for 14 million retweets were: lockdown life; attitude towards social restrictions; politics; safety messages; people with COVID-19; support for key workers; work; and COVID-19 facts/news. Research limitations/implications - Twitter played many positive roles, mainly through unofficial tweets. Users shared social distancing information, helped build support for social distancing, criticised government responses, expressed support for key workers and helped each other cope with social isolation. A few popular tweets not supporting social distancing show that government messages sometimes failed. Practical implications - Public health campaigns in future may consider encouraging grassroots social web activity to support campaign goals. At a methodological level, analysing retweet counts emphasised politics and ignored practical implementation issues. Originality/value - This is the first qualitative analysis of general COVID-19-related retweeting.
    Date
    20. 1.2015 18:30:22
  6. Kracker, J.; Pollio, H.R.: The experience of libraries across time : thematic analysis of undergraduate recollections of library experiences (2003) 0.04
    0.03882467 = product of:
      0.07764934 = sum of:
        0.07764934 = product of:
          0.23294802 = sum of:
            0.23294802 = weight(_text_:themes in 1869) [ClassicSimilarity], result of:
              0.23294802 = score(doc=1869,freq=6.0), product of:
                0.37868488 = queryWeight, product of:
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.058902346 = queryNorm
                0.61515003 = fieldWeight in 1869, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1869)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    To understand the human experience of libraries and the implications this understanding has for library use and service, education, and design, 118 undergraduate students were asked to list three personally memorable incidents concerning library use. Following this, they were asked to write a short narrative of one of these experiences. Incidents reported by participants ranged from preschool to college age, and content analysis indicated that a majority took place at two or more grade levels, sometimes as early as the participant's first (preschool) visit to a library. Phenomenological analysis of individual narratives produced a thematic structure for each of the four grade levels represented in the data: elementary school and younger, middle school, high school, and college/adult. Themes common across all four levels include Atmosphere, Size and Abundance, Organization/Rules and Their Effects on Me, and What I Do in the Library. A theme of Memories was unique to narratives that took place during the elementary and younger age levels. Although all remaining themes were noted across age levels, the relative importance of the various themes and subthemes differed by age. Implications of the thematic structure for library practice are discussed.
  7. McCain, K.W.: Assessing an author's influence using time series historiographic mapping : the oeuvre of Conrad Hal Waddington (2008) 0.04
    0.03804025 = product of:
      0.0760805 = sum of:
        0.0760805 = product of:
          0.2282415 = sum of:
            0.2282415 = weight(_text_:themes in 1375) [ClassicSimilarity], result of:
              0.2282415 = score(doc=1375,freq=4.0), product of:
                0.37868488 = queryWeight, product of:
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.058902346 = queryNorm
                0.60272145 = fieldWeight in 1375, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1375)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    A modified approach to algorithmic historiography is used to investigate the changing influence of the work of Conrad Hal Waddington over the period 1945-2004. Overall, Waddington's publications were cited by almost 5,500 source items in the Web of Science (Thomson Scientific, formerly Thomson ISI, Philadelphia, PA). Rather than simply analyzing the data set as a whole, older works by Waddington are incorporated into a series of historiographic maps (networks of highly cited documents), which show long-term and short-term research themes grounded in Waddington's work. Analysis by 10-20-year periods and the use of social network analysis software reveals structures - thematic networks and subnetworks - that are hidden in a mapping of the entire 60-year period. Two major Waddington-related themes emerge - canalization/genetic assimilation and embryonic induction. The first persists over the 60 years studied, while active, visible research in the second appears to have declined markedly between 1965 and 1984, only to reappear in conjunction with the emergence of a new research field - Evolutionary Developmental Biology.
  8. Mayr, P.: Re-Ranking auf Basis von Bradfordizing für die verteilte Suche in Digitalen Bibliotheken (2009) 0.04
    0.037745077 = product of:
      0.075490154 = sum of:
        0.075490154 = product of:
          0.15098031 = sum of:
            0.15098031 = weight(_text_:dokumente in 4302) [ClassicSimilarity], result of:
              0.15098031 = score(doc=4302,freq=10.0), product of:
                0.2999863 = queryWeight, product of:
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.058902346 = queryNorm
                0.50329065 = fieldWeight in 4302, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4302)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Despite the large document sets returned by cross-database literature searches, academic users expect a high proportion of relevant, high-quality documents in their result sets. Besides direct full-text access to the documents, the order and structure of the listed results (ranking) now play a decisive role in the design of search systems. Users furthermore expect flexible information systems that, among other things, let them influence the ranking of documents or apply alternative ranking methods. This thesis presents two value-added services for search systems that address the typical problems of searching for scholarly literature and can thereby measurably improve the retrieval situation. The two value-added services, semantic heterogeneity treatment (exemplified by cross-concordances) and re-ranking based on Bradfordizing, which come into play at different stages of the search, are described in detail and evaluated in the empirical part of the thesis with respect to their effectiveness for typical subject-specific searches. The primary goal of the dissertation is to investigate whether the alternative re-ranking method Bradfordizing presented here is, first, operable in the domain of bibliographic databases and, second, likely to be profitably deployed in information systems and offered to users. Queries and data from two evaluation projects (CLEF and KoMoHe) were used for the tests. The intellectually assessed documents come from a total of seven scholarly databases covering the social sciences, political science, economics, psychology, and medicine.
    The evaluation of the cross-concordances (82 queries in total) shows that retrieval results improve significantly for all cross-concordances; it also shows that interdisciplinary cross-concordances have the strongest (positive) effect on the search results. The evaluation of re-ranking by Bradfordizing (164 queries in total) shows that, for most test series, documents from the core zone (core journals) yield a significantly higher precision than documents from zone 2 and zone 3 (peripheral journals). For journals as well as monographs, this relevance advantage of Bradfordizing is demonstrated empirically across a very broad range of topics and queries on two independent document corpora.
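    Bradfordizing itself reduces to splitting journals into three zones of roughly equal article yield and ranking core-zone documents first. A minimal sketch with hypothetical data, not the thesis's implementation:

    ```python
    from collections import Counter

    def bradfordize(docs):
        """Re-rank (doc_id, journal) pairs so that documents from
        high-yield ("core zone") journals come first; the sort is
        stable, so ties keep their original retrieval order."""
        counts = Counter(journal for _, journal in docs)
        total = sum(counts.values())
        zone, seen = {}, 0
        # Journals by productivity; cumulative output split into 3 Bradford zones.
        for journal, n in counts.most_common():
            zone[journal] = 1 if seen < total / 3 else (2 if seen < 2 * total / 3 else 3)
            seen += n
        return sorted(docs, key=lambda doc: zone[doc[1]])

    # 8 hypothetical hits: J1 yields 4 articles, J2 and J3 yield 2 each.
    docs = [("d1", "J3"), ("d2", "J1"), ("d3", "J1"), ("d4", "J2"),
            ("d5", "J1"), ("d6", "J1"), ("d7", "J2"), ("d8", "J3")]
    ranked = bradfordize(docs)   # the four J1 (core-zone) documents lead
    ```

    The evaluation result above says precisely that documents surfaced by `zone == 1` are judged relevant significantly more often than those from zones 2 and 3.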
  9. Coulter, N.; Monarch, I.; Konda, S.: Software engineering as seen through its research literature : a study in co-word analysis (1998) 0.04
    0.035864696 = product of:
      0.07172939 = sum of:
        0.07172939 = product of:
          0.21518816 = sum of:
            0.21518816 = weight(_text_:themes in 2161) [ClassicSimilarity], result of:
              0.21518816 = score(doc=2161,freq=2.0), product of:
                0.37868488 = queryWeight, product of:
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.058902346 = queryNorm
                0.56825125 = fieldWeight in 2161, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2161)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    This empirical research demonstrates the effectiveness of content analysis to map the research literature of the software engineering discipline. The results suggest that certain research themes in software engineering have remained constant, but with changing thrusts
  10. White, H.D.: Bibliometric overview of information science (2009) 0.04
    0.035864696 = product of:
      0.07172939 = sum of:
        0.07172939 = product of:
          0.21518816 = sum of:
            0.21518816 = weight(_text_:themes in 3753) [ClassicSimilarity], result of:
              0.21518816 = score(doc=3753,freq=2.0), product of:
                0.37868488 = queryWeight, product of:
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.058902346 = queryNorm
                0.56825125 = fieldWeight in 3753, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.429029 = idf(docFreq=193, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3753)
          0.33333334 = coord(1/3)
      0.5 = coord(1/2)
    
    Abstract
    This entry presents an account of the core concerns of information science through such means as definitional sketches, identification of themes, historical notes, and bibliometric evidence, including a citation-based map of 121 prominent information scientists of the twentieth century. The attempt throughout is to give concrete and pithy descriptions, to provide numerous specific examples, and to take a critical view of certain received language and ideas in library and information science.
  11. Mayr, P.: Bradfordizing als Re-Ranking-Ansatz in Literaturinformationssystemen (2011) 0.04
    0.035808124 = product of:
      0.07161625 = sum of:
        0.07161625 = product of:
          0.1432325 = sum of:
            0.1432325 = weight(_text_:dokumente in 4292) [ClassicSimilarity], result of:
              0.1432325 = score(doc=4292,freq=4.0), product of:
                0.2999863 = queryWeight, product of:
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.058902346 = queryNorm
                0.47746342 = fieldWeight in 4292, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4292)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article presents a re-ranking approach for search systems that can measurably improve searches for scholarly literature. The non-text-oriented ranking method Bradfordizing is introduced and then evaluated in the empirical part of the article with respect to its effectiveness for typical subject-specific search topics. Bradfordizing rests on the Bradford Law of Scattering (BLS), according to which the literature on any given subject area or topic is spread across zones of differing document concentration. A core zone of high literature concentration is followed by zones of medium and low concentration. Bradfordizing thus sorts, or ranks, a document set by the so-called core journals. A retrieval test with 164 intellectually assessed queries against databases from the social and political sciences, economics, psychology, and medicine shows that documents from the core journals are judged relevant significantly more often than documents from the second document zone, i.e. the peripheral journals. Implementing Bradfordizing and further re-ranking methods delivers immediate added value for the user.
  12. Tavakolizadeh-Ravari, M.: Analysis of the long term dynamics in thesaurus developments and its consequences (2017) 0.03
    0.03376022 = product of:
      0.06752044 = sum of:
        0.06752044 = product of:
          0.13504088 = sum of:
            0.13504088 = weight(_text_:dokumente in 3081) [ClassicSimilarity], result of:
              0.13504088 = score(doc=3081,freq=8.0), product of:
                0.2999863 = queryWeight, product of:
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.058902346 = queryNorm
                0.45015684 = fieldWeight in 3081, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  5.092943 = idf(docFreq=737, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3081)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This thesis analyzes the dynamic development and use of thesaurus terms. In addition, it focuses on the factors that influence the number of index terms per document or journal. MeSH and the corresponding database MEDLINE served as the objects of study. The main findings are: 1. The MeSH thesaurus has grown logarithmically through three distinct phases. Such a thesaurus should follow the equation T = 3,076.6 ln(d) - 22,695 + 0.0039 d (where T is the number of terms, ln the natural logarithm, and d the number of documents). To construct such a thesaurus, one therefore needs about 1,600 documents on different topics within the thesaurus's subject area. The dynamic growth of a thesaurus like MeSH requires the introduction of one new term per 256 newly indexed documents. 2. The distribution of thesaurus terms yields three categories: heavily, normally, and rarely used headings. The last group is in a test phase, while in the first and second categories the newly added descriptors drive thesaurus growth. 3. There is a logarithmic relationship between the number of index terms per article and its page count, for articles of one to twenty-one pages. 4. Journal articles that appear in MEDLINE with abstracts receive almost two more descriptors. 5. The findability of non-English-language documents in MEDLINE is lower than that of English-language documents. 6. Articles from journals with an impact factor of zero to fifteen do not receive more index terms than those from the other journals covered by MEDLINE. 7. Within an indexing system, different journals carry more or less weight in their findability. The distribution of index terms per page shows that MEDLINE has three categories of publications. Moreover, there are a few strongly favored journals.
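    The growth equation in finding 1 can be sketched directly, reading the coefficients in English decimal notation (T = 3076.6 ln d - 22695 + 0.0039 d); note that the linear coefficient 0.0039 is almost exactly 1/256, matching the one-new-term-per-256-documents rate stated above:

    ```python
    import math

    def thesaurus_terms(d):
        """Estimated number of thesaurus terms T after d indexed documents,
        per the logarithmic growth equation reported for MeSH/MEDLINE."""
        return 3076.6 * math.log(d) - 22695 + 0.0039 * d

    # The estimate turns positive at roughly d = 1,600, consistent with the
    # ~1,600-document threshold for constructing such a thesaurus.
    thesaurus_terms(1600)   # small positive value
    ```

    For large d the logarithmic term flattens and growth approaches the linear rate 0.0039 terms per document, i.e. about one term per 256 documents.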
  13. Nicholls, P.T.: Empirical validation of Lotka's law (1986) 0.03
    0.03192182 = product of:
      0.06384364 = sum of:
        0.06384364 = product of:
          0.12768728 = sum of:
            0.12768728 = weight(_text_:22 in 5509) [ClassicSimilarity], result of:
              0.12768728 = score(doc=5509,freq=2.0), product of:
                0.20626599 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.058902346 = queryNorm
                0.61904186 = fieldWeight in 5509, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=5509)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Information processing and management. 22(1986), S.417-419
  14. Nicolaisen, J.: Citation analysis (2007) 0.03
    0.03192182 = product of:
      0.06384364 = sum of:
        0.06384364 = product of:
          0.12768728 = sum of:
            0.12768728 = weight(_text_:22 in 6091) [ClassicSimilarity], result of:
              0.12768728 = score(doc=6091,freq=2.0), product of:
                0.20626599 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.058902346 = queryNorm
                0.61904186 = fieldWeight in 6091, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=6091)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    13. 7.2008 19:53:22
  15. Fiala, J.: Information flood : fiction and reality (1987) 0.03
    0.03192182 = product of:
      0.06384364 = sum of:
        0.06384364 = product of:
          0.12768728 = sum of:
            0.12768728 = weight(_text_:22 in 1080) [ClassicSimilarity], result of:
              0.12768728 = score(doc=1080,freq=2.0), product of:
                0.20626599 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.058902346 = queryNorm
                0.61904186 = fieldWeight in 1080, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.125 = fieldNorm(doc=1080)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Thermochimica acta. 110(1987), S.11-22
  16. Shen, J.; Yao, L.; Li, Y.; Clarke, M.; Wang, L.; Li, D.: Visualizing the history of evidence-based medicine : a bibliometric analysis (2013) 0.03
    Abstract
    The aim of this paper is to visualize the history of evidence-based medicine (EBM) and to examine the characteristics of EBM development in China and the West. We searched the Web of Science and the Chinese National Knowledge Infrastructure database for papers related to EBM. We applied information visualization techniques, citation analysis, cocitation analysis, cocitation cluster analysis, and network analysis to construct historiographies, theme networks, and chronological theme maps regarding EBM in China and the West. EBM appeared to develop in 4 stages: incubation (1972-1992 in the West vs. 1982-1999 in China), initiation (1992-1993 vs. 1999-2000), rapid development (1993-2000 vs. 2000-2004), and stable distribution (2000 onwards vs. 2004 onwards). Although there was a lag in EBM initiation in China compared with the West, the pace of development appeared similar. Our study shows that important differences exist in research themes, domain structures, and development depth, and in the speed of adoption between China and the West. In the West, efforts in EBM have shifted from education to practice, and from the quality of evidence to its translation. In China, there was a similar shift from education to practice, and from production of evidence to its translation. In addition, this concept has diffused to other healthcare areas, leading to the development of evidence-based traditional Chinese medicine, evidence-based nursing, and evidence-based policy making.
  17. Su, Y.; Han, L.-F.: ¬A new literature growth model : variable exponential growth law of literature (1998) 0.03
    Date
    22. 5.1999 19:22:35
  18. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.03
    Date
    22. 7.2006 15:22:28
  19. Diodato, V.: Dictionary of bibliometrics (1994) 0.03
    Footnote
    Rez. in: Journal of library and information science 22(1996) no.2, S.116-117 (L.C. Smith)
  20. Bookstein, A.: Informetric distributions : I. Unified overview (1990) 0.03
    Date
    22. 7.2006 18:55:29
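The relevance figure printed after each title comes from Lucene's classic TF-IDF ranking (ClassicSimilarity), where a leaf term score is queryWeight × fieldWeight, with queryWeight = idf × queryNorm and fieldWeight = sqrt(termFreq) × idf × fieldNorm, scaled by coordination factors. A minimal sketch recomputing one such score from the figures this page reports (the function name and the fixed coord factors are illustrative, not part of the Lucene API):

```python
import math

def classic_sim_score(freq, idf, query_norm, field_norm, coords=(0.5, 0.5)):
    """Recompute a Lucene ClassicSimilarity leaf score from its explain figures."""
    tf = math.sqrt(freq)                  # tf = sqrt(term frequency in the field)
    query_weight = idf * query_norm       # query-side weight
    field_weight = tf * idf * field_norm  # document-side weight (includes length norm)
    score = query_weight * field_weight
    for c in coords:                      # coord(1/2) factors from the explain tree
        score *= c
    return score

# Entry 14 (doc 6091): freq=2.0, idf=3.5018296, queryNorm=0.058902346, fieldNorm=0.125
print(round(classic_sim_score(2.0, 3.5018296, 0.058902346, 0.125), 8))
```

Plugging in the figures for entry 14 reproduces its listed score of roughly 0.0319; the same function with entry 17's freq=4.0 and fieldNorm=0.078125 yields its 0.0282.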

Years

Languages

  • e 123
  • d 15
  • ro 1

Types

  • a 134
  • m 3
  • el 2
  • x 2
  • s 1