Search (407 results, page 1 of 21)

  • Active filter: type_ss:"el"
  1. Tay, A.: ¬The next generation discovery citation indexes : a review of the landscape in 2020 (2020) 0.11
    0.10835254 = product of:
      0.16252881 = sum of:
        0.03873757 = weight(_text_:science in 40) [ClassicSimilarity], result of:
          0.03873757 = score(doc=40,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.2881068 = fieldWeight in 40, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=40)
        0.12379125 = sum of:
          0.075381085 = weight(_text_:index in 40) [ClassicSimilarity], result of:
            0.075381085 = score(doc=40,freq=2.0), product of:
              0.22304957 = queryWeight, product of:
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.05104385 = queryNorm
              0.33795667 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
          0.04841016 = weight(_text_:22 in 40) [ClassicSimilarity], result of:
            0.04841016 = score(doc=40,freq=2.0), product of:
              0.17874686 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05104385 = queryNorm
              0.2708308 = fieldWeight in 40, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=40)
      0.6666667 = coord(2/3)
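The score breakdowns shown for each result follow Lucene's ClassicSimilarity (TF-IDF): each matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm, fieldWeight = tf × idf × fieldNorm, tf = √freq, and idf = 1 + ln(maxDocs / (docFreq + 1)); the sum is then scaled by a coord factor for partial query matches. A minimal sketch that reproduces the per-term numbers in the tree above (constants taken directly from that tree):

```python
import math

def idf(doc_freq, max_docs):
    # Lucene ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1.0))

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    # score = queryWeight * fieldWeight
    tf = math.sqrt(freq)                                  # tf = sqrt(term frequency)
    query_weight = idf(doc_freq, max_docs) * query_norm
    field_weight = tf * idf(doc_freq, max_docs) * field_norm
    return query_weight * field_weight

# term "science" in doc 40: freq=4, docFreq=8627, maxDocs=44218
science = term_score(4.0, 8627, 44218, 0.05104385, 0.0546875)
# reproduces the 0.03873757 reported in the explanation tree
```

The same function reproduces the "index" and "22" weights with their respective docFreq values; only queryNorm and fieldNorm are shared across terms.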
    
    Abstract
    Conclusion There is a reason why Google Scholar and Web of Science/Scopus are kings of the hill in their respective arenas. They have strong brand recognition, a head start in development and a mass of eyeballs and users that leads to an almost virtuous cycle of improvement. Competing against such well-established competitors is not easy, even when one has deep pockets (Microsoft) or a killer idea (scite). It will be interesting to see what the landscape will look like in 2030. Stay tuned for part II, where I review each particular index.
    Date
    17.11.2020 12:22:59
    Object
    Web of Science
  2. Calculating the h-index : Web of Science, Scopus or Google Scholar? (2011) 0.10
    0.09906621 = product of:
      0.14859931 = sum of:
        0.055339385 = weight(_text_:science in 854) [ClassicSimilarity], result of:
          0.055339385 = score(doc=854,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.41158113 = fieldWeight in 854, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.078125 = fieldNorm(doc=854)
        0.09325992 = product of:
          0.18651985 = sum of:
            0.18651985 = weight(_text_:index in 854) [ClassicSimilarity], result of:
              0.18651985 = score(doc=854,freq=6.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.836226 = fieldWeight in 854, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.078125 = fieldNorm(doc=854)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Comparison of how the h-index is calculated in the three tools, using Stephen Hawking as an example (WoS: 59, Scopus: 19, Google Scholar: 76)
    Object
    h-index
    Web of Science
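The h-index compared in this entry is defined the same way everywhere: the largest h such that h of an author's papers each have at least h citations. The tool-to-tool differences the entry reports for Hawking (WoS: 59, Scopus: 19, Google Scholar: 76) come entirely from each database's citation counts, not from the formula. A minimal sketch of the computation itself:

```python
def h_index(citations):
    # largest h such that at least h papers have >= h citations each
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h
```

Feeding the same publication list with WoS, Scopus, or Google Scholar citation counts into this one function yields the three different values.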
  3. Harzing, A.-W.: Comparing the Google Scholar h-index with the ISI Journal Impact Factor (2008) 0.07
    0.06851512 = product of:
      0.10277268 = sum of:
        0.027391598 = weight(_text_:science in 855) [ClassicSimilarity], result of:
          0.027391598 = score(doc=855,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.20372227 = fieldWeight in 855, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=855)
        0.075381085 = product of:
          0.15076217 = sum of:
            0.15076217 = weight(_text_:index in 855) [ClassicSimilarity], result of:
              0.15076217 = score(doc=855,freq=8.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.67591333 = fieldWeight in 855, product of:
                  2.828427 = tf(freq=8.0), with freq of:
                    8.0 = termFreq=8.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=855)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Publication in academic journals is a key criterion for appointment, tenure and promotion in universities. Many universities weigh publications according to the quality or impact of the journal. Traditionally, journal quality has been assessed through the ISI Journal Impact Factor (JIF). This paper proposes an alternative metric - Hirsch's h-index - and data source - Google Scholar - to assess journal impact. Using a systematic comparison between the Google Scholar h-index and the ISI JIF for a sample of 838 journals in Economics & Business, we argue that the former provides a more accurate and comprehensive measure of journal impact.
    Object
    h-index
    Web of Science
  4. Mitchell, J.S.: DDC 22 : an introduction (2003) 0.06
    0.061209828 = product of:
      0.18362948 = sum of:
        0.18362948 = sum of:
          0.075381085 = weight(_text_:index in 1936) [ClassicSimilarity], result of:
            0.075381085 = score(doc=1936,freq=2.0), product of:
              0.22304957 = queryWeight, product of:
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.05104385 = queryNorm
              0.33795667 = fieldWeight in 1936, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1936)
          0.108248405 = weight(_text_:22 in 1936) [ClassicSimilarity], result of:
            0.108248405 = score(doc=1936,freq=10.0), product of:
              0.17874686 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05104385 = queryNorm
              0.6055961 = fieldWeight in 1936, product of:
                3.1622777 = tf(freq=10.0), with freq of:
                  10.0 = termFreq=10.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1936)
      0.33333334 = coord(1/3)
    
    Abstract
    Dewey Decimal Classification and Relative Index, Edition 22 (DDC 22) will be issued simultaneously in print and web versions in July 2003. The new edition is the first full print update to the Dewey Decimal Classification system in seven years; it includes several significant updates and many new numbers and topics. DDC 22 also features some fundamental structural changes that have been introduced with the goals of promoting classifier efficiency and improving the DDC for use in a variety of applications in the web environment. Most importantly, the content of the new edition has been shaped by the needs and recommendations of Dewey users around the world. The worldwide user community has an important role in shaping the future of the DDC.
    Object
    DDC-22
  5. Scientometrics pioneer Eugene Garfield dies : Eugene Garfield, founder of the Institute for Scientific Information and The Scientist, has passed away at age 91 (2017) 0.05
    0.052249998 = product of:
      0.078375 = sum of:
        0.03623568 = weight(_text_:science in 3460) [ClassicSimilarity], result of:
          0.03623568 = score(doc=3460,freq=14.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.26949924 = fieldWeight in 3460, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.02734375 = fieldNorm(doc=3460)
        0.042139314 = product of:
          0.08427863 = sum of:
            0.08427863 = weight(_text_:index in 3460) [ClassicSimilarity], result of:
              0.08427863 = score(doc=3460,freq=10.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.37784708 = fieldWeight in 3460, product of:
                  3.1622777 = tf(freq=10.0), with freq of:
                    10.0 = termFreq=10.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=3460)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Content
    Cf. also Open Password, no. 167 of 01.03.2017: "Eugene Garfield, founder and pioneer of citation indexing and citation analysis, without whom information science would look different today, has died at the age of 91. He is survived by his wife, three sons, a daughter, a stepdaughter, two granddaughters and two great-grandchildren. Garfield took his first degree, a bachelor's in chemistry, at Columbia University in New York City in 1949. In 1954 he added a degree in library science, and in 1961 he received his doctorate in structural linguistics. By his own account he was neither particularly good nor particularly happy as a chemistry student. His "moment of revelation" came at a meeting of the American Chemical Society, where he discovered that searching for literature might be a way to make a living: "So I went to the Chairman of the meeting and said: 'How do you get a job in this racket?'" From 1955 Garfield first worked as a consultant to pharmaceutical companies, specializing in scientific information by working through the contents of the relevant journals. In 1955 he put forward, in "Science", his groundbreaking idea of systematically recording the citations of scientific publications and making the connections between citations explicit. In 1960 he founded the Institute for Scientific Information (ISI), whose CEO he remained until 1992. In 1964 he launched the Science Citation Index. Further instruments followed: the Social Sciences Citation Index (from 1973), the Arts and Humanities Citation Index (from 1978) and the Journal Citation Reports. These indexes were brought together in the "Web of Science" and made electronically accessible as a database, enabling researchers to find the literature relevant to them "at their fingertips" and to find their way around in it.
Beyond that, the rankings built on Garfield's measures made it possible to gauge the relative scientific importance of scientific contributions, authors, institutions, regions and countries.
    In connection with his measures, Garfield campaigned against "bibliographic negligence" and "citation amnesia". In 2002 he wrote: "There will never be a perfect solution to the problem of acknowledging intellectual debts. But a beginning can be made if journal editors will demand a signed pledge from authors that they have searched Medline, Science Citation Index, or other appropriate print and electronic databases." He also warned, however, against improper use of his measures and against exaggerated expectations of them in career decisions about scientists and survival decisions about scientific institutions. In 1992 the Thomson Corporation acquired ISI for 210 million dollars. Its present-day successor organization, Clarivate Analytics, employs more than 4,000 people in over a hundred countries. Garfield also founded a newspaper for scientists, in particular life scientists, "The Scientist", which still exists and is available as a free push service. In his contributions to science policy he criticized, for instance, President Reagan's science advisers in 1986 as "advocates of the administration's science policies, rather than as objective conduits for communication between the president and the science community." To his piece arguing for continued funding of UNESCO research programs he gave the title "Let's stand up for Global Science". That remains a fitting title in the Trump era, with a US government that dismisses the concept of truth on which science rests as meaningless and turns to nationalism and isolation instead of international communication, cooperation and the joint pursuit of shared interests."
  6. Stumpf, G.: "Kerngeschäft" Sacherschließung in neuer Sicht : was gezielte intellektuelle Arbeit und maschinelle Verfahren gemeinsam bewirken können (2015) 0.05
    0.051671706 = product of:
      0.15501511 = sum of:
        0.15501511 = sum of:
          0.106604956 = weight(_text_:index in 1703) [ClassicSimilarity], result of:
            0.106604956 = score(doc=1703,freq=4.0), product of:
              0.22304957 = queryWeight, product of:
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.05104385 = queryNorm
              0.4779429 = fieldWeight in 1703, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1703)
          0.04841016 = weight(_text_:22 in 1703) [ClassicSimilarity], result of:
            0.04841016 = score(doc=1703,freq=2.0), product of:
              0.17874686 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05104385 = queryNorm
              0.2708308 = fieldWeight in 1703, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=1703)
      0.33333334 = coord(1/3)
    
    Content
    This is the lightly revised text of a talk given at the VDB continuing-education event "Wandel als Konstante: neue Aufgaben und Herausforderungen für sozialwissenschaftliche Bibliotheken" on 22/23 January 2015 in Berlin.
    Source
    https://opus.bibliothek.uni-augsburg.de/opus4/frontdoor/index/index/docId/3002
  7. Atkins, H.: ¬The ISI® Web of Science® - links and electronic journals : how links work today in the Web of Science, and the challenges posed by electronic journals (1999) 0.05
    0.045865875 = product of:
      0.06879881 = sum of:
        0.038340252 = weight(_text_:science in 1246) [ClassicSimilarity], result of:
          0.038340252 = score(doc=1246,freq=12.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.28515178 = fieldWeight in 1246, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=1246)
        0.03045856 = product of:
          0.06091712 = sum of:
            0.06091712 = weight(_text_:index in 1246) [ClassicSimilarity], result of:
              0.06091712 = score(doc=1246,freq=4.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.27311024 = fieldWeight in 1246, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1246)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    Since their inception in the early 1960s the strength and unique aspect of the ISI citation indexes has been their ability to illustrate the conceptual relationships between scholarly documents. When authors create reference lists for their papers, they make explicit links between their own, current work and the prior work of others. The exact nature of these links may not be expressed in the references themselves, and the motivation behind them may vary (this has been the subject of much discussion over the years), but the links embodied in references do exist. Over the past 30+ years, technology has allowed ISI to make the presentation of citation searching increasingly accessible to users of our products. Citation searching and link tracking moved from being rather cumbersome in print, to being direct and efficient (albeit non-intuitive) online, to being somewhat more user-friendly in CD format. But it is the confluence of the hypertext link and development of Web browsers that has enabled us to present to users a new form of citation product -- the Web of Science -- that is intuitive and makes citation indexing conceptually accessible. A cited reference search begins with a known, important (or at least relevant) document used as the search term. The search allows one to identify subsequent articles that have cited that document. This feature adds the dimension of prospective searching to the usual retrospective searching that all bibliographic indexes provide. Citation indexing is a prime example of a concept before its time - important enough to be used in the meantime by those sufficiently motivated, but just waiting for the right technology to come along to expand its use. While it was possible to follow citation links in earlier citation index formats, this required a level of effort on the part of users that was often just too much to ask of the casual user. 
In the citation indexes as presented in the Web of Science, the relationship between citing and cited documents is evident to users, and a click of the mouse is all it takes to follow a citation link. Citation connections are established between the published papers being indexed from the 8,000+ journals ISI covers and the items their reference lists contain during the data capture process. It is the standardized capture of each of the references included with these documents that enables us to provide the citation searching feature in all the citation index formats, as well as both internal and external links in the Web of Science.
    Object
    Web of Science
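The cited-reference search described above is the inverse of the reference lists captured at indexing time: invert the "citing → cited" pairs into a "cited → citing" map, and prospective (forward) searching falls out. A minimal sketch of that inversion, with document IDs invented for illustration:

```python
# A reference list maps each citing paper to the papers it cites;
# a cited-reference search needs the inverse map.
references = {
    "paperA": ["classic1965"],
    "paperB": ["classic1965", "paperA"],
}

def build_citation_index(references):
    # invert citing -> cited into cited -> citing (forward citations)
    cited_by = {}
    for citing, refs in references.items():
        for ref in refs:
            cited_by.setdefault(ref, []).append(citing)
    return cited_by

cited_by = build_citation_index(references)
# cited_by["classic1965"] now lists the later papers that cite it
```

This is only the conceptual data structure; the standardized capture of references that the article describes is what makes the inversion possible at the scale of 8,000+ journals.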
  8. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.05
    0.045039535 = product of:
      0.1351186 = sum of:
        0.1351186 = product of:
          0.4053558 = sum of:
            0.4053558 = weight(_text_:3a in 1826) [ClassicSimilarity], result of:
              0.4053558 = score(doc=1826,freq=2.0), product of:
                0.4327503 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05104385 = queryNorm
                0.93669677 = fieldWeight in 1826, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1826)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    http://www.google.de/url?sa=t&rct=j&q=&esrc=s&source=web&cd=5&ved=0CDQQFjAE&url=http%3A%2F%2Fdigbib.ubka.uni-karlsruhe.de%2Fvolltexte%2Fdocuments%2F3131107&ei=HzFWVYvGMsiNsgGTyoFI&usg=AFQjCNE2FHUeR9oQTQlNC4TPedv4Mo3DaQ&sig2=Rlzpr7a3BLZZkqZCXXN_IA&bvm=bv.93564037,d.bGg&cad=rja
  9. Knoll, A.: Kompetenzprofil von Information Professionals in Unternehmen (2016) 0.04
    0.044290036 = product of:
      0.13287011 = sum of:
        0.13287011 = sum of:
          0.09137568 = weight(_text_:index in 3069) [ClassicSimilarity], result of:
            0.09137568 = score(doc=3069,freq=4.0), product of:
              0.22304957 = queryWeight, product of:
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.05104385 = queryNorm
              0.40966535 = fieldWeight in 3069, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                4.369764 = idf(docFreq=1520, maxDocs=44218)
                0.046875 = fieldNorm(doc=3069)
          0.04149442 = weight(_text_:22 in 3069) [ClassicSimilarity], result of:
            0.04149442 = score(doc=3069,freq=2.0), product of:
              0.17874686 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.05104385 = queryNorm
              0.23214069 = fieldWeight in 3069, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=3069)
      0.33333334 = coord(1/3)
    
    Content
    Cf.: https://yis.univie.ac.at/index.php/yis/article/view/1324/1234. This contribution is based on the following thesis: Lamparter, Anna: Kompetenzprofil für Information Professionals in Unternehmen. Master's thesis (M.A.), Hochschule Hannover, 2015. Full text: https://serwiss.bib.hs-hannover.de/frontdoor/index/index/docId/528 See also (née Lamparter): Kompetenzprofil von Information Professionals in Unternehmen. In:
    Date
    28. 7.2016 16:22:54
  10. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.04
    0.044130262 = product of:
      0.06619539 = sum of:
        0.019565428 = weight(_text_:science in 231) [ClassicSimilarity], result of:
          0.019565428 = score(doc=231,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.1455159 = fieldWeight in 231, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0390625 = fieldNorm(doc=231)
        0.04662996 = product of:
          0.09325992 = sum of:
            0.09325992 = weight(_text_:index in 231) [ClassicSimilarity], result of:
              0.09325992 = score(doc=231,freq=6.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.418113 = fieldWeight in 231, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=231)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    As an extension to the current Web, Semantic Web will not only contain structured data with machine understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index semantic web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
    Series
    Lecture notes in computer science; 4825
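The abstract's core observation is that structured constraints and keywords can share one posting-list machinery. This is not Semplore's actual code, just a minimal illustration of the idea with invented data: text terms and attribute-value constraints both map to sorted doc-id lists in one inverted index, so a hybrid query reduces to posting-list intersection.

```python
# one inverted index for both kinds of query term
index = {
    ("text", "semantic"): [1, 2, 4],   # keyword posting list
    ("text", "web"): [1, 3, 4],
    ("type", "Article"): [2, 4, 5],    # structured (attribute) posting list
}

def hybrid_query(index, *terms):
    # intersect the postings of every keyword and structured constraint
    postings = [set(index.get(term, [])) for term in terms]
    return sorted(set.intersection(*postings)) if postings else []
```

A query mixing a keyword with a structured constraint, e.g. `hybrid_query(index, ("text", "semantic"), ("type", "Article"))`, then runs on exactly the same engine as a pure keyword query.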
  11. Cahier, J.-P.; Zaher, L'H.; Isoard , G.: Document et modèle pour l'action, une méthode pour le web socio-sémantique : application à un web 2.0 en développement durable (2010) 0.04
    0.043388095 = product of:
      0.06508214 = sum of:
        0.027391598 = weight(_text_:science in 4836) [ClassicSimilarity], result of:
          0.027391598 = score(doc=4836,freq=2.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.20372227 = fieldWeight in 4836, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4836)
        0.037690543 = product of:
          0.075381085 = sum of:
            0.075381085 = weight(_text_:index in 4836) [ClassicSimilarity], result of:
              0.075381085 = score(doc=4836,freq=2.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.33795667 = fieldWeight in 4836, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4836)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    We present the DOCMA method (DOCument and Model for Action), aimed at socio-semantic web applications in large communities of interest. DOCMA is designed for end users without any background in information science. Community members can elicit, structure and index shared business items emerging from their inquiry (such as projects, actors, products, and geographically situated objects of interest). We apply DOCMA to an experiment in the field of sustainable development: the Cartodd-Map21 collaborative web portal.
  12. Krattenthaler, C.: Was der h-Index wirklich aussagt (2021) 0.04
    0.037988503 = product of:
      0.113965504 = sum of:
        0.113965504 = product of:
          0.22793101 = sum of:
            0.22793101 = weight(_text_:index in 407) [ClassicSimilarity], result of:
              0.22793101 = score(doc=407,freq=14.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                1.021885 = fieldWeight in 407, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.0625 = fieldNorm(doc=407)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Abstract
    This note shows that the so-called h-index (Hirsch's bibliometric index) conveys essentially the same information as the total number of citations of an author's publications, and is therefore a useless bibliometric index. The argument rests on a fascinating theorem of probability theory, which is also explained here.
    Content
    Cf.: DOI: 10.1515/dmvm-2021-0050. Also reprinted under the title 'Der h-Index - "ein nutzloser bibliometrischer Index"' in Open Password no. 1007 of 06.12.2021 at: https://www.password-online.de/?mailpoet_router&endpoint=view_in_browser&action=view&data=WzM3NCwiZDI3MzMzOTEwMzUzIiwwLDAsMzQ4LDFd.
    Object
    h-index
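One part of the relationship between the h-index and total citations is elementary: the h most-cited papers alone carry at least h² citations, so h ≤ √(total citations) for any citation list whatsoever. A quick sketch verifying that bound (the deeper probabilistic equivalence the note proves is not reproduced here):

```python
import math

def h_index(citations):
    # largest h such that at least h papers have >= h citations each
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
    return h

def sqrt_bound_holds(citations):
    # the h core alone contributes >= h*h citations,
    # hence h <= sqrt(sum of all citations)
    return h_index(citations) <= math.sqrt(sum(citations))
```

The bound holds for every input; the note's stronger claim is that, for realistic citation distributions, h in fact tracks the total citation count so closely that it adds no information.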
  13. Popper, K.R.: Three worlds : the Tanner lecture on human values. Deliverd at the University of Michigan, April 7, 1978 (1978) 0.04
    0.03603163 = product of:
      0.108094886 = sum of:
        0.108094886 = product of:
          0.32428464 = sum of:
            0.32428464 = weight(_text_:3a in 230) [ClassicSimilarity], result of:
              0.32428464 = score(doc=230,freq=2.0), product of:
                0.4327503 = queryWeight, product of:
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.05104385 = queryNorm
                0.7493574 = fieldWeight in 230, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  8.478011 = idf(docFreq=24, maxDocs=44218)
                  0.0625 = fieldNorm(doc=230)
          0.33333334 = coord(1/3)
      0.33333334 = coord(1/3)
    
    Source
    https://tannerlectures.utah.edu/_documents/a-to-z/p/popper80.pdf
  14. Metrics in research : for better or worse? (2016) 0.04
    0.035062876 = product of:
      0.05259431 = sum of:
        0.022135753 = weight(_text_:science in 3312) [ClassicSimilarity], result of:
          0.022135753 = score(doc=3312,freq=4.0), product of:
            0.13445559 = queryWeight, product of:
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.05104385 = queryNorm
            0.16463245 = fieldWeight in 3312, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              2.6341193 = idf(docFreq=8627, maxDocs=44218)
              0.03125 = fieldNorm(doc=3312)
        0.03045856 = product of:
          0.06091712 = sum of:
            0.06091712 = weight(_text_:index in 3312) [ClassicSimilarity], result of:
              0.06091712 = score(doc=3312,freq=4.0), product of:
                0.22304957 = queryWeight, product of:
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.05104385 = queryNorm
                0.27311024 = fieldWeight in 3312, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  4.369764 = idf(docFreq=1520, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3312)
          0.5 = coord(1/2)
      0.6666667 = coord(2/3)
    
    Abstract
    If you are an academic researcher but have not yet earned your Nobel prize or reached retirement, it is unlikely that you have never heard of research metrics. These metrics aim at quantifying various aspects of the research process, at the level of individual researchers (e.g. h-index, altmetrics), scientific journals (e.g. impact factors) or entire universities/countries (e.g. rankings). Although such "measurements" have existed in a simple form for a long time, their widespread calculation was enabled by the advent of the digital era (large amounts of data available worldwide in a computer-compatible format). And in this new era, what becomes technically possible will be done, and what is done and appears to simplify our lives will be used. As a result, a rapidly growing number of statistics-based numerical indices are nowadays fed into decision-making processes. This is true in nearly all aspects of society (politics, economy, education and private life), and in particular in research, where metrics play an increasingly important role in determining positions, funding, awards, research programs, career choices, reputations, etc.
    Content
    Inhalt: Metrics in Research - For better or worse? / Jozica Dolenc, Philippe Hünenberger, Oliver Renn - A brief visual history of research metrics / Oliver Renn, Jozica Dolenc, Joachim Schnabl - Bibliometry: The wizard of O's / Philippe Hünenberger - The grip of bibliometrics - A student perspective / Matthias Tinzl - Honesty and transparency to taxpayers is the long-term fundament for stable university funding / Wendelin J. Stark - Beyond metrics: Managing the performance of your work / Charlie Rapple - Scientific profiling instead of bibliometrics: Key performance indicators of the future / Rafael Ball - More knowledge, less numbers / Carl Philipp Rosenau - Do we really need BIBLIO-metrics to evaluate individual researchers? / Rüdiger Mutz - Using research metrics responsibly and effectively as a researcher / Peter I. Darroch, Lisa H. Colledge - Metrics in research: More (valuable) questions than answers / Urs Hugentobler - Publication of research results: Use and abuse / Wilfred F. van Gunsteren - Wanted: Transparent algorithms, interpretation skills, common sense / Eva E. Wille - Impact factors, the h-index, and citation hype - Metrics in research from the point of view of a journal editor / Renato Zenobi - Rashomon or metrics in a publisher's world / Gabriella Karger - The impact factor and I: A love-hate relationship / Jean-Christophe Leroux - Personal experiences bringing altmetrics to the academic market / Ben McLeish - Fatally attracted by numbers? / Oliver Renn - On computable numbers / Gerd Folkers, Laura Folkers - ScienceMatters - Single observation science publishing and linking observations to create an internet of science / Lawrence Rajendran.
  15. Freyberg, L.: ¬Die Lesbarkeit der Welt : Rezension zu 'The Concept of Information in Library and Information Science. A Field in Search of Its Boundaries: 8 Short Comments Concerning Information'. In: Cybernetics and Human Knowing. Vol. 22 (2015), 1, 57-80. Kurzartikel von Luciano Floridi, Søren Brier, Torkild Thellefsen, Martin Thellefsen, Bent Sørensen, Birger Hjørland, Brenda Dervin, Ken Herold, Per Hasle und Michael Buckland (2016) 0.03
    Abstract
    It is time once again to update the concept of "information", or at least to report on its status quo. Information is the central object of information science and one of the most important research objects of library and information science. Surprisingly, however, a continuous discourse comparable to the critical engagement with, and resulting updating of, concepts in the humanities does not take place, at least not in the German-speaking world.1 In the sense of theoretical foundational research, and in order to develop a shared conceptual matrix, such a discourse would certainly be desirable. Just last year, the journal "Cybernetics and Human Knowing", edited by Søren Brier (see "The foundation of LIS in information science and semiotics"2 as well as "Semiotics in Information Science. An Interview with Søren Brier on the application of semiotic theories and the epistemological problem of a transdisciplinary Information Science"3), published eight position statements on the concept of information, all worth reading, by renowned philosophers and library and information scientists. Unfortunately, "Cybernetics & Human Knowing" is difficult to access in Germany, since it is not an open-access journal and only eight German libraries subscribe to it.4 Given this poor availability, it seems sensible to offer a detailed review of these eight short articles here.
    The journal, which according to its subtitle is devoted to "second order cybernetics, autopoiesis and cyber-semiotics", has existed in print since 1992/93. Since 1998 (volume 5, issue 1) it has also been offered electronically, for a fee, as part of a package from the publisher Imprint Academic in Exeter. Given the journal's orientation, which could be regarded as a theoretical contribution to the digital humanities avant la lettre, the concept of information is treated there regularly. The phenomenologically and mathematically grounded semiotics of Charles Sanders Peirce comes up again and again in this context. The connection to practice, above all in library and information science (LIS), always plays a major role, as can also be observed in Brier himself, who in his main work "Cybersemiotics" applies the Peircean sign categories to, among other things, the librarian's activity of indexing.5 Issue 1/2015 of the journal now asks "What underlines Information?" and contains, among others, articles on the Chinese philosopher Wu Kun's outline of a philosophy of information as well as on Peirce and Spencer Brown. The eight short articles on the concept of information in library and information science were edited by the Thellefsen brothers (Torkild and Martin) and Bent Sørensen, who also jointly authored one of the commentaries.
  16. Schreiber, M.: Restricting the h-index to a citation time window : a case study of a timed Hirsch index (2014) 0.03
    Abstract
    The h-index has been shown to increase in many cases mostly because of citations to rather old publications. This inertia can be circumvented by restricting the evaluation to a citation time window. Here I report results of an empirical study analyzing the evolution of the thus defined timed h-index as a function of the length of the citation time window.
    Object
    h-index
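A minimal sketch of the computation the abstract describes, with illustrative data layout and function names (this is not Schreiber's code): the standard Hirsch index is applied to citation counts summed only over a chosen window of years.

```python
def h_index(citation_counts):
    """Largest h such that h publications have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def timed_h_index(citations_by_year, window_start, window_end):
    """h-index restricted to citations received within [window_start, window_end].

    citations_by_year: one dict per publication, mapping a year to the
    number of citations that publication received in that year.
    """
    windowed = [
        sum(c for year, c in paper.items() if window_start <= year <= window_end)
        for paper in citations_by_year
    ]
    return h_index(windowed)
```

When citations are concentrated in old years, the timed variant falls below the conventional h-index, which is exactly the inertia the abstract refers to.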
  17. Dousa, T.: Everything Old is New Again : Perspectivism and Polyhierarchy in Julius O. Kaiser's Theory of Systematic Indexing (2007) 0.03
    Abstract
    In the early years of the 20th century, Julius Otto Kaiser (1868-1927), a special librarian and indexer of technical literature, developed a method of knowledge organization (KO) known as systematic indexing. Certain elements of the method (its stipulation that all indexing terms be divided into the fundamental categories "concretes", "countries", and "processes", which are then to be synthesized into indexing "statements" formulated according to strict rules of citation order) have long been recognized as precursors to key principles of the theory of faceted classification. However, other, less well-known elements of the method may prove no less interesting to practitioners of KO. In particular, two aspects of systematic indexing seem to prefigure current trends in KO: (1) a perspectivist outlook that rejects universal classifications in favor of information organization systems customized to reflect local needs and (2) the incorporation of index terms extracted from source documents into a polyhierarchical taxonomical structure. Kaiser's perspectivism anticipates postmodern theories of KO, while his principled use of polyhierarchy to organize terms derived from the language of source documents provides a potentially fruitful model that can inform current discussions about harvesting natural-language terms, such as tags, and incorporating them into a flexibly structured controlled vocabulary.
    Source
    Proceedings 18th Workshop of the American Society for Information Science and Technology Special Interest Group in Classification Research, Milwaukee, Wisconsin. Ed.: Lussky, Joan
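As a toy illustration of the synthesis the abstract describes, an indexing "statement" can be modeled as one term per fundamental category combined in a fixed citation order. The concrete-country-process ordering used here is an assumption for illustration, not a full account of Kaiser's rules:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Statement:
    """One systematic-indexing statement built from Kaiser's three categories."""
    concrete: str                  # e.g. a commodity or material
    process: str                   # an action or condition applied to it
    country: Optional[str] = None  # optional locality term

    def citation_order(self) -> str:
        """Render the statement in the fixed citation order assumed here."""
        parts = [self.concrete]
        if self.country:
            parts.append(self.country)
        parts.append(self.process)
        return " - ".join(parts)
```

For example, `Statement("wool", "export", country="Australia").citation_order()` yields `"wool - Australia - export"`.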
  18. Roszkowski, M.; Lukas, C.: ¬A distributed architecture for resource discovery using metadata (1998) 0.03
    Abstract
    This article describes an approach for linking geographically distributed collections of metadata so that they are searchable as a single collection. We describe the infrastructure, which uses standard Internet protocols such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP), to distribute queries, return results, and exchange index information. We discuss the advantages of using linked collections of authoritative metadata as an alternative to using a keyword indexing search-engine for resource discovery. We examine other architectures that use metadata for resource discovery, such as Dienst/NCSTRL, the AHDS HTTP/Z39.50 Gateway, and the ROADS initiative. Finally, we discuss research issues and future directions of the project. The Internet Scout Project, which is funded by the National Science Foundation and is located in the Computer Sciences Department at the University of Wisconsin-Madison, is charged with assisting the higher education community in resource discovery on the Internet. To that end, the Scout Report and subsequent subject-specific Scout Reports were developed to guide the U.S. higher education community to research-quality resources. The Scout Report Signpost utilizes the content from the Scout Reports as the basis of a metadata collection. Signpost consists of more than 2000 cataloged Internet sites using established standards such as Library of Congress subject headings and abbreviated call letters, and emerging standards such as the Dublin Core (DC). This searchable and browseable collection is free and freely accessible, as are all of the Internet Scout Project's services.
    As well developed as both the Scout Reports and Signpost are, they cannot capture the wealth of high-quality content that is available on the Internet. An obvious next step toward increasing the usefulness of our own collection and its value to our customer base is to partner with other high-quality content providers who have developed similar collections and to develop a single, virtual collection. Project Isaac (working title) is the Internet Scout Project's latest resource discovery effort. Project Isaac involves the development of a research testbed that allows experimentation with protocols and algorithms for creating, maintaining, indexing and searching distributed collections of metadata. Project Isaac's infrastructure uses standard Internet protocols, such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP) to distribute queries, return results, and exchange index or centroid information. The overall goal is to support a single-search interface to geographically distributed and independently maintained metadata collections.
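The query-distribution step described above can be sketched generically: fan one query out to every linked collection, deduplicate by record identifier, and rank the merged list. This is a simplification under assumed interfaces (plain callables stand in for the LDAP/CIP referrals), not the project's actual protocol stack:

```python
def federated_search(query, collections):
    """Send one query to every linked collection and merge the result lists.

    `collections` maps a collection name to a callable returning
    (identifier, score) pairs. Duplicate identifiers keep the best score.
    """
    merged = {}
    for name, search in collections.items():
        for identifier, score in search(query):
            if identifier not in merged or score > merged[identifier][1]:
                merged[identifier] = (name, score)
    # Rank the combined result set by descending score.
    return sorted(
        ((ident, name, score) for ident, (name, score) in merged.items()),
        key=lambda item: item[2],
        reverse=True,
    )
```

A centroid-exchange step, as in CIP, would additionally let the broker skip collections whose index summaries cannot match the query at all.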
  19. Roy, W.; Gray, C.: Preparing existing metadata for repository batch import : a recipe for a fickle food (2018) 0.03
    Abstract
    In 2016, the University of Waterloo began offering a mediated copyright review and deposit service to support the growth of our institutional repository UWSpace. This resulted in the need to batch import large lists of published works into the institutional repository quickly and accurately. A range of methods have been proposed for harvesting publications metadata en masse, but many technological solutions can easily become detached from a workflow that is both reproducible for support staff and applicable to a range of situations. Many repositories offer the capacity for batch upload via CSV, so our method provides a template Python script that leverages the Habanero library for populating CSV files with existing metadata retrieved from the CrossRef API. In our case, we have combined this with useful metadata contained in a TSV file downloaded from Web of Science in order to enrich our metadata as well. The appeal of this 'low-maintenance' method is that it provides more robust options for gathering metadata semi-automatically, and only requires the user's ability to access Web of Science and the Python program, while still remaining flexible enough for local customizations.
    Date
    10.11.2018 16:27:22
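A minimal sketch of the kind of script the abstract describes: only the CrossRef lookup via the Habanero library follows the article; the CSV headers and helper names here are illustrative assumptions, to be adapted to your repository's batch-import template.

```python
import csv

# Illustrative CSV headers; a real repository batch-import template
# (e.g. for DSpace) defines its own field names.
FIELDS = ["doi", "title", "authors", "journal", "year"]

def message_to_row(msg):
    """Flatten a CrossRef 'message' dict into one CSV row."""
    date_parts = msg.get("issued", {}).get("date-parts", [[None]])
    return {
        "doi": msg.get("DOI", ""),
        "title": "; ".join(msg.get("title", [])),
        "authors": "; ".join(
            f"{a.get('family', '')}, {a.get('given', '')}"
            for a in msg.get("author", [])
        ),
        "journal": "; ".join(msg.get("container-title", [])),
        "year": date_parts[0][0] or "",
    }

def dois_to_csv(dois, path):
    """Look up each DOI via the CrossRef API and write one CSV row per work."""
    from habanero import Crossref  # pip install habanero; imported lazily here
    cr = Crossref()
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for doi in dois:
            result = cr.works(ids=doi)  # returns {'status': ..., 'message': {...}}
            writer.writerow(message_to_row(result["message"]))
```

Keeping the field mapping in a pure function like `message_to_row` is what makes the workflow reproducible for support staff: it can be tested and customized without touching the network code.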
  20. Place, E.: Internationale Zusammenarbeit bei Internet Subject Gateways (1999) 0.03
    Abstract
    Quite a number of libraries in Europe are engaged in developing Internet subject gateways, a service intended to help users find high-quality Internet resources. Subject gateways such as SOSIG (The Social Science Information Gateway) have been available on the Internet for several years now and offer an alternative to Internet search engines such as AltaVista and directories such as Yahoo. Tellingly, subject gateways draw on the skills, procedures and standards of the international library world and apply them to information from the Internet. This paper therefore argues that librarians should ideally play a leading role in building search services for Internet resources, and that information gateways are one way of doing so. It outlines some of the subject gateway initiatives in Europe and describes the tools and technologies developed by the DESIRE project to support the development of new gateways in other countries. It also discusses how IMesh, a group for gateways from all over the world, is pursuing an international strategy for gateways and attempting to develop standards for implementing this project.
    Date
    22. 6.2002 19:35:09

Languages

  • e 254
  • d 142
  • el 2
  • a 1
  • f 1
  • i 1
  • nl 1
  • sp 1

Types

  • a 208
  • i 17
  • x 9
  • r 8
  • m 6
  • s 5
  • b 4
  • p 3
  • n 1
