Search (20 results, page 1 of 1)

  • theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  • year_i:[2000 TO 2010}
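
  The two bullets above are the active facet filters. In Lucene/Solr range syntax a square bracket marks an inclusive bound and a curly brace an exclusive one, so year_i:[2000 TO 2010} selects records with 2000 <= year < 2010. The sketch below shows how such a filtered request might be issued against a Solr core; the host, core name and base query are assumptions, only the two filter expressions are taken from this page.

      import requests

      # Minimal sketch of the filtered search (host, core name and base query are
      # assumptions; only the two fq expressions come from the result page above).
      SOLR_SELECT = "http://localhost:8983/solr/literature/select"

      params = {
          # The original query string is not shown on this page; the explain trees
          # below score the terms "2003" and "22", so something like this is assumed.
          "q": "2003 22",
          "fq": [
              'theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"',
              "year_i:[2000 TO 2010}",   # [ = inclusive, } = exclusive upper bound
          ],
          "rows": 20,
          "debugQuery": "true",          # returns per-document scoring explanations
      }

      response = requests.get(SOLR_SELECT, params=params)
      print(response.json()["response"]["numFound"])
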
  1. Chan, L.M.; Mitchell, J.S.: Dewey Decimal Classification : principles and applications (2003) 0.12
    0.12248486 = product of:
      0.24496973 = sum of:
        0.24496973 = sum of:
          0.15994544 = weight(_text_:2003 in 3247) [ClassicSimilarity], result of:
            0.15994544 = score(doc=3247,freq=3.0), product of:
              0.19453894 = queryWeight, product of:
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.044824958 = queryNorm
              0.822177 = fieldWeight in 3247, product of:
                1.7320508 = tf(freq=3.0), with freq of:
                  3.0 = termFreq=3.0
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.109375 = fieldNorm(doc=3247)
          0.08502428 = weight(_text_:22 in 3247) [ClassicSimilarity], result of:
            0.08502428 = score(doc=3247,freq=2.0), product of:
              0.15696937 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044824958 = queryNorm
              0.5416616 = fieldWeight in 3247, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.109375 = fieldNorm(doc=3247)
      0.5 = coord(1/2)
    
    Object
    DDC-22
    Year
    2003
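
    The indented tree under each hit is Lucene's ClassicSimilarity (TF-IDF) explain output: for every matching term, queryWeight = idf * queryNorm and fieldWeight = tf * idf * fieldNorm are multiplied, the resulting term weights are summed, and the sum is scaled by the coordination factor. The sketch below re-derives the numbers of result 1 from the values printed in the tree above; the only formulas assumed are ClassicSimilarity's tf(freq) = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)).

        import math

        # Re-derivation of the explain tree for result 1 (doc 3247); all constants
        # (freq, docFreq, maxDocs, queryNorm, fieldNorm) are copied from the tree.

        def idf(doc_freq, max_docs):
            # ClassicSimilarity: idf(t) = 1 + ln(maxDocs / (docFreq + 1))
            return 1.0 + math.log(max_docs / (doc_freq + 1))

        def term_weight(freq, term_idf, query_norm, field_norm):
            tf = math.sqrt(freq)                        # tf(freq) = sqrt(freq)
            query_weight = term_idf * query_norm        # idf * queryNorm
            field_weight = tf * term_idf * field_norm   # tf * idf * fieldNorm
            return query_weight * field_weight

        QUERY_NORM = 0.044824958
        FIELD_NORM = 0.109375                           # fieldNorm(doc=3247)

        idf_2003 = idf(1566, 44218)                     # ~4.339969
        idf_22 = idf(3622, 44218)                       # ~3.5018296

        w_2003 = term_weight(3.0, idf_2003, QUERY_NORM, FIELD_NORM)
        w_22 = term_weight(2.0, idf_22, QUERY_NORM, FIELD_NORM)

        coord = 1 / 2                                   # coord(1/2): 1 of 2 top-level query clauses matched
        score = coord * (w_2003 + w_22)

        print(round(w_2003, 8))   # matches 0.15994544 in the tree (up to float rounding)
        print(round(w_22, 8))     # matches 0.08502428
        print(round(score, 8))    # matches the final score 0.12248486
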
  2. Kaushik, S.K.: DDC 22 : a practical approach (2004) 0.05
    0.050792575 = product of:
      0.10158515 = sum of:
        0.10158515 = sum of:
          0.03731283 = weight(_text_:2003 in 1842) [ClassicSimilarity], result of:
            0.03731283 = score(doc=1842,freq=2.0), product of:
              0.19453894 = queryWeight, product of:
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.044824958 = queryNorm
              0.19180135 = fieldWeight in 1842, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.03125 = fieldNorm(doc=1842)
          0.06427232 = weight(_text_:22 in 1842) [ClassicSimilarity], result of:
            0.06427232 = score(doc=1842,freq=14.0), product of:
              0.15696937 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044824958 = queryNorm
              0.4094577 = fieldWeight in 1842, product of:
                3.7416575 = tf(freq=14.0), with freq of:
                  14.0 = termFreq=14.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1842)
      0.5 = coord(1/2)
    
    Abstract
    A system of library classification that flashed across the inquiring mind of young Melvil Louis Kossuth Dewey (known as Melvil Dewey) in 1873 is still the most popular classification scheme. Modern library classification begins with the Dewey Decimal Classification (DDC), which Melvil Dewey devised in 1876. DDC can claim 128 years of boundless success: it is taught as a practical subject throughout the world and is used in the majority of libraries in about 150 countries. It is as a result of continuous revision that the 22nd edition of DDC was published in July 2003; no other classification scheme has published so many editions. Some welcome changes have been made in DDC 22. To reduce the Christian bias in 200 Religion, the numbers 201 to 209 have been devoted to specific aspects of religion; in previous editions these numbers were devoted to Christianity. To enhance the classifier's efficiency, Table 7 has been removed from DDC 22, and groups of persons are now added directly by using notation already available in the schedules and notation -08 from Table 1 Standard Subdivisions. The present book is an attempt to explain, with suitable examples, the salient provisions of DDC 22. It is written in simple language so that students will not face any difficulty in understanding what is being explained, and the examples are worked through step by step. It is hoped that this book will prove of great help and use to library professionals in general and to library and information science students in particular.
    Content
    1. Introduction to DDC 22 2. Major changes in DDC 22 3. Introduction to the schedules 4. Use of Table 1: Standard Subdivisions 5. Use of Table 2: Areas 6. Use of Table 3: Subdivisions for the arts, for individual literatures, for specific literary forms 7. Use of Table 4: Subdivisions of individual languages and language families 8. Use of Table 5: Ethnic and national groups 9. Use of Table 6: Languages 10. Treatment of groups of persons
    Object
    DDC-22
  3. Haller, K.; Popst, H.: Katalogisierung nach den RAK-WB : eine Einführung in die Regeln für die alphabetische Katalogisierung in wissenschaftlichen Bibliotheken (2003) 0.04
    0.043744594 = product of:
      0.08748919 = sum of:
        0.08748919 = sum of:
          0.057123374 = weight(_text_:2003 in 1811) [ClassicSimilarity], result of:
            0.057123374 = score(doc=1811,freq=3.0), product of:
              0.19453894 = queryWeight, product of:
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.044824958 = queryNorm
              0.29363465 = fieldWeight in 1811, product of:
                1.7320508 = tf(freq=3.0), with freq of:
                  3.0 = termFreq=3.0
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1811)
          0.030365815 = weight(_text_:22 in 1811) [ClassicSimilarity], result of:
            0.030365815 = score(doc=1811,freq=2.0), product of:
              0.15696937 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044824958 = queryNorm
              0.19345059 = fieldWeight in 1811, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=1811)
      0.5 = coord(1/2)
    
    Date
    17. 6.2015 15:22:06
    Year
    2003
  4. Nohr, H.: Grundlagen der automatischen Indexierung : ein Lehrbuch (2003) 0.04
    0.041644707 = product of:
      0.083289415 = sum of:
        0.083289415 = sum of:
          0.058996763 = weight(_text_:2003 in 1767) [ClassicSimilarity], result of:
            0.058996763 = score(doc=1767,freq=5.0), product of:
              0.19453894 = queryWeight, product of:
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.044824958 = queryNorm
              0.30326456 = fieldWeight in 1767, product of:
                2.236068 = tf(freq=5.0), with freq of:
                  5.0 = termFreq=5.0
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.03125 = fieldNorm(doc=1767)
          0.024292652 = weight(_text_:22 in 1767) [ClassicSimilarity], result of:
            0.024292652 = score(doc=1767,freq=2.0), product of:
              0.15696937 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044824958 = queryNorm
              0.15476047 = fieldWeight in 1767, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1767)
      0.5 = coord(1/2)
    
    Date
    22. 6.2009 12:46:51
    Footnote
    Rez. in: nfd 54(2003) H.5, S.314 (W. Ratzek): "To extract decision-relevant data from the ever-growing flood of more or less relevant documents, companies, public administrations and specialist information services must develop, deploy and maintain effective and efficient filtering systems. Holger Nohr's textbook offers, for the first time, a fundamental introduction to the topic of automatic indexing. As the book states at the outset: "How you gather, manage, and use information will determine whether you win or lose" (Bill Gates). The first chapter, "Einleitung" (Introduction), focuses on the basics, describing the connections between document management systems, information retrieval and indexing for planning, decision-making and innovation processes in both profit and non-profit organizations. At the end of the introductory chapter Nohr takes up the debate over intellectual versus automatic indexing, which leads into the second chapter on automatic indexing. Here the author gives an overview of, among other things, problems of automatic language processing and indexing, and various automatic indexing methods, e.g. simple keyword extraction / full-text inversion, statistical methods, and pattern-matching methods. Nohr then treats the methods of automatic indexing in depth, with many examples, in the extensive third chapter. The fourth chapter, "Keyphrase Extraction", plays a linking role: "Approaches that extract key phrases from documents (keyphrase extraction) represent an intermediate stage on the way from automatic indexing to the automatic generation of textual summaries (automatic text summarization). The boundaries between automatic indexing methods and those of text summarization are fluid." (p. 91). Nohr describes how this works using NCR's Extractor and the Copernic Summarizer as examples.
    Year
    2003
  5. Glöggler, M.: Suchmaschinen im Internet : Funktionsweisen, Ranking, Methoden, Top Positionen (2003) 0.04
    0.03998636 = product of:
      0.07997272 = sum of:
        0.07997272 = product of:
          0.15994544 = sum of:
            0.15994544 = weight(_text_:2003 in 1818) [ClassicSimilarity], result of:
              0.15994544 = score(doc=1818,freq=3.0), product of:
                0.19453894 = queryWeight, product of:
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.044824958 = queryNorm
                0.822177 = fieldWeight in 1818, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.109375 = fieldNorm(doc=1818)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Year
    2003
  6. Taylor, A.G.: ¬The organization of information (2003) 0.03
    0.028561687 = product of:
      0.057123374 = sum of:
        0.057123374 = product of:
          0.11424675 = sum of:
            0.11424675 = weight(_text_:2003 in 4596) [ClassicSimilarity], result of:
              0.11424675 = score(doc=4596,freq=3.0), product of:
                0.19453894 = queryWeight, product of:
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.044824958 = queryNorm
                0.5872693 = fieldWeight in 4596, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4596)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Year
    2003
  7. Vonhoegen, H.: Einstieg in XML (2002) 0.03
    0.026952399 = product of:
      0.053904798 = sum of:
        0.053904798 = sum of:
          0.032648727 = weight(_text_:2003 in 4002) [ClassicSimilarity], result of:
            0.032648727 = score(doc=4002,freq=2.0), product of:
              0.19453894 = queryWeight, product of:
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.044824958 = queryNorm
              0.16782619 = fieldWeight in 4002, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.02734375 = fieldNorm(doc=4002)
          0.02125607 = weight(_text_:22 in 4002) [ClassicSimilarity], result of:
            0.02125607 = score(doc=4002,freq=2.0), product of:
              0.15696937 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044824958 = queryNorm
              0.1354154 = fieldWeight in 4002, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.02734375 = fieldNorm(doc=4002)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: XML Magazin und Web Services 2003, H.1, S.14 (S. Meyen): "Since 22 February 1999 the Resource Description Framework (RDF) has been available as a W3C Recommendation. But what lies behind this standard, which is supposed to usher in the age of the Semantic Web? What RDF means, what it is used for, what advantages it has over XML and how RDF is applied will be explained in this article. Opening the book and browsing the introductory chapter, one immediately notices that the reader is not lectured with lessons in the style of "in XML the angle brackets are very important", even though this is a book for beginners. On the contrary: it gets straight down to business and a healthy amount of prior knowledge is assumed. Anyone interested in XML today has, with 99 percent probability, already gained the relevant experience with HTML and the Web and is no newcomer to the realm of angle brackets and (more or less) well-formed documents. And here lies a clear strength of Helmut Vonhoegen's work: he knows how to gauge his beginner readers quite well and therefore introduces the topic in a practical and understandable way. The third chapter deals with the Document Type Definition (DTD) and describes its purposes and uses, but here the author repeatedly stresses the limitations of this approach, which make the call for a new concept clear: XML Schema, which he presents in the following chapter. A fairly detailed chapter is then devoted to the relatively recent XML Schema concept and explains its advantages over the DTD (modelling of complex data structures, support for numerous data types, character restrictions, and much more). XML Schema, the reader learns, defines, like the old DTD, the vocabulary and permissible grammar of an XML document, but is itself an XML document and can (or rather should) be checked for well-formedness like any other XML. Further chapters cover the navigation standards XPath, XLink and XPointer, transformations with XSLT and XSL, and of course the XML programming interfaces DOM and SAX. Various implementations are used, and, pleasingly, Microsoft approaches on the one hand and Java/Apache projects on the other are presented in roughly comparable depth. In the final chapter Vonhoegen covers the obligatory web services as an application of XML and demonstrates a small C#- and ASP-based example (the Java equivalent with Apache Axis is unfortunately missing). "Einstieg in XML" presents its material in a clearly understandable form and knows how to meet its readers at the right level. It offers a good overview of the fundamentals of XML and, at least for now, it is also quite up to date."
  8. Wands, B.: Digital creativity : techniques for digital media and the Internet (2002) 0.02
    0.02332052 = product of:
      0.04664104 = sum of:
        0.04664104 = product of:
          0.09328208 = sum of:
            0.09328208 = weight(_text_:2003 in 1181) [ClassicSimilarity], result of:
              0.09328208 = score(doc=1181,freq=2.0), product of:
                0.19453894 = queryWeight, product of:
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.044824958 = queryNorm
                0.4795034 = fieldWeight in 1181, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.078125 = fieldNorm(doc=1181)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: JASIST 54(2003) no.4, S.357-358 (J.L. van Rockel): " 'Digital creativity' is an excellent book that will fit nicely in courses, faculty development workshops, and library collections; offering how-to-do-it and how-to-think-about-it examples."
  9. Bowman, J.H.: Essential Dewey (2005) 0.02
    0.021474533 = product of:
      0.042949066 = sum of:
        0.042949066 = sum of:
          0.018656416 = weight(_text_:2003 in 359) [ClassicSimilarity], result of:
            0.018656416 = score(doc=359,freq=2.0), product of:
              0.19453894 = queryWeight, product of:
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.044824958 = queryNorm
              0.09590068 = fieldWeight in 359, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                4.339969 = idf(docFreq=1566, maxDocs=44218)
                0.015625 = fieldNorm(doc=359)
          0.024292652 = weight(_text_:22 in 359) [ClassicSimilarity], result of:
            0.024292652 = score(doc=359,freq=8.0), product of:
              0.15696937 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.044824958 = queryNorm
              0.15476047 = fieldWeight in 359, product of:
                2.828427 = tf(freq=8.0), with freq of:
                  8.0 = termFreq=8.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.015625 = fieldNorm(doc=359)
      0.5 = coord(1/2)
    
    Content
    "The contents of the book cover: This book is intended as an introduction to the Dewey Decimal Classification, edition 22. It is not a substitute for it, and I assume that you have it, all four volumes of it, by you while reading the book. I have deliberately included only a short section an WebDewey. This is partly because WebDewey is likely to change more frequently than the printed version, but also because this book is intended to help you use the scheme regardless of the manifestation in which it appears. If you have a subscription to WebDewey and not the printed volumes you may be able to manage with that, but you may then find my references to volumes and page numbers baffling. All the examples and exercises are real; what is not real is the idea that you can classify something without seeing more than the title. However, there is nothing that I can do about this, and I have therefore tried to choose examples whose titles adequately express their subject-matter. Sometimes when you look at the 'answers' you may feel that you have been cheated, but I hope that this will be seldom. Two people deserve special thanks. My colleague Vanda Broughton has read drafts of the book and made many suggestions. Ross Trotter, chair of the CILIP Dewey Decimal Classification Committee, who knows more about Dewey than anyone in Britain today, has commented extensively an it and as far as possible has saved me from error, as well as suggesting many improvements. What errors remain are due to me alone. Thanks are also owed to OCLC Online Computer Library Center, for permission to reproduce some specimen pages of DDC 22. Excerpts from the Dewey Decimal Classification are taken from the Dewey Decimal Classification and Relative Index, Edition 22 which is Copyright 2003 OCLC Online Computer Library Center, Inc. DDC, Dewey, Dewey Decimal Classification and WebDewey are registered trademarks of OCLC Online Computer Library Center, Inc."
    Object
    DDC-22
  10. Lancaster, F.W.: Indexing and abstracting in theory and practice (2003) 0.02
    0.017137012 = product of:
      0.034274023 = sum of:
        0.034274023 = product of:
          0.068548046 = sum of:
            0.068548046 = weight(_text_:2003 in 4913) [ClassicSimilarity], result of:
              0.068548046 = score(doc=4913,freq=3.0), product of:
                0.19453894 = queryWeight, product of:
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.044824958 = queryNorm
                0.35236156 = fieldWeight in 4913, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4913)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Year
    2003
  11. Scott, M.L.: Dewey Decimal Classification, 22nd edition : a study manual and number building guide (2005) 0.02
    0.015182908 = product of:
      0.030365815 = sum of:
        0.030365815 = product of:
          0.06073163 = sum of:
            0.06073163 = weight(_text_:22 in 4594) [ClassicSimilarity], result of:
              0.06073163 = score(doc=4594,freq=2.0), product of:
                0.15696937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044824958 = queryNorm
                0.38690117 = fieldWeight in 4594, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=4594)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Object
    DDC-22
  12. Gaus, W.; Leiner, F.: Dokumentations- und Ordnungslehre : Theorie und Praxis des Information Retrieval (2003) 0.01
    0.014280844 = product of:
      0.028561687 = sum of:
        0.028561687 = product of:
          0.057123374 = sum of:
            0.057123374 = weight(_text_:2003 in 4524) [ClassicSimilarity], result of:
              0.057123374 = score(doc=4524,freq=3.0), product of:
                0.19453894 = queryWeight, product of:
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.044824958 = queryNorm
                0.29363465 = fieldWeight in 4524, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4524)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Year
    2003
  13. Understanding metadata (2004) 0.01
    0.012146326 = product of:
      0.024292652 = sum of:
        0.024292652 = product of:
          0.048585303 = sum of:
            0.048585303 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
              0.048585303 = score(doc=2686,freq=2.0), product of:
                0.15696937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044824958 = queryNorm
                0.30952093 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2686)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    10. 9.2004 10:22:40
  14. Read, J.: Cataloguing without tears : managing knowledge in the information society (2003) 0.01
    0.011424675 = product of:
      0.02284935 = sum of:
        0.02284935 = product of:
          0.0456987 = sum of:
            0.0456987 = weight(_text_:2003 in 4509) [ClassicSimilarity], result of:
              0.0456987 = score(doc=4509,freq=3.0), product of:
                0.19453894 = queryWeight, product of:
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.044824958 = queryNorm
                0.23490772 = fieldWeight in 4509, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4509)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Year
    2003
  15. Stock, W.G.: Qualitätskriterien von Suchmaschinen : Checkliste für Retrievalsysteme (2000) 0.01
    0.007591454 = product of:
      0.015182908 = sum of:
        0.015182908 = product of:
          0.030365815 = sum of:
            0.030365815 = weight(_text_:22 in 5773) [ClassicSimilarity], result of:
              0.030365815 = score(doc=5773,freq=2.0), product of:
                0.15696937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044824958 = queryNorm
                0.19345059 = fieldWeight in 5773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5773)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Password. 2000, H.5, S.22-31
  16. Brühl, B.: Thesauri und Klassifikationen : Naturwissenschaften - Technik - Wirtschaft (2005) 0.01
    0.006073163 = product of:
      0.012146326 = sum of:
        0.012146326 = product of:
          0.024292652 = sum of:
            0.024292652 = weight(_text_:22 in 3487) [ClassicSimilarity], result of:
              0.024292652 = score(doc=3487,freq=2.0), product of:
                0.15696937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044824958 = queryNorm
                0.15476047 = fieldWeight in 3487, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3487)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Series
    Materialien zur Information und Dokumentation; Bd.22
  17. Schwartz, C.: Sorting out the Web : approaches to subject access (2001) 0.01
    0.00583013 = product of:
      0.01166026 = sum of:
        0.01166026 = product of:
          0.02332052 = sum of:
            0.02332052 = weight(_text_:2003 in 2050) [ClassicSimilarity], result of:
              0.02332052 = score(doc=2050,freq=2.0), product of:
                0.19453894 = queryWeight, product of:
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.044824958 = queryNorm
                0.11987585 = fieldWeight in 2050, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=2050)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: KO 50(2003) no.1, S.45-46 (L.M. Given): "In her own preface to this work, the author notes her lifelong fascination with classification and order, as well as her more recent captivation with the Internet - a place of "chaos in need of organization" (xi). Sorting out the Web examines current efforts to organize the Web and is well-informed by the author's academic and professional expertise in information organization, information retrieval, and Web development. Although the book's level and tone are particularly relevant to a student audience (or others interested in Web-based subject access at an introductory level), it will also appeal to information professionals developing subject access systems across a range of information contexts. There are six chapters in the book, each describing and analyzing one core concept related to the organization of Web content. All topics are presented in a manner ideal for newcomers to the area, with clear definitions, examples, and visuals that illustrate the principles under discussion. The first chapter provides a brief introduction to developments in information technology, including an historical overview of information services, users' needs, and libraries' responses to the Internet. Chapter two introduces metadata, including core concepts and metadata formats. Throughout this chapter the author presents a number of figures that aptly illustrate the application of metadata in HTML, SGML, and MARC record environments, and the use of metadata tools (e.g., XML, RDF). Chapter three begins with an overview of classification theory and specific schemes, but the author devotes most of the discussion to the application of classification systems in the Web environment (e.g., Dewey, LCC, UDC). Web screen captures illustrate the use of these schemes for information sources posted to sites around the world. The chapter closes with a discussion of the future of classification; this is a particularly useful section as the author presents a listing of core journal and conference venues where new approaches to Web classification are explored. In chapter four, the author extends the discussion of classification to the use of controlled vocabularies. As in the first few chapters, the author first presents core background material, including reasons to use controlled vocabularies and the differences between pre- and post-coordinate indexing, and then discusses the application of specific vocabularies in the Web environment (e.g., Infomine's use of LCSH). The final section of the chapter explores failure in subject searching and the limitations of controlled vocabularies for the Web. Chapter five discusses one of the most common and fast-growing topics related to subject access on the Web: search engines. The author presents a clear definition of the term that encompasses classified search lists (e.g., Yahoo) and query-based engines (e.g., Alta Vista). In addition to historical background on the development of search engines, Schwartz also examines search service types, features, results, and system performance.
  18. Chowdhury, G.G.; Chowdhury, S.: Introduction to digital libraries (2003) 0.00
    0.004998295 = product of:
      0.00999659 = sum of:
        0.00999659 = product of:
          0.01999318 = sum of:
            0.01999318 = weight(_text_:2003 in 6119) [ClassicSimilarity], result of:
              0.01999318 = score(doc=6119,freq=3.0), product of:
                0.19453894 = queryWeight, product of:
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.044824958 = queryNorm
                0.102772124 = fieldWeight in 6119, product of:
                  1.7320508 = tf(freq=3.0), with freq of:
                    3.0 = termFreq=3.0
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.013671875 = fieldNorm(doc=6119)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Year
    2003
  19. Booth, P.F.: Indexing : the manual of good practice (2001) 0.00
    0.004664104 = product of:
      0.009328208 = sum of:
        0.009328208 = product of:
          0.018656416 = sum of:
            0.018656416 = weight(_text_:2003 in 1968) [ClassicSimilarity], result of:
              0.018656416 = score(doc=1968,freq=2.0), product of:
                0.19453894 = queryWeight, product of:
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.044824958 = queryNorm
                0.09590068 = fieldWeight in 1968, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.339969 = idf(docFreq=1566, maxDocs=44218)
                  0.015625 = fieldNorm(doc=1968)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Footnote
    Rez. in: nfd - Information Wissenschaft und Praxis 54(2003) H.7, S.440-442 (R. Fugmann): "The book opens with the chapter "Myths about Indexing", naming widespread misconceptions about indexing, above all about making back-of-the-book indexes. A single sentence aptly sketches the problem to which the book is devoted: "With the development of electronic documents, it has become possible to store very large amounts of information; but storage is not of much use without the capability to retrieve, to convert, transfer and reuse the information". The author criticizes the widely held view that indexing is merely a matter of "picking out words from the text or naming objects in images and using those words as index headings". Such an approach, however, leads not to indexes but to concordances (i.e. alphabetical lists of the locations of text words) and "... is entirely dependent on the words themselves and is not concerned with the ideas behind them". Collecting information is easy. But establishing retrievability has to be learned if more is to be made possible than merely finding again texts one still remembers precisely in every detail (known-item searches, questions of recall), including the details of the wording used for the concepts sought. Drawing on her extensive practical experience, the author describes the steps that must be taken for this at the conceptual and the technical level. The former include separating out details that should not be represented in the index ("unsought terms"), because they will certainly never be a search target and, as "false friends", would flood the searcher with trivialities; a decision that can only be made with good subject knowledge. Everything, on the other hand, that could constitute a meaningful search target now or in the future (!) and is "sufficiently informative" deserves a heading in the index. Instructive examples also show what makes a text word useless for the index when it appears there as a (poor) heading, detached from the interpretive context in which it was embedded in the text. The ambiguity that clings to almost every natural-language word must also be resolved; otherwise the searcher will all too often be led astray when looking things up, and all the more often the larger such an uncleaned store has already become."
  20. Anderson, R.; Birbeck, M.; Kay, M.; Livingstone, S.; Loesgen, B.; Martin, D.; Mohr, S.; Ozu, N.; Peat, B.; Pinnock, J.; Stark, P.; Williams, K.: XML professionell : behandelt W3C DOM, SAX, CSS, XSLT, DTDs, XML Schemas, XLink, XPointer, XPath, E-Commerce, BizTalk, B2B, SOAP, WAP, WML (2000) 0.00
    0.0045548724 = product of:
      0.009109745 = sum of:
        0.009109745 = product of:
          0.01821949 = sum of:
            0.01821949 = weight(_text_:22 in 729) [ClassicSimilarity], result of:
              0.01821949 = score(doc=729,freq=2.0), product of:
                0.15696937 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.044824958 = queryNorm
                0.116070345 = fieldWeight in 729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=729)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2005 15:12:11