Search (34 results, page 2 of 2)

  • theme_ss:"Grundlagen u. Einführungen: Allgemeine Literatur"
  1. Understanding metadata (2004) 0.01
    0.0090673035 = product of:
      0.02720191 = sum of:
        0.02720191 = product of:
          0.05440382 = sum of:
            0.05440382 = weight(_text_:22 in 2686) [ClassicSimilarity], result of:
              0.05440382 = score(doc=2686,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.30952093 = fieldWeight in 2686, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2686)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    10. 9.2004 10:22:40
  2. Lancaster, F.W.: Vocabulary control for information retrieval (1986) 0.01
    0.0090673035 = product of:
      0.02720191 = sum of:
        0.02720191 = product of:
          0.05440382 = sum of:
            0.05440382 = weight(_text_:22 in 217) [ClassicSimilarity], result of:
              0.05440382 = score(doc=217,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.30952093 = fieldWeight in 217, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=217)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 4.2007 10:07:51
  3. Chowdhury, G.G.: Introduction to modern information retrieval (2010) 0.01
    0.008824733 = product of:
      0.026474198 = sum of:
        0.026474198 = product of:
          0.052948397 = sum of:
            0.052948397 = weight(_text_:publishing in 4903) [ClassicSimilarity], result of:
              0.052948397 = score(doc=4903,freq=2.0), product of:
                0.24522576 = queryWeight, product of:
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.05019314 = queryNorm
                0.21591695 = fieldWeight in 4903, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  4.885643 = idf(docFreq=907, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4903)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Imprint
    London : Facet Publishing
  4. Broughton, V.: Essential classification (2004) 0.01
    0.007991287 = product of:
      0.02397386 = sum of:
        0.02397386 = weight(_text_:electronic in 2824) [ClassicSimilarity], result of:
          0.02397386 = score(doc=2824,freq=4.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.122172035 = fieldWeight in 2824, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.015625 = fieldNorm(doc=2824)
      0.33333334 = coord(1/3)
    
    Footnote
    Essential Classification is also an exercise book. Indeed, it contains a number of practical exercises and activities in every chapter, along with suggested answers. Unfortunately, the answers are too often provided without the justifications and explanations that students would no doubt demand. The author has taken great care to explain all technical terms in her text, but formal definitions are also gathered in an extensive 172-term Glossary; appropriately, these terms appear in bold type the first time they are used in the text. A short, very short, annotated bibliography of standard classification textbooks and of manuals for the use of major classification schemes is provided. A detailed 11-page index completes the set of learning aids which will be useful to an audience of students in their effort to grasp the basic concepts of the theory and the practice of document classification in a traditional environment. Essential Classification is a fine textbook. However, this reviewer deplores the fact that it presents only a very "traditional" view of classification, without much reference to newer environments such as the Internet, where classification also manifests itself in various forms. In Essential Classification, books are always used as examples, and we have to take the author's word that traditional classification practices and tools can also be applied to other types of documents and elsewhere than in the traditional library. Vanda Broughton writes, for example, that "Subject headings can't be used for physical arrangement" (p. 101), but this is not entirely true. Subject headings can be used for the physical arrangement of vertical files, for example, with each folder bearing a simple or complex heading which is then used for internal organization. And if it is true that subject headings cannot be reproduced on the spine of [physical] books (p. 93), the situation is certainly different on the World Wide Web, where subject headings as metadata can be most useful in ordering a collection of hot links. The emphasis is also on the traditional paper-based, rather than on the electronic, version of classification schemes, with excellent justifications of course. The reality is, however, that the supporting organizations (LC, OCLC, etc.) are now providing high-quality services online, and that updates are now available only in electronic format and no longer on paper. E-based versions of classification schemes could be safely ignored in a theoretical text, but they have to be described and explained in a textbook published in 2005. One last comment: Professor Broughton tends to use the same term, "classification", to represent both the process (as in classification is grouping) and the tool (as in constructing a classification, using a classification, etc.). Even in the Glossary, where classification is first well defined as a process, and classification scheme as "a set of classes ...", the definition of classification scheme continues: "the classification consists of a vocabulary (...) and syntax ..." (p. 296-297). Such an ambiguous use of the term classification seems unfortunate and unnecessarily confusing in an otherwise very good basic textbook on the categorization of concepts and subjects, document organization and subject representation."
  5. Foskett, A.C.: ¬The subject approach to information (1996) 0.01
    0.0068004774 = product of:
      0.020401431 = sum of:
        0.020401431 = product of:
          0.040802862 = sum of:
            0.040802862 = weight(_text_:22 in 749) [ClassicSimilarity], result of:
              0.040802862 = score(doc=749,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.23214069 = fieldWeight in 749, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=749)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    25. 7.2002 21:22:31
  6. Chowdhury, G.G.: Introduction to modern information retrieval (1999) 0.01
    0.0068004774 = product of:
      0.020401431 = sum of:
        0.020401431 = product of:
          0.040802862 = sum of:
            0.040802862 = weight(_text_:22 in 4902) [ClassicSimilarity], result of:
              0.040802862 = score(doc=4902,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.23214069 = fieldWeight in 4902, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4902)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Content
    Contains the chapters: 1. Basic concepts of information retrieval systems, 2. Database technology, 3. Bibliographic formats, 4. Subject analysis and representation, 5. Automatic indexing and file organization, 6. Vocabulary control, 7. Abstracts and abstracting, 8. Searching and retrieval, 9. Users of information retrieval, 10. Evaluation of information retrieval systems, 11. Evaluation experiments, 12. Online information retrieval, 13. CD-ROM information retrieval, 14. Trends in CD-ROM and online information retrieval, 15. Multimedia information retrieval, 16. Hypertext and hypermedia systems, 17. Intelligent information retrieval, 18. Natural language processing and information retrieval, 19. Natural language interfaces, 20. Natural language text processing and retrieval systems, 21. Problems and prospects of natural language processing systems, 22. The Internet and information retrieval, 23. Trends in information retrieval.
  7. Stock, W.G.: Qualitätskriterien von Suchmaschinen : Checkliste für Retrievalsysteme (2000) 0.01
    0.0056670653 = product of:
      0.017001195 = sum of:
        0.017001195 = product of:
          0.03400239 = sum of:
            0.03400239 = weight(_text_:22 in 5773) [ClassicSimilarity], result of:
              0.03400239 = score(doc=5773,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.19345059 = fieldWeight in 5773, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5773)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Source
    Password. 2000, H.5, S.22-31
  8. Haller, K.; Popst, H.: Katalogisierung nach den RAK-WB : eine Einführung in die Regeln für die alphabetische Katalogisierung in wissenschaftlichen Bibliotheken (2003) 0.01
    0.0056670653 = product of:
      0.017001195 = sum of:
        0.017001195 = product of:
          0.03400239 = sum of:
            0.03400239 = weight(_text_:22 in 1811) [ClassicSimilarity], result of:
              0.03400239 = score(doc=1811,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.19345059 = fieldWeight in 1811, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1811)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    17. 6.2015 15:22:06
  9. Booth, P.F.: Indexing : the manual of good practice (2001) 0.01
    0.0056506926 = product of:
      0.016952077 = sum of:
        0.016952077 = weight(_text_:electronic in 1968) [ClassicSimilarity], result of:
          0.016952077 = score(doc=1968,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.08638867 = fieldWeight in 1968, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.015625 = fieldNorm(doc=1968)
      0.33333334 = coord(1/3)
    
    Footnote
    Review in: nfd - Information Wissenschaft und Praxis 54(2003) H.7, S.440-442 (R. Fugmann): "The book begins with the chapter "Myths about Indexing", naming widely held misconceptions about indexing, above all about back-of-the-book indexing. A single sentence aptly sketches the problem to which the book is devoted: "With the development of electronic documents, it has become possible to store very large amounts of information; but storage is not of much use without the capability to retrieve, to convert, transfer and reuse the information". The author criticizes the widespread view that indexing is merely a matter of "picking out words from the text or naming objects in images and using those words as index headings". Such a procedure, however, produces not indexes but concordances (i.e. alphabetical lists of the locations of text words), and "... is entirely dependent on the words themselves and is not concerned with the ideas behind them". Collecting information is easy. But making it (re)findable has to be learned if more is to be achieved than merely re-locating texts that one still remembers precisely in every detail (known-item searches, questions of recall), including the details of the wording used for the concepts sought. Drawing on her extensive practical experience, the author describes the steps that must be taken for this on the conceptual and the technical level. Among the former she counts the setting aside of details that should not be represented in the index ("unsought terms"), because they will certainly never be a search target and, as "false friends", would flood the searcher with trivia; a decision that can be made only with sound subject knowledge. Everything, on the other hand, that could be a sensible search target now or in the future (!) and is "sufficiently informative" deserves a heading in the index. Instructive examples also show how a text word becomes useless for the index when it appears there as a (poor) heading, detached from the interpretive context in which it was embedded in the text. Likewise, the ambiguity that clings to almost every natural-language word must be resolved; otherwise the searcher will all too often be led astray when looking things up, and the more often, the larger such an uncleaned store has already become.
  10. Broughton, V.: Essential thesaurus construction (2006) 0.01
    0.0056506926 = product of:
      0.016952077 = sum of:
        0.016952077 = weight(_text_:electronic in 2924) [ClassicSimilarity], result of:
          0.016952077 = score(doc=2924,freq=2.0), product of:
            0.19623034 = queryWeight, product of:
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.05019314 = queryNorm
            0.08638867 = fieldWeight in 2924, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.9095051 = idf(docFreq=2409, maxDocs=44218)
              0.015625 = fieldNorm(doc=2924)
      0.33333334 = coord(1/3)
    
    Footnote
    Further reviews in: New Library World 108(2007) nos.3/4, S.190-191 (K.V. Trickey): "Vanda has provided a very useful work that will enable any reader who is prepared to follow her instruction to produce a thesaurus that will be a quality language-based subject access tool that will make the task of information retrieval easier and more effective. Once again I express my gratitude to Vanda for producing another excellent book." - Electronic Library 24(2006) no.6, S.866-867 (A.G. Smith): "Essential thesaurus construction is an ideal instructional text, with clear bullet point summaries at the ends of sections, and relevant and up to date references, putting thesauri in context with the general theory of information retrieval. But it will also be a valuable reference for any information professional developing or using a controlled vocabulary." - KO 33(2006) no.4, S.215-216 (M.P. Satija)
  11. Nohr, H.: Grundlagen der automatischen Indexierung : ein Lehrbuch (2003) 0.00
    0.0045336518 = product of:
      0.013600955 = sum of:
        0.013600955 = product of:
          0.02720191 = sum of:
            0.02720191 = weight(_text_:22 in 1767) [ClassicSimilarity], result of:
              0.02720191 = score(doc=1767,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.15476047 = fieldWeight in 1767, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1767)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2009 12:46:51
  12. Brühl, B.: Thesauri und Klassifikationen : Naturwissenschaften - Technik - Wirtschaft (2005) 0.00
    0.0045336518 = product of:
      0.013600955 = sum of:
        0.013600955 = product of:
          0.02720191 = sum of:
            0.02720191 = weight(_text_:22 in 3487) [ClassicSimilarity], result of:
              0.02720191 = score(doc=3487,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.15476047 = fieldWeight in 3487, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.03125 = fieldNorm(doc=3487)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Series
    Materialien zur Information und Dokumentation; Bd.22
  13. Vonhoegen, H.: Einstieg in XML (2002) 0.00
    0.003966945 = product of:
      0.011900836 = sum of:
        0.011900836 = product of:
          0.023801671 = sum of:
            0.023801671 = weight(_text_:22 in 4002) [ClassicSimilarity], result of:
              0.023801671 = score(doc=4002,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.1354154 = fieldWeight in 4002, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=4002)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Footnote
    Review in: XML Magazin und Web Services 2003, H.1, S.14 (S. Meyen): "The Resource Description Framework (RDF) has been available as a W3C recommendation since 22 February 1999. But what lies behind this standard, which is meant to usher in the age of the Semantic Web? What RDF means, what it is used for, what advantages it has over XML, and how RDF is applied will be explained in this article. Opening the book and browsing through the introductory chapter, it is immediately apparent that the reader is not lectured with lessons in the style of "in XML the angle brackets are very important", even though this is a book for beginners. On the contrary: it gets straight down to business, and a healthy mix of prior knowledge is assumed. Anyone interested in XML today has, with 99-percent probability, already gained the relevant experience with HTML and the Web, and is no newbie in the realm of angle brackets and (more or less) well-formatted documents. And herein lies a clear strength of Helmut Vonhoegen's work: he knows how to gauge his beginner readers quite well and therefore leads them to the subject in a practical and comprehensible way. The third chapter deals with the Document Type Definition (DTD), describing its goals and uses. Yet the author persistently stresses the limitations of this approach, which make the call for a new concept plain: XML Schema, which he presents in the following chapter. A fairly detailed chapter is then devoted to the relatively recent XML Schema concept, explaining its advantages over the DTD (modelling of complex data structures, support for numerous data types, character constraints, and much more). XML Schema, the reader learns, defines, like the old DTD, the vocabulary and the permissible grammar of an XML document, but is itself an XML document and can (or rather should) be checked for well-formedness like any other XML. Further chapters cover the navigation standards XPath, XLink and XPointer, transformations with XSLT and XSL, and of course the XML programming interfaces DOM and SAX. Various implementations are used, and it is gratifying that Microsoft approaches on the one hand and Java/Apache projects on the other are presented in roughly comparable depth. In the final chapter, Vonhoegen treats the obligatory Web Services ("Webdienste") as an application of XML and demonstrates a small C#- and ASP-based example (the Java equivalent with Apache Axis is unfortunately missing). "Einstieg in XML" presents its material in a clearly understandable form and knows how to meet its readers at a good level. It offers a good overview of the fundamentals of XML and can, at least for now, claim to be quite up to date."
  14. Anderson, R.; Birbeck, M.; Kay, M.; Livingstone, S.; Loesgen, B.; Martin, D.; Mohr, S.; Ozu, N.; Peat, B.; Pinnock, J.; Stark, P.; Williams, K.: XML professionell : behandelt W3C DOM, SAX, CSS, XSLT, DTDs, XML Schemas, XLink, XPointer, XPath, E-Commerce, BizTalk, B2B, SOAP, WAP, WML (2000) 0.00
    0.0034002387 = product of:
      0.010200716 = sum of:
        0.010200716 = product of:
          0.020401431 = sum of:
            0.020401431 = weight(_text_:22 in 729) [ClassicSimilarity], result of:
              0.020401431 = score(doc=729,freq=2.0), product of:
                0.17576782 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.05019314 = queryNorm
                0.116070345 = fieldWeight in 729, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=729)
          0.5 = coord(1/2)
      0.33333334 = coord(1/3)
    
    Date
    22. 6.2005 15:12:11
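
The relevance breakdowns shown for each result above are Lucene "explain" trees for the ClassicSimilarity (TF-IDF) ranking model. As a minimal sketch, assuming Lucene's standard ClassicSimilarity formulas (which these trees appear to follow), the figures for result 1 can be recomputed from the quantities printed in its tree:

```python
import math

# Lucene ClassicSimilarity building blocks, matching the explain tree for
# result 1 ("Understanding metadata", doc 2686):
#   idf         = 1 + ln(maxDocs / (docFreq + 1))
#   tf          = sqrt(termFreq)
#   queryWeight = idf * queryNorm
#   fieldWeight = tf * idf * fieldNorm
#   term score  = queryWeight * fieldWeight
docFreq, maxDocs = 3622, 44218   # from "idf(docFreq=3622, maxDocs=44218)"
queryNorm = 0.05019314
fieldNorm = 0.0625
termFreq = 2.0

idf = 1 + math.log(maxDocs / (docFreq + 1))
tf = math.sqrt(termFreq)
queryWeight = idf * queryNorm
fieldWeight = tf * idf * fieldNorm
term_score = queryWeight * fieldWeight

# coord(1/2) and coord(1/3): penalties for matching only 1 of 2, then
# 1 of 3, query clauses
final_score = term_score * (1 / 2) * (1 / 3)

print(f"{term_score:.8f}")    # ~0.05440382, the weight(_text_:22 ...) line
print(f"{final_score:.10f}")  # ~0.0090673035, the displayed document score
```

Each intermediate value lines up with one line of the explain output, which is how such trees can be audited when a ranking looks surprising.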

Languages

  • e 23
  • d 11

Types

  • m 32
  • a 1
  • el 1
  • s 1