Search (104 results, page 5 of 6)

  • theme_ss:"Automatisches Indexieren"
  1. Schöneberg, U.; Gödert, W.: Erschließung mathematischer Publikationen mittels linguistischer Verfahren (2012) 0.01
    Relevance score breakdown (Lucene ClassicSimilarity, matched term "c" in doc 1055):
      tf = sqrt(termFreq) = sqrt(2.0) = 1.4142135
      idf = 3.4494052 (docFreq = 3817, maxDocs = 44218)
      queryWeight = idf × queryNorm (0.044891298) = 0.15484828
      fieldWeight = tf × idf × fieldNorm (0.046875) = 0.22866541
      term score = queryWeight × fieldWeight = 0.035408445
      score contribution = 0.035408445 × coord(1/3) × coord(1/2) = 0.0059014075
    
    Source
    http://at.yorku.ca/c/b/f/j/99.htm
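
    The same ClassicSimilarity (TF-IDF) formula underlies the scores shown for the other records in this list. As a cross-check, the following minimal Python sketch recomputes the contribution documented above from its constants; the helper and its argument names are illustrative, not Lucene's API.

```python
import math

def classic_similarity_contribution(term_freq, doc_freq, max_docs,
                                     query_norm, field_norm, coords):
    """Recompute one term's score contribution as in the ClassicSimilarity
    breakdown above. Names are illustrative; this is not Lucene's API."""
    tf = math.sqrt(term_freq)                        # 1.4142135 for termFreq = 2.0
    idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # 3.4494052 for docFreq = 3817
    query_weight = idf * query_norm                  # 0.15484828
    field_weight = tf * idf * field_norm             # 0.22866541
    score = query_weight * field_weight              # 0.035408445
    for coord in coords:                             # coord(1/3) and coord(1/2)
        score *= coord
    return score

# Matches the 0.0059014075 contribution shown for record 1 (up to float precision):
print(classic_similarity_contribution(2.0, 3817, 44218,
                                      0.044891298, 0.046875,
                                      coords=[1/3, 1/2]))
```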
  2. Franke-Maier, M.; Beck, C.; Kasprzik, A.; Maas, J.F.; Pielmeier, S.; Wiesenmüller, H.: ¬Ein Feuerwerk an Algorithmen und der Startschuss zur Bildung eines Kompetenznetzwerks für maschinelle Erschließung : Bericht zur Fachtagung Netzwerk maschinelle Erschließung an der Deutschen Nationalbibliothek am 10. und 11. Oktober 2019 (2020) 0.01
  3. Weiner, U.: Vor uns die Dokumentenflut oder Automatische Indexierung als notwendige und sinnvolle Ergänzung zur intellektuellen Sacherschließung (2012) 0.01
    Abstract
    Against the background of library users' changing expectations of search functionality - away from the classic online catalogue and towards a "one-stop shop" with features such as thematic browsing, relevance ranking and the like - on the one hand, and the need to process mass data (keyword: document flood) on the other, systems for automatic indexing are once again moving into the focus of interest. Since engagement with this topic in the Austrian library sector has so far been very selective and limited to a few concrete projects, the paper first offers a general theoretical overview of the different methodological approaches to automatic indexing. It then presents the IDX-based indexing software MILOS (with the sub-projects MILOS I, MILOS II and KASCADE) and the modular system intelligentCAPTURE (with the integrated indexing software AUTINDEX), which until a few years ago were the only automatic indexing systems in practical use in the German-speaking countries. With the growing need to pursue new approaches to subject indexing, numerous software developments have been tested over the past five to six years for their suitability in the library sector. As a representative of these systems for automatic subject indexing that are still under development, the paper presents the PETRUS project, carried out at the DNB from 2009 to 2011, which comprises the components PICA Match&Merge and the Extraction Platform of the company Averbis.
  4. Blank, I.; Rokach, L.; Shani, G.: Leveraging metadata to recommend keywords for academic papers (2016) 0.01
  5. Strobel, S.; Marín-Arraiza, P.: Metadata for scientific audiovisual media : current practices and perspectives of the TIB / AV-portal (2015) 0.01
    Abstract
    Descriptive metadata play a key role in finding relevant search results in large amounts of unstructured data. However, current scientific audiovisual media are provided with little metadata, which makes them hard to find, let alone individual sequences. In this paper, the TIB / AV-Portal is presented as a use case where methods concerning the automatic generation of metadata, a semantic search and cross-lingual retrieval (German/English) have already been applied. These methods result in a better discoverability of the scientific audiovisual media hosted in the portal. Text, speech, and image content of the video are automatically indexed by specialised GND (Gemeinsame Normdatei) subject headings. A semantic search is established based on properties of the GND ontology. The cross-lingual retrieval uses English 'translations' that were derived by an ontology mapping (DBpedia, among others). Further ways of increasing the discoverability and reuse of the metadata are publishing them as Linked Open Data and interlinking them with other data sets.
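
    A toy sketch of the cross-lingual retrieval idea described above: GND subject headings assigned to video segments are paired with English labels derived from an ontology mapping, so an English query also retrieves German-indexed segments. All headings, labels and segment data below are invented for illustration; this is not the AV-Portal's implementation.

```python
# Toy illustration only: invented headings, labels and segments.
gnd_to_english = {
    "Automatische Indexierung": "automatic indexing",
    "Spracherkennung": "speech recognition",
}

segments = [
    {"video": "demo-talk", "start": "00:03:10",
     "gnd_subjects": ["Automatische Indexierung"]},
    {"video": "demo-talk", "start": "00:12:45",
     "gnd_subjects": ["Spracherkennung"]},
]

def search(query: str):
    """Match a query against German GND headings and their mapped English labels."""
    q = query.lower()
    for segment in segments:
        labels = set(segment["gnd_subjects"])
        labels |= {gnd_to_english[s] for s in segment["gnd_subjects"]}
        if any(q in label.lower() for label in labels):
            yield segment

print(list(search("automatic indexing")))   # finds the German-indexed segment
```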
  6. Ma, N.; Zheng, H.T.; Xiao, X.: ¬An ontology-based latent semantic indexing approach using long short-term memory networks (2017) 0.01
    Source
    Web and Big Data: First International Joint Conference, APWeb-WAIM 2017, Beijing, China, July 7-9, 2017, Proceedings, Part I. Eds.: L. Chen et al
  7. Suominen, O.; Koskenniemi, I.: Annif Analyzer Shootout : comparing text lemmatization methods for automated subject indexing (2022) 0.01
  8. Ahmed, M.: Automatic indexing for agriculture : designing a framework by deploying Agrovoc, Agris and Annif (2023) 0.01
    Abstract
    There are several ways to employ machine learning for automating subject indexing. One popular strategy is to utilize a supervised learning algorithm to train a model on a set of documents that have been manually indexed by subject using a standard vocabulary. The resulting model can then predict the subjects of new and previously unseen documents by identifying patterns learned from the training data. To do this, the first step is to gather a large dataset of documents and manually assign each document a set of subject keywords/descriptors from a controlled vocabulary (e.g., from Agrovoc). Next, the dataset (obtained from Agris) can be divided into (i) a training dataset and (ii) a test dataset. The training dataset is used to train the model, while the test dataset is used to evaluate the model's performance. Machine learning can be a powerful tool for automating the process of subject indexing. This research is an attempt to apply Annif (http://annif.org/), an open-source AI/ML framework, to autogenerate subject keywords/descriptors for documentary resources in the domain of agriculture. The training dataset is obtained from Agris, which applies the Agrovoc thesaurus as a vocabulary tool (https://www.fao.org/agris/download).
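
    The workflow described above (manually indexed training data, a train/test split, and a model that suggests controlled-vocabulary descriptors for unseen text) can be sketched generically as follows. This is not Annif's actual interface; it is a scikit-learn illustration under the assumption of a hypothetical CSV export with an "abstract" column and a ";"-separated "descriptors" column.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical export of manually indexed Agris-style records.
records = pd.read_csv("agris_sample.csv")
texts = records["abstract"]
labels = records["descriptors"].str.split(";")   # Agrovoc-style descriptors

mlb = MultiLabelBinarizer()                      # one indicator column per descriptor
y = mlb.fit_transform(labels)

X_train, X_test, y_train, y_test = train_test_split(
    texts, y, test_size=0.2, random_state=42)

vectorizer = TfidfVectorizer(max_features=50_000)
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# One binary classifier per descriptor, trained on the manually indexed split
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_train_vec, y_train)

# Evaluate on the held-out test split, then suggest descriptors for unseen text
print("micro-F1:", f1_score(y_test, clf.predict(X_test_vec), average="micro"))
new_doc = vectorizer.transform(["Effect of drought stress on maize yield"])
print(mlb.inverse_transform(clf.predict(new_doc)))
```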
  9. Milstead, J.L.: Thesauri in a full-text world (1998) 0.01
    Date
    22. 9.1997 19:16:05
  10. Junger, U.; Schwens, U.: ¬Die inhaltliche Erschließung des schriftlichen kulturellen Erbes auf dem Weg in die Zukunft : Automatische Vergabe von Schlagwörtern in der Deutschen Nationalbibliothek (2017) 0.01
    Date
    19. 8.2017 9:24:22
  11. Li, W.; Wong, K.-F.; Yuan, C.: Toward automatic Chinese temporal information extraction (2001) 0.00
  12. Tsai, C.-F.; McGarry, K.; Tait, J.: Qualitative evaluation of automatic assignment of keywords to images (2006) 0.00
  13. Vilares, D.; Alonso, M.A.; Gómez-Rodríguez, C.: On the usefulness of lexical and syntactic processing in polarity classification of Twitter messages (2015) 0.00
  14. Toepfer, M.; Seifert, C.: Content-based quality estimation for automatic subject indexing of short texts under precision and recall constraints 0.00
  15. Li, X.; Zhang, A.; Li, C.; Ouyang, J.; Cai, Y.: Exploring coherent topics by topic modeling with term weighting (2018) 0.00
  16. Zhang, Y.; Zhang, C.; Li, J.: Joint modeling of characters, words, and conversation contexts for microblog keyphrase extraction (2020) 0.00
  17. Yang, T.-H.; Hsieh, Y.-L.; Liu, S.-H.; Chang, Y.-C.; Hsu, W.-L.: ¬A flexible template generation and matching method with applications for publication reference metadata extraction (2021) 0.00
  18. Halip, I.: Automatische Extrahierung von Schlagworten aus unstrukturierten Texten (2005) 0.00
  19. Schneider, A.: Moderne Retrievalverfahren in klassischen bibliotheksbezogenen Anwendungen : Projekte und Perspektiven (2008) 0.00
    Abstract
    This thesis deals with modern retrieval methods in classic library-related applications. As the combination of the two seemingly contradictory phrases in the title suggests, it links aspects of computer science and information science with aspects of the library tradition. After a brief description of the starting point, the so-called information flood, in the first chapter, the second chapter provides an introduction to the theory of information retrieval. Specifically, it covers the foundations of information retrieval and information retrieval systems as well as the various approaches to information organisation, including descriptive and subject cataloguing, indexing and automatic indexing. Within the theory of information retrieval, different retrieval models and evaluation by means of retrieval tests are also presented. The theory is followed in the third chapter by the practice of information retrieval, distinguishing between in-house applications, applications in the information and documentation sector, and applications in libraries. The in-house application is illustrated by the example of the KURS database for education and training. The library application focuses primarily on the OPAC as a compromise between library indexing and end-user requirements, and on its enrichment (so-called catalogue enrichment) to improve retrieval. The library sector is treated in more detail with a review of completed projects on information and indexing systems from the 1990s (OSIRIS, MILOS I and II, KASCADE) and a look at current projects. The two following chapters each present a current project to improve retrieval through catalogue enrichment, automatic indexing and advanced retrieval methods: the search portal dandelon.com and the 180T project of the Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen. For each, the project goal, partners, organisation, course and the technology used are described. The projects differ in that in one case a large library network centre coordinates the project, while in the other each participating library is itself responsible for implementation. The sixth and final chapter draws conclusions and outlines perspectives, evaluating the two projects described and giving an outlook on developments concerning the library catalogue. This publication is based on a master's thesis in the postgraduate distance-learning programme Master of Arts (Library and Information Science) at Humboldt-Universität zu Berlin.
  20. Nohr, H.: Grundlagen der automatischen Indexierung : ein Lehrbuch (2003) 0.00
    Date
    22. 6.2009 12:46:51

Languages

  • e 61
  • d 41
  • ru 1
  • sp 1

Types

  • a 92
  • el 8
  • x 7
  • s 2
  • m 1
  • p 1