Search (89 results, page 1 of 5)

  • theme_ss:"Verteilte bibliographische Datenbanken"
  1. Rusch, B.: Kooperativer Bibliotheksverbund Berlin-Brandenburg : Erste Erfahrungen im Produktionsbetrieb (2000) 0.02
    0.023630697 = product of:
      0.11027659 = sum of:
        0.004183407 = weight(_text_:information in 5519) [ClassicSimilarity], result of:
          0.004183407 = score(doc=5519,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.09697737 = fieldWeight in 5519, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5519)
        0.08264302 = weight(_text_:kongress in 5519) [ClassicSimilarity], result of:
          0.08264302 = score(doc=5519,freq=4.0), product of:
            0.16122791 = queryWeight, product of:
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.024573348 = queryNorm
            0.51258504 = fieldWeight in 5519, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5519)
        0.023450168 = weight(_text_:frankfurt in 5519) [ClassicSimilarity], result of:
          0.023450168 = score(doc=5519,freq=2.0), product of:
            0.10213336 = queryWeight, product of:
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.024573348 = queryNorm
            0.22960341 = fieldWeight in 5519, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5519)
      0.21428572 = coord(3/14)
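    The score breakdown above follows Lucene's ClassicSimilarity (TF-IDF) explain format. As a minimal sketch of how the leaf values combine, the `kongress` entry of this record can be recomputed from the constants shown in the tree (the helper names below are illustrative, not Lucene's actual API):

```python
import math

# ClassicSimilarity leaf formulas, reconstructed from the explain output above.
# Helper names are illustrative; this is not Lucene's actual API.

def tf(freq):
    """Term frequency factor: sqrt(freq)."""
    return math.sqrt(freq)

def field_weight(freq, idf, field_norm):
    """fieldWeight = tf(freq) * idf * fieldNorm."""
    return tf(freq) * idf * field_norm

def term_score(freq, idf, query_norm, field_norm):
    """Leaf score = queryWeight * fieldWeight, with queryWeight = idf * queryNorm."""
    return (idf * query_norm) * field_weight(freq, idf, field_norm)

# Values taken from the 'kongress' leaf of the tree above
kongress = term_score(freq=4.0, idf=6.5610886,
                      query_norm=0.024573348, field_norm=0.0390625)

# Record total: the three leaf scores summed, then scaled by coord(3/14)
total = (0.004183407 + kongress + 0.023450168) * (3 / 14)
```

    Both `kongress` and `total` reproduce the explained values (0.08264302 and 0.023630697) to within floating-point rounding.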
    
    Abstract
    The development of the union catalog in Berlin and Brandenburg has been carried out since 1997 as a project based at the Konrad-Zuse-Zentrum für Informationstechnik Berlin (ZIB). This non-university research institution conducts research and development in information technology, primarily in application-oriented algorithmic mathematics, in close interdisciplinary cooperation with the universities and scientific institutions in Berlin. Together with the software company ExLibris, a bibliographic search engine is being built here on the basis of the Aleph500 library system, with ZIB responsible for the design and ExLibris for the programming. A total of 15 library institutions participate in the Kooperativer Bibliotheksverbund Berlin-Brandenburg (KOBV) project. Besides the large university libraries of the two states - the Freie Universität Berlin, the Technische Universität Berlin, the Humboldt-Universität and the Hochschule der Künste, as well as the Europa-Universität Viadrina in Frankfurt (Oder) and the Brandenburgische Technische Universität Cottbus - these include the Brandenburg universities of applied sciences, the Stadt- und Landesbibliothek in Potsdam and, not least, the Staatsbibliothek zu Berlin. In addition to these libraries, which are explicitly named as project partners in a corresponding agreement, further library institutions take part as testers.
    Series
    Gemeinsamer Kongress der Bundesvereinigung Deutscher Bibliotheksverbände e.V. (BDB) und der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. (DGI); Bd.1
    Tagungen der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V.; Bd.3
    Source
    Information und Öffentlichkeit: 1. Gemeinsamer Kongress der Bundesvereinigung Deutscher Bibliotheksverbände e.V. (BDB) und der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. (DGI), Leipzig, 20.-23.3.2000. Zugleich 90. Deutscher Bibliothekartag, 52. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. (DGI). Hrsg.: G. Ruppelt u. H. Neißer
  2. Johnson, E.H.: Objects for distributed heterogeneous information retrieval (2000) 0.02
    0.018769031 = product of:
      0.065691605 = sum of:
        0.03232916 = weight(_text_:web in 6959) [ClassicSimilarity], result of:
          0.03232916 = score(doc=6959,freq=10.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.40312994 = fieldWeight in 6959, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.010247213 = weight(_text_:information in 6959) [ClassicSimilarity], result of:
          0.010247213 = score(doc=6959,freq=12.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.23754507 = fieldWeight in 6959, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.017566316 = weight(_text_:retrieval in 6959) [ClassicSimilarity], result of:
          0.017566316 = score(doc=6959,freq=4.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.23632148 = fieldWeight in 6959, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.0055489163 = product of:
          0.016646748 = sum of:
            0.016646748 = weight(_text_:22 in 6959) [ClassicSimilarity], result of:
              0.016646748 = score(doc=6959,freq=2.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.19345059 = fieldWeight in 6959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6959)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Abstract
    The success of the World Wide Web shows that we can access, search, and retrieve information from globally distributed databases. If a database, such as a library catalog, has some sort of Web-based front end, we can type its URL into a Web browser and use its HTML-based forms to search for items in that database. Depending on how well the query conforms to the database content, how the search engine interprets the query, and how the server formats the results into HTML, we might actually find something usable. While the first two issues depend on ourselves and the server, on the Web the latter falls to the mercy of HTML, which we all know as a great destroyer of information because it codes for display but not for content description. When looking at an HTML-formatted display, we must depend on our own interpretation to recognize such entities as author names, titles, and subject identifiers. The Web browser can do nothing but display the information. If we want some other view of the result, such as sorting the records by date (provided it offers such an option to begin with), the server must do it. This makes poor use of the computing power we have at the desktop (or even laptop), which, unless it involves retrieving more records, could easily do the result set manipulation that we currently send back to the server. Despite having personal computers with immense computational power, as far as information retrieval goes, we still essentially use them as dumb terminals.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
  3. Kaizik, A.; Gödert, W.; Milanesi, C.: Erfahrungen und Ergebnisse aus der Evaluierung des EU-Projektes EULER im Rahmen des an der FH Köln angesiedelten Projektes EJECT (Evaluation von Subject Gateways des World Wide Web) (2001) 0.02
    0.01753862 = product of:
      0.06138517 = sum of:
        0.01445804 = weight(_text_:web in 5801) [ClassicSimilarity], result of:
          0.01445804 = score(doc=5801,freq=2.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.18028519 = fieldWeight in 5801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5801)
        0.005916231 = weight(_text_:information in 5801) [ClassicSimilarity], result of:
          0.005916231 = score(doc=5801,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.13714671 = fieldWeight in 5801, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5801)
        0.033163548 = weight(_text_:frankfurt in 5801) [ClassicSimilarity], result of:
          0.033163548 = score(doc=5801,freq=4.0), product of:
            0.10213336 = queryWeight, product of:
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.024573348 = queryNorm
            0.32470825 = fieldWeight in 5801, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5801)
        0.007847352 = product of:
          0.023542056 = sum of:
            0.023542056 = weight(_text_:22 in 5801) [ClassicSimilarity], result of:
              0.023542056 = score(doc=5801,freq=4.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.27358043 = fieldWeight in 5801, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5801)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Date
    22. 6.2002 19:42:22
    Imprint
    Frankfurt am Main : DGI
    Source
    Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
    Theme
    Information Gateway
  4. Croft, W.B.: Combining approaches to information retrieval (2000) 0.02
    0.015386885 = product of:
      0.07180546 = sum of:
        0.017349645 = weight(_text_:web in 6862) [ClassicSimilarity], result of:
          0.017349645 = score(doc=6862,freq=2.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.21634221 = fieldWeight in 6862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
        0.012296655 = weight(_text_:information in 6862) [ClassicSimilarity], result of:
          0.012296655 = score(doc=6862,freq=12.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.2850541 = fieldWeight in 6862, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
        0.042159162 = weight(_text_:retrieval in 6862) [ClassicSimilarity], result of:
          0.042159162 = score(doc=6862,freq=16.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.5671716 = fieldWeight in 6862, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
      0.21428572 = coord(3/14)
    
    Abstract
    The combination of different text representations and search strategies has become a standard technique for improving the effectiveness of information retrieval. Combination, for example, has been studied extensively in the TREC evaluations and is the basis of the "meta-search" engines used on the Web. This paper examines the development of this technique, including both experimental results and the retrieval models that have been proposed as formal frameworks for combination. We show that combining approaches for information retrieval can be modeled as combining the outputs of multiple classifiers based on one or more representations, and that this simple model can provide explanations for many of the experimental results. We also show that this view of combination is very similar to the inference net model, and that a new approach to retrieval based on language models supports combination and can be integrated with the inference net model
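    The combination idea in this abstract - treating each representation or search strategy as a classifier whose outputs are merged - can be illustrated with a minimal CombSUM-style fusion of min-max-normalized scores. This is a generic sketch of the technique, not code from the paper; the document IDs and scores are invented:

```python
def min_max_normalize(run):
    """Scale one system's scores into [0, 1] so different runs are comparable."""
    lo, hi = min(run.values()), max(run.values())
    if hi == lo:
        return {doc: 1.0 for doc in run}
    return {doc: (score - lo) / (hi - lo) for doc, score in run.items()}

def comb_sum(runs):
    """CombSUM: sum each document's normalized scores across all runs."""
    fused = {}
    for run in runs:
        for doc, score in min_max_normalize(run).items():
            fused[doc] = fused.get(doc, 0.0) + score
    return fused

# Two hypothetical retrieval runs over the same query
keyword_run = {"d1": 12.0, "d2": 7.5, "d3": 3.1}
concept_run = {"d2": 0.9, "d3": 0.8, "d4": 0.2}

ranking = sorted(comb_sum([keyword_run, concept_run]).items(),
                 key=lambda kv: -kv[1])
# d2 ranks first: it scores well under both representations
```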
    Series
    The Kluwer international series on information retrieval; 7
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
  5. Lopatenko, A.; Asserson, A.; Jeffery, K.G.: CERIF - Information retrieval of research information in a distributed heterogeneous environment (2002) 0.01
    0.014587612 = product of:
      0.06807552 = sum of:
        0.03469929 = weight(_text_:web in 3597) [ClassicSimilarity], result of:
          0.03469929 = score(doc=3597,freq=8.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.43268442 = fieldWeight in 3597, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3597)
        0.012296655 = weight(_text_:information in 3597) [ClassicSimilarity], result of:
          0.012296655 = score(doc=3597,freq=12.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.2850541 = fieldWeight in 3597, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3597)
        0.021079581 = weight(_text_:retrieval in 3597) [ClassicSimilarity], result of:
          0.021079581 = score(doc=3597,freq=4.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.2835858 = fieldWeight in 3597, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=3597)
      0.21428572 = coord(3/14)
    
    Abstract
    User demands for access to complete and current information about research may require the integration of data from different CRISs. CRISs are rarely homogeneous systems, and the problems of CRIS integration must be addressed from a technological point of view. The implementation of a CRIS providing access to heterogeneous data distributed among a number of CRISs is described. Several technologies - distributed databases, web services, the semantic web - are used for the distributed CRIS to address different user requirements: distributed databases serve to implement very efficient integration of homogeneous systems, web services to provide open access to research information, and the semantic web to solve the problems of integrating semantically and structurally heterogeneous data sources and to provide intelligent data retrieval interfaces. The problems of data completeness in distributed systems are addressed, and a CRIS-adequate solution for data completeness is suggested.
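    The data-completeness concern raised here can be sketched as a merge step in which partial records for the same research entity are gathered from several CRIS nodes and any remaining gaps are made explicit. All node names, record fields, and values below are invented for illustration:

```python
# Fields a "complete" record is assumed to need (illustrative, not from CERIF)
REQUIRED_FIELDS = ("title", "funding", "publications")

def merge_records(records):
    """Combine partial records for one entity; earlier sources win on conflict."""
    merged = {}
    for record in records:
        for field, value in record.items():
            merged.setdefault(field, value)
    return merged

def missing_fields(merged):
    """Report which required fields no node could supply."""
    return [f for f in REQUIRED_FIELDS if f not in merged]

# Partial views of the same project held by two hypothetical CRIS nodes
node_a = {"title": "Project X", "funding": "EU"}
node_b = {"title": "Project X", "publications": ["paper-1"]}

merged = merge_records([node_a, node_b])
# Together the two nodes cover all required fields; alone, neither does
```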
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak u. A. Nase
  6. Avrahami, T.T.; Yau, L.; Si, L.; Callan, J.P.: ¬The FedLemur project : Federated search in the real world (2006) 0.01
    0.013146668 = product of:
      0.046013337 = sum of:
        0.017349645 = weight(_text_:web in 5271) [ClassicSimilarity], result of:
          0.017349645 = score(doc=5271,freq=2.0), product of:
            0.08019538 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.024573348 = queryNorm
            0.21634221 = fieldWeight in 5271, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5271)
        0.007099477 = weight(_text_:information in 5271) [ClassicSimilarity], result of:
          0.007099477 = score(doc=5271,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.16457605 = fieldWeight in 5271, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5271)
        0.014905514 = weight(_text_:retrieval in 5271) [ClassicSimilarity], result of:
          0.014905514 = score(doc=5271,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.20052543 = fieldWeight in 5271, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5271)
        0.006658699 = product of:
          0.019976096 = sum of:
            0.019976096 = weight(_text_:22 in 5271) [ClassicSimilarity], result of:
              0.019976096 = score(doc=5271,freq=2.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.23214069 = fieldWeight in 5271, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5271)
          0.33333334 = coord(1/3)
      0.2857143 = coord(4/14)
    
    Abstract
    Federated search and distributed information retrieval systems provide a single user interface for searching multiple full-text search engines. They have been an active area of research for more than a decade, but in spite of their success as a research topic, they are still rare in operational environments. This article discusses a prototype federated search system developed for the U.S. government's FedStats Web portal, and the issues addressed in adapting research solutions to this operational environment. A series of experiments explore how well prior research results, parameter settings, and heuristics apply in the FedStats environment. The article concludes with a set of lessons learned from this technology transfer effort, including observations about search engine quality in the real world.
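    The single-interface pattern described here can be sketched as a tiny broker that fans a query out to several engine callables and interleaves their ranked lists round-robin (one simple merge policy among many; the engines and document IDs are invented stand-ins):

```python
from itertools import zip_longest

def federate(query, engines):
    """Send the query to every engine, then interleave ranked lists round-robin."""
    ranked_lists = [engine(query) for engine in engines.values()]
    merged, seen = [], set()
    for tier in zip_longest(*ranked_lists):
        for doc in tier:
            if doc is not None and doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged

# Two stand-in "search engines" returning ranked document IDs
engines = {
    "stats_a": lambda q: ["a1", "a2", "a3"],
    "stats_b": lambda q: ["b1", "a2"],
}

results = federate("unemployment rate", engines)
# round-robin order: a1, b1, a2, a3 (the duplicate a2 is kept once)
```

    A real broker would also normalize scores and handle per-engine query translation; the interleaving above is only the merge step.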
    Date
    22. 7.2006 16:02:07
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.347-358
  7. Jahns, Y.; Trummer, M.: Sacherschließung - Informationsdienstleistung nach Maß : Kann Heterogenität beherrscht werden? (2004) 0.01
    0.01236636 = product of:
      0.04328226 = sum of:
        0.0016733628 = weight(_text_:information in 2789) [ClassicSimilarity], result of:
          0.0016733628 = score(doc=2789,freq=2.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.03879095 = fieldWeight in 2789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=2789)
        0.004968505 = weight(_text_:retrieval in 2789) [ClassicSimilarity], result of:
          0.004968505 = score(doc=2789,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.06684181 = fieldWeight in 2789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.015625 = fieldNorm(doc=2789)
        0.023374973 = weight(_text_:kongress in 2789) [ClassicSimilarity], result of:
          0.023374973 = score(doc=2789,freq=2.0), product of:
            0.16122791 = queryWeight, product of:
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.024573348 = queryNorm
            0.14498094 = fieldWeight in 2789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.015625 = fieldNorm(doc=2789)
        0.013265419 = weight(_text_:frankfurt in 2789) [ClassicSimilarity], result of:
          0.013265419 = score(doc=2789,freq=4.0), product of:
            0.10213336 = queryWeight, product of:
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.024573348 = queryNorm
            0.1298833 = fieldWeight in 2789, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.1562657 = idf(docFreq=1882, maxDocs=44218)
              0.015625 = fieldNorm(doc=2789)
      0.2857143 = coord(4/14)
    
    Content
    "... under this motto, the Deutsche Bücherei Leipzig initiated a series of talks at the Leipzig Kongress für Bibliothek und Information on 23 March 2004. The projects presented operate in the field of tension between standardization and heterogeneity in subject indexing. The users of our libraries and information institutions today face a wealth of information that they can query from numerous catalogs and specialized databases. Such a search can quickly become time-consuming if the user has to work with different search terms and search logics to reach the desired resource. A subject heading A can take on a different meaning in each of the systems searched. Classically, homogeneity is achieved first of all through normalization and standardization. For the two traditional methods of subject indexing - classificatory and verbal - various standards have become established in Germany. Classificatory indexing is carried out with quite different systems; widespread are, for example, the Regensburger Verbundklassifikation (RVK) and the Basisklassifikation (BK). Special and subject institutions use corresponding subject classifications. The most widely applied worldwide is the Dewey Decimal Classification (DDC), which has been being translated into German since 2003. In the area of verbal subject indexing, the Regeln für den Schlagwortkatalog (RSWK) have become established, above all at the academic universal libraries, and through them the Schlagwortnormdatei (SWD) was built up cooperatively. In addition, many special and subject institutions index with subject thesauri they have developed themselves.
    Alongside the maintenance of standards, the networking of the systems emerges as a challenge, in order to connect heterogeneous document holdings. "Standardization must be conceived in terms of the remaining heterogeneity." These tasks can only be solved through cooperation between libraries and information institutions. The projects presented show how this can succeed. They all pursue the goal of making information about content available to users faster and better. Subject searches across several information providers are made possible by search services that overcome heterogeneity. The introduction of the DDC in the German-speaking world stands right at the core of this field of tension. Through its universal character, the DDC does not merely establish an overarching standard; its application is only beneficial if it is networked at the same time with the classifications and thesauri established in Germany. The goal of the DDC Deutsch project is therefore not only a translation into German; the DDC is also to be made available in the form of electronic services. Dr. Lars Svensson, Deutsche Bibliothek Frankfurt am Main, vividly presented the possibilities of intelligent navigation via the DDC. For the document holdings of Die Deutsche Bibliothek, the Gemeinsamer Bibliotheksverbund (GBV) and the Niedersächsische Staats- und Universitätsbibliothek Göttingen, a web service was implemented as a prototype.
    This DDC tool provides access to local title data indexed with the DDC. For some DDC classes that have already been translated, a browser can be used, and a targeted verbal search for DDC elements is also possible. Searching by aspects, e.g. geographic ones, is to be made possible by storing the notation elements separately in the title records. Finally, integrated searches across DDC and SWD or other indexing systems are also conceivable in the future, in order to find literature on a topic. The retrieval interface presented by Lars Svensson offers a central solution: separate search structures do not have to be developed for every local OPAC in order to access DDC data. How data holdings with different indexing are brought together under one interface, with the DDC used as a meta-level, can already be seen today in the subject gateway Renardus. The Renardus broker enables cross-browsing and cross-searching across distributed Internet resources in Europe. For navigation via the DDC, crosswalks first had to be created between the local classification classes and the DDC. The CarmenX tool developed at the Universitätsbibliothek Regensburg was further developed for this purpose by the Niedersächsische Staats- und Universitätsbibliothek Göttingen and provides access to the various classification systems. Dr. Friedrich Geißelmann, Universitätsbibliothek Regensburg, reported on these developments. He led the CARMEN subproject "Grosskonkordanzen zwischen Thesauri und Klassifikationen", in which the CarmenX tool was created. In this CARMEN work package, both fundamental methodological investigations of cross-concordances and prototypical implementations in the subject areas of mathematics, physics and the social sciences were carried out.
    The goal was to let users searching distributed databases with different classifications and thesauri start from a familiar system and switch to others without needing detailed knowledge of those systems. In the area of cross-concordances between general and subject classifications, for example, the RVK, the Mathematical Subject Classification (MSC) and the Physics and Astronomy Classification Scheme (PACS) were selected.
    Katja Heyke, Universitäts- und Stadtbibliothek Köln, and Manfred Faden, library of the HWWA-Institut für Wirtschaftsforschung Hamburg, presented similar developments for the field of economics. Here, a cross-concordance between the Standard Thesaurus Wirtschaft (STW) and the economics section of the SWD is being built up. This database is intended to provide access to the holdings indexed with STW and SWD. It will be passed on to the virtual subject library EconBiz and to the Gemeinsamer Bibliotheksverbund. The economics cross-concordance also offers an opportunity for cooperative subject indexing, however, since it opens up the possibility of a mutual exchange of subject indexing data between the partners Die Deutsche Bibliothek, Universitäts- und Stadtbibliothek Köln, HWWA and the library of the Institut für Weltwirtschaft Kiel. The example of economics shows the benefit of such concordance projects for indexers and users alike. The exchange about indexing rules and the systematic analysis of the authority data lead to the elimination of subject-related weaknesses and inconsistencies in the systems. The thesauri are improved overall and even brought closer together. The series of talks closed with a project that views the heterogeneity of data from the perspective of multilingualism. Martin Kunz, Deutsche Bibliothek Frankfurt am Main, reported on the project MACS (Multilingual Access to Subject Headings). MACS offers multilingual access to library catalogs. For this purpose, a link was established between the subject heading authority files LCSH, RAMEAU and SWD. Equivalent preferred terms of the authority files are identified intellectually and stored as links. The project initially limited itself to the fields of sport and theatre and in a next stage will address the most frequently used subject headings.
    MACS assumes that a user starts a subject heading search in the language of their choice (German, English or French) and enables them to extend that search to the affiliated databases abroad. Martin Kunz argued for an integration approach based on mutual respect for the terminology of the cooperating partners. He advocated applying the term thesaurus federation to such undertakings, which underscores the autonomy of the thesauri.
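    The MACS idea of storing intellectually verified links between equivalent preferred terms can be sketched as a simple concordance lookup. The headings below are illustrative stand-ins, not actual MACS data:

```python
# Each entry links equivalent preferred terms across the three authority files
# (invented sample data for illustration only)
CONCORDANCE = [
    {"SWD": "Theater", "LCSH": "Theater", "RAMEAU": "Théâtre"},
    {"SWD": "Sport", "LCSH": "Sports", "RAMEAU": "Sports"},
]

def translate_heading(term, source, target):
    """Follow a stored link from one authority file's heading to another's."""
    for entry in CONCORDANCE:
        if entry.get(source) == term:
            return entry.get(target)
    return None  # no link has been recorded for this heading yet

result = translate_heading("Sport", source="SWD", target="LCSH")
# a German-language search can thus be extended to an LCSH-indexed catalog
```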
  8. Krause, J.: Heterogenität und Integration : Zur Weiterentwicklung von Inhaltserschließung und Retrieval in sich veränderten Kontexten (2001) 0.01
    Abstract
    As an important support tool in science research, specialized information systems are rapidly changing their character. The potential for improvement compared with today's usual systems is enormous. This fact will be demonstrated by means of two problem complexes: - WWW search engines, which were developed without any government grants, are increasingly dominating the scene. Does the WWW displace information centers with their high quality databases? What are the results we can get nowadays using general WWW search engines? - In addition to the WWW and specialized databases, scientists now use WWW library catalogues of digital libraries, which combine the catalogues from an entire region or a country. At the same time, however, they are faced with highly decentralized heterogeneous databases which contain the widest range of textual sources and data, e.g. from surveys. One consequence is the presence of serious inconsistencies in quality, relevance and content analysis. Thus, the main problem to be solved is as follows: users must be supplied with heterogeneous data from different sources, modalities and content development processes via a single visual user interface, without inconsistencies in content development seriously impairing the quality of the search results, e.g. when users phrase their search inquiry in the terminology to which they are accustomed.
    Imprint
    Frankfurt am Main : DGI
    Source
    Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
  9. Strötgen, R.; Kokkelink, S.: Metadatenextraktion aus Internetquellen : Heterogenitätsbehandlung im Projekt CARMEN (2001) 0.01
    Abstract
    The special funding measure CARMEN (Content Analysis, Retrieval and Metadata: Effective Networking), part of the BMB+F-funded programme GLOBAL INFO, aims to create information systems suited to the distributed holdings of libraries, specialised information centres and the Internet in today's decentralised information world. Bringing these holdings together is problematic less in technical than in content-related and conceptual terms. Heterogeneity arises, for example, when different collections use different thesauri or classifications for subject indexing, when metadata are recorded differently or not at all, or when intellectually indexed sources meet Internet documents that are, as a rule, completely unindexed. The CARMEN project attacks this problem with several methods: deductive-heuristic procedures generate metadata automatically from documents; statistical-quantitative methods map the differing uses of terms in the various collections onto one another; and intellectually created cross-concordances provide reliable transitions from one documentation language into another. For the extraction of metadata according to Dublin Core (above all author, title, institution, abstract and keywords), heuristics are developed from typical documents (dissertations from Math-Net in PostScript format and a wide variety of HTML files from the WWW servers of German social science institutions). The probability that the metadata obtained in this way are correct and trustworthy is attached to the individual data items as a weight. The heuristics are implemented iteratively in an extraction tool, tested and improved in order to increase the reliability of the procedures.
First prototypes of such transfer modules are currently being built at the Universität Osnabrück and at the InformationsZentrum Sozialwissenschaften in Bonn, on the basis of mathematical and social science data collections.
    Imprint
    Frankfurt am Main : DGI
    Source
    Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
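The weighted extraction heuristics described in the CARMEN abstract can be illustrated in a few lines of Python. The regular expressions, field names and weights below are invented for illustration; they are not CARMEN's actual rules.

```python
import re

# Several heuristics each propose a Dublin Core field from an HTML
# document; every proposal carries a weight expressing how trustworthy
# that heuristic is considered to be. Weights here are invented.
HEURISTICS = [
    # the <title> element is usually a reliable source for DC.Title
    ("DC.Title", re.compile(r"<title>(.*?)</title>", re.I | re.S), 0.9),
    # an explicit author meta tag, when present, is fairly trustworthy
    ("DC.Creator", re.compile(r'<meta name="author" content="(.*?)"', re.I), 0.8),
    # first <h1> as a fallback guess for the title - much weaker
    ("DC.Title", re.compile(r"<h1>(.*?)</h1>", re.I | re.S), 0.5),
]

def extract(html: str) -> dict[str, tuple[str, float]]:
    """Keep, per field, the candidate proposed by the highest-weighted heuristic."""
    best: dict[str, tuple[str, float]] = {}
    for field, pattern, weight in HEURISTICS:
        m = pattern.search(html)
        if m and weight > best.get(field, ("", 0.0))[1]:
            best[field] = (m.group(1).strip(), weight)
    return best

doc = ('<html><head><title>Verteilte Retrievalsysteme</title>'
       '<meta name="author" content="R. Strötgen"></head>'
       '<body><h1>Einleitung</h1></body></html>')
print(extract(doc))
```

The weight attached to each extracted value lets downstream systems decide whether a generated metadata record is trustworthy enough to use as-is or needs intellectual review.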
  10. Heery, R.: Information gateways : collaboration and content (2000) 0.01
    Abstract
    Information subject gateways provide targeted discovery services for their users, giving access to Web resources selected according to quality and subject coverage criteria. Information gateways recognise that they must collaborate on a wide range of issues relating to content to ensure continued success. This report is informed by discussion of content activities at the 1999 Imesh Workshop. The author considers the implications for subject based gateways of co-operation regarding coverage policy, creation of metadata, and provision of searching and browsing across services. Other possibilities for co-operation include working more closely with information providers, and disclosure of information in joint metadata registries.
    Date
    22. 6.2002 19:38:54
    Source
    Online information review. 24(2000) no.1, S.40-45
    Theme
    Information Gateway
  11. Stark, T.: ¬The Net and Z39.50 : toward a virtual union catalog (1997) 0.01
    Abstract
    The State Library of Iowa, USA, received a Higher Education Act Title II grant from the US Dept. of Education in 1994 to create a demonstration project of new library information technologies. Describes 2 interlinked components of the project: Web-based union catalogue development and statewide deployment of the ANSI/NISO Z39.50 standard for database search and retrieval. Z39.50 was chosen because of its ability to search multiple remote databases in a single session and its common interface across a variety of implementations. Use of a distributed Z39.50 search eliminates the need to maintain large union catalogues.
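The broadcast-and-merge pattern behind such a virtual union catalogue can be sketched as follows. The target functions below are stand-ins for real Z39.50 sessions; a production client would open a session per server and speak the protocol instead.

```python
from concurrent.futures import ThreadPoolExecutor

# Each "target" stands in for one remote catalogue reachable over
# Z39.50. The holdings are invented sample records.
def make_target(name, records):
    def search(query):
        return [(name, r) for r in records if query.lower() in r.lower()]
    return search

targets = [
    make_target("Library A", ["Distributed retrieval", "Cataloguing rules"]),
    make_target("Library B", ["Distributed systems", "Union catalogues"]),
]

def virtual_union_search(query):
    """Fan one query out to all targets in parallel and merge the answers,
    so no physically merged union catalogue has to be maintained."""
    with ThreadPoolExecutor() as pool:
        result_sets = list(pool.map(lambda t: t(query), targets))
    return [hit for rs in result_sets for hit in rs]

print(virtual_union_search("distributed"))
```

The trade-off the article implies is visible here: the union view is always current, but each search costs one round trip per target instead of one lookup in a centrally maintained file.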
  12. Sarinder, K.K.S.; Lim, L.H.S.; Merican, A.F.; Dimyati, K.: Biodiversity information retrieval across networked data sets (2010) 0.01
    Abstract
    Purpose - Biodiversity resources are inevitably digital and stored in a wide variety of formats by researchers or stakeholders. In Malaysia, although digitizing biodiversity data has long been stressed, the interoperability of the biodiversity data is still an issue that requires attention. This is because, when data are shared, the question of copyright occurs, creating a setback among researchers wanting to promote or share data through online presentations. To solve this, the aim is to present an approach to integrate data through wrapping of datasets stored in relational databases located on networked platforms. Design/methodology/approach - The approach uses tools such as XML, PHP, ASP and HTML to integrate distributed databases in heterogeneous formats. Five current database integration systems were reviewed and all of them have common attributes such as query-oriented, using a mediator-based approach and integrating a structured data model. These common attributes were also adopted in the proposed solution. Distributed Generic Information Retrieval (DiGIR) was used as a model in designing the proposed solution. Findings - A new database integration system was developed, which is user-friendly and simple with common attributes found in current integration systems.
    Source
    Aslib proceedings. 62(2010) nos.4/5, S.514-522
    Year
    2010
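The wrapper idea the abstract adopts from DiGIR, where each provider keeps its own relational schema and a thin layer answers queries in a shared XML vocabulary, might look like this in outline. The table, column names and shared element names are invented for illustration.

```python
import sqlite3
import xml.etree.ElementTree as ET

# A provider's local store, with its own (invented) schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE specimens (sci_name TEXT, locality TEXT)")
conn.execute("INSERT INTO specimens VALUES ('Lates calcarifer', 'Selangor')")

# Mapping from local column names onto the shared XML vocabulary.
LOCAL_TO_SHARED = {"sci_name": "ScientificName", "locality": "Locality"}

def wrap_as_xml(filter_sql: str = "1=1") -> str:
    """Answer a query with records rewritten into the shared vocabulary.
    (A real wrapper would validate or parameterise the filter instead of
    splicing it into the SQL.)"""
    root = ET.Element("response")
    cur = conn.execute(f"SELECT sci_name, locality FROM specimens WHERE {filter_sql}")
    for row in cur:
        rec = ET.SubElement(root, "record")
        for local, value in zip(("sci_name", "locality"), row):
            ET.SubElement(rec, LOCAL_TO_SHARED[local]).text = value
    return ET.tostring(root, encoding="unicode")

print(wrap_as_xml())
```

Because every provider emits the same XML elements regardless of its internal schema, a mediator can merge answers from heterogeneous databases without the providers surrendering their data or copyright.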
  13. Dempsey, L.; Russell, R.; Kirriemur, J.W.: Towards distributed library systems : Z39.50 in a European context (1996) 0.01
    Abstract
    Z39.50 is an information retrieval protocol. It has generated much interest but is so far little deployed in UK systems and services. Gives a functional overview of the protocol itself and the standards background, describes some European initiatives which make use of it, and outlines various issues to do with its future use and acceptance. Z39.50 is a crucial building block of future distributed information systems but it needs to be considered alongside other protocols and services to provide useful applications
    Source
    Program. 30(1996) no.1, S.1-22
  14. Callan, J.: Distributed information retrieval (2000) 0.01
    Abstract
    A multi-database model of distributed information retrieval is presented, in which people are assumed to have access to many searchable text databases. In such an environment, full-text information retrieval consists of discovering database contents, ranking databases by their expected ability to satisfy the query, searching a small number of databases, and merging results returned by different databases. This paper presents algorithms for each task. It also discusses how to reorganize conventional test collections into multi-database testbeds, and evaluation methodologies for multi-database experiments. A broad and diverse group of experimental results is presented to demonstrate that the algorithms are effective, efficient, robust, and scalable
    Series
    The Kluwer international series on information retrieval; 7
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
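The resource-selection step of this multi-database model, ranking collections by their expected ability to satisfy the query before searching only a few, can be illustrated with a simplified score over collection summary statistics. The formula below is a df-based sketch in the spirit of this line of work (e.g. CORI), not the exact published algorithm, and the collection statistics are invented.

```python
import math

# Summary statistics per collection: number of documents and document
# frequencies for a few terms. Invented sample data.
collections = {
    "news":    {"size": 10000, "df": {"retrieval": 20,  "protocol": 400}},
    "cs":      {"size": 5000,  "df": {"retrieval": 900, "protocol": 350}},
    "history": {"size": 8000,  "df": {"retrieval": 5,   "protocol": 2}},
}

def score(query_terms, stats, n_collections):
    """Belief that a collection satisfies the query: normalised document
    frequency times an inverse collection frequency, summed over terms."""
    s = 0.0
    for t in query_terms:
        df = stats["df"].get(t, 0)
        cf = sum(1 for c in collections.values() if c["df"].get(t, 0) > 0)
        if df and cf:
            s += (df / stats["size"]) * math.log(1 + n_collections / cf)
    return s

def select(query_terms, k=1):
    """Rank collections and return the k most promising ones to search."""
    n = len(collections)
    ranked = sorted(collections,
                    key=lambda c: score(query_terms, collections[c], n),
                    reverse=True)
    return ranked[:k]

print(select(["retrieval"], k=2))
```

Only the selected collections are then searched, and their result lists merged, which is exactly the discover-rank-search-merge pipeline the abstract describes.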
  15. López Vargas, M.A.: "Ilmenauer Verteiltes Information REtrieval System" (IVIRES) : eine neue Architektur zur Informationsfilterung in einem verteilten Information Retrieval System (2002) 0.01
  16. Subject retrieval in a networked environment : Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC (2003) 0.01
    Content
    Contains the contributions: Devadason, F.J., N. Intaraksa and P. Patamawongjariya et al.: Faceted indexing application for organizing and accessing internet resources; Nicholson, D., S. Wake: HILT: subject retrieval in a distributed environment; Olson, T.: Integrating LCSH and MeSH in information systems; Kuhr, P.S.: Putting the world back together: mapping multiple vocabularies into a single thesaurus; Freyre, E., M. Naudi: MACS : subject access across languages and networks; McIlwaine, I.C.: The UDC and the World Wide Web; Garrison, W.A.: The Colorado Digitization Project: subject access issues; Vizine-Goetz, D., R. Thompson: Towards DDC-classified displays of Netfirst search results: subject access issues; Godby, C.J., J. Stuler: The Library of Congress Classification as a knowledge base for automatic subject categorization: subject access issues; O'Neill, E.T., E. Childress and R. Dean et al.: FAST: faceted application of subject terminology; Bean, C.A., R. Green: Improving subject retrieval with frame representation; Zeng, M.L., Y. Chen: Features of an integrated thesaurus management and search system for the networked environment; Hudon, M.: Subject access to Web resources in education; Qin, J., J. Chen: A multi-layered, multi-dimensional representation of digital educational resources; Riesthuis, G.J.A.: Information languages and multilingual subject access; Geisselmann, F.: Access methods in a database of e-journals; Beghtol, C.: The Iter Bibliography: International standard subject access to medieval and renaissance materials (400-1700); Slavic, A.: General library classification in learning material metadata: the application in IMS/LOM and CDMES metadata schemas; Cordeiro, M.I.: From library authority control to network authoritative metadata sources; Koch, T., H. Neuroth and M. Day: Renardus: Cross-browsing European subject gateways via a common classification system (DDC); Olson, H.A., D.B. Ward: Mundane standards, everyday technologies, equitable access; Burke, M.A.: Personal Construct Theory as a research tool in Library and Information Science: case study: development of a user-driven classification of photographs
    Footnote
    Rez. in: KO 31(2004) no.2, S.117-118 (D. Campbell): "This excellent volume offers 22 papers delivered at an IFLA Satellite meeting in Dublin, Ohio, in 2001. The conference gathered together information and computer scientists to discuss an important and difficult question: in what specific ways can the accumulated skills, theories and traditions of librarianship be mobilized to face the challenges of providing subject access to information in present and future networked information environments? The papers which grapple with this question are organized in a surprisingly deft and coherent way. Many conferences and proceedings have unhappy sessions that contain a hodge-podge of papers that didn't quite fit any other categories. As befits a good classificationist, editor I.C. McIlwaine has kept this problem to a minimum. The papers are organized into eight sessions, which split into two broad categories. The first five sessions deal with subject domains, and the last three deal with subject access tools. The five sessions and thirteen papers that discuss access in different domains appear in order of increasing intension. The first papers deal with access in multilingual environments, followed by papers on access across multiple vocabularies and across sectors, ending up with studies of domain-specific retrieval (primarily education). Some of the papers offer predictably strong work by scholars engaged in ongoing, long-term research. Gerard Riesthuis offers a clear analysis of the complexities of negotiating non-identical thesauri, particularly in cases where hierarchical structure varies across different languages. Hope Olson and Dennis Ward use Olson's familiar and welcome method of using provocative and unconventional theory to generate meliorative approaches to bias in general subject access schemes. 
Many papers, on the other hand, deal with specific ongoing projects: Renardus, The High Level Thesaurus Project, The Colorado Digitization Project and The Iter Bibliography for medieval and Renaissance material. Most of these papers display a similar structure: an explanation of the theory and purpose of the project, an account of problems encountered in the implementation, and a discussion of the results, both promising and disappointing, thus far. Of these papers, the account of the Multilanguage Access to Subjects Project in Europe (MACS) deserves special mention. In describing how the project is founded on the principle of the equality of languages, with each subject heading language maintained in its own database, and with no single language used as a pivot for the others, Elisabeth Freyre and Max Naudi offer a particularly vivid example of the way the ethics of librarianship translate into pragmatic contexts and concrete procedures. The three sessions and nine papers devoted to subject access tools split into two kinds: papers that discuss the use of theory and research to generate new tools for a networked environment, and those that discuss the transformation of traditional subject access tools in this environment. In the new tool development area, Mary Burke provides a promising example of the bidirectional approach that is so often necessary: in her case study of user-driven classification of photographs, she uses personal construct theory to clarify the practice of classification, while at the same time using practice to test the theory. Carol Bean and Rebecca Green offer an intriguing combination of librarianship and computer science, importing frame representation technique from artificial intelligence to standardize syntagmatic relationships to enhance recall and precision.
    The papers discussing the transformation of traditional tools locate the point of transformation in different places. Some, like the papers on DDC, LCC and UDC, suggest that these schemes can be imported into the networked environment and used as a basis for improving access to networked resources, just as they improve access to physical resources. While many of these papers are intriguing, I suspect that convincing those outside the profession will be difficult. In particular, Edward O'Neill and his colleagues, while offering a fascinating suggestion for preserving the Library of Congress Subject Headings and their associated infrastructure by converting them into a faceted scheme, will have an uphill battle convincing the unconverted that LCSH has a place in the online networked environment. Two papers deserve mention for taking a different approach: both Francis Devadason and Maria Ines Cordeiro suggest that we import concepts and techniques rather than realized schemes. Devadason argues for the creation of a faceted pre-coordinate indexing scheme for Internet resources based on Deep Structure indexing, which originates in Bhattacharyya's Postulate-Based Permuted Subject Indexing and in Ranganathan's chain indexing techniques. Cordeiro takes up the vitally important role of authority control in Web environments, suggesting that the techniques of authority control be expanded to enhance user flexibility. By focusing her argument on the concepts rather than on the existing tools, and by making useful and important distinctions between library and non-library uses of authority control, Cordeiro suggests that librarianship's contribution to networked access has less to do with its tools and infrastructure, and more to do with concepts that need to be boldly reinvented. The excellence of this collection derives in part from the energy, insight and diversity of the papers. 
Credit also goes to OCLC, the IFLA Classification and Indexing Section, the IFLA Information Technology Section, and the Program Committee, headed by editor I.C. McIlwaine, for the planning and forethought that went into the conference itself. This collection avoids many of the problems of conference proceedings, and instead offers the best of such proceedings: detail, diversity, and judicious mixtures of theory and practice. Still, some of the disadvantages that plague conference proceedings do appear here. Busy scholars sometimes interpret the concept of "camera-ready copy" creatively, offering diagrams that could have used some streamlining, and label boxes that cut off the tops or bottoms of letters. The papers are necessarily short, and many of them raise issues that deserve more extensive treatment. The issue of subject access in networked environments is crying out for further synthesis at the conceptual and theoretical level. But no synthesis can afford to ignore the kind of energetic, imaginative and important work that the papers in these proceedings represent."
  17. Xu, J.; Croft, W.B.: Topic-based language models for distributed retrieval (2000) 0.01
    0.007630227 = product of:
      0.053411588 = sum of:
        0.008695048 = weight(_text_:information in 38) [ClassicSimilarity], result of:
          0.008695048 = score(doc=38,freq=6.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.20156369 = fieldWeight in 38, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
        0.04471654 = weight(_text_:retrieval in 38) [ClassicSimilarity], result of:
          0.04471654 = score(doc=38,freq=18.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.60157627 = fieldWeight in 38, product of:
              4.2426405 = tf(freq=18.0), with freq of:
                18.0 = termFreq=18.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
      0.14285715 = coord(2/14)
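The explain tree above follows Lucene's ClassicSimilarity term weighting: each clause scores queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = sqrt(freq) × idf × fieldNorm. A minimal sketch reproducing the arithmetic of the first clause (the function name is illustrative, not from Lucene's API):

```python
import math

def classic_similarity(freq, idf, query_norm, field_norm):
    """Lucene ClassicSimilarity term score:
    (idf * query_norm) * (sqrt(freq) * idf * field_norm)."""
    query_weight = idf * query_norm
    field_weight = math.sqrt(freq) * idf * field_norm
    return query_weight * field_weight

# Values taken from the explain tree for weight(_text_:information in 38)
score = classic_similarity(freq=6.0, idf=1.7554779,
                           query_norm=0.024573348, field_norm=0.046875)
print(score)  # ≈ 0.008695048, matching the explain output
```

The same formula reproduces the second clause (freq=18, idf=3.024915 gives ≈ 0.04471654), so the per-document score is just the sum of such clauses scaled by the coord factor.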
    
    Abstract
    Effective retrieval in a distributed environment is an important but difficult problem. Lack of effectiveness appears to have two major causes. First, existing collection selection algorithms do not work well on heterogeneous collections. Second, relevant documents are scattered over many collections and searching a few collections misses many relevant documents. We propose a topic-oriented approach to distributed retrieval. With this approach, we structure the document set of a distributed retrieval environment around a set of topics. Retrieval for a query involves first selecting the right topics for the query and then dispatching the search process to collections that contain such topics. The content of a topic is characterized by a language model. In environments where the labeling of documents by topics is unavailable, document clustering is employed for topic identification. Based on these ideas, three methods are proposed to suit different environments. We show that all three methods improve effectiveness of distributed retrieval
    Series
    The Kluwer international series on information retrieval; 7
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
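The topic-oriented approach the abstract describes, characterizing each topic by a language model and dispatching the query to the best-matching topics, can be caricatured in a few lines. A toy sketch, not the authors' implementation: the collections, the flat floor probability `mu` for unseen terms (standing in for proper smoothing), and all names are invented for illustration.

```python
import math
from collections import Counter

def topic_language_model(docs):
    """Unigram language model (term -> probability) over one topic's documents."""
    counts = Counter(w for d in docs for w in d.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def query_log_likelihood(query, model, mu=0.01):
    """Log-likelihood of the query under a topic model; mu is a crude
    floor probability for terms the topic has never seen."""
    return sum(math.log(model.get(w, mu)) for w in query.lower().split())

def select_topics(query, topics, k=1):
    """Rank topics by query likelihood and keep the top k for dispatch."""
    ranked = sorted(topics, reverse=True,
                    key=lambda t: query_log_likelihood(query, topics[t]))
    return ranked[:k]

# Hypothetical document clusters standing in for two topics
topics = {
    "ir":  topic_language_model(["distributed information retrieval",
                                 "collection selection for retrieval"]),
    "bio": topic_language_model(["gene expression analysis",
                                 "protein sequence alignment"]),
}
print(select_topics("distributed retrieval", topics))  # ['ir']
```

In the paper's setting the search would then be forwarded only to the collections holding the selected topics, which is what makes the scheme attractive when relevant documents are scattered.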
  18. Crestani, F.; Wu, S.: Testing the cluster hypothesis in distributed information retrieval (2006) 0.01
    0.007575589 = product of:
      0.053029124 = sum of:
        0.011832462 = weight(_text_:information in 984) [ClassicSimilarity], result of:
          0.011832462 = score(doc=984,freq=16.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.27429342 = fieldWeight in 984, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=984)
        0.041196663 = weight(_text_:retrieval in 984) [ClassicSimilarity], result of:
          0.041196663 = score(doc=984,freq=22.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.554223 = fieldWeight in 984, product of:
              4.690416 = tf(freq=22.0), with freq of:
                22.0 = termFreq=22.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=984)
      0.14285715 = coord(2/14)
    
    Abstract
    How to merge and organise query results retrieved from different resources is one of the key issues in distributed information retrieval. Some previous research and experiments suggest that cluster-based document browsing is more effective than a single merged list. Cluster-based retrieval results presentation is based on the cluster hypothesis, which states that documents that cluster together have a similar relevance to a given query. However, while this hypothesis has been demonstrated to hold in classical information retrieval environments, it has never been fully tested in heterogeneous distributed information retrieval environments. Heterogeneous document representations, the presence of document duplicates, and disparate qualities of retrieval results are major features of a heterogeneous distributed information retrieval environment that might disrupt the effectiveness of the cluster hypothesis. In this paper we report on an experimental investigation into the validity and effectiveness of the cluster hypothesis in highly heterogeneous distributed information retrieval environments. The results show that although clustering is affected by different retrieval results representations and quality, the cluster hypothesis still holds and that generating hierarchical clusters in highly heterogeneous distributed information retrieval environments is still a very effective way of presenting retrieval results to users.
    Source
    Information processing and management. 42(2006) no.5, S.1137-1150
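Clustering a merged result list of the kind the abstract evaluates can be illustrated with a minimal single-link clustering over result titles. This is only a sketch of the general idea, not the authors' method: the Jaccard token similarity, the threshold, and the sample data are all invented for illustration.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity between the token sets of two titles."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def single_link_clusters(docs, threshold=0.5):
    """Greedy single-link clustering via union-find: merge any two
    documents whose pairwise similarity reaches the threshold."""
    parent = list(range(len(docs)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in combinations(range(len(docs)), 2):
        if jaccard(docs[i], docs[j]) >= threshold:
            parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(docs)):
        clusters.setdefault(find(i), []).append(i)
    return sorted(clusters.values())

# Hypothetical merged list from two collections, containing a near-duplicate
docs = ["cluster hypothesis in distributed retrieval",
        "the cluster hypothesis in distributed retrieval",
        "protein folding simulation"]
print(single_link_clusters(docs))  # [[0, 1], [2]]
```

Note how the near-duplicate pair collapses into one cluster; duplicates across collections are exactly one of the disruptive features the paper investigates.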
  19. Fuhr, N.: Towards data abstraction in networked information retrieval systems (1999) 0.01
    0.0073349974 = product of:
      0.05134498 = sum of:
        0.016565446 = weight(_text_:information in 4517) [ClassicSimilarity], result of:
          0.016565446 = score(doc=4517,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.3840108 = fieldWeight in 4517, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4517)
        0.034779534 = weight(_text_:retrieval in 4517) [ClassicSimilarity], result of:
          0.034779534 = score(doc=4517,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.46789268 = fieldWeight in 4517, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.109375 = fieldNorm(doc=4517)
      0.14285715 = coord(2/14)
    
    Source
    Information processing and management. 35(1999) no.2, S.101-119
  20. Meiert, M.: Elektronische Publikationen an Hochschulen : Modellierung des elektronischen Publikationsprozesses am Beispiel der Universität Hildesheim (2006) 0.01
    0.0061422195 = product of:
      0.028663691 = sum of:
        0.007099477 = weight(_text_:information in 5974) [ClassicSimilarity], result of:
          0.007099477 = score(doc=5974,freq=4.0), product of:
            0.04313797 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.024573348 = queryNorm
            0.16457605 = fieldWeight in 5974, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5974)
        0.014905514 = weight(_text_:retrieval in 5974) [ClassicSimilarity], result of:
          0.014905514 = score(doc=5974,freq=2.0), product of:
            0.07433229 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.024573348 = queryNorm
            0.20052543 = fieldWeight in 5974, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=5974)
        0.006658699 = product of:
          0.019976096 = sum of:
            0.019976096 = weight(_text_:22 in 5974) [ClassicSimilarity], result of:
              0.019976096 = score(doc=5974,freq=2.0), product of:
                0.08605168 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.024573348 = queryNorm
                0.23214069 = fieldWeight in 5974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5974)
          0.33333334 = coord(1/3)
      0.21428572 = coord(3/14)
    
    Date
    1. 9.2006 13:22:15
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
    Theme
    Information Gateway

Languages

  • e 53
  • d 34
  • f 1

Types

  • a 78
  • el 10
  • x 5
  • m 4
  • r 1
  • s 1

Classifications