Search (56 results, page 1 of 3)

  • theme_ss:"Verteilte bibliographische Datenbanken"
  • year_i:[2000 TO 2010}
  1. Jahns, Y.; Trummer, M.: Sacherschließung - Informationsdienstleistung nach Maß : Kann Heterogenität beherrscht werden? (2004) 0.01
    0.014115498 = product of:
      0.04940424 = sum of:
        0.0025582663 = weight(_text_:information in 2789) [ClassicSimilarity], result of:
          0.0025582663 = score(doc=2789,freq=2.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.03879095 = fieldWeight in 2789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=2789)
        0.046845976 = weight(_text_:wirtschaft in 2789) [ClassicSimilarity], result of:
          0.046845976 = score(doc=2789,freq=6.0), product of:
            0.2144363 = queryWeight, product of:
              5.707926 = idf(docFreq=398, maxDocs=44218)
              0.037568163 = queryNorm
            0.21846104 = fieldWeight in 2789, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.707926 = idf(docFreq=398, maxDocs=44218)
              0.015625 = fieldNorm(doc=2789)
      0.2857143 = coord(2/7)
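     The score breakdown shown with each hit is raw explain() output from Lucene's ClassicSimilarity (TF-IDF) ranking. A minimal Python sketch, assuming the standard ClassicSimilarity formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs/(docFreq+1)), reproduces the score of hit 1 from the factors above:

       import math

       def term_score(freq, doc_freq, max_docs, field_norm, query_norm):
           # ClassicSimilarity: score(term) = queryWeight * fieldWeight
           tf = math.sqrt(freq)                           # tf(freq=...) lines above
           idf = 1 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq=..., maxDocs=...)
           query_weight = idf * query_norm                # idf * queryNorm
           field_weight = tf * idf * field_norm           # tf * idf * fieldNorm
           return query_weight * field_weight

       query_norm = 0.037568163                           # taken from the output above
       w_info = term_score(2.0, 20772, 44218, 0.015625, query_norm)  # _text_:information
       w_wirt = term_score(6.0, 398, 44218, 0.015625, query_norm)    # _text_:wirtschaft
       print((w_info + w_wirt) * 2 / 7)                   # coord(2/7) -> ~0.014115498

     The coord(2/7) factor records that only two of the seven query clauses matched this document; hits matching more clauses are rewarded proportionally.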
    
    Content
    "... unter diesem Motto hat die Deutsche Bücherei Leipzig am 23. März 2004 auf dem Leipziger Kongress für Bibliothek und Information eine Vortragsreihe initiiert. Vorgestellt wurden Projekte, die sich im Spannungsfeld von Standardisierung und Heterogenität der Sacherschließung bewegen. Die Benutzer unserer Bibliotheken und Informationseinrichtungen stehen heute einer Fülle von Informationen gegenüber, die sie aus zahlreichen Katalogen und Fachdatenbanken abfragen können. Diese Recherche kann schnell zeitraubend werden, wenn der Benutzer mit verschiedenen Suchbegriffen und -logiken arbeiten muss, um zur gewünschten Ressource zu gelangen. Ein Schlagwort A kann in jedem der durchsuchten Systeme eine andere Bedeutung annehmen. Homogenität erreicht man klassisch zunächst durch Normierung und Standardisierung. Für die zwei traditionellen Verfahren der inhaltlichen Erschließung - der klassifikatorischen und der verbalen - haben sich in Deutschland verschiedene Standards durchgesetzt. Klassifikatorische Erschließung wird mit ganz unterschiedlichen Systemen betrieben. Verbreitet sind etwa die Regensburger Verbundklassifikation (RVK) oder die Basisklassifikation (BK). Von Spezial- und Facheinrichtungen werden entsprechende Fachklassifikationen eingesetzt. Weltweit am häufigsten angewandt ist die Dewey Decimal Classification (DDC), die seit 2003 ins Deutsche übertragen wird. Im Bereich der verbalen Sacherschließung haben sich, vor allem bei den wissenschaftlichen Universalbibliotheken, die Regeln für den Schlagwortkatalog (RSWK) durchgesetzt, durch die zugleich die Schlagwortnormdatei (SWD) kooperativ aufgebaut wurde. Daneben erschließen wiederum viele Spezial- und Facheinrichtungen mit selbst entwickelten Fachthesauri.
    Katja Heyke, Universitäts- und Stadtbibliothek Köln, und Manfred Faden, Bibliothek des HWWA-Instituts für Wirtschaftsforschung Hamburg, stellten ähnliche Entwicklungen für den Fachbereich Wirtschaftswissenschaften vor. Hier wird eine Crosskonkordanz zwischen dem Standard Thesaurus Wirtschaft (STW) und dem Bereich Wirtschaft der SWD aufgebaut." Diese Datenbank soll den Zugriff auf die mit STW und SWD erschlossenen Bestände ermöglichen. Sie wird dazu weitergegeben an die virtuelle Fachbibliothek EconBiz und an den Gemeinsamen Bibliotheksverbund. Die Crosskonkordanz Wirtschaft bietet aber auch die Chance zur kooperativen Sacherschließung, denn sie eröffnet die Möglichkeit der gegenseitigen Übernahme von Sacherschließungsdaten zwischen den Partnern Die Deutsche Bibliothek, Universitäts- und Stadtbibliothek Köln, HWWA und Bibliothek des Instituts für Weltwirtschaft Kiel. Am Beispiel der Wirtschaftswissenschaften zeigt sich der Gewinn solcher KonkordanzProjekte für Indexierer und Benutzer. Der Austausch über die Erschließungsregeln und die systematische Analyse der Normdaten führen zur Bereinigung von fachlichen Schwachstellen und Inkonsistenzen in den Systemen. Die Thesauri werden insgesamt verbessert und sogar angenähert. Die Vortragsreihe schloss mit einem Projekt, das die Heterogenität der Daten aus dem Blickwinkel der Mehrsprachigkeit betrachtet. Martin Kunz, Deutsche Bibliothek Frankfurt am Main, informierte über das Projekt MACS (Multilingual Access to Subject Headings). MACS bietet einen mehrsprachigen Zugriff auf Bibliothekskataloge. Dazu wurde eine Verbindung zwischen den Schlagwortnormdateien LCSH, RAMEAU und SWD erarbeitet. Äquivalente Vorzugsbezeichnungen der Normdateien werden intellektuell nachgewiesen und als Link abgelegt. Das Projekt beschränkte sich zunächst auf die Bereiche Sport und Theater und widmet sich in einer nächsten Stufe den am häufigsten verwendeten Schlagwörtern. MACS geht davon aus, dass ein Benutzer in der Sprache seiner Wahl (Deutsch, Englisch, Französisch) eine Schlagwortsuche startet, und ermöglicht ihm, seine Suche auf die affilierten Datenbanken im Ausland auszudehnen. Martin Kunz plädierte für einen Integrationsansatz, der auf dem gegenseitigen Respekt vor der Terminologie der kooperierenden Partner beruht. Er sprach sich dafür aus, in solchen Vorhaben den Begriff der Thesaurus föderation anzuwenden, der die Autonomie der Thesauri unterstreicht.
  2. Arch-Int, N.; Sophatsathit, P.: A semantic information gathering approach for heterogeneous information sources on WWW (2003) 0.01
    0.01346599 = product of:
      0.047130965 = sum of:
        0.026586283 = weight(_text_:information in 4694) [ClassicSimilarity], result of:
          0.026586283 = score(doc=4694,freq=6.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.40312737 = fieldWeight in 4694, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4694)
        0.020544684 = product of:
          0.06163405 = sum of:
            0.06163405 = weight(_text_:29 in 4694) [ClassicSimilarity], result of:
              0.06163405 = score(doc=4694,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.46638384 = fieldWeight in 4694, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4694)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Source
    Journal of information science. 29(2003) no.5, S.357-374
  3. Tappenbeck, I.; Wessel, C.: CARMEN : Content Analysis, Retrieval and Metadata: Effective Net-working. Ein Halbzeitbericht (2001) 0.01
    0.01246505 = product of:
      0.043627676 = sum of:
        0.03677945 = weight(_text_:medien in 5900) [ClassicSimilarity], result of:
          0.03677945 = score(doc=5900,freq=2.0), product of:
            0.17681947 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.037568163 = queryNorm
            0.20800565 = fieldWeight in 5900, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.03125 = fieldNorm(doc=5900)
        0.006848227 = product of:
          0.020544682 = sum of:
            0.020544682 = weight(_text_:29 in 5900) [ClassicSimilarity], result of:
              0.020544682 = score(doc=5900,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.15546128 = fieldWeight in 5900, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5900)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    The CARMEN project started in October 1999 as a special funding measure within Global Info, with a planned duration of 29 months. Its focus is the further development of concepts and methods of document indexing that are to enable access to heterogeneous, decentrally distributed information holdings and their management according to common principles. CARMEN deliberately takes a different path from most previous approaches in this area, which try to establish homogeneity and consistency in a decentralized information landscape in a technology-oriented way, by developing mechanisms that allow several physically separate document spaces to be accessed simultaneously. A purely technical parallelization of access is not enough, however, because it does not solve the central problem of the differences in content, structure, and conception among the individual data holdings. To compensate for these differences, CARMEN works out solutions and further developments in three areas: (1) metadata (document description, retrieval, management, archiving); (2) methods for handling the remaining heterogeneity of the data holdings; (3) retrieval for structured documents with metadata and heterogeneous data types. These three task areas are closely connected. The developments in the metadata area are intended, on the one hand, partially to restore the lost consistency and to put it on a footing appropriate to the new media; on the other hand, heterogeneity-handling methods are to relate documents with differing data relevance and content indexing to one another, complemented on the retrieval side by a search procedure that does justice to the different data types. Within the overall CARMEN project these aspects are divided up: eight work packages (APs), coordinated with one another, each pursue different focal points. To support the coordination of the work across the APs, the roughly 40 project staff met on 1 and 2 February 2001 for the "CARMEN middleOfTheRoad Workshop" in Bonn, where the content-related and technical results achieved by the individual APs in the first half of the project term were presented in a total of 17 presentations
  4. Tappenbeck, I.; Wessel, C.: CARMEN : Content Analysis, Retrieval and Metadata: Effective Net-working. Bericht über den middleOfTheRoad Workshop (2001) 0.01
    0.01246505 = product of:
      0.043627676 = sum of:
        0.03677945 = weight(_text_:medien in 5901) [ClassicSimilarity], result of:
          0.03677945 = score(doc=5901,freq=2.0), product of:
            0.17681947 = queryWeight, product of:
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.037568163 = queryNorm
            0.20800565 = fieldWeight in 5901, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.7066307 = idf(docFreq=1085, maxDocs=44218)
              0.03125 = fieldNorm(doc=5901)
        0.006848227 = product of:
          0.020544682 = sum of:
            0.020544682 = weight(_text_:29 in 5901) [ClassicSimilarity], result of:
              0.020544682 = score(doc=5901,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.15546128 = fieldWeight in 5901, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=5901)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    The CARMEN project started in October 1999 as a special funding measure within Global Info, with a planned duration of 29 months. Its focus is the further development of concepts and methods of document indexing that are to enable access to heterogeneous, decentrally distributed information holdings and their management according to common principles. CARMEN deliberately takes a different path from most previous approaches in this area, which try to establish homogeneity and consistency in a decentralized information landscape in a technology-oriented way, by developing mechanisms that allow several physically separate document spaces to be accessed simultaneously. A purely technical parallelization of access is not enough, however, because it does not solve the central problem of the differences in content, structure, and conception among the individual data holdings. To compensate for these differences, CARMEN works out solutions and further developments in three areas: (1) metadata (document description, retrieval, management, archiving); (2) methods for handling the remaining heterogeneity of the data holdings; (3) retrieval for structured documents with metadata and heterogeneous data types. These three task areas are closely connected. The developments in the metadata area are intended, on the one hand, partially to restore the lost consistency and to put it on a footing appropriate to the new media; on the other hand, heterogeneity-handling methods are to relate documents with differing data relevance and content indexing to one another, complemented on the retrieval side by a search procedure that does justice to the different data types. Within the overall CARMEN project these aspects are divided up: eight work packages (APs), coordinated with one another, each pursue different focal points. To support the coordination of the work across the APs, the roughly 40 project staff met on 1 and 2 February 2001 for the "CARMEN middleOfTheRoad Workshop" in Bonn, where the content-related and technical results achieved by the individual APs in the first half of the project term were presented in a total of 17 presentations
  5. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten (2001) 0.01
    0.010202705 = product of:
      0.035709467 = sum of:
        0.015349597 = weight(_text_:information in 4865) [ClassicSimilarity], result of:
          0.015349597 = score(doc=4865,freq=2.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.23274569 = fieldWeight in 4865, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4865)
        0.02035987 = product of:
          0.061079606 = sum of:
            0.061079606 = weight(_text_:22 in 4865) [ClassicSimilarity], result of:
              0.061079606 = score(doc=4865,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.46428138 = fieldWeight in 4865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4865)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    22. 6.2002 19:41:59
    Theme
    Information Gateway
  6. Heery, R.: Information gateways : collaboration and content (2000) 0.01
    0.010161849 = product of:
      0.035566468 = sum of:
        0.023689877 = weight(_text_:information in 4866) [ClassicSimilarity], result of:
          0.023689877 = score(doc=4866,freq=14.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.3592092 = fieldWeight in 4866, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4866)
        0.011876591 = product of:
          0.03562977 = sum of:
            0.03562977 = weight(_text_:22 in 4866) [ClassicSimilarity], result of:
              0.03562977 = score(doc=4866,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.2708308 = fieldWeight in 4866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4866)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    Information subject gateways provide targeted discovery services for their users, giving access to Web resources selected according to quality and subject coverage criteria. Information gateways recognise that they must collaborate on a wide range of issues relating to content to ensure continued success. This report is informed by discussion of content activities at the 1999 Imesh Workshop. The author considers the implications for subject-based gateways of co-operation regarding coverage policy, creation of metadata, and provision of searching and browsing across services. Other possibilities for co-operation include working more closely with information providers, and disclosure of information in joint metadata registries
    Date
    22. 6.2002 19:38:54
    Source
    Online information review. 24(2000) no.1, S.40-45
    Theme
    Information Gateway
  7. Friedrich, M.; Schimkat, R.-D.; Küchlin, W.: Information retrieval in distributed environments based on context-aware, proactive documents (2002) 0.01
    0.009144572 = product of:
      0.032006 = sum of:
        0.0200216 = weight(_text_:information in 3608) [ClassicSimilarity], result of:
          0.0200216 = score(doc=3608,freq=10.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.3035872 = fieldWeight in 3608, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3608)
        0.011984399 = product of:
          0.035953194 = sum of:
            0.035953194 = weight(_text_:29 in 3608) [ClassicSimilarity], result of:
              0.035953194 = score(doc=3608,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.27205724 = fieldWeight in 3608, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3608)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    In this position paper we propose a document-centric middleware component called Living Documents to support context-aware information retrieval in distributed communities. A Living Document acts as a micro server for a document which contains computational services, a semi-structured knowledge repository to uniformly store and access context-related information, and finally the document's digital content. Our initial prototype of Living Documents is based on the concept of mobile agents and implemented in Java and XML.
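     The three parts a Living Document bundles can be pictured as a single object. A minimal sketch of that structure; the class and field names are illustrative assumptions, not taken from the paper's Java implementation:

       from dataclasses import dataclass, field

       @dataclass
       class LivingDocument:
           # Hypothetical rendering of the three parts named in the abstract.
           services: dict = field(default_factory=dict)  # name -> callable computational service
           context: dict = field(default_factory=dict)   # semi-structured, context-related knowledge
           content: bytes = b""                          # the document's digital content

           def invoke(self, name, *args):
               # Micro-server style call: the document answers for itself.
               return self.services[name](*args)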
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak u. A. Nase
  8. Lopatenko, A.; Asserson, A.; Jeffery, K.G.: CERIF - Information retrieval of research information in a distributed heterogeneous environment (2002) 0.01
    0.008306195 = product of:
      0.029071681 = sum of:
        0.01879934 = weight(_text_:information in 3597) [ClassicSimilarity], result of:
          0.01879934 = score(doc=3597,freq=12.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.2850541 = fieldWeight in 3597, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3597)
        0.010272342 = product of:
          0.030817024 = sum of:
            0.030817024 = weight(_text_:29 in 3597) [ClassicSimilarity], result of:
              0.030817024 = score(doc=3597,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.23319192 = fieldWeight in 3597, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3597)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    User demands for access to complete and up-to-date information about research may require the integration of data from different CRISs. CRISs are rarely homogeneous systems, and the problems of CRIS integration must be addressed from a technological point of view. An implementation of a CRIS providing access to heterogeneous data distributed among a number of CRISs is described. Several technologies - distributed databases, web services, and the semantic web - are used in the distributed CRIS to address different user requirements: distributed databases implement very efficient integration of homogeneous systems; web services provide open access to research information; the semantic web addresses the integration of semantically and structurally heterogeneous data sources and provides intelligent data retrieval interfaces. The problems of data completeness in distributed systems are addressed and a CRIS-adequate solution for data completeness is suggested.
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak u. A. Nase
  9. Johnson, E.H.: Objects for distributed heterogeneous information retrieval (2000) 0.01
    0.0068998276 = product of:
      0.024149396 = sum of:
        0.015666116 = weight(_text_:information in 6959) [ClassicSimilarity], result of:
          0.015666116 = score(doc=6959,freq=12.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.23754507 = fieldWeight in 6959, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.0084832795 = product of:
          0.025449837 = sum of:
            0.025449837 = weight(_text_:22 in 6959) [ClassicSimilarity], result of:
              0.025449837 = score(doc=6959,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.19345059 = fieldWeight in 6959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6959)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    The success of the World Wide Web shows that we can access, search, and retrieve information from globally distributed databases. If a database, such as a library catalog, has some sort of Web-based front end, we can type its URL into a Web browser and use its HTML-based forms to search for items in that database. Depending on how well the query conforms to the database content, how the search engine interprets the query, and how the server formats the results into HTML, we might actually find something usable. While the first two issues depend on ourselves and the server, on the Web the latter falls to the mercy of HTML, which we all know as a great destroyer of information because it codes for display but not for content description. When looking at an HTML-formatted display, we must depend on our own interpretation to recognize such entities as author names, titles, and subject identifiers. The Web browser can do nothing but display the information. If we want some other view of the result, such as sorting the records by date (provided it offers such an option to begin with), the server must do it. This makes poor use of the computing power we have at the desktop (or even laptop), which, unless it involves retrieving more records, could easily do the result-set manipulation that we currently send back to the server. Despite having personal computers with immense computational power, as far as information retrieval goes, we still essentially use them as dumb terminals.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : Illinois University at Urbana-Champaign, Graduate School of Library and Information Science
  10. Neuroth, H.; Lepschy, P.: Das EU-Projekt Renardus (2001) 0.01
    0.0067065936 = product of:
      0.023473077 = sum of:
        0.0132931415 = weight(_text_:information in 5589) [ClassicSimilarity], result of:
          0.0132931415 = score(doc=5589,freq=6.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.20156369 = fieldWeight in 5589, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5589)
        0.010179935 = product of:
          0.030539803 = sum of:
            0.030539803 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.030539803 = score(doc=5589,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    The full project name of Renardus is "Academic Subject Gateway Service Europe". Renardus is funded by the European Union under the Fifth Framework Programme, with the key theme "Information Society Technologies", in the second thematic programme "Promoting a User-friendly Information Society". The project runs from January 2000 to June 2002. A total of twelve partners (principal and assistant contractors) from Finland, Denmark, Sweden, the United Kingdom, the Netherlands, France, and Germany take part. The European Union supports the project with EUR 1.7 million; the total costs, including the partners' own contributions, come to EUR 2.3 million. The goal of Renardus is to provide access, via a single interface, to distributed collections of high-quality Internet resources in Europe. This interface is realized by the Renardus broker, which enables cross-searching and cross-browsing across distributed quality-controlled subject gateways. A further goal of Renardus is to evaluate options for metadata sharing and to test or realize them in small experiments between, for example, subject gateways and a national library
    Date
    22. 6.2002 19:32:15
    Theme
    Information Gateway
  11. Kaizik, A.; Gödert, W.; Milanesi, C.: Erfahrungen und Ergebnisse aus der Evaluierung des EU-Projektes EULER im Rahmen des an der FH Köln angesiedelten Projektes EJECT (Evaluation von Subject Gateways des World Wide Web) (2001) 0.01
    0.0060120015 = product of:
      0.021042004 = sum of:
        0.009044836 = weight(_text_:information in 5801) [ClassicSimilarity], result of:
          0.009044836 = score(doc=5801,freq=4.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.13714671 = fieldWeight in 5801, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5801)
        0.011997169 = product of:
          0.035991505 = sum of:
            0.035991505 = weight(_text_:22 in 5801) [ClassicSimilarity], result of:
              0.035991505 = score(doc=5801,freq=4.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.27358043 = fieldWeight in 5801, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5801)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    22. 6.2002 19:42:22
    Source
    Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
    Theme
    Information Gateway
  12. Avrahami, T.T.; Yau, L.; Si, L.; Callan, J.P.: The FedLemur project : Federated search in the real world (2006) 0.01
    0.0060096397 = product of:
      0.021033738 = sum of:
        0.010853804 = weight(_text_:information in 5271) [ClassicSimilarity], result of:
          0.010853804 = score(doc=5271,freq=4.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.16457605 = fieldWeight in 5271, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5271)
        0.010179935 = product of:
          0.030539803 = sum of:
            0.030539803 = weight(_text_:22 in 5271) [ClassicSimilarity], result of:
              0.030539803 = score(doc=5271,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.23214069 = fieldWeight in 5271, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5271)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    Federated search and distributed information retrieval systems provide a single user interface for searching multiple full-text search engines. They have been an active area of research for more than a decade, but in spite of their success as a research topic, they are still rare in operational environments. This article discusses a prototype federated search system developed for the U.S. government's FedStats Web portal, and the issues addressed in adapting research solutions to this operational environment. A series of experiments explore how well prior research results, parameter settings, and heuristics apply in the FedStats environment. The article concludes with a set of lessons learned from this technology transfer effort, including observations about search engine quality in the real world.
    Date
    22. 7.2006 16:02:07
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.347-358
  13. Meiert, M.: Elektronische Publikationen an Hochschulen : Modellierung des elektronischen Publikationsprozesses am Beispiel der Universität Hildesheim (2006) 0.01
    0.0060096397 = product of:
      0.021033738 = sum of:
        0.010853804 = weight(_text_:information in 5974) [ClassicSimilarity], result of:
          0.010853804 = score(doc=5974,freq=4.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.16457605 = fieldWeight in 5974, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5974)
        0.010179935 = product of:
          0.030539803 = sum of:
            0.030539803 = weight(_text_:22 in 5974) [ClassicSimilarity], result of:
              0.030539803 = score(doc=5974,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.23214069 = fieldWeight in 5974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5974)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    1. 9.2006 13:22:15
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
    Theme
    Information Gateway
  14. Teets, M.; Murray, P.: Metasearch authentication and access management (2006) 0.01
    0.005030035 = product of:
      0.017605122 = sum of:
        0.009044836 = weight(_text_:information in 1154) [ClassicSimilarity], result of:
          0.009044836 = score(doc=1154,freq=4.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.13714671 = fieldWeight in 1154, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1154)
        0.008560285 = product of:
          0.025680853 = sum of:
            0.025680853 = weight(_text_:29 in 1154) [ClassicSimilarity], result of:
              0.025680853 = score(doc=1154,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.19432661 = fieldWeight in 1154, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1154)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    Metasearch - also called parallel search, federated search, broadcast search, and cross-database search - has become commonplace in the information community's vocabulary. All speak to a common theme of searching and retrieving from multiple databases, sources, platforms, protocols, and vendors at the point of the user's request. Metasearch services rely on a variety of approaches including open standards (such as NISO's Z39.50 and SRU/SRW), proprietary programming interfaces, and "screen scraping." However, the absence of widely supported standards, best practices, and tools makes the metasearch environment less efficient for the metasearch provider, the content provider, and ultimately the end-user. To spur the development of widely supported standards and best practices, the National Information Standards Organization (NISO) sponsored a Metasearch Initiative in 2003 to enable: * metasearch service providers to offer more effective and responsive services, * content providers to deliver enhanced content and protect their intellectual property, and * libraries to deliver a simple search (a.k.a. "Google") that covers the breadth of their vetted commercial and free resources. The Access Management Task Group was one of three groups chartered by NISO as part of the Metasearch Initiative. The focus of the group was on gathering requirements for metasearch authentication and access needs, inventorying existing processes, developing a series of formal use cases describing the access needs, recommending best practices given today's processes, and recommending and pursuing changes to current solutions to better support metasearch applications. In September 2005, the group issued their final report and recommendation. This article summarizes the group's work and final recommendation.
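     As an illustration of the open-standards route mentioned above, an SRU searchRetrieve query is a plain HTTP GET. A minimal sketch; the endpoint URL is a hypothetical placeholder, while operation, version, query (a CQL expression), and maximumRecords are standard SRU 1.1 request parameters:

       import urllib.parse, urllib.request

       base = "http://sru.example.org/catalog"       # hypothetical SRU endpoint
       params = {
           "operation": "searchRetrieve",            # standard SRU 1.1 parameters
           "version": "1.1",
           "query": 'dc.title = "metasearch"',       # CQL query
           "maximumRecords": "10",
       }
       url = base + "?" + urllib.parse.urlencode(params)
       with urllib.request.urlopen(url) as resp:     # server returns searchRetrieveResponse XML
           print(resp.read(200))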
    Date
    26.12.2011 16:29:10
  15. Kuberek, M.: KOBV: institutionalisiert (2001) 0.00
    0.003424114 = product of:
      0.023968797 = sum of:
        0.023968797 = product of:
          0.07190639 = sum of:
            0.07190639 = weight(_text_:29 in 6511) [ClassicSimilarity], result of:
              0.07190639 = score(doc=6511,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.5441145 = fieldWeight in 6511, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.109375 = fieldNorm(doc=6511)
          0.33333334 = coord(1/3)
      0.14285715 = coord(1/7)
    
    Date
    29. 9.2001 11:33:57
  16. Woldering, B.: Aufbau einer virtuellen europäischen Nationalbibliothek : Von Gabriel zu The European Library (2004) 0.00
    0.0034185029 = product of:
      0.01196476 = sum of:
        0.0051165326 = weight(_text_:information in 4950) [ClassicSimilarity], result of:
          0.0051165326 = score(doc=4950,freq=2.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.0775819 = fieldWeight in 4950, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=4950)
        0.006848227 = product of:
          0.020544682 = sum of:
            0.020544682 = weight(_text_:29 in 4950) [ClassicSimilarity], result of:
              0.020544682 = score(doc=4950,freq=2.0), product of:
                0.13215305 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.037568163 = queryNorm
                0.15546128 = fieldWeight in 4950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4950)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    15. 2.2006 11:25:29
    Theme
    Information Gateway
  17. Callan, J.: Distributed information retrieval (2000) 0.00
    0.0031332234 = product of:
      0.021932563 = sum of:
        0.021932563 = weight(_text_:information in 31) [ClassicSimilarity], result of:
          0.021932563 = score(doc=31,freq=12.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.3325631 = fieldWeight in 31, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=31)
      0.14285715 = coord(1/7)
    
    Abstract
    A multi-database model of distributed information retrieval is presented, in which people are assumed to have access to many searchable text databases. In such an environment, full-text information retrieval consists of discovering database contents, ranking databases by their expected ability to satisfy the query, searching a small number of databases, and merging results returned by different databases. This paper presents algorithms for each task. It also discusses how to reorganize conventional test collections into multi-database testbeds, and evaluation methodologies for multi-database experiments. A broad and diverse group of experimental results is presented to demonstrate that the algorithms are effective, efficient, robust, and scalable
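     The abstract names four tasks: discovering database contents, ranking databases, searching a small number of them, and merging the returned results. A minimal sketch of that pipeline under the assumption that per-database scores must be normalized before merging; the ranking and normalization functions here are simple placeholders, not the paper's actual algorithms:

       def rank_databases(query_terms, databases):
           # Placeholder resource ranking: prefer databases whose term
           # statistics (document frequencies) cover the query terms best.
           return sorted(databases,
                         key=lambda db: -sum(db["df"].get(t, 0) for t in query_terms))

       def federated_search(query_terms, databases, k=3):
           merged = []
           for db in rank_databases(query_terms, databases)[:k]:  # search a small number
               for doc_id, score in db["search"](query_terms):
                   lo, hi = db["min_score"], db["max_score"]
                   norm = (score - lo) / (hi - lo) if hi > lo else 0.0  # placeholder min-max
                   merged.append((doc_id, norm))
           return sorted(merged, key=lambda hit: -hit[1])          # merge by normalized score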
    Series
    The Kluwer international series on information retrieval; 7
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
  18. López Vargas, M.A.: "Ilmenauer Verteiltes Information REtrieval System" (IVIRES) : eine neue Architektur zur Informationsfilterung in einem verteilten Information Retrieval System (2002) 0.00
    0.003101087 = product of:
      0.021707607 = sum of:
        0.021707607 = weight(_text_:information in 4041) [ClassicSimilarity], result of:
          0.021707607 = score(doc=4041,freq=4.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.3291521 = fieldWeight in 4041, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4041)
      0.14285715 = coord(1/7)
    
  19. Subject retrieval in a networked environment : Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC (2003) 0.00
    0.003036909 = product of:
      0.010629181 = sum of:
        0.0072358693 = weight(_text_:information in 3964) [ClassicSimilarity], result of:
          0.0072358693 = score(doc=3964,freq=16.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.10971737 = fieldWeight in 3964, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=3964)
        0.0033933115 = product of:
          0.010179934 = sum of:
            0.010179934 = weight(_text_:22 in 3964) [ClassicSimilarity], result of:
              0.010179934 = score(doc=3964,freq=2.0), product of:
                0.1315573 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.037568163 = queryNorm
                0.07738023 = fieldWeight in 3964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3964)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Content
    Contains the contributions: Devadason, F.J., N. Intaraksa u. P. Patamawongjariya u.a.: Faceted indexing application for organizing and accessing internet resources; Nicholson, D., S. Wake: HILT: subject retrieval in a distributed environment; Olson, T.: Integrating LCSH and MeSH in information systems; Kuhr, P.S.: Putting the world back together: mapping multiple vocabularies into a single thesaurus; Freyre, E., M. Naudi: MACS : subject access across languages and networks; McIlwaine, I.C.: The UDC and the World Wide Web; Garrison, W.A.: The Colorado Digitization Project: subject access issues; Vizine-Goetz, D., R. Thompson: Towards DDC-classified displays of Netfirst search results: subject access issues; Godby, C.J., J. Stuler: The Library of Congress Classification as a knowledge base for automatic subject categorization: subject access issues; O'Neill, E.T., E. Childress u. R. Dean u.a.: FAST: faceted application of subject terminology; Bean, C.A., R. Green: Improving subject retrieval with frame representation; Zeng, M.L., Y. Chen: Features of an integrated thesaurus management and search system for the networked environment; Hudon, M.: Subject access to Web resources in education; Qin, J., J. Chen: A multi-layered, multi-dimensional representation of digital educational resources; Riesthuis, G.J.A.: Information languages and multilingual subject access; Geisselmann, F.: Access methods in a database of e-journals; Beghtol, C.: The Iter Bibliography: International standard subject access to medieval and renaissance materials (400-1700); Slavic, A.: General library classification in learning material metadata: the application in IMS/LOM and CDMES metadata schemas; Cordeiro, M.I.: From library authority control to network authoritative metadata sources; Koch, T., H. Neuroth u. M. Day: Renardus: Cross-browsing European subject gateways via a common classification system (DDC); Olson, H.A., D.B. Ward: Mundane standards, everyday technologies, equitable access; Burke, M.A.: Personal Construct Theory as a research tool in Library and Information Science: case study: development of a user-driven classification of photographs
    Footnote
    Rez. in: KO 31(2004) no.2, S.117-118 (D. Campbell): "This excellent volume offers 22 papers delivered at an IFLA Satellite meeting in Dublin, Ohio, in 2001. The conference gathered together information and computer scientists to discuss an important and difficult question: in what specific ways can the accumulated skills, theories and traditions of librarianship be mobilized to face the challenges of providing subject access to information in present and future networked information environments? The papers which grapple with this question are organized in a surprisingly deft and coherent way. Many conferences and proceedings have unhappy sessions that contain a hodge-podge of papers that didn't quite fit any other categories. As befits a good classificationist, editor I.C. McIlwaine has kept this problem to a minimum. The papers are organized into eight sessions, which split into two broad categories. The first five sessions deal with subject domains, and the last three deal with subject access tools. The five sessions and thirteen papers that discuss access in different domains appear in order of increasing intension. The first papers deal with access in multilingual environments, followed by papers on access across multiple vocabularies and across sectors, ending up with studies of domain-specific retrieval (primarily education). Some of the papers offer predictably strong work by scholars engaged in ongoing, long-term research. Gerard Riesthuis offers a clear analysis of the complexities of negotiating non-identical thesauri, particularly in cases where hierarchical structure varies across different languages. Hope Olson and Dennis Ward use Olson's familiar and welcome method of using provocative and unconventional theory to generate meliorative approaches to bias in general subject access schemes. Many papers, on the other hand, deal with specific ongoing projects: Renardus, The High Level Thesaurus Project, The Colorado Digitization Project and The Iter Bibliography for medieval and Renaissance material. Most of these papers display a similar structure: an explanation of the theory and purpose of the project, an account of problems encountered in the implementation, and a discussion of the results, both promising and disappointing, thus far. Of these papers, the account of the Multilanguage Access to Subjects Project in Europe (MACS) deserves special mention. In describing how the project is founded on the principle of the equality of languages, with each subject heading language maintained in its own database, and with no single language used as a pivot for the others, Elisabeth Freyre and Max Naudi offer a particularly vivid example of the way the ethics of librarianship translate into pragmatic contexts and concrete procedures. The three sessions and nine papers devoted to subject access tools split into two kinds: papers that discuss the use of theory and research to generate new tools for a networked environment, and those that discuss the transformation of traditional subject access tools in this environment. In the new tool development area, Mary Burke provides a promising example of the bidirectional approach that is so often necessary: in her case study of user-driven classification of photographs, she uses personal construct theory to clarify the practice of classification, while at the same time using practice to test the theory.
    Carol Bean and Rebecca Green offer an intriguing combination of librarianship and computer science, importing frame representation techniques from artificial intelligence to standardize syntagmatic relationships and so enhance recall and precision.
    The papers discussing the transformation of traditional tools locate the point of transformation in different places. Some, like the papers on DDC, LCC and UDC, suggest that these schemes can be imported into the networked environment and used as a basis for improving access to networked resources, just as they improve access to physical resources. While many of these papers are intriguing, I suspect that convincing those outside the profession will be difficult. In particular, Edward O'Neill and his colleagues, while offering a fascinating suggestion for preserving the Library of Congress Subject Headings and their associated infrastructure by converting them into a faceted scheme, will have an uphill battle convincing the unconverted that LCSH has a place in the online networked environment. Two papers deserve mention for taking a different approach: both Francis Devadason and Maria Ines Cordeiro suggest that we import concepts and techniques rather than realized schemes. Devadason argues for the creation of a faceted pre-coordinate indexing scheme for Internet resources based on Deep Structure indexing, which originates in Bhattacharyya's Postulate-Based Permuted Subject Indexing and in Ranganathan's chain indexing techniques. Cordeiro takes up the vitally important role of authority control in Web environments, suggesting that the techniques of authority control be expanded to enhance user flexibility. By focusing her argument on the concepts rather than on the existing tools, and by making useful and important distinctions between library and non-library uses of authority control, Cordeiro suggests that librarianship's contribution to networked access has less to do with its tools and infrastructure, and more to do with concepts that need to be boldly reinvented. The excellence of this collection derives in part from the energy, insight and diversity of the papers. Credit also goes to the planning and forethought that went into the conference itself by OCLC, the IFLA Classification and Indexing Section, the IFLA Information Technology Section, and the Program Committee, headed by editor I.C. McIlwaine. This collection avoids many of the problems of conference proceedings, and instead offers the best of such proceedings: detail, diversity, and judicious mixtures of theory and practice. Some of the disadvantages that plague conference proceedings appear here. Busy scholars sometimes interpret the concept of "camera-ready copy" creatively, offering diagrams that could have used some streamlining, and label boxes that cut off the tops or bottoms of letters. The papers are necessarily short, and many of them raise issues that deserve more extensive treatment. The issue of subject access in networked environments is crying out for further synthesis at the conceptual and theoretical level. But no synthesis can afford to ignore the kind of energetic, imaginative and important work that the papers in these proceedings represent."
  20. Croft, W.B.: Combining approaches to information retrieval (2000) 0.00
    0.0026856202 = product of:
      0.01879934 = sum of:
        0.01879934 = weight(_text_:information in 6862) [ClassicSimilarity], result of:
          0.01879934 = score(doc=6862,freq=12.0), product of:
            0.06595008 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.037568163 = queryNorm
            0.2850541 = fieldWeight in 6862, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
      0.14285715 = coord(1/7)
    
    Abstract
    The combination of different text representations and search strategies has become a standard technique for improving the effectiveness of information retrieval. Combination, for example, has been studied extensively in the TREC evaluations and is the basis of the "meta-search" engines used on the Web. This paper examines the development of this technique, including both experimental results and the retrieval models that have been proposed as formal frameworks for combination. We show that combining approaches for information retrieval can be modeled as combining the outputs of multiple classifiers based on one or more representations, and that this simple model can provide explanations for many of the experimental results. We also show that this view of combination is very similar to the inference net model, and that a new approach to retrieval based on language models supports combination and can be integrated with the inference net model
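     One standard concrete instance of such combination is CombSUM, which merges several ranked lists by summing normalized per-document scores; it is one baseline from this literature, not the paper's full framework. A minimal sketch:

       def comb_sum(runs):
           # runs: list of dicts mapping doc_id -> raw score from one system.
           combined = {}
           for run in runs:
               lo, hi = min(run.values()), max(run.values())
               for doc, s in run.items():
                   norm = (s - lo) / (hi - lo) if hi > lo else 0.0  # min-max normalize
                   combined[doc] = combined.get(doc, 0.0) + norm    # sum across runs
           return sorted(combined.items(), key=lambda kv: -kv[1])

       runs = [{"d1": 12.0, "d2": 7.5}, {"d1": 0.8, "d3": 0.6}]
       print(comb_sum(runs))   # d1 ranks first: it is retrieved highly by both runs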
    Series
    The Kluwer international series on information retrieval; 7
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft

Languages

  • e 31
  • d 25

Types

  • a 50
  • el 7
  • m 3
  • x 3
  • r 1
  • s 1

Classifications