Search (24 results, page 1 of 2)

  • theme_ss:"Verteilte bibliographische Datenbanken"
  1. Neuroth, H.: Suche in verteilten "Quality-controlled Subject Gateways" : Entwicklung eines Metadatenprofils (2002) 0.03
    0.025447162 = product of:
      0.050894324 = sum of:
        0.050894324 = product of:
          0.10178865 = sum of:
            0.10178865 = weight(_text_:core in 2522) [ClassicSimilarity], result of:
              0.10178865 = score(doc=2522,freq=4.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.39457005 = fieldWeight in 2522, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2522)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
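The indented tree above is Lucene's "explain" output for the ClassicSimilarity (TF-IDF) ranking: the term weight is queryWeight x fieldWeight, where queryWeight = idf x queryNorm and fieldWeight = sqrt(tf) x idf x fieldNorm, and the two coord(1/2) factors halve the result because only one of two query clauses matched. A minimal sketch that reproduces the arithmetic of this first hit, using only the numbers shown in the tree:

```python
import math

# Values copied from the explain tree for doc 2522 (query term "core")
doc_freq, max_docs = 769, 44218
idf = math.log(max_docs / (doc_freq + 1)) + 1   # ~5.0504966, ClassicSimilarity idf
query_norm = 0.051078856
field_norm = 0.0390625
freq = 4.0

tf = math.sqrt(freq)                      # 2.0 = tf(freq=4.0)
query_weight = idf * query_norm           # ~0.25797358
field_weight = tf * idf * field_norm      # ~0.39457005
term_score = query_weight * field_weight  # ~0.10178865 = weight(_text_:core in 2522)

final_score = term_score * 0.5 * 0.5      # two coord(1/2) factors
print(round(final_score, 9))              # ~0.025447162, displayed rounded as 0.03
```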
    
    Abstract
     The rapid development of the Internet and the World Wide Web (WWW) since about 1996 has fundamentally changed how scholarly information is published, distributed, and used. How to make this information searchable and retrievable has been discussed intensively at the international level in recent years. One promising approach to these new challenges lies in the development of metadata profiles. Since the Internet makes it possible to search, under a single interface, collections held by different sectors such as museums, libraries, and archives, metadata can also help to develop a uniform concept for describing and retrieving online resources. To offer these distributed documents under one interface for high-quality retrieval ("cross-search"), the participants must agree on a core set of metadata, followed by mapping processes ("crosswalks") from the local metadata formats to the format of that core set. The aim of the article is to set out the individual steps needed to develop a metadata profile for joint searching across distributed metadata collections.
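The core-set-plus-crosswalk idea sketched in this abstract boils down to a field mapping: each gateway keeps its local metadata format and only declares how its fields map onto the agreed core set. A minimal sketch under that assumption; the local field names, the core element names and the sample record are invented for illustration and are not taken from the article:

```python
# Hypothetical crosswalk from one gateway's local metadata format
# to an agreed, Dublin-Core-like core set (all names invented).
CROSSWALK = {
    "verfasser": "creator",
    "titel": "title",
    "jahr": "date",
    "schlagwoerter": "subject",
    "url": "identifier",
}

def to_core_set(local_record: dict) -> dict:
    """Map a local record onto the core set; unmapped fields are dropped."""
    core: dict[str, list[str]] = {}
    for local_field, value in local_record.items():
        core_field = CROSSWALK.get(local_field)
        if core_field:
            core.setdefault(core_field, []).append(value)
    return core

# A record from an (invented) subject gateway, ready for cross-searching
print(to_core_set({
    "verfasser": "Neuroth, H.",
    "titel": "Suche in verteilten Quality-controlled Subject Gateways",
    "jahr": "2002",
}))
```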
  2. Becker, H.-J.; Hengel, C.; Neuroth, H.; Weiß, B.; Wessel, C.: ¬Die Virtuelle Fachbibliothek als Schnittstelle für eine fachübergreifende Suche in den einzelnen Virtuellen Fachbibliotheken : Definition eines Metadaten-Kernsets (VLib Application Profile) (2002) 0.03
    0.025447162 = product of:
      0.050894324 = sum of:
        0.050894324 = product of:
          0.10178865 = sum of:
            0.10178865 = weight(_text_:core in 2856) [ClassicSimilarity], result of:
              0.10178865 = score(doc=2856,freq=4.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.39457005 = fieldWeight in 2856, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2856)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     This contribution is not about one particular Virtuelle Fachbibliothek (virtual subject library) but about the overarching topic of metadata and the question of how metadata can support a cross-disciplinary search across all Virtuelle Fachbibliotheken. As part of building the Virtuelle Fachbibliotheken, the project coordination set up working subgroups to deal with specific problems. The "metadata" work area was taken over by the DFG-funded project META-LIB (Metadaten-Initiative deutscher Bibliotheken), with subprojects at Die Deutsche Bibliothek and the SUB Göttingen. META-LIB was given the task of developing "recommendations for the definition of a metadata core set for distributed searching across the Virtuelle Fachbibliotheken". These recommendations are presented below. They are based on the results and evaluation of the responses to an Internet questionnaire that asked which data elements are used or needed for indexing in the individual Virtuelle Fachbibliotheken. To formulate and agree the recommendations, two metadata workshops were held (on 16 May 2001 at the SUB Göttingen and on 9/10 August 2001 at Die Deutsche Bibliothek in Frankfurt am Main), whose results and discussions serve as the basis.
    Object
    Dublin Core
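An application profile of the kind defined here essentially records which elements a service takes from existing schemas (Dublin Core plus local additions) and under which constraints they are used. A minimal sketch of such a declaration; the element list, the "vlib" namespace and the constraints are invented for illustration and do not reproduce the actual VLib Application Profile:

```python
# Hypothetical, much simplified application profile: for each element,
# the namespace it is borrowed from and whether it is mandatory/repeatable.
VLIB_PROFILE = {
    "title":      {"namespace": "dc",   "mandatory": True,  "repeatable": False},
    "creator":    {"namespace": "dc",   "mandatory": False, "repeatable": True},
    "subject":    {"namespace": "dc",   "mandatory": False, "repeatable": True},
    "identifier": {"namespace": "dc",   "mandatory": True,  "repeatable": False},
    "collection": {"namespace": "vlib", "mandatory": True,  "repeatable": False},
}

def validate(record: dict) -> list[str]:
    """Return the profile violations found in a record."""
    errors = [f"missing mandatory element: {name}"
              for name, rule in VLIB_PROFILE.items()
              if rule["mandatory"] and name not in record]
    errors += [f"element not in profile: {name}"
               for name in record if name not in VLIB_PROFILE]
    return errors

print(validate({"title": "Testressource", "collection": "ViFa Beispiel"}))
```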
  3. Becker, H.J.; Neuroth, H.: Crosssearchen und crossbrowsen von "Quality-controlled Subject Gateways" im EU-Projekt Renardus (2002) 0.02
    0.021592634 = product of:
      0.043185268 = sum of:
        0.043185268 = product of:
          0.086370535 = sum of:
            0.086370535 = weight(_text_:core in 630) [ClassicSimilarity], result of:
              0.086370535 = score(doc=630,freq=2.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.3348038 = fieldWeight in 630, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.046875 = fieldNorm(doc=630)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The Renardus project, funded by the European Union since January 2000, aims to build a service for using the quality-controlled subject gateways that exist in Europe, i.e. to offer cross-searching and cross-browsing through a single access point and interface. For cross-browsing, the Dewey Decimal Classification (DDC) is used for navigation. The article describes the individual development steps and presents the necessary mapping processes in detail. These are, on the one hand, mappings from the local metadata formats of the individual subject gateways to the common core set of metadata used for searching in Renardus, a core set based on the Dublin Core Metadata Set; on the other hand, they concern the creation of concordances between the local classes of the partners' classification systems and the DDC classes used for browsing. The article also describes new underlying definitions and theoretical concepts currently under discussion in the metadata community (e.g. application profile, namespace, registry). Finally, the functionality of the Renardus service (searching, browsing) is presented in more detail.
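The second kind of mapping described above, concordances between the partners' local classification systems and the DDC used for cross-browsing, can be pictured as a lookup table from local notations to DDC classes. A minimal sketch; the local notations are invented, only the DDC captions are real top-level classes, and nothing here is taken from the Renardus concordances themselves:

```python
# Hypothetical concordance from a partner's local classification to DDC.
CONCORDANCE = {
    "MAT 05": ["510"],         # invented local class -> DDC 510 Mathematics
    "INF 10": ["004", "020"],  # one local class may map to several DDC classes
}

DDC_CAPTIONS = {
    "004": "Data processing; computer science",
    "020": "Library and information sciences",
    "510": "Mathematics",
}

def browse_targets(local_class: str) -> list[str]:
    """DDC classes (with captions) reached when cross-browsing from a local class."""
    return [f"{ddc} {DDC_CAPTIONS.get(ddc, '')}".strip()
            for ddc in CONCORDANCE.get(local_class, [])]

print(browse_targets("INF 10"))
```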
  4. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten (2001) 0.02
    0.02076144 = product of:
      0.04152288 = sum of:
        0.04152288 = product of:
          0.08304576 = sum of:
            0.08304576 = weight(_text_:22 in 4865) [ClassicSimilarity], result of:
              0.08304576 = score(doc=4865,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.46428138 = fieldWeight in 4865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4865)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:41:59
  5. Wei, W.: SOAP als Basis für verteilte, heterogene virtuelle OPACs (2002) 0.02
    0.01869977 = product of:
      0.03739954 = sum of:
        0.03739954 = product of:
          0.07479908 = sum of:
            0.07479908 = weight(_text_:core in 4097) [ClassicSimilarity], result of:
              0.07479908 = score(doc=4097,freq=6.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.2899486 = fieldWeight in 4097, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.0234375 = fieldNorm(doc=4097)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
     Overview of the chapters: Chapter 1, Simple Object Access Protocol (SOAP), first examines the background to the development of SOAP. A short account of the evolution from distributed applications to Web services shows that existing standards such as CORBA, DCOM and RMI cannot meet the demands of a highly heterogeneous environment like the Internet. To overcome this shortcoming of the existing solutions, SOAP was developed with the goal of supporting platform-independent message exchange. The concept of the Web service, with which SOAP is closely connected, is then introduced, and the possibility of using SOAP in library systems is discussed. Finally, SOAP is examined from various angles, such as SOAP and XML, the SOAP message, and error handling. Chapter 3, The library extended by the Internet, describes the relationship between the Internet and the library from two perspectives, distributed searching and metadata. The part on distributed searching mainly presents the Z39.50 protocol, with which distributed library systems have been implemented so far. The part on metadata first deals with the significance of metadata for the library and for the Internet, then discusses the existing problems with metadata and possible solutions. Finally, several metadata standards are examined, with Dublin Core as the focus, because Dublin Core is currently the standard for the Internet and is therefore also important for Internet-related library applications. Chapter 4, Developing a distributed library system using SOAP, describes the practical project. First, the goal and functionality are defined: a distributed library system is to be built using SOAP, and it should allow distributed searching across several remote library databases. The chapter then describes the steps in which the system was designed and implemented. With the first system one can search only a single database, while with the second system one can search two databases in parallel. Dublin Core is used as the metadata standard throughout the system. The software packages and standard software technologies used in the system are introduced, and it is shown how the individual technical components work together. Finally, the development of the individual program modules and the communication between them is described.
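The parallel search over two databases described for the second system amounts to posting the same SOAP request to several endpoints and merging the Dublin Core records that come back. A minimal sketch of that fan-out; the endpoint URLs, the "urn:example:library" namespace and the "search" operation are invented and do not reproduce the thesis's actual interface:

```python
import concurrent.futures
import urllib.request

# Hypothetical SOAP endpoints of two remote library databases
ENDPOINTS = ["http://opac1.example.org/soap", "http://opac2.example.org/soap"]

def soap_search(endpoint: str, query: str) -> str:
    """POST a SOAP 1.1 envelope with an invented 'search' operation."""
    envelope = f"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <search xmlns="urn:example:library">
      <query>{query}</query>
    </search>
  </soap:Body>
</soap:Envelope>"""
    req = urllib.request.Request(endpoint, data=envelope.encode("utf-8"),
                                 headers={"Content-Type": "text/xml; charset=utf-8"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8")   # XML carrying Dublin Core records

# Fan the same query out to both databases in parallel and collect the raw replies
with concurrent.futures.ThreadPoolExecutor() as pool:
    replies = list(pool.map(lambda url: soap_search(url, "verteilte Suche"), ENDPOINTS))
```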
  6. Strötgen, R.; Kokkelink, S.: Metadatenextraktion aus Internetquellen : Heterogenitätsbehandlung im Projekt CARMEN (2001) 0.02
    0.01799386 = product of:
      0.03598772 = sum of:
        0.03598772 = product of:
          0.07197544 = sum of:
            0.07197544 = weight(_text_:core in 5808) [ClassicSimilarity], result of:
              0.07197544 = score(doc=5808,freq=2.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.27900314 = fieldWeight in 5808, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5808)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
     The special funding measure CARMEN (Content Analysis, Retrieval and Metadata: Effective Networking), part of the BMB+F-funded programme GLOBAL INFO, aims to create suitable information systems for the distributed holdings of libraries, specialised information centres and the Internet in today's decentralised information landscape. Bringing these holdings together is problematic less in technical terms than in terms of content and concepts. Heterogeneity arises, for example, when different collections use different thesauri or classifications for subject indexing, when metadata are recorded differently or not at all, or when intellectually indexed sources meet Internet documents that are, as a rule, completely unindexed. The CARMEN project tackles this problem with several methods: deductive-heuristic procedures generate metadata automatically from documents, statistical-quantitative methods map the differing uses of terms in the various collections onto one another, and intellectually created cross-concordances provide reliable transitions from one indexing language to another. For the extraction of metadata according to Dublin Core (above all author, title, institution, abstract, keywords), heuristics are developed from typical documents (dissertations from Math-Net in PostScript format and a wide variety of HTML files from the WWW servers of German social science institutions). The probability that the metadata obtained in this way are correct and trustworthy is attached to the individual data as weights. The heuristics are implemented iteratively in an extraction tool, tested and improved in order to increase the reliability of the procedures. First prototypes of such transfer modules are currently being built at the University of Osnabrück and at the InformationsZentrum Sozialwissenschaften in Bonn, using mathematical and social science collections.
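The weighted heuristics for Dublin Core extraction can be thought of as a list of patterns, each attaching a confidence weight to whatever it finds in a document. A minimal sketch for HTML input; the patterns and weights are invented for illustration and are far simpler than the rules developed in CARMEN:

```python
import re

# Each heuristic: (Dublin Core element, pattern, confidence weight) - illustrative only.
HEURISTICS = [
    ("title",   re.compile(r"<title>(.*?)</title>", re.I | re.S), 0.9),
    ("creator", re.compile(r'<meta\s+name="author"\s+content="(.*?)"', re.I), 0.8),
    ("title",   re.compile(r"<h1[^>]*>(.*?)</h1>", re.I | re.S), 0.5),
]

def extract_dc(html: str) -> dict:
    """Return {element: (value, weight)}, keeping the highest-weighted hit per element."""
    found: dict[str, tuple[str, float]] = {}
    for element, pattern, weight in HEURISTICS:
        match = pattern.search(html)
        if match and (element not in found or weight > found[element][1]):
            found[element] = (match.group(1).strip(), weight)
    return found

print(extract_dc('<html><head><title>Metadatenextraktion aus Internetquellen</title>'
                 '<meta name="author" content="Strötgen, R."></head></html>'))
```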
  7. SRW/U erleichtert verteilte Datenbankrecherchen (2005) 0.02
    0.01799386 = product of:
      0.03598772 = sum of:
        0.03598772 = product of:
          0.07197544 = sum of:
            0.07197544 = weight(_text_:core in 3972) [ClassicSimilarity], result of:
              0.07197544 = score(doc=3972,freq=2.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.27900314 = fieldWeight in 3972, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3972)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Content
    "Seit zwei Jahrzehnten nutzen vor allem Bibliotheksverbünde das Protokoll Z39.50, um ihren Benutzern im Internet die simultane Abfrage mehrerer Datenbanken zu ermöglichen. Jetzt gibt es einen Nachfolger dieses Protokolls, der eine einfachere Implementierung verspricht. Damit ist auch eine größere Verbreitung für die Suche in verteilten Datenbeständen anderer Institutionen, wie z.B. Archiven und Museen, wahrscheinlich. SRW/U (Search and Retrieve Web Service bzw. Search and Retrieve URL Service, www.loc.90v/z3950/agency/zing/srw) wurde von einer an der Library of Congress angesiedelten Initiative entwickelt und beruht auf etablierten Standards wie URI und XML. Die mit SRW und SRU möglichen Abfragen und Ergebnisse unterscheiden sich nur in der Art der Übertragung, verwenden aber beide dieselben Prozeduren. Davon gibt es nur drei: explain, scan und searchRetrieve. Die beiden Erstgenannten dienen dazu, allgemeine Informationen über den Datenanbieter bzw. die verfügbaren Indexe zubekommen. Das Herzstück ist die search-Retrieve-Anweisung. Damit werden Anfragen direkt an die Datenbank gesendet und die Parameter des Suchergebnisses definiert. Verwendet wird dafür die Retrievalsprache CQL (Common Query Language), die simple Freitextsuchen, aber auch mit Boolschen Operatoren verknüpfte Recherchen ermöglicht. Bei SRU werden die Suchbefehle mittels einfacher HTTP GET -Anfragen übermittelt, die Ergebnisse in XML zurückgeliefert. Zur Strukturierung der Daten dienen z.B. Dublin Core, MARC oder EAD. Welches Format von der jeweiligen Datenbank bereitgestellt wird, kann durch die explain-Anweisung ermittelt gebracht werden."
  8. Veen, T. van; Oldroyd, B.: Search and retrieval in The European Library : a new approach (2004) 0.02
    0.01799386 = product of:
      0.03598772 = sum of:
        0.03598772 = product of:
          0.07197544 = sum of:
            0.07197544 = weight(_text_:core in 1164) [ClassicSimilarity], result of:
              0.07197544 = score(doc=1164,freq=2.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.27900314 = fieldWeight in 1164, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=1164)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The objective of the European Library (TEL) project [TEL] was to set up a co-operative framework and specify a system for integrated access to the major collections of the European national libraries. This has been achieved by successfully applying a new approach for search and retrieval via URLs (SRU) [ZiNG] combined with a new metadata paradigm. One aim of the TEL approach is to have a low barrier of entry into TEL, and this has driven our choice for the technical solution described here. The solution comprises portal and client functionality running completely in the browser, resulting in a low implementation barrier and maximum scalability, as well as giving users control over the search interface and what collections to search. In this article we will describe, step by step, the development of both the search and retrieval architecture and the metadata infrastructure in the European Library project. We will show that SRU is a good alternative to the Z39.50 protocol and can be implemented without losing investments in current Z39.50 implementations. The metadata model being used by TEL is a Dublin Core Application Profile, and we have taken into account that functional requirements will change over time and therefore the metadata model will need to be able to evolve in a controlled way. We make this possible by means of a central metadata registry containing all characteristics of the metadata in TEL. Finally, we provide two scenarios to show how the TEL concept can be developed and extended, with applications capable of increasing their functionality by "learning" new metadata or protocol options.
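TEL's low barrier of entry relies on clients being able to discover each collection's capabilities at runtime; in SRU that is the job of the explain operation. A minimal sketch of such a discovery step; the endpoint URL is invented, and the element names are read loosely rather than against the full explain record schema:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical SRU endpoint of a national library participating in a TEL-like portal
EXPLAIN_URL = "http://sru.example-library.eu/sru?operation=explain&version=1.1"

with urllib.request.urlopen(EXPLAIN_URL, timeout=10) as resp:
    tree = ET.parse(resp)

# Collect the names of the searchable indexes announced in the explain record,
# ignoring namespace prefixes rather than handling them explicitly.
index_names = [name.text
               for index in tree.iter() if index.tag.endswith("index")
               for name in index.iter() if name.tag.endswith("name")]
print(sorted(set(n for n in index_names if n)))
```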
  9. Zia, L.L.: new projects and a progress report : ¬The NSF National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) program (2001) 0.02
    0.017813014 = product of:
      0.035626028 = sum of:
        0.035626028 = product of:
          0.071252055 = sum of:
            0.071252055 = weight(_text_:core in 1227) [ClassicSimilarity], result of:
              0.071252055 = score(doc=1227,freq=4.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.27619904 = fieldWeight in 1227, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1227)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The National Science Foundation's (NSF) National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) program comprises a set of projects engaged in a collective effort to build a national digital library of high quality science, technology, engineering, and mathematics (STEM) educational materials for students and teachers at all levels, in both formal and informal settings. By providing broad access to a rich, reliable, and authoritative collection of interactive learning and teaching resources and associated services in a digital environment, the NSDL will encourage and sustain continual improvements in the quality of STEM education for all students, and serve as a resource for lifelong learning. Though the program is relatively new, its vision and operational framework have been developed over a number of years through various workshops and planning meetings. The NSDL program held its first formal funding cycle during fiscal year 2000 (FY00), accepting proposals in four tracks: Core Integration System, Collections, Services, and Targeted Research. Twenty-nine awards were made across these tracks in September 2000. Brief descriptions of each FY00 project appeared in an October 2000 D-Lib Magazine article; full abstracts are available from the Awards Section at <http://www.ehr.nsf.gov/ehr/due/programs/nsdl/>. In FY01 the program received one hundred-nine proposals across its four tracks with the number of proposals in the collections, services, and targeted research tracks increasing to one hundred-one from the eighty received in FY00. In September 2001 grants were awarded to support 35 new projects: 1 project in the core integration track, 18 projects in the collections track, 13 in the services track, and 3 in targeted research. Two NSF directorates, the Directorate for Geosciences (GEO) and the Directorate for Mathematical and Physical Sciences (MPS) are both providing significant co-funding on several projects, illustrating the NSDL program's facilitation of the integration of research and education, an important strategic objective of the NSF. Thus far across both fiscal years of the program fifteen projects have enjoyed this joint support. Following is a list of the FY01 awards indicating the official NSF award number (each beginning with DUE), the project title, the grantee institution, and the name of the Principal Investigator (PI). A condensed description of the project is also included. Full abstracts are available from the Awards Section at the NSDL program site at <http://www.ehr.nsf.gov/ehr/due/programs/nsdl/>. (Grants with shared titles are formal collaborations and are grouped together.) The projects are displayed by track and are listed by award number. In addition, six of these projects have explicit relevance and application to K-12 education. Six others clearly have potential for application to the K-12 arena. The NSDL program will have another funding cycle in fiscal year 2002 with the next program solicitation expected to be available in January 2002, and an anticipated deadline for proposals in mid-April 2002.
  10. Roszkowski, M.; Lukas, C.: ¬A distributed architecture for resource discovery using metadata (1998) 0.01
    0.014395089 = product of:
      0.028790178 = sum of:
        0.028790178 = product of:
          0.057580356 = sum of:
            0.057580356 = weight(_text_:core in 1256) [ClassicSimilarity], result of:
              0.057580356 = score(doc=1256,freq=2.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.22320253 = fieldWeight in 1256, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1256)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    This article describes an approach for linking geographically distributed collections of metadata so that they are searchable as a single collection. We describe the infrastructure, which uses standard Internet protocols such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP), to distribute queries, return results, and exchange index information. We discuss the advantages of using linked collections of authoritative metadata as an alternative to using a keyword indexing search-engine for resource discovery. We examine other architectures that use metadata for resource discovery, such as Dienst/NCSTRL, the AHDS HTTP/Z39.50 Gateway, and the ROADS initiative. Finally, we discuss research issues and future directions of the project. The Internet Scout Project, which is funded by the National Science Foundation and is located in the Computer Sciences Department at the University of Wisconsin-Madison, is charged with assisting the higher education community in resource discovery on the Internet. To that end, the Scout Report and subsequent subject-specific Scout Reports were developed to guide the U.S. higher education community to research-quality resources. The Scout Report Signpost utilizes the content from the Scout Reports as the basis of a metadata collection. Signpost consists of more than 2000 cataloged Internet sites using established standards such as Library of Congress subject headings and abbreviated call letters, and emerging standards such as the Dublin Core (DC). This searchable and browseable collection is free and freely accessible, as are all of the Internet Scout Project's services.
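In the architecture just described, the linked collections are held in LDAP directories and queried over standard protocols. A minimal sketch of such a lookup with the ldap3 library; the server address, base DN, filter and attribute names are invented and do not reproduce the Scout Report Signpost setup:

```python
from ldap3 import Server, Connection, ALL, SUBTREE

# Hypothetical directory holding Dublin-Core-style metadata records
server = Server("ldap://metadata.example.edu", get_info=ALL)
conn = Connection(server, auto_bind=True)   # anonymous bind to a public index

# Search an invented subtree for records whose title mentions "metadata"
conn.search(search_base="ou=records,dc=example,dc=edu",
            search_filter="(title=*metadata*)",
            search_scope=SUBTREE,
            attributes=["title", "creator", "identifier"])

for entry in conn.entries:
    print(entry.entry_dn, entry.title)
```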
  11. Dupuis, P.; Lapointe, J.: Developpement d'un outil documentaire à Hydro-Quebec : le Thesaurus HQ (1997) 0.01
    0.01384096 = product of:
      0.02768192 = sum of:
        0.02768192 = product of:
          0.05536384 = sum of:
            0.05536384 = weight(_text_:22 in 3173) [ClassicSimilarity], result of:
              0.05536384 = score(doc=3173,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.30952093 = fieldWeight in 3173, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3173)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Argus. 26(1997) no.3, S.16-22
  12. Dempsey, L.; Russell, R.; Kirriemur, J.W.: Towards distributed library systems : Z39.50 in a European context (1996) 0.01
    0.01384096 = product of:
      0.02768192 = sum of:
        0.02768192 = product of:
          0.05536384 = sum of:
            0.05536384 = weight(_text_:22 in 127) [ClassicSimilarity], result of:
              0.05536384 = score(doc=127,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.30952093 = fieldWeight in 127, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=127)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Program. 30(1996) no.1, S.1-22
  13. Ashton, J.: ONE: the final OPAC frontier (1998) 0.01
    0.01384096 = product of:
      0.02768192 = sum of:
        0.02768192 = product of:
          0.05536384 = sum of:
            0.05536384 = weight(_text_:22 in 2588) [ClassicSimilarity], result of:
              0.05536384 = score(doc=2588,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.30952093 = fieldWeight in 2588, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2588)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Source
    Select newsletter. 1998, no.22, Spring, S.5-6
  14. Lunau, C.D.: Z39.50: a critical component of the Canadian resource sharing infrastructure : implementation activities and results achieved (1997) 0.01
    0.01384096 = product of:
      0.02768192 = sum of:
        0.02768192 = product of:
          0.05536384 = sum of:
            0.05536384 = weight(_text_:22 in 3193) [ClassicSimilarity], result of:
              0.05536384 = score(doc=3193,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.30952093 = fieldWeight in 3193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3193)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    3. 3.1999 17:22:57
  15. Burrows, T.: ¬The virtual catalogue : bibliographic access for the virtual library (1993) 0.01
    0.01384096 = product of:
      0.02768192 = sum of:
        0.02768192 = product of:
          0.05536384 = sum of:
            0.05536384 = weight(_text_:22 in 5286) [ClassicSimilarity], result of:
              0.05536384 = score(doc=5286,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.30952093 = fieldWeight in 5286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5286)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    8.10.2000 14:47:22
  16. Kochtanek, T.R.; Matthews, J.R.: Library information systems : from library automation to distributed information systems (2002) 0.01
    0.012723581 = product of:
      0.025447162 = sum of:
        0.025447162 = product of:
          0.050894324 = sum of:
            0.050894324 = weight(_text_:core in 1792) [ClassicSimilarity], result of:
              0.050894324 = score(doc=1792,freq=4.0), product of:
                0.25797358 = queryWeight, product of:
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.051078856 = queryNorm
                0.19728503 = fieldWeight in 1792, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.0504966 = idf(docFreq=769, maxDocs=44218)
                  0.01953125 = fieldNorm(doc=1792)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Specifically designed for core units in library automation and information systems, this long awaited new text gives students a comprehensive overview of one of the most critical areas of library operations. Produced by two internationally known scholars, Thomas Kochtanek and Joseph Matthews, this book will enable students to take the lead in managing an immense diversity of information resources and at the same time handle the complexities that information technology brings to the library. Giving important insight into library information systems-from the historical background to the latest technological trends and developments-the book is organized into 14 chapters, each presenting helpful information on such topics as systems design, types of systems, coverage of standards and standards organizations, technology axioms, system selection and implementation, usability of systems, library information systems management, technology trends, digital libraries, and more. New to the acclaimed Library and Information Science Text Series, this book will prove an indispensable resource to students preparing for a career in today's ever-evolving library environment. Complete with charts and illustrations, chapter summaries, suggested print and electronic resources, a glossary of terms, and an index, this text will be of central importance to libraries and library schools everywhere.
    Footnote
     Rez. in: JASIST 54(2003) no.12, S.1166-1167 (Brenda Chawner): "Kochtanek and Matthews have written a welcome addition to the small set of introductory texts on applications of information technology to library and information services. The book has fourteen chapters grouped into four sections: "The Broader Context," "The Technologies," "Management Issues," and "Future Considerations." Two chapters provide the broad context, with the first giving a historical overview of the development and adoption of "library information systems." Kochtanek and Matthews define this as "a wide array of solutions that previously might have been considered separate industries with distinctly different marketplaces" (p. 3), referring specifically to integrated library systems (ILS, often called library management systems in this part of the world) and online databases, plus the more recent developments of Web-based resources, digital libraries, ebooks, and ejournals. They characterize technology adoption patterns in libraries as ranging from "bleeding edge" to "leading edge" to "in the wedge" to "trailing edge" - a catchy restatement of the adopter categories from Rogers' diffusion of innovation theory, where they are more conventionally known as "early adopters," "early majority," "late majority," and "laggards." This chapter concludes with a look at more general technology trends that have affected library applications, including developments in hardware (moving from mainframes to minicomputers to personal computers), changes in software development (from in-house to packages), and developments in communications technology (from dedicated host computers to more open networks to the current distributed environment found with the Internet). This is followed by a chapter describing the ILS and online database industries in some detail. "The Technologies" begins with a chapter on the structure and functionality of integrated library systems, which also includes a brief discussion of precision versus recall, managing access to internal documents, indexing and searching, and catalogue maintenance. This is followed by a chapter on open systems, which concludes with a useful list of questions to consider to determine an organization's readiness to adopt open source solutions. As one would expect, this section also includes a detailed chapter on telecommunications and networking, covering types of networks, transmission media, network topologies, and switching techniques (ranging from dial-up and leased lines to ISDN/DSL, frame relay, and ATM). It concludes with a chapter on the role and importance of standards, which covers the need for standards and standards organizations and gives examples of different types of standards, such as MARC, Dublin Core, Z39.50, and markup standards such as SGML, HTML, and XML. Unicode is also covered, but only briefly. This section would be strengthened by a chapter on hardware concepts; the authors assume that their reader is already familiar with these, which may not be true in all cases (for example, the phrase "client-server" is first used on page 11, but only given a brief definition in the glossary). Burke's Library Technology Companion: A Basic Guide for Library Staff (New York: Neal-Schuman, 2001) might be useful to fill this gap at an introductory level, and Saffady's Introduction to Automation for Librarians, 4th ed. (Chicago: American Library Association, 1999) would be better for those interested in more detail.
 The final two sections, however, are the book's real strength, with a strong focus on management issues, and this content distinguishes it from other books on this topic such as Ferguson and Hebels' Computers for Librarians: An Introduction to Systems and Applications (Wagga Wagga, NSW: Centre for Information Studies, Charles Sturt University, 1998). ...
  17. Kaizik, A.; Gödert, W.; Milanesi, C.: Erfahrungen und Ergebnisse aus der Evaluierung des EU-Projektes EULER im Rahmen des an der FH Köln angesiedelten Projektes EJECT (Evaluation von Subject Gateways des World Wide Web) (2001) 0.01
    0.012233796 = product of:
      0.024467591 = sum of:
        0.024467591 = product of:
          0.048935182 = sum of:
            0.048935182 = weight(_text_:22 in 5801) [ClassicSimilarity], result of:
              0.048935182 = score(doc=5801,freq=4.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.27358043 = fieldWeight in 5801, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5801)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:42:22
  18. Heery, R.: Information gateways : collaboration and content (2000) 0.01
    0.01211084 = product of:
      0.02422168 = sum of:
        0.02422168 = product of:
          0.04844336 = sum of:
            0.04844336 = weight(_text_:22 in 4866) [ClassicSimilarity], result of:
              0.04844336 = score(doc=4866,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.2708308 = fieldWeight in 4866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4866)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:38:54
  19. Neuroth, H.; Lepschy, P.: ¬Das EU-Projekt Renardus (2001) 0.01
    0.01038072 = product of:
      0.02076144 = sum of:
        0.02076144 = product of:
          0.04152288 = sum of:
            0.04152288 = weight(_text_:22 in 5589) [ClassicSimilarity], result of:
              0.04152288 = score(doc=5589,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.23214069 = fieldWeight in 5589, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5589)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 6.2002 19:32:15
  20. Avrahami, T.T.; Yau, L.; Si, L.; Callan, J.P.: ¬The FedLemur project : Federated search in the real world (2006) 0.01
    0.01038072 = product of:
      0.02076144 = sum of:
        0.02076144 = product of:
          0.04152288 = sum of:
            0.04152288 = weight(_text_:22 in 5271) [ClassicSimilarity], result of:
              0.04152288 = score(doc=5271,freq=2.0), product of:
                0.17886946 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.051078856 = queryNorm
                0.23214069 = fieldWeight in 5271, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5271)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 16:02:07