Search (43 results, page 1 of 3)

  • theme_ss:"Verteilte bibliographische Datenbanken"
  1. Avrahami, T.T.; Yau, L.; Si, L.; Callan, J.P.: ¬The FedLemur project : Federated search in the real world (2006) 0.19
    0.19306254 = product of:
      0.25741673 = sum of:
        0.03490599 = weight(_text_:web in 5271) [ClassicSimilarity], result of:
          0.03490599 = score(doc=5271,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 5271, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5271)
        0.08853068 = weight(_text_:search in 5271) [ClassicSimilarity], result of:
          0.08853068 = score(doc=5271,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.51520574 = fieldWeight in 5271, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=5271)
        0.13398007 = sum of:
          0.09378988 = weight(_text_:engine in 5271) [ClassicSimilarity], result of:
            0.09378988 = score(doc=5271,freq=2.0), product of:
              0.26447627 = queryWeight, product of:
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.049439456 = queryNorm
              0.35462496 = fieldWeight in 5271, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.046875 = fieldNorm(doc=5271)
          0.04019018 = weight(_text_:22 in 5271) [ClassicSimilarity], result of:
            0.04019018 = score(doc=5271,freq=2.0), product of:
              0.17312855 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049439456 = queryNorm
              0.23214069 = fieldWeight in 5271, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.046875 = fieldNorm(doc=5271)
      0.75 = coord(3/4)
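    The tree above is Lucene's ClassicSimilarity explain output. As a reading aid, the following minimal Python sketch recomputes the first term weight (for _text_:web) from the numbers shown; the constants are copied verbatim from the explanation, and the idf comment reflects Lucene's 1 + ln(maxDocs/(docFreq+1)) formula.

      import math

      # Constants copied from the explain tree for _text_:web in doc 5271.
      freq       = 2.0          # termFreq
      idf        = 3.2635105    # idf = 1 + ln(maxDocs / (docFreq + 1)) = 1 + ln(44218 / 4598)
      query_norm = 0.049439456  # queryNorm
      field_norm = 0.046875     # fieldNorm(doc=5271)

      tf = math.sqrt(freq)                      # ClassicSimilarity: tf = sqrt(freq) = 1.4142135
      query_weight = idf * query_norm           # 0.16134618
      field_weight = tf * idf * field_norm      # 0.21634221
      term_score = query_weight * field_weight  # 0.03490599

      # The document score is the sum of the matching term weights times coord(3/4).
      doc_score = (term_score + 0.08853068 + 0.13398007) * 0.75  # ~0.19306254
      print(round(term_score, 8), round(doc_score, 8))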
    
    Abstract
    Federated search and distributed information retrieval systems provide a single user interface for searching multiple full-text search engines. They have been an active area of research for more than a decade, but in spite of their success as a research topic, they are still rare in operational environments. This article discusses a prototype federated search system developed for the U.S. government's FedStats Web portal, and the issues addressed in adapting research solutions to this operational environment. A series of experiments explore how well prior research results, parameter settings, and heuristics apply in the FedStats environment. The article concludes with a set of lessons learned from this technology transfer effort, including observations about search engine quality in the real world.
    Date
    22. 7.2006 16:02:07
  2. Johnson, E.H.: Objects for distributed heterogeneous information retrieval (2000) 0.18
    0.17537987 = product of:
      0.23383984 = sum of:
        0.06504348 = weight(_text_:web in 6959) [ClassicSimilarity], result of:
          0.06504348 = score(doc=6959,freq=10.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.40312994 = fieldWeight in 6959, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.057146307 = weight(_text_:search in 6959) [ClassicSimilarity], result of:
          0.057146307 = score(doc=6959,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.33256388 = fieldWeight in 6959, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.11165006 = sum of:
          0.07815824 = weight(_text_:engine in 6959) [ClassicSimilarity], result of:
            0.07815824 = score(doc=6959,freq=2.0), product of:
              0.26447627 = queryWeight, product of:
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.049439456 = queryNorm
              0.29552078 = fieldWeight in 6959, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                5.349498 = idf(docFreq=570, maxDocs=44218)
                0.0390625 = fieldNorm(doc=6959)
          0.03349182 = weight(_text_:22 in 6959) [ClassicSimilarity], result of:
            0.03349182 = score(doc=6959,freq=2.0), product of:
              0.17312855 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.049439456 = queryNorm
              0.19345059 = fieldWeight in 6959, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=6959)
      0.75 = coord(3/4)
    
    Abstract
    The success of the World Wide Web shows that we can access, search, and retrieve information from globally distributed databases. If a database, such as a library catalog, has some sort of Web-based front end, we can type its URL into a Web browser and use its HTML-based forms to search for items in that database. Depending on how well the query conforms to the database content, how the search engine interprets the query, and how the server formats the results into HTML, we might actually find something usable. While the first two issues depend on ourselves and the server, on the Web the latter falls to the mercy of HTML, which we all know as a great destroyer of information because it codes for display but not for content description. When looking at an HTML-formatted display, we must depend on our own interpretation to recognize such entities as author names, titles, and subject identifiers. The Web browser can do nothing but display the information. If we want some other view of the result, such as sorting the records by date (provided it offers such an option to begin with), the server must do it. This makes poor use of the computing power we have at the desktop (or even laptop), which, unless it involves retrieving more records, could easily do the result set manipulation that we currently send back to the server. Despite having personal computers with immense computational power, as far as information retrieval goes, we still essentially use them as dumb terminals.
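    A small illustrative sketch of the point about desktop result-set manipulation: once records arrive as structured data rather than display-only HTML, re-sorting them by date is trivial on the client. The records below are invented for illustration.

      # Hypothetical records received as structured data instead of HTML.
      records = [
          {"author": "Smith, J.", "title": "Union catalogues revisited", "year": 1996},
          {"author": "Lee, K.",   "title": "Z39.50 in practice",         "year": 1994},
          {"author": "Brown, A.", "title": "Distributed search",         "year": 1998},
      ]

      # The local workstation can re-sort without another round trip to the server.
      for r in sorted(records, key=lambda r: r["year"], reverse=True):
          print(r["year"], r["author"], r["title"])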
    Date
    22. 9.1997 19:16:05
  3. Stark, T.: ¬The Net and Z39.50 : toward a virtual union catalog (1997) 0.05
    0.053023666 = product of:
      0.10604733 = sum of:
        0.04072366 = weight(_text_:web in 3194) [ClassicSimilarity], result of:
          0.04072366 = score(doc=3194,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 3194, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3194)
        0.06532367 = weight(_text_:search in 3194) [ClassicSimilarity], result of:
          0.06532367 = score(doc=3194,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.38015217 = fieldWeight in 3194, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3194)
      0.5 = coord(2/4)
    
    Abstract
    The State Library of Iowa, USA, received a Higher Education Act Title II grant from the US Dept. of Education in 1994 to create a demonstration project of new library information technologies. Describes 2 interlinked components of the project: Web-based union catalogue development and statewide deployment of the ANSI/NISO Z39.50 standard for database search and retrieval. Z39.50 was chosen because of its ability to search multiple remote databases in a single session and its common interface across a variety of implementations. Use of a distributed Z39.50 search makes maintaining large union catalogues unnecessary.
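    A virtual union catalogue of this kind boils down to sending one query to several Z39.50 targets and merging the hits. A minimal sketch using the PyZ3950 library; host names, ports, and database names are placeholders.

      from PyZ3950 import zoom  # classic Python Z39.50/ZOOM client

      # Placeholder targets; a real deployment would list the participating OPACs.
      targets = [
          ("opac1.example.org", 210, "Default"),
          ("opac2.example.org", 210, "Default"),
      ]

      query = zoom.Query("CCL", 'ti="distributed databases"')

      for host, port, dbname in targets:
          conn = zoom.Connection(host, port)
          conn.databaseName = dbname
          conn.syntax = "USMARC"
          hits = conn.search(query)   # same query against each remote catalogue
          print(host, len(hits), "hits")
          conn.close()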
  4. Croft, W.B.: Combining approaches to information retrieval (2000) 0.05
    0.045448855 = product of:
      0.09089771 = sum of:
        0.03490599 = weight(_text_:web in 6862) [ClassicSimilarity], result of:
          0.03490599 = score(doc=6862,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.21634221 = fieldWeight in 6862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
        0.055991717 = weight(_text_:search in 6862) [ClassicSimilarity], result of:
          0.055991717 = score(doc=6862,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.3258447 = fieldWeight in 6862, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
      0.5 = coord(2/4)
    
    Abstract
    The combination of different text representations and search strategies has become a standard technique for improving the effectiveness of information retrieval. Combination, for example, has been studied extensively in the TREC evaluations and is the basis of the "meta-search" engines used on the Web. This paper examines the development of this technique, including both experimental results and the retrieval models that have been proposed as formal frameworks for combination. We show that combining approaches for information retrieval can be modeled as combining the outputs of multiple classifiers based on one or more representations, and that this simple model can provide explanations for many of the experimental results. We also show that this view of combination is very similar to the inference net model, and that a new approach to retrieval based on language models supports combination and can be integrated with the inference net model
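    The combination idea can be made concrete with the classic CombSUM style of evidence fusion used by meta-search engines: normalize each run's scores and sum them per document. This is a generic sketch, not the specific model developed in the paper.

      def comb_sum(runs):
          """Fuse several result lists; each run is a {doc_id: score} dict."""
          fused = {}
          for run in runs:
              if not run:
                  continue
              lo, hi = min(run.values()), max(run.values())
              for doc, score in run.items():
                  norm = (score - lo) / (hi - lo) if hi > lo else 1.0
                  fused[doc] = fused.get(doc, 0.0) + norm  # CombSUM: sum of normalized scores
          return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

      # Two hypothetical engines scoring the same collection on different scales.
      run_a = {"d1": 12.0, "d2": 7.5, "d3": 3.1}
      run_b = {"d2": 0.91, "d4": 0.55, "d1": 0.40}
      print(comb_sum([run_a, run_b]))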
  5. SRW/U erleichtert verteilte Datenbankrecherchen (2005) 0.04
    0.043117315 = product of:
      0.08623463 = sum of:
        0.029088326 = weight(_text_:web in 3972) [ClassicSimilarity], result of:
          0.029088326 = score(doc=3972,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 3972, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3972)
        0.057146307 = weight(_text_:search in 3972) [ClassicSimilarity], result of:
          0.057146307 = score(doc=3972,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.33256388 = fieldWeight in 3972, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3972)
      0.5 = coord(2/4)
    
    Content
    "Seit zwei Jahrzehnten nutzen vor allem Bibliotheksverbünde das Protokoll Z39.50, um ihren Benutzern im Internet die simultane Abfrage mehrerer Datenbanken zu ermöglichen. Jetzt gibt es einen Nachfolger dieses Protokolls, der eine einfachere Implementierung verspricht. Damit ist auch eine größere Verbreitung für die Suche in verteilten Datenbeständen anderer Institutionen, wie z.B. Archiven und Museen, wahrscheinlich. SRW/U (Search and Retrieve Web Service bzw. Search and Retrieve URL Service, www.loc.90v/z3950/agency/zing/srw) wurde von einer an der Library of Congress angesiedelten Initiative entwickelt und beruht auf etablierten Standards wie URI und XML. Die mit SRW und SRU möglichen Abfragen und Ergebnisse unterscheiden sich nur in der Art der Übertragung, verwenden aber beide dieselben Prozeduren. Davon gibt es nur drei: explain, scan und searchRetrieve. Die beiden Erstgenannten dienen dazu, allgemeine Informationen über den Datenanbieter bzw. die verfügbaren Indexe zubekommen. Das Herzstück ist die search-Retrieve-Anweisung. Damit werden Anfragen direkt an die Datenbank gesendet und die Parameter des Suchergebnisses definiert. Verwendet wird dafür die Retrievalsprache CQL (Common Query Language), die simple Freitextsuchen, aber auch mit Boolschen Operatoren verknüpfte Recherchen ermöglicht. Bei SRU werden die Suchbefehle mittels einfacher HTTP GET -Anfragen übermittelt, die Ergebnisse in XML zurückgeliefert. Zur Strukturierung der Daten dienen z.B. Dublin Core, MARC oder EAD. Welches Format von der jeweiligen Datenbank bereitgestellt wird, kann durch die explain-Anweisung ermittelt gebracht werden."
  6. Ashton, J.: ONE: the final OPAC frontier (1998) 0.04
    0.039791476 = product of:
      0.07958295 = sum of:
        0.052789498 = weight(_text_:search in 2588) [ClassicSimilarity], result of:
          0.052789498 = score(doc=2588,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 2588, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=2588)
        0.026793454 = product of:
          0.053586908 = sum of:
            0.053586908 = weight(_text_:22 in 2588) [ClassicSimilarity], result of:
              0.053586908 = score(doc=2588,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.30952093 = fieldWeight in 2588, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2588)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Describes the European Commission's OPAC Network in Europe (ONE) project, which attempts to make it simpler to search a number of major European OPACs across all frontiers via an online interface. Explains how this is done, the British Library's involvement in it, an assessment of the project, and plans for the future.
    Source
    Select newsletter. 1998, no.22, Spring, S.5-6
  7. Burrows, T.: ¬The virtual catalogue : bibliographic access for the virtual library (1993) 0.04
    0.039791476 = product of:
      0.07958295 = sum of:
        0.052789498 = weight(_text_:search in 5286) [ClassicSimilarity], result of:
          0.052789498 = score(doc=5286,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 5286, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=5286)
        0.026793454 = product of:
          0.053586908 = sum of:
            0.053586908 = weight(_text_:22 in 5286) [ClassicSimilarity], result of:
              0.053586908 = score(doc=5286,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.30952093 = fieldWeight in 5286, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=5286)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Proposes a new model for bibliographic access, the virtual catalogue, to serve the virtual library. Suggests the use of current software and networks to build links between bibliographic databases of all kinds, including full text, to enable the user to search a specified subset of databases. Suggests that local data be limited to holdings information linked to, but separate from, bibliographic databases both local and remote
    Date
    8.10.2000 14:47:22
  8. Roszkowski, M.; Lukas, C.: ¬A distributed architecture for resource discovery using metadata (1998) 0.03
    0.03429555 = product of:
      0.0685911 = sum of:
        0.03732781 = weight(_text_:search in 1256) [ClassicSimilarity], result of:
          0.03732781 = score(doc=1256,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.21722981 = fieldWeight in 1256, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=1256)
        0.031263296 = product of:
          0.06252659 = sum of:
            0.06252659 = weight(_text_:engine in 1256) [ClassicSimilarity], result of:
              0.06252659 = score(doc=1256,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23641664 = fieldWeight in 1256, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1256)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article describes an approach for linking geographically distributed collections of metadata so that they are searchable as a single collection. We describe the infrastructure, which uses standard Internet protocols such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP), to distribute queries, return results, and exchange index information. We discuss the advantages of using linked collections of authoritative metadata as an alternative to using a keyword indexing search-engine for resource discovery. We examine other architectures that use metadata for resource discovery, such as Dienst/NCSTRL, the AHDS HTTP/Z39.50 Gateway, and the ROADS initiative. Finally, we discuss research issues and future directions of the project. The Internet Scout Project, which is funded by the National Science Foundation and is located in the Computer Sciences Department at the University of Wisconsin-Madison, is charged with assisting the higher education community in resource discovery on the Internet. To that end, the Scout Report and subsequent subject-specific Scout Reports were developed to guide the U.S. higher education community to research-quality resources. The Scout Report Signpost utilizes the content from the Scout Reports as the basis of a metadata collection. Signpost consists of more than 2000 cataloged Internet sites using established standards such as Library of Congress subject headings and abbreviated call letters, and emerging standards such as the Dublin Core (DC). This searchable and browseable collection is free and freely accessible, as are all of the Internet Scout Project's services.
    As well developed as both the Scout Reports and Signpost are, they cannot capture the wealth of high-quality content that is available on the Internet. An obvious next step toward increasing the usefulness of our own collection and its value to our customer base is to partner with other high-quality content providers who have developed similar collections and to develop a single, virtual collection. Project Isaac (working title) is the Internet Scout Project's latest resource discovery effort. Project Isaac involves the development of a research testbed that allows experimentation with protocols and algorithms for creating, maintaining, indexing and searching distributed collections of metadata. Project Isaac's infrastructure uses standard Internet protocols, such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP) to distribute queries, return results, and exchange index or centroid information. The overall goal is to support a single-search interface to geographically distributed and independently maintained metadata collections.
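    As a rough sketch of the fan-out idea behind the infrastructure described above, the same search can be sent to several LDAP-hosted metadata collections and the entries merged locally. Host names, base DN, and attribute names are hypothetical; the ldap3 package is used for the client calls.

      from ldap3 import Server, Connection, ALL

      # Hypothetical directories holding linked metadata collections.
      hosts = ["ldap://metadata1.example.edu", "ldap://metadata2.example.edu"]
      base_dn = "ou=records,dc=example,dc=edu"
      search_filter = "(&(objectClass=document)(subject=water quality))"  # invented schema

      merged = []
      for host in hosts:
          conn = Connection(Server(host, get_info=ALL), auto_bind=True)  # anonymous bind
          conn.search(base_dn, search_filter, attributes=["title", "identifier"])
          merged.extend(conn.entries)  # collect hits from each collection
          conn.unbind()

      print(len(merged), "records from", len(hosts), "collections")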
  9. Subject retrieval in a networked environment : Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC (2003) 0.03
    0.03413644 = product of:
      0.045515254 = sum of:
        0.020152984 = weight(_text_:web in 3964) [ClassicSimilarity], result of:
          0.020152984 = score(doc=3964,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.12490524 = fieldWeight in 3964, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.015625 = fieldNorm(doc=3964)
        0.018663906 = weight(_text_:search in 3964) [ClassicSimilarity], result of:
          0.018663906 = score(doc=3964,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.10861491 = fieldWeight in 3964, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.015625 = fieldNorm(doc=3964)
        0.0066983635 = product of:
          0.013396727 = sum of:
            0.013396727 = weight(_text_:22 in 3964) [ClassicSimilarity], result of:
              0.013396727 = score(doc=3964,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.07738023 = fieldWeight in 3964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3964)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Content
    Includes the contributions: Devadason, F.J., N. Intaraksa and P. Patamawongjariya et al.: Faceted indexing application for organizing and accessing internet resources; Nicholson, D., S. Wake: HILT: subject retrieval in a distributed environment; Olson, T.: Integrating LCSH and MeSH in information systems; Kuhr, P.S.: Putting the world back together: mapping multiple vocabularies into a single thesaurus; Freyre, E., M. Naudi: MACS : subject access across languages and networks; McIlwaine, I.C.: The UDC and the World Wide Web; Garrison, W.A.: The Colorado Digitization Project: subject access issues; Vizine-Goetz, D., R. Thompson: Towards DDC-classified displays of Netfirst search results: subject access issues; Godby, C.J., J. Stuler: The Library of Congress Classification as a knowledge base for automatic subject categorization: subject access issues; O'Neill, E.T., E. Childress and R. Dean et al.: FAST: faceted application of subject terminology; Bean, C.A., R. Green: Improving subject retrieval with frame representation; Zeng, M.L., Y. Chen: Features of an integrated thesaurus management and search system for the networked environment; Hudon, M.: Subject access to Web resources in education; Qin, J., J. Chen: A multi-layered, multi-dimensional representation of digital educational resources; Riesthuis, G.J.A.: Information languages and multilingual subject access; Geisselmann, F.: Access methods in a database of e-journals; Beghtol, C.: The Iter Bibliography: International standard subject access to medieval and renaissance materials (400-1700); Slavic, A.: General library classification in learning material metadata: the application in IMS/LOM and CDMES metadata schemas; Cordeiro, M.I.: From library authority control to network authoritative metadata sources; Koch, T., H. Neuroth and M. Day: Renardus: Cross-browsing European subject gateways via a common classification system (DDC); Olson, H.A., D.B. Ward: Mundane standards, everyday technologies, equitable access; Burke, M.A.: Personal Construct Theory as a research tool in Library and Information Science: case study: development of a user-driven classification of photographs
    Footnote
    Rez. in: KO 31(2004) no.2, S.117-118 (D. Campbell): "This excellent volume offers 22 papers delivered at an IFLA Satellite meeting in Dublin, Ohio in 2001. The conference gathered together information and computer scientists to discuss an important and difficult question: in what specific ways can the accumulated skills, theories and traditions of librarianship be mobilized to face the challenges of providing subject access to information in present and future networked information environments? The papers which grapple with this question are organized in a surprisingly deft and coherent way. Many conferences and proceedings have unhappy sessions that contain a hodge-podge of papers that didn't quite fit any other categories. As befits a good classificationist, editor I.C. McIlwaine has kept this problem to a minimum. The papers are organized into eight sessions, which split into two broad categories. The first five sessions deal with subject domains, and the last three deal with subject access tools. The five sessions and thirteen papers that discuss access in different domains appear in order of increasing intension. The first papers deal with access in multilingual environments, followed by papers on access across multiple vocabularies and across sectors, ending up with studies of domain-specific retrieval (primarily education). Some of the papers offer predictably strong work by scholars engaged in ongoing, long-term research. Gerard Riesthuis offers a clear analysis of the complexities of negotiating non-identical thesauri, particularly in cases where hierarchical structure varies across different languages. Hope Olson and Dennis Ward use Olson's familiar and welcome method of using provocative and unconventional theory to generate meliorative approaches to bias in general subject access schemes. Many papers, on the other hand, deal with specific ongoing projects: Renardus, The High Level Thesaurus Project, The Colorado Digitization Project and The Iter Bibliography for medieval and Renaissance material. Most of these papers display a similar structure: an explanation of the theory and purpose of the project, an account of problems encountered in the implementation, and a discussion of the results, both promising and disappointing, thus far. Of these papers, the account of the Multilanguage Access to Subjects Project in Europe (MACS) deserves special mention. In describing how the project is founded on the principle of the equality of languages, with each subject heading language maintained in its own database, and with no single language used as a pivot for the others, Elisabeth Freyre and Max Naudi offer a particularly vivid example of the way the ethics of librarianship translate into pragmatic contexts and concrete procedures. The three sessions and nine papers devoted to subject access tools split into two kinds: papers that discuss the use of theory and research to generate new tools for a networked environment, and those that discuss the transformation of traditional subject access tools in this environment. In the new tool development area, Mary Burke provides a promising example of the bidirectional approach that is so often necessary: in her case study of user-driven classification of photographs, she uses personal construct theory to clarify the practice of classification, while at the same time using practice to test the theory. Carol Bean and Rebecca Green offer an intriguing combination of librarianship and computer science, importing frame representation technique from artificial intelligence to standardize syntagmatic relationships to enhance recall and precision.
    The papers discussing the transformation of traditional tools locate the point of transformation in different places. Some, like the papers on DDC, LCC and UDC, suggest that these schemes can be imported into the networked environment and used as a basis for improving access to networked resources, just as they improve access to physical resources. While many of these papers are intriguing, I suspect that convincing those outside the profession will be difficult. In particular, Edward O'Neill and his colleagues, while offering a fascinating suggestion for preserving the Library of Congress Subject Headings and their associated infrastructure by converting them into a faceted scheme, will have an uphill battle convincing the unconverted that LCSH has a place in the online networked environment. Two papers deserve mention for taking a different approach: both Francis Devadason and Maria Ines Cordeiro suggest that we import concepts and techniques rather than realized schemes. Devadason argues for the creation of a faceted pre-coordinate indexing scheme for Internet resources based on Deep Structure indexing, which originates in Bhattacharyya's Postulate-Based Permuted Subject Indexing and in Ranganathan's chain indexing techniques. Cordeiro takes up the vitally important role of authority control in Web environments, suggesting that the techniques of authority control be expanded to enhance user flexibility. By focusing her argument on the concepts rather than on the existing tools, and by making useful and important distinctions between library and non-library uses of authority control, Cordeiro suggests that librarianship's contribution to networked access has less to do with its tools and infrastructure, and more to do with concepts that need to be boldly reinvented. The excellence of this collection derives in part from the energy, insight and diversity of the papers. Credit also goes to the planning and forethought that went into the conference itself by OCLC, the IFLA Classification and Indexing Section, the IFLA Information Technology Section, and the Program Committee, headed by editor I.C. McIlwaine. This collection avoids many of the problems of conference proceedings, and instead offers the best of such proceedings: detail, diversity, and judicious mixtures of theory and practice. Some of the disadvantages that plague conference proceedings appear here. Busy scholars sometimes interpret the concept of "camera-ready copy" creatively, offering diagrams that could have used some streamlining, and label boxes that cut off the tops or bottoms of letters. The papers are necessarily short, and many of them raise issues that deserve more extensive treatment. The issue of subject access in networked environments is crying out for further synthesis at the conceptual and theoretical level. But no synthesis can afford to ignore the kind of energetic, imaginative and important work that the papers in these proceedings represent."
  10. Heery, R.: Information gateways : collaboration and content (2000) 0.03
    0.032083966 = product of:
      0.06416793 = sum of:
        0.04072366 = weight(_text_:web in 4866) [ClassicSimilarity], result of:
          0.04072366 = score(doc=4866,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 4866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4866)
        0.023444273 = product of:
          0.046888545 = sum of:
            0.046888545 = weight(_text_:22 in 4866) [ClassicSimilarity], result of:
              0.046888545 = score(doc=4866,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.2708308 = fieldWeight in 4866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4866)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Information subject gateways provide targeted discovery services for their users, giving access to Web resources selected according to quality and subject coverage criteria. Information gateways recognise that they must collaborate on a wide range of issues relating to content to ensure continued success. This report is informed by discussion of content activities at the 1999 Imesh Workshop. The author considers the implications for subject-based gateways of co-operation regarding coverage policy, creation of metadata, and provision of searching and browsing across services. Other possibilities for co-operation include working more closely with information providers, and disclosure of information in joint metadata registries.
    Date
    22. 6.2002 19:38:54
  11. Neuroth, H.: Suche in verteilten "Quality-controlled Subject Gateways" : Entwicklung eines Metadatenprofils (2002) 0.03
    0.03104088 = product of:
      0.06208176 = sum of:
        0.029088326 = weight(_text_:web in 2522) [ClassicSimilarity], result of:
          0.029088326 = score(doc=2522,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 2522, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2522)
        0.032993436 = weight(_text_:search in 2522) [ClassicSimilarity], result of:
          0.032993436 = score(doc=2522,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 2522, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2522)
      0.5 = coord(2/4)
    
    Abstract
    The rapid development of the Internet and the World Wide Web (WWW) since about 1996 has fundamentally changed how scientific information is published, disseminated, and used. How to make this information searchable and retrievable has been discussed extensively at the international level in recent years. One promising approach to these new challenges lies in the development of metadata profiles. Since the Internet makes it possible to search, under a single interface, data collections held by different sectors such as museums, libraries, and archives, metadata can also help in this area to develop a unified concept for describing and retrieving online resources. To offer the distributed documents under one interface for a high-quality search ("cross-search"), the parties must agree on a core set of metadata and then define mapping processes ("cross-walks") from the local metadata formats to the format of that core set. The aim of the article is to show the individual steps required to develop a metadata profile for a common search across distributed metadata collections.
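    The core-set and cross-walk steps described in the abstract amount to mapping each provider's local field names onto a shared profile before cross-searching. A minimal sketch with invented field names and a Dublin-Core-like core set:

      # Hypothetical local records from two providers with different field names.
      library_record = {"Titel": "Verteilte Kataloge", "Verfasser": "Neuroth, H.", "Jahr": "2002"}
      museum_record = {"object_name": "Herbarium sheet", "creator": "Unknown", "dated": "1870"}

      # Cross-walks from each local format to the shared core set.
      crosswalks = {
          "library": {"Titel": "dc:title", "Verfasser": "dc:creator", "Jahr": "dc:date"},
          "museum": {"object_name": "dc:title", "creator": "dc:creator", "dated": "dc:date"},
      }

      def to_core_set(record, source):
          """Map a local record onto the core set used for the common search."""
          walk = crosswalks[source]
          return {walk[k]: v for k, v in record.items() if k in walk}

      print(to_core_set(library_record, "library"))
      print(to_core_set(museum_record, "museum"))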
  12. Nicholson, D.; Steele, M.: CATRIONA : a distributed, locally-oriented, Z39.50 OPAC-based approach to cataloguing the Internet (1996) 0.03
    0.029843606 = product of:
      0.059687212 = sum of:
        0.03959212 = weight(_text_:search in 603) [ClassicSimilarity], result of:
          0.03959212 = score(doc=603,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 603, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=603)
        0.02009509 = product of:
          0.04019018 = sum of:
            0.04019018 = weight(_text_:22 in 603) [ClassicSimilarity], result of:
              0.04019018 = score(doc=603,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23214069 = fieldWeight in 603, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=603)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The aims of CATRIONA were: (1) to investigate the requirements for developing procedures and applications for cataloguing and retrieval of networked resources, and (2) to explore the feasibility of a collaborative project to develop such applications and procedures and integrate them with existing library systems. The project established that a distributed catalogue of networked resources integrated with standard Z39.50 library system OPAC interfaces with information on hard-copy resources is already a practical proposition at a basic level. At least one Z39.50 OPAC client can search remote Z39.50 OPACs, retrieve USMARC records with URLs in 856$u, load a viewer like Netscape, and use it to retrieve and display the remotely held electronic resource on the local workstation. A follow-up project on related issues is being finalised.
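    The 856$u mechanism mentioned above (a resource URL carried inside the USMARC record) can be illustrated with the pymarc library; the input file name is a placeholder for records retrieved from a Z39.50 target.

      from pymarc import MARCReader

      with open("retrieved_records.mrc", "rb") as fh:  # placeholder file of USMARC records
          for record in MARCReader(fh):
              for field in record.get_fields("856"):
                  for url in field.get_subfields("u"):
                      # A client can hand this URL to a viewer to display the
                      # remotely held electronic resource on the local workstation.
                      print(url)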
    Source
    Cataloging and classification quarterly. 22(1996) nos.3/4, S.127-141
  13. Polleres, A.; Lausen, H.; Lara, R.: Semantische Beschreibung von Web Services (2006) 0.03
    0.026936168 = product of:
      0.10774467 = sum of:
        0.10774467 = weight(_text_:web in 5813) [ClassicSimilarity], result of:
          0.10774467 = score(doc=5813,freq=14.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.6677857 = fieldWeight in 5813, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5813)
      0.25 = coord(1/4)
    
    Abstract
    This chapter covers application areas and approaches for the semantic description of Web Services. Existing Web Service technologies make a decisive contribution to the development of distributed applications because widely accepted standards are in place that govern the communication between applications and allow them to be combined into more complex units. Automated mechanisms for finding suitable Web Services and for composing them, by contrast, are supported by existing technologies only to a comparatively limited degree. As with the annotation of static data in the "Semantic Web", research and industry place great hopes in the semantic description of Web Services as a way to largely automate these tasks.
    Source
    Semantic Web: Wege zur vernetzten Wissensgesellschaft. Hrsg.: T. Pellegrini, u. A. Blumauer
  14. Kaizik, A.; Gödert, W.; Milanesi, C.: Erfahrungen und Ergebnisse aus der Evaluierung des EU-Projektes EULER im Rahmen des an der FH Köln angesiedelten Projektes EJECT (Evaluation von Subject Gateways des World Wide Web) (2001) 0.03
    0.026385307 = product of:
      0.052770615 = sum of:
        0.029088326 = weight(_text_:web in 5801) [ClassicSimilarity], result of:
          0.029088326 = score(doc=5801,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 5801, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5801)
        0.02368229 = product of:
          0.04736458 = sum of:
            0.04736458 = weight(_text_:22 in 5801) [ClassicSimilarity], result of:
              0.04736458 = score(doc=5801,freq=4.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.27358043 = fieldWeight in 5801, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5801)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 6.2002 19:42:22
  15. Teets, M.; Murray, P.: Metasearch authentication and access management (2006) 0.02
    0.018443892 = product of:
      0.07377557 = sum of:
        0.07377557 = weight(_text_:search in 1154) [ClassicSimilarity], result of:
          0.07377557 = score(doc=1154,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.4293381 = fieldWeight in 1154, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1154)
      0.25 = coord(1/4)
    
    Abstract
    Metasearch - also called parallel search, federated search, broadcast search, and cross-database search - has become commonplace in the information community's vocabulary. All speak to a common theme of searching and retrieving from multiple databases, sources, platforms, protocols, and vendors at the point of the user's request. Metasearch services rely on a variety of approaches including open standards (such as NISO's Z39.50 and SRU/SRW), proprietary programming interfaces, and "screen scraping." However, the absence of widely supported standards, best practices, and tools makes the metasearch environment less efficient for the metasearch provider, the content provider, and ultimately the end-user. To spur the development of widely supported standards and best practices, the National Information Standards Organization (NISO) sponsored a Metasearch Initiative in 2003 to enable: * metasearch service providers to offer more effective and responsive services, * content providers to deliver enhanced content and protect their intellectual property, and * libraries to deliver a simple search (a.k.a. "Google") that covers the breadth of their vetted commercial and free resources. The Access Management Task Group was one of three groups chartered by NISO as part of the Metasearch Initiative. The focus of the group was on gathering requirements for Metasearch authentication and access needs, inventorying existing processes, developing a series of formal use cases describing the access needs, recommending best practices given today's processes, and recommending and pursuing changes to current solutions to better support metasearch applications. In September 2005, the group issued their final report and recommendation. This article summarizes the group's work and final recommendation.
  16. Veen, T. van; Oldroyd, B.: Search and retrieval in The European Library : a new approach (2004) 0.02
    0.018443892 = product of:
      0.07377557 = sum of:
        0.07377557 = weight(_text_:search in 1164) [ClassicSimilarity], result of:
          0.07377557 = score(doc=1164,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.4293381 = fieldWeight in 1164, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1164)
      0.25 = coord(1/4)
    
    Abstract
    The objective of the European Library (TEL) project [TEL] was to set up a co-operative framework and specify a system for integrated access to the major collections of the European national libraries. This has been achieved by successfully applying a new approach for search and retrieval via URLs (SRU) [ZiNG] combined with a new metadata paradigm. One aim of the TEL approach is to have a low barrier of entry into TEL, and this has driven our choice for the technical solution described here. The solution comprises portal and client functionality running completely in the browser, resulting in a low implementation barrier and maximum scalability, as well as giving users control over the search interface and what collections to search. In this article we will describe, step by step, the development of both the search and retrieval architecture and the metadata infrastructure in the European Library project. We will show that SRU is a good alternative to the Z39.50 protocol and can be implemented without losing investments in current Z39.50 implementations. The metadata model being used by TEL is a Dublin Core Application Profile, and we have taken into account that functional requirements will change over time and therefore the metadata model will need to be able to evolve in a controlled way. We make this possible by means of a central metadata registry containing all characteristics of the metadata in TEL. Finally, we provide two scenarios to show how the TEL concept can be developed and extended, with applications capable of increasing their functionality by "learning" new metadata or protocol options.
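    On the client side an SRU response is plain XML, which is what makes the browser-based portal approach described above workable. A minimal sketch over a made-up, truncated searchRetrieve response in the SRU 1.1 namespace:

      import xml.etree.ElementTree as ET

      response = """<searchRetrieveResponse xmlns="http://www.loc.gov/zing/srw/">
        <numberOfRecords>1</numberOfRecords>
        <records><record><recordData>
          <dc xmlns:dc="http://purl.org/dc/elements/1.1/">
            <dc:title>Search and retrieval in The European Library</dc:title>
          </dc>
        </recordData></record></records>
      </searchRetrieveResponse>"""

      ns = {"srw": "http://www.loc.gov/zing/srw/", "dc": "http://purl.org/dc/elements/1.1/"}
      root = ET.fromstring(response)
      print(root.findtext("srw:numberOfRecords", namespaces=ns))
      for title in root.iter("{http://purl.org/dc/elements/1.1/}title"):
          print(title.text)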
  17. Groenbaek, K.; Trigg, R.H.: From Web to workplace : designing open hypermedia systems (1999) 0.02
    0.017452994 = product of:
      0.06981198 = sum of:
        0.06981198 = weight(_text_:web in 6096) [ClassicSimilarity], result of:
          0.06981198 = score(doc=6096,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 6096, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=6096)
      0.25 = coord(1/4)
    
  18. Lopatenko, A.; Asserson, A.; Jeffery, K.G.: CERIF - Information retrieval of research information in a distributed heterogeneous environment (2002) 0.02
    0.017452994 = product of:
      0.06981198 = sum of:
        0.06981198 = weight(_text_:web in 3597) [ClassicSimilarity], result of:
          0.06981198 = score(doc=3597,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 3597, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3597)
      0.25 = coord(1/4)
    
    Abstract
    User demands for access to complete and up-to-date information about research may require the integration of data from different CRISs. CRISs are rarely homogeneous systems, and the problems of CRIS integration must be addressed from a technological point of view. The implementation of a CRIS providing access to heterogeneous data distributed among a number of CRISs is described. A few technologies - distributed databases, web services, the semantic web - are used for the distributed CRIS to address different user requirements. Distributed databases serve to implement very efficient integration of homogeneous systems; web services provide open access to research information; the semantic web addresses the problems of integrating semantically and structurally heterogeneous data sources and provides intelligent data retrieval interfaces. The problems of data completeness in distributed systems are addressed, and a CRIS-adequate solution for data completeness is suggested.
  19. Krause, J.: Heterogenität und Integration : Zur Weiterentwicklung von Inhaltserschließung und Retrieval in sich veränderten Kontexten (2001) 0.02
    0.016496718 = product of:
      0.06598687 = sum of:
        0.06598687 = weight(_text_:search in 6071) [ClassicSimilarity], result of:
          0.06598687 = score(doc=6071,freq=8.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.3840117 = fieldWeight in 6071, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6071)
      0.25 = coord(1/4)
    
    Abstract
    As an important support tool in scientific research, specialized information systems are rapidly changing their character. The potential for improvement compared with today's usual systems is enormous. This fact will be demonstrated by means of two problem complexes: - WWW search engines, which were developed without any government grants, are increasingly dominating the scene. Does the WWW displace information centers with their high quality databases? What are the results we can get nowadays using general WWW search engines? - In addition to the WWW and specialized databases, scientists now use WWW library catalogues of digital libraries, which combine the catalogues from an entire region or a country. At the same time, however, they are faced with highly decentralized heterogeneous databases which contain the widest range of textual sources and data, e.g. from surveys. One consequence is the presence of serious inconsistencies in quality, relevance and content analysis. Thus, the main problem to be solved is as follows: users must be supplied with heterogeneous data from different sources, modalities and content development processes via a visual user interface, without inconsistencies in content development, for example, seriously impairing the quality of the search results when they phrase their search query in the terminology to which they are accustomed.
  20. Lügger, J.: Offene Navigation und Suchmaschinen in Verbünden, Konsortien und den Wissenschaften (2004) 0.02
    0.016454842 = product of:
      0.06581937 = sum of:
        0.06581937 = weight(_text_:web in 4980) [ClassicSimilarity], result of:
          0.06581937 = score(doc=4980,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.4079388 = fieldWeight in 4980, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4980)
      0.25 = coord(1/4)
    
    Abstract
    Integrating navigation and search across licensed journals and, at the same time, freely available digital documents under a single consistent user interface is one of the unsolved R&D problems of specialized information services. Elements of the invisible Web and the visible Web have to be joined seamlessly while respecting open standards. The article describes the starting point and development history of a cooperative project, the "Verteilter Zeitschriftenserver" (distributed journal server), which, by way of a generalization to a "Verteilter Dokumentenserver" (distributed document server), evolved into the basis of the cooperation between vascoda and AGVerbund, with the goal of realizing an Open Digital Library of the sciences.

Languages

  • e 28
  • d 13
  • f 1

Types

  • a 38
  • el 6
  • m 3
  • x 2
  • s 1

Classifications