Search (27 results, page 1 of 2)

  • × theme_ss:"Verteilte bibliographische Datenbanken"
  1. Johnson, E.H.: Objects for distributed heterogeneous information retrieval (2000) 0.04
    0.041401878 = product of:
      0.096604384 = sum of:
        0.055571415 = weight(_text_:personal in 6959) [ClassicSimilarity], result of:
          0.055571415 = score(doc=6959,freq=2.0), product of:
            0.19948503 = queryWeight, product of:
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0395589 = queryNorm
            0.27857435 = fieldWeight in 6959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.027633758 = weight(_text_:ed in 6959) [ClassicSimilarity], result of:
          0.027633758 = score(doc=6959,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.19644247 = fieldWeight in 6959, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.013399213 = product of:
          0.026798425 = sum of:
            0.026798425 = weight(_text_:22 in 6959) [ClassicSimilarity], result of:
              0.026798425 = score(doc=6959,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.19345059 = fieldWeight in 6959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6959)
          0.5 = coord(1/2)
      0.42857143 = coord(3/7)
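    Note: the tree above is Lucene ClassicSimilarity (TF-IDF) explain output. Each clause contributes queryWeight * fieldWeight, where queryWeight = idf * queryNorm and fieldWeight = sqrt(termFreq) * idf * fieldNorm; the clause scores are then summed and scaled by the coordination factors. A minimal Python sketch (not part of the original record), using only the numbers printed above, reproduces the final score for document 6959:

from math import sqrt

def classic_term_score(freq, idf, query_norm, field_norm):
    # One clause of Lucene's ClassicSimilarity explain output:
    # score = queryWeight * fieldWeight
    #       = (idf * queryNorm) * (sqrt(freq) * idf * fieldNorm)
    query_weight = idf * query_norm
    field_weight = sqrt(freq) * idf * field_norm
    return query_weight * field_weight

query_norm = 0.0395589   # queryNorm printed above
field_norm = 0.0390625   # fieldNorm(doc=6959)

w_personal = classic_term_score(2.0, 5.0427346, query_norm, field_norm)         # ~0.05557
w_ed       = classic_term_score(2.0, 3.5559888, query_norm, field_norm)         # ~0.02763
w_22       = classic_term_score(2.0, 3.5018296, query_norm, field_norm) * 0.5   # inner coord(1/2), ~0.01340

total = (w_personal + w_ed + w_22) * (3.0 / 7.0)   # outer coord(3/7)
print(round(total, 9))                             # ~0.041401878, the ranking score shown for result 1

    The same arithmetic underlies every other explain tree in this list; only freq, idf, fieldNorm and the coord factors differ.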
    
    Abstract
    The success of the World Wide Web shows that we can access, search, and retrieve information from globally distributed databases. If a database, such as a library catalog, has some sort of Web-based front end, we can type its URL into a Web browser and use its HTML-based forms to search for items in that database. Depending on how well the query conforms to the database content, how the search engine interprets the query, and how the server formats the results into HTML, we might actually find something usable. While the first two issues depend on ourselves and the server, on the Web the latter falls to the mercy of HTML, which we all know as a great destroyer of information because it codes for display but not for content description. When looking at an HTML-formatted display, we must depend on our own interpretation to recognize such entities as author names, titles, and subject identifiers. The Web browser can do nothing but display the information. If we want some other view of the result, such as sorting the records by date (provided it offers such an option to begin with), the server must do it. This makes poor use of the computing power we have at the desktop (or even laptop), which, unless it involves retrieving more records, could easily do the result set manipulation that we currently send back to the server. Despite having personal computers with immense computational power, as far as information retrieval goes, we still essentially use them as dumb terminals.
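    The point about desktop-side result manipulation can be made concrete: once retrieved records arrive as structured data rather than display-only HTML, re-sorting them by date is a trivial local operation and needs no further round trip to the server. A small illustrative sketch (the records below are invented, not drawn from any catalogue):

records = [
    {"author": "Smith, J.", "title": "Catalogue networking", "year": 1997},
    {"author": "Lee, K.", "title": "Z39.50 in practice", "year": 1994},
    {"author": "Braun, M.", "title": "Verteilte Kataloge", "year": 2000},
]

# Client-side re-sorting: no new query, no extra load on the server.
newest_first = sorted(records, key=lambda r: r["year"], reverse=True)
for r in newest_first:
    print(r["year"], r["author"], r["title"])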
    Date
    22. 9.1997 19:16:05
    Source
    Saving the time of the library user through subject access innovation: Papers in honor of Pauline Atherton Cochrane. Ed.: W.J. Wheeler
  2. CARMEN : Content Analysis, Retrieval and Metadata: Effective Networking (1999) 0.02
    0.021873739 = product of:
      0.15311617 = sum of:
        0.15311617 = weight(_text_:global in 5748) [ClassicSimilarity], result of:
          0.15311617 = score(doc=5748,freq=2.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.77375764 = fieldWeight in 5748, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.109375 = fieldNorm(doc=5748)
      0.14285715 = coord(1/7)
    
    Content
    Project description; within the framework of Global-Info
  3. Global Info : Nachfolgeprojekte (2001) 0.02
    0.020961931 = product of:
      0.14673351 = sum of:
        0.14673351 = weight(_text_:global in 5805) [ClassicSimilarity], result of:
          0.14673351 = score(doc=5805,freq=10.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.7415035 = fieldWeight in 5805, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.046875 = fieldNorm(doc=5805)
      0.14285715 = coord(1/7)
    
    Abstract
    The "Global Info" funding programme of the Federal Ministry of Education and Research (BMBF) ended on 31 December 2000. This terminates neither the funding of specialised information nor the work on digital libraries. On the one hand, the "Informationsverbünde" (subject-based information networks) will specifically improve the supply of subject-specific electronic services. On the other hand, a "guiding vision" will be formulated for the digital library field, towards which future projects are to be oriented. To prepare this guiding vision, a planning project entitled "Digital Library-Forum" was started on 1 January 2001; for the duration of one year it will moderate the shaping of this vision's content. Corresponding workshops and working groups will be held or set up in the near future. Projects in the digital library field can still be submitted: following the procedure established in Global Info, project outlines can be handed in, now however to the Projektträger Fachinformation (PTF). Proposals will continue to be reviewed by a panel of experts that meets twice a year. In the course of 2001 the Global Info server will be replaced by a server operated by the Projektträger Fachinformation, which will also take on new and more far-reaching tasks. This server, at the URL <www.di-forum.de>, is to be operated in cooperation with the DFG and other funding bodies and to act as the forum for all digital library activities in Germany
    Object
    Global Info
  4. Kochtanek, T.R.; Matthews, J.R.: Library information systems : from library automation to distributed information systems (2002) 0.01
    0.011886453 = product of:
      0.041602585 = sum of:
        0.027785707 = weight(_text_:personal in 1792) [ClassicSimilarity], result of:
          0.027785707 = score(doc=1792,freq=2.0), product of:
            0.19948503 = queryWeight, product of:
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0395589 = queryNorm
            0.13928717 = fieldWeight in 1792, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1792)
        0.013816879 = weight(_text_:ed in 1792) [ClassicSimilarity], result of:
          0.013816879 = score(doc=1792,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.098221235 = fieldWeight in 1792, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1792)
      0.2857143 = coord(2/7)
    
    Footnote
    Rez. in: JASIST 54(2003) no.12, S.1166-1167 (Brenda Chawner): "Kochtanek and Matthews have written a welcome addition to the small set of introductory texts on applications of information technology to library and information services. The book has fourteen chapters grouped into four sections: "The Broader Context," "The Technologies," "Management Issues," and "Future Considerations." Two chapters provide the broad context, with the first giving a historical overview of the development and adoption of "library information systems." Kochtanek and Matthews define this as "a wide array of solutions that previously might have been considered separate industries with distinctly different marketplaces" (p. 3), referring specifically to integrated library systems (ILS, often called library management systems in this part of the world) and online databases, plus the more recent developments of Web-based resources, digital libraries, ebooks, and ejournals. They characterize technology adoption patterns in libraries as ranging from "bleeding edge" to "leading edge" to "in the wedge" to "trailing edge": this is a catchy restatement of adopter categories from Rogers' diffusion of innovation theory, where they are more conventionally known as "early adopters," "early majority," "late majority," and "laggards." This chapter concludes with a look at more general technology trends that have affected library applications, including developments in hardware (moving from mainframes to minicomputers to personal computers), changes in software development (from in-house to packages), and developments in communications technology (from dedicated host computers to more open networks to the current distributed environment found with the Internet). This is followed by a chapter describing the ILS and online database industries in some detail. "The Technologies" begins with a chapter on the structure and functionality of integrated library systems, which also includes a brief discussion of precision versus recall, managing access to internal documents, indexing and searching, and catalogue maintenance. This is followed by a chapter on open systems, which concludes with a useful list of questions to consider to determine an organization's readiness to adopt open source solutions. As one would expect, this section also includes a detailed chapter on telecommunications and networking, which covers types of networks, transmission media, network topologies, and switching techniques (ranging from dial-up and leased lines to ISDN/DSL, frame relay, and ATM). It concludes with a chapter on the role and importance of standards, which covers the need for standards and standards organizations, and gives examples of different types of standards, such as MARC, Dublin Core, Z39.50, and markup standards such as SGML, HTML, and XML. Unicode is also covered, but only briefly. This section would be strengthened by a chapter on hardware concepts; the authors assume that their reader is already familiar with these, which may not be true in all cases (for example, the phrase "client-server" is first used on page 11, but only given a brief definition in the glossary). Burke's Library Technology Companion: A Basic Guide for Library Staff (New York: Neal-Schuman, 2001) might be useful to fill this gap at an introductory level, and Saffady's Introduction to Automation for Librarians, 4th ed. (Chicago: American Library Association, 1999) would be better for those interested in more detail.
    The final two sections, however, are the book's real strength, with a strong focus on management issues, and this content distinguishes it from other books on this topic such as Ferguson and Hebels' Computers for Librarians: An Introduction to Systems and Applications (Wagga Wagga, NSW: Centre for Information Studies, Charles Sturt University, 1998). ...
  5. Subject retrieval in a networked environment : Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC (2003) 0.01
    0.010513036 = product of:
      0.036795624 = sum of:
        0.03143594 = weight(_text_:personal in 3964) [ClassicSimilarity], result of:
          0.03143594 = score(doc=3964,freq=4.0), product of:
            0.19948503 = queryWeight, product of:
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.0395589 = queryNorm
            0.15758546 = fieldWeight in 3964, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.0427346 = idf(docFreq=775, maxDocs=44218)
              0.015625 = fieldNorm(doc=3964)
        0.0053596846 = product of:
          0.010719369 = sum of:
            0.010719369 = weight(_text_:22 in 3964) [ClassicSimilarity], result of:
              0.010719369 = score(doc=3964,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.07738023 = fieldWeight in 3964, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.015625 = fieldNorm(doc=3964)
          0.5 = coord(1/2)
      0.2857143 = coord(2/7)
    
    Content
    Enthält die Beiträge: Devadason, F.J., N. Intaraksa u. P. Patamawongjariya u.a.: Faceted indexing application for organizing and accessing internet resources; Nicholson, D., S. Wake: HILT: subject retrieval in a distributed environment; Olson, T.: Integrating LCSH and MeSH in information systems; Kuhr, P.S.: Putting the world back together: mapping multiple vocabularies into a single thesaurus; Freyre, E., M. Naudi: MACS : subject access across languages and networks; McIlwaine, I.C.: The UDC and the World Wide Web; Garrison, W.A.: The Colorado Digitization Project: subject access issues; Vizine-Goetz, D., R. Thompson: Towards DDC-classified displays of Netfirst search results: subject access issues; Godby, C.J., J. Stuler: The Library of Congress Classification as a knowledge base for automatic subject categorization: subject access issues; O'Neill, E.T., E. Childress u. R. Dean u.a.: FAST: faceted application of subject terminology; Bean, C.A., R. Green: Improving subject retrieval with frame representation; Zeng, M.L., Y. Chen: Features of an integrated thesaurus management and search system for the networked environment; Hudon, M.: Subject access to Web resources in education; Qin, J., J. Chen: A multi-layered, multi-dimensional representation of digital educational resources; Riesthuis, G.J.A.: Information languages and multilingual subject access; Geisselmann, F.: Access methods in a database of e-journals; Beghtol, C.: The Iter Bibliography: International standard subject access to medieval and renaissance materials (400-1700); Slavic, A.: General library classification in learning material metadata: the application in IMS/LOM and CDMES metadata schemas; Cordeiro, M.I.: From library authority control to network authoritative metadata sources; Koch, T., H. Neuroth u. M. Day: Renardus: Cross-browsing European subject gateways via a common classification system (DDC); Olson, H.A., D.B. Ward: Mundane standards, everyday technologies, equitable access; Burke, M.A.: Personal Construct Theory as a research tool in Library and Information Science: case study: development of a user-driven classification of photographs
    Footnote
    Rez. in: KO 31(2004) no.2, S.117-118 (D. Campbell): "This excellent volume offers 22 papers delivered at an IFLA Satellite meeting in Dublin, Ohio, in 2001. The conference gathered together information and computer scientists to discuss an important and difficult question: in what specific ways can the accumulated skills, theories and traditions of librarianship be mobilized to face the challenges of providing subject access to information in present and future networked information environments? The papers which grapple with this question are organized in a surprisingly deft and coherent way. Many conferences and proceedings have unhappy sessions that contain a hodge-podge of papers that didn't quite fit any other categories. As befits a good classificationist, editor I.C. McIlwaine has kept this problem to a minimum. The papers are organized into eight sessions, which split into two broad categories. The first five sessions deal with subject domains, and the last three deal with subject access tools. The five sessions and thirteen papers that discuss access in different domains appear in order of increasing intension. The first papers deal with access in multilingual environments, followed by papers on access across multiple vocabularies and across sectors, ending up with studies of domain-specific retrieval (primarily education). Some of the papers offer predictably strong work by scholars engaged in ongoing, long-term research. Gerard Riesthuis offers a clear analysis of the complexities of negotiating non-identical thesauri, particularly in cases where hierarchical structure varies across different languages. Hope Olson and Dennis Ward use Olson's familiar and welcome method of using provocative and unconventional theory to generate meliorative approaches to bias in general subject access schemes. Many papers, on the other hand, deal with specific ongoing projects: Renardus, the High Level Thesaurus Project, the Colorado Digitization Project and the Iter Bibliography for medieval and Renaissance material. Most of these papers display a similar structure: an explanation of the theory and purpose of the project, an account of problems encountered in the implementation, and a discussion of the results, both promising and disappointing, thus far. Of these papers, the account of the Multilanguage Access to Subjects Project in Europe (MACS) deserves special mention. In describing how the project is founded on the principle of the equality of languages, with each subject heading language maintained in its own database and with no single language used as a pivot for the others, Elisabeth Freyre and Max Naudi offer a particularly vivid example of the way the ethics of librarianship translate into pragmatic contexts and concrete procedures. The three sessions and nine papers devoted to subject access tools split into two kinds: papers that discuss the use of theory and research to generate new tools for a networked environment, and those that discuss the transformation of traditional subject access tools in this environment. In the new tool development area, Mary Burke provides a promising example of the bidirectional approach that is so often necessary: in her case study of user-driven classification of photographs, she uses personal construct theory to clarify the practice of classification, while at the same time using practice to test the theory.
    Carol Bean and Rebecca Green offer an intriguing combination of librarianship and computer science, importing frame representation techniques from artificial intelligence to standardize syntagmatic relationships and enhance recall and precision.
  6. Strötgen, R.; Kokkelink, S.: Metadatenextraktion aus Internetquellen : Heterogenitätsbehandlung im Projekt CARMEN (2001) 0.01
    0.0078120497 = product of:
      0.054684345 = sum of:
        0.054684345 = weight(_text_:global in 5808) [ClassicSimilarity], result of:
          0.054684345 = score(doc=5808,freq=2.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.276342 = fieldWeight in 5808, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5808)
      0.14285715 = coord(1/7)
    
    Abstract
    Within the BMB+F-funded GLOBAL INFO programme, the special funding measure CARMEN (Content Analysis, Retrieval and Metadata: Effective Networking) aims to create information systems suited to today's decentralised information world and to the distributed holdings of libraries, specialised information centres and the Internet. Bringing these holdings together is problematic less in technical than in content-related and conceptual terms. Heterogeneity arises, for example, when different collections use different thesauri or classifications for subject indexing, when metadata are recorded differently or not at all, or when intellectually indexed sources meet Internet documents that are, as a rule, completely unindexed. The CARMEN project tackles this problem with several methods: deductive-heuristic procedures generate metadata automatically from documents; statistical-quantitative methods map the differing uses of terms in the various collections onto one another; and intellectually created cross-concordances provide reliable transitions from one documentation language to another. For the extraction of metadata according to Dublin Core (above all author, title, institution, abstract, keywords), heuristics are being developed on the basis of typical documents (dissertations from Math-Net in PostScript format and a wide variety of HTML files from the WWW servers of German social science institutions). The probability that the metadata obtained in this way are correct and trustworthy is attached to the individual data as weights. The heuristics are implemented iteratively in an extraction tool, tested and improved in order to increase the reliability of the procedures. First prototypes of such transfer modules are currently being built at the University of Osnabrück and at the InformationsZentrum Sozialwissenschaften Bonn, based on mathematical and social science collections
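    A hypothetical sketch of the deductive-heuristic extraction idea described above: simple rules pull Dublin Core fields out of an HTML page and attach a confidence weight to each hit. The rules, weights and sample page here are invented for illustration and are not the actual CARMEN heuristics:

import re

RULES = [
    # (Dublin Core field, pattern, confidence weight of this heuristic)
    ("DC.Title",   re.compile(r"<title>(.*?)</title>", re.I | re.S), 0.9),
    ("DC.Title",   re.compile(r"<h1[^>]*>(.*?)</h1>", re.I | re.S), 0.6),
    ("DC.Creator", re.compile(r'<meta\s+name="author"\s+content="([^"]+)"', re.I), 0.8),
]

def extract_metadata(html):
    # Return (field, value, weight) triples for every rule that matches.
    hits = []
    for field, pattern, weight in RULES:
        m = pattern.search(html)
        if m:
            hits.append((field, m.group(1).strip(), weight))
    return hits

page = ('<html><head><title>Metadatenextraktion aus Internetquellen</title>'
        '<meta name="author" content="Stroetgen, R."></head>'
        '<body><h1>Projektbericht</h1></body></html>')

for field, value, weight in extract_metadata(page):
    print(f"{field}: {value!r} (weight {weight})")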
  7. Tappenbeck, I.; Wessel, C.: CARMEN : Content Analysis, Retrieval and Metadata: Effective Networking. Ein Halbzeitbericht (2001) 0.01
    0.0062496397 = product of:
      0.043747477 = sum of:
        0.043747477 = weight(_text_:global in 5900) [ClassicSimilarity], result of:
          0.043747477 = score(doc=5900,freq=2.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.22107361 = fieldWeight in 5900, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.03125 = fieldNorm(doc=5900)
      0.14285715 = coord(1/7)
    
    Abstract
    The CARMEN project started as a special funding measure within Global Info in October 1999, with a planned duration of 29 months. The project focuses on the further development of concepts and procedures for document indexing that are to enable access to heterogeneous, decentrally distributed information holdings and their management according to common principles. In doing so, CARMEN deliberately takes a different path from most previous approaches in this area, which try to establish homogeneity and consistency in a decentralised information landscape in a technology-oriented way, by developing procedures that allow simultaneous physical access to different document spaces. A purely technical parallelisation of access options is not sufficient, however, because it does not solve the main problem: the differences in content, structure and conception between the individual collections. To compensate for these differences, solutions and further developments are being worked out within the CARMEN project in three areas: (1) metadata (document description, retrieval, management, archiving); (2) methods of dealing with the remaining heterogeneity of the collections; (3) retrieval for structured documents with metadata and heterogeneous data types. These three areas are closely related. The developments in the area of metadata are intended, on the one hand, to partially restore the lost consistency and to put it on a basis appropriate to the new media. On the other hand, heterogeneity-handling procedures are to relate documents with differing data relevance and subject indexing to one another, complemented on the retrieval side by a search procedure that does justice to the different data types. Within the overall CARMEN project these aspects are handled through a division of labour: eight work packages (APs), coordinated with one another, each address different focal points. To support the coordination of the work of the various APs, the roughly 40 project staff met on 1 and 2 February 2001 for the "CARMEN middleOfTheRoad Workshop" in Bonn. At this workshop, the content-related and technical results achieved by the individual APs in the first half of the project term were presented in a total of 17 presentations
  8. Tappenbeck, I.; Wessel, C.: CARMEN : Content Analysis, Retrieval and Metadata: Effective Networking. Bericht über den middleOfTheRoad Workshop (2001) 0.01
    0.0062496397 = product of:
      0.043747477 = sum of:
        0.043747477 = weight(_text_:global in 5901) [ClassicSimilarity], result of:
          0.043747477 = score(doc=5901,freq=2.0), product of:
            0.19788647 = queryWeight, product of:
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.0395589 = queryNorm
            0.22107361 = fieldWeight in 5901, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.002325 = idf(docFreq=807, maxDocs=44218)
              0.03125 = fieldNorm(doc=5901)
      0.14285715 = coord(1/7)
    
    Abstract
    The CARMEN project started as a special funding measure within Global Info in October 1999, with a planned duration of 29 months. The project focuses on the further development of concepts and procedures for document indexing that are to enable access to heterogeneous, decentrally distributed information holdings and their management according to common principles. In doing so, CARMEN deliberately takes a different path from most previous approaches in this area, which try to establish homogeneity and consistency in a decentralised information landscape in a technology-oriented way, by developing procedures that allow simultaneous physical access to different document spaces. A purely technical parallelisation of access options is not sufficient, however, because it does not solve the main problem: the differences in content, structure and conception between the individual collections. To compensate for these differences, solutions and further developments are being worked out within the CARMEN project in three areas: (1) metadata (document description, retrieval, management, archiving); (2) methods of dealing with the remaining heterogeneity of the collections; (3) retrieval for structured documents with metadata and heterogeneous data types. These three areas are closely related. The developments in the area of metadata are intended, on the one hand, to partially restore the lost consistency and to put it on a basis appropriate to the new media. On the other hand, heterogeneity-handling procedures are to relate documents with differing data relevance and subject indexing to one another, complemented on the retrieval side by a search procedure that does justice to the different data types. Within the overall CARMEN project these aspects are handled through a division of labour: eight work packages (APs), coordinated with one another, each address different focal points. To support the coordination of the work of the various APs, the roughly 40 project staff met on 1 and 2 February 2001 for the "CARMEN middleOfTheRoad Workshop" in Bonn. At this workshop, the content-related and technical results achieved by the individual APs in the first half of the project term were presented in a total of 17 presentations
  9. Laegreid, J.A.: ¬The Nordic SR-net project : implementation of the SR/Z39.50 standards in the Nordic countries (1994) 0.01
    0.005526752 = product of:
      0.038687263 = sum of:
        0.038687263 = weight(_text_:ed in 3196) [ClassicSimilarity], result of:
          0.038687263 = score(doc=3196,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.27501947 = fieldWeight in 3196, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3196)
      0.14285715 = coord(1/7)
    
    Source
    Resource sharing: new technologies as a must for Universal Availability of Information. Proceedings of the 16th International Essen Symposium, 18-21 Oct 1993. Ed.: A.H. Helal u. J.W. Weiss
  10. Callan, J.: Distributed information retrieval (2000) 0.01
    0.005526752 = product of:
      0.038687263 = sum of:
        0.038687263 = weight(_text_:ed in 31) [ClassicSimilarity], result of:
          0.038687263 = score(doc=31,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.27501947 = fieldWeight in 31, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0546875 = fieldNorm(doc=31)
      0.14285715 = coord(1/7)
    
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
  11. Barker, P.: ¬A study of the use of the X.500 directory for bibliographic querying (1995) 0.00
    0.004737216 = product of:
      0.03316051 = sum of:
        0.03316051 = weight(_text_:ed in 1505) [ClassicSimilarity], result of:
          0.03316051 = score(doc=1505,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.23573098 = fieldWeight in 1505, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.046875 = fieldNorm(doc=1505)
      0.14285715 = coord(1/7)
    
    Source
    Electronic library and visual information research: Proceedings of the First ELVIRA Conference (ELVIRA 1), Electronic Library and Visual Information Research, De Montfort University, Milton Keynes, May 1994. Ed. by M. Collier u, K. Arnold
  12. Croft, W.B.: Combining approaches to information retrieval (2000) 0.00
    0.004737216 = product of:
      0.03316051 = sum of:
        0.03316051 = weight(_text_:ed in 6862) [ClassicSimilarity], result of:
          0.03316051 = score(doc=6862,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.23573098 = fieldWeight in 6862, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
      0.14285715 = coord(1/7)
    
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
  13. Xu, J.; Croft, W.B.: Topic-based language models for distributed retrieval (2000) 0.00
    0.004737216 = product of:
      0.03316051 = sum of:
        0.03316051 = weight(_text_:ed in 38) [ClassicSimilarity], result of:
          0.03316051 = score(doc=38,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.23573098 = fieldWeight in 38, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.046875 = fieldNorm(doc=38)
      0.14285715 = coord(1/7)
    
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
  14. Nicholson, D.; Wake, S.: HILT: subject retrieval in a distributed environment (2003) 0.00
    0.004737216 = product of:
      0.03316051 = sum of:
        0.03316051 = weight(_text_:ed in 3810) [ClassicSimilarity], result of:
          0.03316051 = score(doc=3810,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.23573098 = fieldWeight in 3810, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.046875 = fieldNorm(doc=3810)
      0.14285715 = coord(1/7)
    
    Source
    Subject retrieval in a networked environment: Proceedings of the IFLA Satellite Meeting held in Dublin, OH, 14-16 August 2001 and sponsored by the IFLA Classification and Indexing Section, the IFLA Information Technology Section and OCLC. Ed.: I.C. McIlwaine
  15. Milanesi, C.: Möglichkeiten der Kooperation im Rahmen von Subject Gateways : das Euler-Projekt im Vergleich mit weiteren europäischen Projekten (2001) 0.00
    0.0045940154 = product of:
      0.032158107 = sum of:
        0.032158107 = product of:
          0.06431621 = sum of:
            0.06431621 = weight(_text_:22 in 4865) [ClassicSimilarity], result of:
              0.06431621 = score(doc=4865,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.46428138 = fieldWeight in 4865, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=4865)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    22. 6.2002 19:41:59
  16. Banwell, L.: Developing an evaluation framework for a supranational digital library (2003) 0.00
    0.003158144 = product of:
      0.022107007 = sum of:
        0.022107007 = weight(_text_:ed in 2769) [ClassicSimilarity], result of:
          0.022107007 = score(doc=2769,freq=2.0), product of:
            0.140671 = queryWeight, product of:
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.0395589 = queryNorm
            0.15715398 = fieldWeight in 2769, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.5559888 = idf(docFreq=3431, maxDocs=44218)
              0.03125 = fieldNorm(doc=2769)
      0.14285715 = coord(1/7)
    
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
  17. Dupuis, P.; Lapointe, J.: Developpement d'un outil documentaire à Hydro-Quebec : le Thesaurus HQ (1997) 0.00
    0.003062677 = product of:
      0.021438738 = sum of:
        0.021438738 = product of:
          0.042877477 = sum of:
            0.042877477 = weight(_text_:22 in 3173) [ClassicSimilarity], result of:
              0.042877477 = score(doc=3173,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.30952093 = fieldWeight in 3173, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3173)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Argus. 26(1997) no.3, S.16-22
  18. Dempsey, L.; Russell, R.; Kirriemur, J.W.: Towards distributed library systems : Z39.50 in a European context (1996) 0.00
    0.003062677 = product of:
      0.021438738 = sum of:
        0.021438738 = product of:
          0.042877477 = sum of:
            0.042877477 = weight(_text_:22 in 127) [ClassicSimilarity], result of:
              0.042877477 = score(doc=127,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.30952093 = fieldWeight in 127, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=127)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Program. 30(1996) no.1, S.1-22
  19. Ashton, J.: ONE: the final OPAC frontier (1998) 0.00
    0.003062677 = product of:
      0.021438738 = sum of:
        0.021438738 = product of:
          0.042877477 = sum of:
            0.042877477 = weight(_text_:22 in 2588) [ClassicSimilarity], result of:
              0.042877477 = score(doc=2588,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.30952093 = fieldWeight in 2588, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2588)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Source
    Select newsletter. 1998, no.22, Spring, S.5-6
  20. Lunau, C.D.: Z39.50: a critical component of the Canadian resource sharing infrastructure : implementation activities and results achieved (1997) 0.00
    0.003062677 = product of:
      0.021438738 = sum of:
        0.021438738 = product of:
          0.042877477 = sum of:
            0.042877477 = weight(_text_:22 in 3193) [ClassicSimilarity], result of:
              0.042877477 = score(doc=3193,freq=2.0), product of:
                0.13852853 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0395589 = queryNorm
                0.30952093 = fieldWeight in 3193, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3193)
          0.5 = coord(1/2)
      0.14285715 = coord(1/7)
    
    Date
    3. 3.1999 17:22:57