Search (79 results, page 2 of 4)

  • theme_ss:"Verteilte bibliographische Datenbanken"
  1. Johnson, E.H.: Objects for distributed heterogeneous information retrieval (2000) 0.02
    0.01905007 = product of:
      0.03810014 = sum of:
        0.02102358 = weight(_text_:information in 6959) [ClassicSimilarity], result of:
          0.02102358 = score(doc=6959,freq=12.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.23754507 = fieldWeight in 6959, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=6959)
        0.01707656 = product of:
          0.03415312 = sum of:
            0.03415312 = weight(_text_:22 in 6959) [ClassicSimilarity], result of:
              0.03415312 = score(doc=6959,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19345059 = fieldWeight in 6959, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=6959)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
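    For readers unfamiliar with Lucene explain output: the tree above is ClassicSimilarity (TF-IDF) arithmetic, and the record's score can be recomputed from the displayed factors alone. Below is a minimal Python sketch; the function and variable names are mine, and the constants are simply copied from the tree (boosts are 1 and therefore omitted).

```python
import math

def clause_score(freq, idf, query_norm, field_norm):
    """One weight(...) node: queryWeight * fieldWeight, as in the explain tree."""
    tf = math.sqrt(freq)                  # tf(freq) in ClassicSimilarity
    query_weight = idf * query_norm       # idf * queryNorm
    field_weight = tf * idf * field_norm  # tf * idf * fieldNorm
    return query_weight * field_weight

# Factors copied from the explain output for doc 6959 above.
query_norm = 0.050415643
information = clause_score(freq=12.0, idf=1.7554779,
                           query_norm=query_norm, field_norm=0.0390625)
term_22 = clause_score(freq=2.0, idf=3.5018296,
                       query_norm=query_norm, field_norm=0.0390625)

# The "22" clause sits under its own sum with coord(1/2); the outer sum of
# both clauses is then scaled by coord(2/4).
total = (information + term_22 * 0.5) * 0.5
print(round(total, 8))  # ~0.01905007, matching the displayed score
```

    The same pattern applies to every score breakdown on this page; only the term frequencies, idf values, field norms and coord factors change.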
    
    Abstract
    The success of the World Wide Web shows that we can access, search, and retrieve information from globally distributed databases. If a database, such as a library catalog, has some sort of Web-based front end, we can type its URL into a Web browser and use its HTML-based forms to search for items in that database. Depending on how well the query conforms to the database content, how the search engine interprets the query, and how the server formats the results into HTML, we might actually find something usable. While the first two issues depend on ourselves and the server, on the Web the latter falls to the mercy of HTML, which we all know as a great destroyer of information because it codes for display but not for content description. When looking at an HTML-formatted display, we must depend on our own interpretation to recognize such entities as author names, titles, and subject identifiers. The Web browser can do nothing but display the information. If we want some other view of the result, such as sorting the records by date (provided it offers such an option to begin with), the server must do it. This makes poor use of the computing power we have at the desktop (or even laptop), which, unless it involves retrieving more records, could easily do the result set manipulation that we currently send back to the server. Despite having personal computers with immense computational power, as far as information retrieval goes, we still essentially use them as dumb terminals.
    Date
    22. 9.1997 19:16:05
    Imprint
    Urbana-Champaign, IL : University of Illinois at Urbana-Champaign, Graduate School of Library and Information Science
  2. Kaizik, A.; Gödert, W.; Milanesi, C.: Erfahrungen und Ergebnisse aus der Evaluierung des EU-Projektes EULER im Rahmen des an der FH Köln angesiedelten Projektes EJECT (Evaluation von Subject Gateways des World Wide Web) (2001) 0.02
    0.018143935 = product of:
      0.03628787 = sum of:
        0.01213797 = weight(_text_:information in 5801) [ClassicSimilarity], result of:
          0.01213797 = score(doc=5801,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.13714671 = fieldWeight in 5801, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5801)
        0.0241499 = product of:
          0.0482998 = sum of:
            0.0482998 = weight(_text_:22 in 5801) [ClassicSimilarity], result of:
              0.0482998 = score(doc=5801,freq=4.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.27358043 = fieldWeight in 5801, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5801)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 6.2002 19:42:22
    Source
    Information Research & Content Management: Orientierung, Ordnung und Organisation im Wissensmarkt; 23. DGI-Online-Tagung der DGI und 53. Jahrestagung der Deutschen Gesellschaft für Informationswissenschaft und Informationspraxis e.V. DGI, Frankfurt am Main, 8.-10.5.2001. Proceedings. Hrsg.: R. Schmidt
    Theme
    Information Gateway
  3. Vikor, D.L.; Gaumond, G.; Heath, F.M.: Building electronic cooperation in the 1990s : the Maryland, Georgia, and Texas experiences (1997) 0.02
    0.017903835 = product of:
      0.03580767 = sum of:
        0.014565565 = weight(_text_:information in 1680) [ClassicSimilarity], result of:
          0.014565565 = score(doc=1680,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 1680, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=1680)
        0.021242103 = product of:
          0.042484205 = sum of:
            0.042484205 = weight(_text_:organization in 1680) [ClassicSimilarity], result of:
              0.042484205 = score(doc=1680,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23635197 = fieldWeight in 1680, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1680)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    During the 1990s statewide cooperative use of networks in the USA has moved towards providing mainly access to bibliographic and full-text resources not held locally and usually provided by commercial vendors for use by libraries. Describes 3 academic library networks: the University System of Maryland's Library Information Management System serving the information needs of users throughout the state; Georgia's GALILEO (Georgia Library Learning On-Line) which provides a set of electronic resources and services for the 34 colleges and universities of the University System of Georgia; and TexShare in which all 52 libraries from the public educational institutions in Texas participate. Although the development of funding sources, the technical implementations and support, and the management organization differ from state to state, all three reflect an incremental shift towards the electronic library
  4. Avrahami, T.T.; Yau, L.; Si, L.; Callan, J.P.: ¬The FedLemur project : Federated search in the real world (2006) 0.02
    0.017528716 = product of:
      0.035057433 = sum of:
        0.014565565 = weight(_text_:information in 5271) [ClassicSimilarity], result of:
          0.014565565 = score(doc=5271,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 5271, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5271)
        0.02049187 = product of:
          0.04098374 = sum of:
            0.04098374 = weight(_text_:22 in 5271) [ClassicSimilarity], result of:
              0.04098374 = score(doc=5271,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23214069 = fieldWeight in 5271, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5271)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Federated search and distributed information retrieval systems provide a single user interface for searching multiple full-text search engines. They have been an active area of research for more than a decade, but in spite of their success as a research topic, they are still rare in operational environments. This article discusses a prototype federated search system developed for the U.S. government's FedStats Web portal, and the issues addressed in adapting research solutions to this operational environment. A series of experiments explore how well prior research results, parameter settings, and heuristics apply in the FedStats environment. The article concludes with a set of lessons learned from this technology transfer effort, including observations about search engine quality in the real world.
    Date
    22. 7.2006 16:02:07
    Source
    Journal of the American Society for Information Science and Technology. 57(2006) no.3, S.347-358
  5. Meiert, M.: Elektronische Publikationen an Hochschulen : Modellierung des elektronischen Publikationsprozesses am Beispiel der Universität Hildesheim (2006) 0.02
    0.017528716 = product of:
      0.035057433 = sum of:
        0.014565565 = weight(_text_:information in 5974) [ClassicSimilarity], result of:
          0.014565565 = score(doc=5974,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.16457605 = fieldWeight in 5974, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=5974)
        0.02049187 = product of:
          0.04098374 = sum of:
            0.04098374 = weight(_text_:22 in 5974) [ClassicSimilarity], result of:
              0.04098374 = score(doc=5974,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23214069 = fieldWeight in 5974, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=5974)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    1. 9.2006 13:22:15
    Source
    Effektive Information Retrieval Verfahren in Theorie und Praxis: ausgewählte und erweiterte Beiträge des Vierten Hildesheimer Evaluierungs- und Retrievalworkshop (HIER 2005), Hildesheim, 20.7.2005. Hrsg.: T. Mandl u. C. Womser-Hacker
    Theme
    Information Gateway
  6. Jahns, Y.; Trummer, M.: Sacherschließung - Informationsdienstleistung nach Maß : Kann Heterogenität beherrscht werden? (2004) 0.02
    0.017364673 = product of:
      0.034729347 = sum of:
        0.0034331365 = weight(_text_:information in 2789) [ClassicSimilarity], result of:
          0.0034331365 = score(doc=2789,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.03879095 = fieldWeight in 2789, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.015625 = fieldNorm(doc=2789)
        0.03129621 = weight(_text_:standards in 2789) [ClassicSimilarity], result of:
          0.03129621 = score(doc=2789,freq=4.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.13927983 = fieldWeight in 2789, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.015625 = fieldNorm(doc=2789)
      0.5 = coord(2/4)
    
    Content
    "... unter diesem Motto hat die Deutsche Bücherei Leipzig am 23. März 2004 auf dem Leipziger Kongress für Bibliothek und Information eine Vortragsreihe initiiert. Vorgestellt wurden Projekte, die sich im Spannungsfeld von Standardisierung und Heterogenität der Sacherschließung bewegen. Die Benutzer unserer Bibliotheken und Informationseinrichtungen stehen heute einer Fülle von Informationen gegenüber, die sie aus zahlreichen Katalogen und Fachdatenbanken abfragen können. Diese Recherche kann schnell zeitraubend werden, wenn der Benutzer mit verschiedenen Suchbegriffen und -logiken arbeiten muss, um zur gewünschten Ressource zu gelangen. Ein Schlagwort A kann in jedem der durchsuchten Systeme eine andere Bedeutung annehmen. Homogenität erreicht man klassisch zunächst durch Normierung und Standardisierung. Für die zwei traditionellen Verfahren der inhaltlichen Erschließung - der klassifikatorischen und der verbalen - haben sich in Deutschland verschiedene Standards durchgesetzt. Klassifikatorische Erschließung wird mit ganz unterschiedlichen Systemen betrieben. Verbreitet sind etwa die Regensburger Verbundklassifikation (RVK) oder die Basisklassifikation (BK). Von Spezial- und Facheinrichtungen werden entsprechende Fachklassifikationen eingesetzt. Weltweit am häufigsten angewandt ist die Dewey Decimal Classification (DDC), die seit 2003 ins Deutsche übertragen wird. Im Bereich der verbalen Sacherschließung haben sich, vor allem bei den wissenschaftlichen Universalbibliotheken, die Regeln für den Schlagwortkatalog (RSWK) durchgesetzt, durch die zugleich die Schlagwortnormdatei (SWD) kooperativ aufgebaut wurde. Daneben erschließen wiederum viele Spezial- und Facheinrichtungen mit selbst entwickelten Fachthesauri.
    Neben die Pflege der Standards tritt als Herausforderung die Vernetzung der Systeme, um heterogene Dokumentenbestände zu verbinden. »Standardisierung muss von der verbleibenden Heterogenität her gedacht werden«." Diese Aufgaben können nur in Kooperation von Bibliotheken und Informationseinrichtungen gelöst werden. Die vorgestellten Projekte zeigen, wie dies gelingen kann. Sie verfolgen alle das Ziel, Informationen über Inhalte schneller und besser für die Nutzer zur Verfügung zu stellen. Fachliche Recherchen über mehrere Informationsanbieter werden durch die Heterogenität überwindende Suchdienste ermöglicht. Die Einführung der DDC im deutschen Sprachraum steht genau im Kern des Spannungsfeldes. Die DDC stellt durch ihren universalen Charakter nicht nur einen übergreifenden Standard her. Ihre Anwendung ist nur nutzbringend, wenn zugleich die Vernetzung mit den in Deutschland bewährten Klassifikationen und Thesauri erfolgt. Ziel des Projektes DDC Deutsch ist nicht nur eine Übersetzung ins Deutsche, die DDC soll auch in Form elektronischer Dienste zur Verfügung gestellt werden. Dr. Lars Svensson, Deutsche Bibliothek Frankfurt am Main, präsentierte anschaulichdie Möglichkeiten einer intelligenten Navigation über die DDC. Für die Dokumentenbestände Der Deutschen Bibliothek, des Gemeinsamen Bibliotheksverbundes (GBV) und der Niedersächsischen Staats- und Universitätsbibliothek Göttingen wurde prototypisch ein Webservice realisiert.
  7. Banwell, L.: Developing an evaluation framework for a supranational digital library (2003) 0.02
    0.01595999 = product of:
      0.03191998 = sum of:
        0.011892734 = weight(_text_:information in 2769) [ClassicSimilarity], result of:
          0.011892734 = score(doc=2769,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.1343758 = fieldWeight in 2769, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.03125 = fieldNorm(doc=2769)
        0.020027246 = product of:
          0.040054493 = sum of:
            0.040054493 = weight(_text_:organization in 2769) [ClassicSimilarity], result of:
              0.040054493 = score(doc=2769,freq=4.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.22283478 = fieldWeight in 2769, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2769)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The paper will explore the issues surrounding the development of an evaluation framework for a supranational digital library system, as seen through the TEL (The European Library) project. It will describe work on the project to date, and seek to establish the key drivers, priorities and barriers encountered in developing such a framework. TEL is being funded by the EU as an Accompanying Measure in the IST program. Its main focus is on consensus building; it also includes preparatory technical work to develop testbeds, which will gauge to what extent interoperability is achievable. In order for TEL to take its place as a major Information Society initiative of the EU, it needs to be closely attuned to the needs, expectations and realities of its user communities, which comprise the citizens of the project's national partners. To this end, the evaluation framework described in this paper is being developed by establishing the users' viewpoints and priorities in relation to the key project themes. A summary of the issues to be used in the baseline, and to be expanded upon in the paper, follows: - Establishing the differing contexts of the national library partners, and the differing national priorities which will impact on TEL - Exploring the differing expectations relating to building and using the hybrid library - Exploring the differing expectations relating to TEL. TEL needs to add value - what does this mean in each partner state, and for the individuals within them? 1. Introduction to TEL: TEL (The European Library) is a thirty-month project, funded by the European Commission as part of its Fifth Framework Programme for research. It aims to set up a co-operative framework for access to the major national, mainly digital, collections in European national libraries. TEL is funded as an Accompanying Measure, designed to support the work of the IST (Information Society Technologies) Programme on the development of access to cultural and scientific knowledge. TEL will stop short of becoming a live service during the lifetime of the project, and is focused on ensuring co-operative and concerted approaches to technical and business issues associated with large-scale content development. It will lay the policy and technical groundwork towards a pan-European digital library based on distributed digital collections, providing seamless access to the digital resources of major European national libraries. It began in February 2001, and has eight national library partners: Finland, Germany, Italy, the Netherlands, Portugal, Slovenia, Switzerland and the United Kingdom. It is also seeking to encourage the participation of all European national libraries in due course.
    Series
    Advances in knowledge organization; vol.8
    Source
    Challenges in knowledge representation and organization for the 21st century: Integration of knowledge across boundaries. Proceedings of the 7th ISKO International Conference Granada, Spain, July 10-13, 2002. Ed.: M. López-Huertas
    Theme
    Information Gateway
  8. Nicholson, D.; Steele, M.: CATRIONA : a distributed, locally-oriented, Z39.50 OPAC-based approach to cataloguing the Internet (1996) 0.02
    0.015395639 = product of:
      0.030791279 = sum of:
        0.01029941 = weight(_text_:information in 603) [ClassicSimilarity], result of:
          0.01029941 = score(doc=603,freq=2.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.116372846 = fieldWeight in 603, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=603)
        0.02049187 = product of:
          0.04098374 = sum of:
            0.04098374 = weight(_text_:22 in 603) [ClassicSimilarity], result of:
              0.04098374 = score(doc=603,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.23214069 = fieldWeight in 603, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=603)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The aims of CATRIONA were: (1) to investigate the requirements for developing procedures and applications for cataloguing and retrieval of networked resources, and (2) to explore the feasibility of a collaborative project to develop such applications and procedures and integrate them with existing library systems. The project established that a distributed catalogue of networked resources, integrated via standard Z39.50 library system OPAC interfaces with information on hard-copy resources, is already a practical proposition at a basic level. At least one Z39.50 OPAC client can search remote Z39.50 OPACs, retrieve USMARC records with URLs in 856$u, load a viewer like Netscape, and use it to retrieve and display the remotely held electronic resource on the local workstation. A follow-up project on related issues is being finalised.
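    To make the 856 $u mechanism mentioned above concrete, here is a minimal sketch of pulling electronic-location URLs out of USMARC records. It assumes the third-party pymarc library; the file name is hypothetical and the snippet is an illustration, not code from the CATRIONA project.

```python
from pymarc import MARCReader  # assumes the third-party pymarc package

# Read USMARC records and print the electronic-location URLs in field 856 $u --
# the links a Z39.50 OPAC client would hand off to a web browser.
with open("catalogue_export.mrc", "rb") as handle:  # hypothetical export file
    for record in MARCReader(handle):
        if record is None:  # skip records pymarc could not parse
            continue
        for field in record.get_fields("856"):
            for url in field.get_subfields("u"):
                print(url)
```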
    Source
    Cataloging and classification quarterly. 22(1996) nos.3/4, S.127-141
  9. Sarinder, K.K.S.; Lim, L.H.S.; Merican, A.F.; Dimyati, K.: Biodiversity information retrieval across networked data sets (2010) 0.01
    0.014919861 = product of:
      0.029839722 = sum of:
        0.01213797 = weight(_text_:information in 3951) [ClassicSimilarity], result of:
          0.01213797 = score(doc=3951,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.13714671 = fieldWeight in 3951, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3951)
        0.017701752 = product of:
          0.035403505 = sum of:
            0.035403505 = weight(_text_:organization in 3951) [ClassicSimilarity], result of:
              0.035403505 = score(doc=3951,freq=2.0), product of:
                0.17974974 = queryWeight, product of:
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.050415643 = queryNorm
                0.19695997 = fieldWeight in 3951, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5653565 = idf(docFreq=3399, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3951)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - Biodiversity resources are inevitably digital and stored in a wide variety of formats by researchers or stakeholders. In Malaysia, although digitizing biodiversity data has long been stressed, the interoperability of biodiversity data is still an issue that requires attention. This is because, when data are shared, the question of copyright arises, creating a setback for researchers wanting to promote or share data through online presentations. To address this, the aim is to present an approach to integrating data by wrapping datasets stored in relational databases located on networked platforms. Design/methodology/approach - The approach uses tools such as XML, PHP, ASP and HTML to integrate distributed databases in heterogeneous formats. Five current database integration systems were reviewed, and all of them share common attributes such as being query-oriented, using a mediator-based approach, and integrating a structured data model. These common attributes were also adopted in the proposed solution. Distributed Generic Information Retrieval (DiGIR) was used as a model in designing the proposed solution. Findings - A new database integration system was developed which is user-friendly and simple, with the common attributes found in current integration systems.
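    As an illustration of the wrapping idea described above (exposing a relational dataset as XML so a mediator can federate it), here is a minimal sketch. The table, column and element names are invented for the example and are not taken from the paper.

```python
import sqlite3
import xml.etree.ElementTree as ET

def wrap_as_xml(db_path="specimens.db"):
    """Expose rows of a local relational table as XML for a federating mediator."""
    conn = sqlite3.connect(db_path)
    root = ET.Element("records")
    for name, locality in conn.execute("SELECT name, locality FROM specimens"):
        rec = ET.SubElement(root, "record")
        ET.SubElement(rec, "scientificName").text = name
        ET.SubElement(rec, "locality").text = locality
    conn.close()
    return ET.tostring(root, encoding="unicode")

# wrap_as_xml() returns an XML document that a DiGIR-style portal could harvest
# and merge with responses from other providers.
```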
    Footnote
    Contribution in a special issue: Content architecture: exploiting and managing diverse resources: proceedings of the first national conference of the United Kingdom chapter of the International Society for Knowledge Organization (ISKO)
  10. Arch-Int, N.; Sophatsathit, P.: ¬A semantic information gathering approach for heterogeneous information sources on WWW (2003) 0.01
    0.008919551 = product of:
      0.035678204 = sum of:
        0.035678204 = weight(_text_:information in 4694) [ClassicSimilarity], result of:
          0.035678204 = score(doc=4694,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.40312737 = fieldWeight in 4694, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4694)
      0.25 = coord(1/4)
    
    Source
    Journal of information science. 29(2003) no.5, S.357-374
  11. Fuhr, N.: Towards data abstraction in networked information retrieval systems (1999) 0.01
    0.00849658 = product of:
      0.03398632 = sum of:
        0.03398632 = weight(_text_:information in 4517) [ClassicSimilarity], result of:
          0.03398632 = score(doc=4517,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.3840108 = fieldWeight in 4517, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.109375 = fieldNorm(doc=4517)
      0.25 = coord(1/4)
    
    Source
    Information processing and management. 35(1999) no.2, S.101-119
  12. Wei, W.: SOAP als Basis für verteilte, heterogene virtuelle OPACs (2002) 0.01
    0.008298661 = product of:
      0.033194643 = sum of:
        0.033194643 = weight(_text_:standards in 4097) [ClassicSimilarity], result of:
          0.033194643 = score(doc=4097,freq=2.0), product of:
            0.22470023 = queryWeight, product of:
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.050415643 = queryNorm
            0.14772856 = fieldWeight in 4097, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.4569545 = idf(docFreq=1393, maxDocs=44218)
              0.0234375 = fieldNorm(doc=4097)
      0.25 = coord(1/4)
    
    Content
    Overview of the chapters: The chapter on the Simple Object Access Protocol (SOAP) first examines the background of SOAP's development. A brief account of the path from distributed applications to web services shows that existing standards such as CORBA, DCOM and RMI cannot meet the demands of a strongly heterogeneous environment like the Internet; to overcome this shortcoming, SOAP was developed with the goal of supporting platform-independent message exchange. The term web service, with which SOAP is closely connected, is then introduced, the possibility of using SOAP in library systems is discussed, and SOAP is examined under different aspects such as SOAP and XML, the SOAP message, and error handling. Chapter 3, on the library extended by the Internet, describes the relationship between the Internet and the library from two perspectives, distributed search and metadata. The part on distributed search mainly presents the Z39.50 protocol, with which distributed library systems have been implemented so far. The part on metadata first deals with the significance of metadata for the library and for the Internet, then discusses the existing problems of metadata and possible solutions, and finally examines several metadata standards, with Dublin Core as the focus, because Dublin Core is currently the standard for the Internet and is therefore also important for Internet-related library applications. Chapter 4 describes the development of the practical project: a distributed library system built with SOAP that is to allow a distributed search across several remote library databases. It is described in which steps the system was designed and implemented; with the first system one can search only a single database, while with the second system one can search two databases in parallel. Dublin Core is used as the metadata standard throughout. The software packages and standard software technologies used in the system are introduced, it is examined how the individual technical components work together, and finally the development of the individual program modules and the communication between them is described.
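    As a rough illustration of the kind of SOAP interaction described above, the sketch below posts a SOAP 1.1 request to a virtual-OPAC search service. The endpoint URL, operation name and namespace are invented; only the envelope structure and HTTP headers follow the SOAP 1.1 convention.

```python
import requests  # assumes the third-party requests package

# A SOAP 1.1 request for a distributed catalogue search; the middleware behind
# the endpoint would fan the query out to several remote databases and return
# a merged, Dublin-Core-style result set.
ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <search xmlns="urn:example:virtual-opac">
      <query>verteilte Datenbanken</query>
      <targets>opac-a,opac-b</targets>
    </search>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "http://localhost:8080/virtual-opac",  # hypothetical middleware service
    data=ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:example:virtual-opac#search"},
)
print(response.text)  # SOAP response carrying the merged records
```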
  13. Callan, J.: Distributed information retrieval (2000) 0.01
    0.0073582535 = product of:
      0.029433014 = sum of:
        0.029433014 = weight(_text_:information in 31) [ClassicSimilarity], result of:
          0.029433014 = score(doc=31,freq=12.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.3325631 = fieldWeight in 31, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=31)
      0.25 = coord(1/4)
    
    Abstract
    A multi-database model of distributed information retrieval is presented, in which people are assumed to have access to many searchable text databases. In such an environment, full-text information retrieval consists of discovering database contents, ranking databases by their expected ability to satisfy the query, searching a small number of databases, and merging results returned by different databases. This paper presents algorithms for each task. It also discusses how to reorganize conventional test collections into multi-database testbeds, and evaluation methodologies for multi-database experiments. A broad and diverse group of experimental results is presented to demonstrate that the algorithms are effective, efficient, robust, and scalable
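    A toy illustration of the pipeline the abstract outlines - rank the available databases for a query, search only the top-ranked ones, then merge the returned lists. The scoring below is a deliberately simple document-frequency heuristic, not the algorithms evaluated in the paper.

```python
def rank_databases(query_terms, db_stats):
    """db_stats: {db_name: {term: document_frequency}} gathered during discovery."""
    scores = {db: sum(term_df.get(t, 0) for t in query_terms)
              for db, term_df in db_stats.items()}
    return sorted(scores, key=scores.get, reverse=True)

def merge_results(per_db_results, k=10):
    """per_db_results: {db_name: [(doc_id, normalized_score), ...]}."""
    merged = [((db, doc_id), score)
              for db, hits in per_db_results.items()
              for doc_id, score in hits]
    return sorted(merged, key=lambda pair: pair[1], reverse=True)[:k]

top_dbs = rank_databases(["distributed", "retrieval"],
                         {"dbA": {"distributed": 120, "retrieval": 300},
                          "dbB": {"retrieval": 40}})
print(top_dbs)  # ['dbA', 'dbB'] -- search these first, then merge their results
```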
    Series
    The Kluwer international series on information retrieval; 7
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
  14. López Vargas, M.A.: "Ilmenauer Verteiltes Information REtrieval System" (IVIRES) : eine neue Architektur zur Informationsfilterung in einem verteilten Information Retrieval System (2002) 0.01
    0.0072827823 = product of:
      0.02913113 = sum of:
        0.02913113 = weight(_text_:information in 4041) [ClassicSimilarity], result of:
          0.02913113 = score(doc=4041,freq=4.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.3291521 = fieldWeight in 4041, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.09375 = fieldNorm(doc=4041)
      0.25 = coord(1/4)
    
  15. Ashton, J.: ONE: the final OPAC frontier (1998) 0.01
    0.0068306234 = product of:
      0.027322493 = sum of:
        0.027322493 = product of:
          0.054644987 = sum of:
            0.054644987 = weight(_text_:22 in 2588) [ClassicSimilarity], result of:
              0.054644987 = score(doc=2588,freq=2.0), product of:
                0.17654699 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.050415643 = queryNorm
                0.30952093 = fieldWeight in 2588, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2588)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Source
    Select newsletter. 1998, no.22, Spring, S.5-6
  16. Friedrich, M.; Schimkat, R.-D.; Küchlin, W.: Information retrieval in distributed environments based on context-aware, proactive documents (2002) 0.01
    0.006717136 = product of:
      0.026868545 = sum of:
        0.026868545 = weight(_text_:information in 3608) [ClassicSimilarity], result of:
          0.026868545 = score(doc=3608,freq=10.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.3035872 = fieldWeight in 3608, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3608)
      0.25 = coord(1/4)
    
    Abstract
    In this position paper we propose a document-centric middleware component called Living Documents to support context-aware information retrieval in distributed communities. A Living Document acts as a micro server for a document and contains computational services, a semi-structured knowledge repository to uniformly store and access context-related information, and finally the document's digital content. Our initial prototype of Living Documents is based on the concept of mobile agents and is implemented in Java and XML.
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak u. A. Nase
  17. Croft, W.B.: Combining approaches to information retrieval (2000) 0.01
    0.0063070743 = product of:
      0.025228297 = sum of:
        0.025228297 = weight(_text_:information in 6862) [ClassicSimilarity], result of:
          0.025228297 = score(doc=6862,freq=12.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.2850541 = fieldWeight in 6862, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=6862)
      0.25 = coord(1/4)
    
    Abstract
    The combination of different text representations and search strategies has become a standard technique for improving the effectiveness of information retrieval. Combination, for example, has been studied extensively in the TREC evaluations and is the basis of the "meta-search" engines used on the Web. This paper examines the development of this technique, including both experimental results and the retrieval models that have been proposed as formal frameworks for combination. We show that combining approaches for information retrieval can be modeled as combining the outputs of multiple classifiers based on one or more representations, and that this simple model can provide explanations for many of the experimental results. We also show that this view of combination is very similar to the inference net model, and that a new approach to retrieval based on language models supports combination and can be integrated with the inference net model
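    One simple combination rule from the fusion literature, CombMNZ, illustrates what "combining the outputs of multiple representations" can look like in practice; the paper itself discusses richer models, so treat this only as a sketch.

```python
from collections import defaultdict

def comb_mnz(runs):
    """runs: list of {doc_id: score} dicts, one per representation or strategy."""
    sums = defaultdict(float)
    hits = defaultdict(int)
    for run in runs:
        top = max(run.values())
        for doc, score in run.items():
            sums[doc] += score / top  # crude per-run normalization
            hits[doc] += 1
    # Sum of normalized scores, multiplied by the number of runs retrieving the doc.
    fused = {doc: sums[doc] * hits[doc] for doc in sums}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

print(comb_mnz([{"d1": 2.0, "d2": 1.0}, {"d2": 0.9, "d3": 0.3}]))
# d2 ranks first: it was retrieved by both runs with high normalized scores.
```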
    Series
    The Kluwer international series on information retrieval; 7
    Source
    Advances in information retrieval: Recent research from the Center for Intelligent Information Retrieval. Ed.: W.B. Croft
  18. Lopatenko, A.; Asserson, A.; Jeffery, K.G.: CERIF - Information retrieval of research information in a distributed heterogeneous environment (2002) 0.01
    0.0063070743 = product of:
      0.025228297 = sum of:
        0.025228297 = weight(_text_:information in 3597) [ClassicSimilarity], result of:
          0.025228297 = score(doc=3597,freq=12.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.2850541 = fieldWeight in 3597, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.046875 = fieldNorm(doc=3597)
      0.25 = coord(1/4)
    
    Abstract
    User demands for access to complete and current information about research may require the integration of data from different CRISs. CRISs are rarely homogeneous systems, and the problems of CRIS integration must be addressed from a technological point of view. The implementation of a CRIS providing access to heterogeneous data distributed among a number of CRISs is described. Several technologies - distributed databases, web services, and the semantic web - are used in the distributed CRIS to address different user requirements: distributed databases serve to implement very efficient integration of homogeneous systems; web services provide open access to research information; and the semantic web addresses the integration of semantically and structurally heterogeneous data sources and provides intelligent data retrieval interfaces. The problems of data completeness in distributed systems are addressed, and a CRIS-adequate solution for data completeness is suggested.
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak u. A. Nase
  19. Crestani, F.; Wu, S.: Testing the cluster hypothesis in distributed information retrieval (2006) 0.01
    0.006068985 = product of:
      0.02427594 = sum of:
        0.02427594 = weight(_text_:information in 984) [ClassicSimilarity], result of:
          0.02427594 = score(doc=984,freq=16.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.27429342 = fieldWeight in 984, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0390625 = fieldNorm(doc=984)
      0.25 = coord(1/4)
    
    Abstract
    How to merge and organise query results retrieved from different resources is one of the key issues in distributed information retrieval. Some previous research and experiments suggest that cluster-based document browsing is more effective than a single merged list. Cluster-based retrieval results presentation is based on the cluster hypothesis, which states that documents that cluster together have a similar relevance to a given query. However, while this hypothesis has been demonstrated to hold in classical information retrieval environments, it has never been fully tested in heterogeneous distributed information retrieval environments. Heterogeneous document representations, the presence of document duplicates, and disparate qualities of retrieval results are major features of a heterogeneous distributed information retrieval environment that might disrupt the effectiveness of the cluster hypothesis. In this paper we report on an experimental investigation into the validity and effectiveness of the cluster hypothesis in highly heterogeneous distributed information retrieval environments. The results show that although clustering is affected by differing retrieval result representations and quality, the cluster hypothesis still holds, and that generating hierarchical clusters in highly heterogeneous distributed information retrieval environments is still a very effective way of presenting retrieval results to users.
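    A minimal way to probe the cluster hypothesis on a merged result list is to cluster the retrieved texts and check whether the known-relevant documents end up in the same cluster. The sketch below uses scikit-learn; the documents and relevance judgements are invented for the example.

```python
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["federated search merges ranked result lists",
        "result merging strategies for federated search",
        "birds of the coastal wetlands"]
relevant = {0, 1}  # hypothetical relevance judgements for some query

# Vectorize the merged results and cluster them into two groups.
vectors = TfidfVectorizer().fit_transform(docs).toarray()
labels = AgglomerativeClustering(n_clusters=2).fit_predict(vectors)

print(labels)
print("relevant docs share a cluster:", len({labels[i] for i in relevant}) == 1)
```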
    Source
    Information processing and management. 42(2006) no.5, S.1137-1150
  20. Hellweg, H.; Krause, J.; Mandl, T.; Marx, J.; Müller, M.N.O.; Mutschke, P.; Strötgen, R.: Treatment of semantic heterogeneity in information retrieval (2001) 0.01
    0.005946367 = product of:
      0.023785468 = sum of:
        0.023785468 = weight(_text_:information in 6560) [ClassicSimilarity], result of:
          0.023785468 = score(doc=6560,freq=6.0), product of:
            0.08850355 = queryWeight, product of:
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.050415643 = queryNorm
            0.2687516 = fieldWeight in 6560, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              1.7554779 = idf(docFreq=20772, maxDocs=44218)
              0.0625 = fieldNorm(doc=6560)
      0.25 = coord(1/4)
    
    Abstract
    Nowadays, users of information services are faced with highly decentralised, heterogeneous document sources with differing content analysis. Semantic heterogeneity occurs, for example, when resources using different systems for content description are searched through a single, simple query system. This report describes several approaches to handling semantic heterogeneity used in projects of the German Social Science Information Centre.
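    One common treatment of such heterogeneity is a cross-concordance that maps terms from one indexing vocabulary onto another before a query is forwarded to a differently indexed collection. The sketch below is illustrative only; the mapping table is invented and does not reproduce the transfer modules described in the report.

```python
# Invented cross-concordance: maps terms of one indexing vocabulary onto the
# vocabulary of a differently indexed target collection.
CROSS_CONCORDANCE = {
    ("SWD", "Verteilte Datenbank"): ["distributed database", "federated database"],
    ("SWD", "Informationsvermittlung"): ["information dissemination"],
}

def translate_query(terms, source_vocab="SWD"):
    """Expand each query term via the concordance; unknown terms pass through."""
    translated = []
    for term in terms:
        translated.extend(CROSS_CONCORDANCE.get((source_vocab, term), [term]))
    return translated

print(translate_query(["Verteilte Datenbank", "Internet"]))
# ['distributed database', 'federated database', 'Internet']
```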

Languages

  • e 52
  • d 25
  • f 1

Types

  • a 71
  • el 9
  • x 4
  • m 3
  • r 1
  • s 1

Classifications