Search (97 results, page 5 of 5)

  • theme_ss:"Verteilte bibliographische Datenbanken"
  1. Klas, C.-P.; Kriewel, S.; Schaefer, A.; Fischer, G.: ¬Das DAFFODIL System : strategische Literaturrecherche in Digitalen Bibliotheken (2006) 0.00
    Score: 0.0012674211 (ClassicSimilarity weight of _text_:a in doc 5014: freq=4.0, idf=1.153047, queryNorm=0.045758117, fieldNorm=0.0625, coord 1/3 · 1/2)
    
    Type
    a
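The relevance figures shown for each hit come from Lucene's ClassicSimilarity explain output. As a minimal sketch, the Python snippet below reproduces the arithmetic for this first hit from the values in the original breakdown (freq=4.0, idf=1.153047, queryNorm=0.045758117, fieldNorm=0.0625, coordination factors 1/3 and 1/2); the function name is ours, chosen only for illustration.

```python
import math

def classic_similarity_score(freq, idf, query_norm, field_norm, coords):
    """Recompute a Lucene ClassicSimilarity score from its explain values."""
    query_weight = idf * query_norm          # idf(t) * queryNorm
    tf = math.sqrt(freq)                     # tf(t in d) = sqrt(freq)
    field_weight = tf * idf * field_norm     # tf * idf * fieldNorm(doc)
    score = query_weight * field_weight
    for c in coords:                         # coord(q, d) factors
        score *= c
    return score

# Values taken from the explain tree of hit no. 1 (doc 5014)
score = classic_similarity_score(
    freq=4.0, idf=1.153047, query_norm=0.045758117,
    field_norm=0.0625, coords=(1/3, 1/2))
print(score)   # approximately 0.0012674211, matching the displayed score
```

The idf, queryNorm and coordination factors are the same for every hit on this page; only freq and fieldNorm vary, which is why the scores differ so little.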
  2. Roszkowski, M.; Lukas, C.: ¬A distributed architecture for resource discovery using metadata (1998) 0.00
    Score: 0.0012674211 (_text_:a in doc 1256: freq=16.0, fieldNorm=0.03125)
    
    Abstract
    This article describes an approach for linking geographically distributed collections of metadata so that they are searchable as a single collection. We describe the infrastructure, which uses standard Internet protocols such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP), to distribute queries, return results, and exchange index information. We discuss the advantages of using linked collections of authoritative metadata as an alternative to using a keyword indexing search-engine for resource discovery. We examine other architectures that use metadata for resource discovery, such as Dienst/NCSTRL, the AHDS HTTP/Z39.50 Gateway, and the ROADS initiative. Finally, we discuss research issues and future directions of the project. The Internet Scout Project, which is funded by the National Science Foundation and is located in the Computer Sciences Department at the University of Wisconsin-Madison, is charged with assisting the higher education community in resource discovery on the Internet. To that end, the Scout Report and subsequent subject-specific Scout Reports were developed to guide the U.S. higher education community to research-quality resources. The Scout Report Signpost utilizes the content from the Scout Reports as the basis of a metadata collection. Signpost consists of more than 2000 cataloged Internet sites using established standards such as Library of Congress subject headings and abbreviated call letters, and emerging standards such as the Dublin Core (DC). This searchable and browseable collection is free and freely accessible, as are all of the Internet Scout Project's services.
    As well developed as both the Scout Reports and Signpost are, they cannot capture the wealth of high-quality content that is available on the Internet. An obvious next step toward increasing the usefulness of our own collection and its value to our customer base is to partner with other high-quality content providers who have developed similar collections and to develop a single, virtual collection. Project Isaac (working title) is the Internet Scout Project's latest resource discovery effort. Project Isaac involves the development of a research testbed that allows experimentation with protocols and algorithms for creating, maintaining, indexing and searching distributed collections of metadata. Project Isaac's infrastructure uses standard Internet protocols, such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP) to distribute queries, return results, and exchange index or centroid information. The overall goal is to support a single-search interface to geographically distributed and independently maintained metadata collections.
    Type
    a
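The architecture described in this abstract, distributing one query to several independently maintained metadata collections and merging the answers into a single result list, can be illustrated with a small sketch. It deliberately does not use LDAP or CIP; the collection list, the per-collection search function and the record format are invented placeholders, not the project's infrastructure.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical metadata collections; in the article these would be
# LDAP/CIP servers holding authoritative metadata records.
COLLECTIONS = {
    "signpost":  "https://example.org/signpost/search",
    "partner-a": "https://example.org/partner-a/search",
    "partner-b": "https://example.org/partner-b/search",
}

def search_collection(name, url, query):
    """Placeholder for a per-collection search (LDAP/CIP in the original)."""
    # A real implementation would send `query` to `url` and parse the reply.
    return [{"collection": name, "title": f"{query} hit from {name}", "score": 1.0}]

def federated_search(query):
    """Fan the query out to all collections and merge into a single list."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(search_collection, n, u, query)
                   for n, u in COLLECTIONS.items()]
        merged = [rec for f in futures for rec in f.result()]
    # Present the distributed collections as one: sort by (local) score.
    return sorted(merged, key=lambda r: r["score"], reverse=True)

print(federated_search("dublin core"))
```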
  3. Severiens, T.; Hohlfeld, M.; Zimmermann, K.; Hilf, E.R.: PhysDoc - a distributed network of physics institutions documents : collecting, indexing, and searching high quality documents by using harvest (2000) 0.00
    Score: 0.0012524803 (_text_:a in doc 6470: freq=10.0, fieldNorm=0.0390625)
    
    Abstract
    PhysNet offers online services that enable a physicist to keep in touch with the worldwide physics community and to receive all information he or she may need. In addition to being of great value to physicists, these services are practical examples of the use of modern methods of digital libraries, in particular the use of metadata harvesting. One service is PhysDoc. This consists of a Harvest-based online information broker and gatherer network, which harvests information from the local web servers of professional physics institutions worldwide (mostly in Europe and the USA so far). PhysDoc focuses on scientific information posted by the individual scientist on his or her local server, such as documents, publications, reports, publication lists, and lists of links to documents. All rights are reserved for the authors, who are responsible for the content and quality of their documents. PhysDis is an analogous service but specifically for university theses, with their dual requirements of examination work and publication. The strategy is to select high-quality sites containing metadata. We report here on the present status of PhysNet, our experience in operating it, and the development of its usage. Continuously involving authors, research groups, and national societies is considered crucial for a future stable service.
    Type
    a
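In contrast to query-time federation, PhysDoc's broker/gatherer model periodically harvests metadata from the institutions' servers and indexes it centrally. The sketch below illustrates that pattern only in outline; the source URLs, the JSON record format and the toy search are assumptions for illustration, not the Harvest software used by the project.

```python
import json
import urllib.request

# Hypothetical institutional servers exposing their publication metadata.
SOURCES = [
    "https://physics.example.edu/metadata.json",
    "https://institute.example.org/metadata.json",
]

def harvest(sources):
    """Gather metadata records from each source into one local index."""
    index = {}
    for url in sources:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                records = json.load(resp)        # assumed: a list of dicts
        except OSError:
            continue                             # skip unreachable servers
        for rec in records:
            # Key records by a stable identifier so re-harvesting updates them.
            index[rec.get("id", url + "#" + rec.get("title", ""))] = rec
    return index

def search(index, term):
    """Very small full-text match over the harvested records."""
    term = term.lower()
    return [r for r in index.values() if term in json.dumps(r).lower()]

index = harvest(SOURCES)          # unreachable hosts are simply skipped
print(len(index), "records harvested")
```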
  4. Crestani, F.; Wu, S.: Testing the cluster hypothesis in distributed information retrieval (2006) 0.00
    Score: 0.0012524803 (_text_:a in doc 984: freq=10.0, fieldNorm=0.0390625)
    
    Abstract
    How to merge and organise query results retrieved from different resources is one of the key issues in distributed information retrieval. Some previous research and experiments suggest that cluster-based document browsing is more effective than a single merged list. Cluster-based retrieval results presentation is based on the cluster hypothesis, which states that documents that cluster together have a similar relevance to a given query. However, while this hypothesis has been demonstrated to hold in classical information retrieval environments, it has never been fully tested in heterogeneous distributed information retrieval environments. Heterogeneous document representations, the presence of document duplicates, and disparate qualities of retrieval results are major features of a heterogeneous distributed information retrieval environment that might disrupt the effectiveness of the cluster hypothesis. In this paper we report on an experimental investigation into the validity and effectiveness of the cluster hypothesis in highly heterogeneous distributed information retrieval environments. The results show that although clustering is affected by different retrieval results representations and quality, the cluster hypothesis still holds and that generating hierarchical clusters in highly heterogeneous distributed information retrieval environments is still a very effective way of presenting retrieval results to users.
    Type
    a
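As a rough illustration of the cluster-based presentation evaluated here, the sketch below merges result snippets from several (hypothetical) sources, vectorises them with TF-IDF and groups them by hierarchical (agglomerative) clustering. It shows the general technique only, under invented data, and is not the authors' experimental setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical snippets returned by three different retrieval systems.
results = [
    "distributed information retrieval with metadata harvesting",
    "metadata harvesting for distributed digital libraries",
    "cluster hypothesis and document clustering in IR",
    "hierarchical clustering of retrieval results",
    "usability of federated search interfaces",
]

# Heterogeneous representations end up in one common vector space ...
vectors = TfidfVectorizer().fit_transform(results).toarray()

# ... and are grouped hierarchically instead of shown as a flat merged list.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(vectors)

for label, text in sorted(zip(labels, results)):
    print(label, text)
```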
  5. Lynch, C.A.: Building the infrastructure of resource sharing : union catalogs, distributed search, and cross database linkage (1997) 0.00
    Score: 0.0011642005 (_text_:a in doc 1506: freq=6.0, fieldNorm=0.046875)
    
    Abstract
    Effective resource sharing presupposes an infrastructure which permits users to locate materials of interest in both print and electronic formats. Two approaches for providing this are union catalogues and distributed search systems based on Z39.50 and related computer-to-computer information retrieval protocols. The advantages and limitations of each approach are considered, paying particular attention to a realistic assessment of Z39.50 implementations. Argues that the union catalogue is far from obsolete and that the two approaches should be considered complementary rather than competitive. Technologies that create links between the bibliographic apparatus of catalogues, abstracting and indexing databases, and primary content in electronic form, such as the new Serial Item and Contribution Identifier (SICI) standard, are also discussed as key elements in the infrastructure to support resource sharing.
    Footnote
    Article included in an issue devoted to the theme: resource sharing in a changing environment
    Type
    a
  6. Croft, W.B.: Combining approaches to information retrieval (2000) 0.00
    Score: 0.0011642005 (_text_:a in doc 6862: freq=6.0, fieldNorm=0.046875)
    
    Abstract
    The combination of different text representations and search strategies has become a standard technique for improving the effectiveness of information retrieval. Combination, for example, has been studied extensively in the TREC evaluations and is the basis of the "meta-search" engines used on the Web. This paper examines the development of this technique, including both experimental results and the retrieval models that have been proposed as formal frameworks for combination. We show that combining approaches for information retrieval can be modeled as combining the outputs of multiple classifiers based on one or more representations, and that this simple model can provide explanations for many of the experimental results. We also show that this view of combination is very similar to the inference net model, and that a new approach to retrieval based on language models supports combination and can be integrated with the inference net model
    Type
    a
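Croft's point that combination can be modelled as merging the outputs of several rankers can be illustrated with a CombSUM-style fusion, summing normalised scores per document. This particular heuristic is a common textbook choice and is not necessarily one of the models analysed in the paper; the runs below are invented.

```python
from collections import defaultdict

def normalise(run):
    """Min-max normalise one system's scores into [0, 1]."""
    lo, hi = min(run.values()), max(run.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in run.items()}

def combsum(runs):
    """Fuse several {doc_id: score} rankings by summing normalised scores."""
    fused = defaultdict(float)
    for run in runs:
        for doc, score in normalise(run).items():
            fused[doc] += score
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Two hypothetical retrieval runs over the same collection.
run_a = {"d1": 12.0, "d2": 7.5, "d3": 3.1}
run_b = {"d2": 0.9, "d3": 0.8, "d4": 0.2}
print(combsum([run_a, run_b]))
```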
  7. Zia, L.L.: ¬The NSF National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) Program : new projects from fiscal year 2004 (2005) 0.00
    Score: 0.0011642005 (_text_:a in doc 1221: freq=24.0, fieldNorm=0.0234375)
    
    Abstract
    In fall 2004, the National Science Foundation's (NSF) National Science, Technology, Engineering, and Mathematics Education Digital Library (NSDL) program made new grants in three tracks: Pathways, Services, and Targeted Research. Together with projects started in fiscal years (FY) 2000-03 these new grants continue the development of a national digital library of high quality educational resources to support learning at all levels in science, technology, engineering, and mathematics (STEM). By enabling broad access to reliable and authoritative learning and teaching materials and associated services in a digital environment, the National Science Digital Library expects to promote continual improvements in the quality of formal STEM education, and also to serve as a resource for informal and lifelong learning. Proposals for the FY05 funding cycle are due April 11, 2005, and the full solicitation is available at <http://www.nsf.gov/publications/pub_summ.jsp?ods_key=nsf05545>. Two NSF directorates, the Directorate for Geosciences (GEO) and the Directorate for Mathematical and Physical Sciences (MPS) have both provided significant co-funding for over twenty projects in the first four years of the program, illustrating the NSDL program's facilitation of the integration of research and education, an important strategic objective of the NSF. In FY2004, the NSDL program introduced a new Pathways track, replacing the earlier Collections track. The Services track strongly encouraged two particular types of projects: (1) selection services and (2) usage development workshops. * Pathways projects provide stewardship for educational content and services needed by a broad community of learners; * Selection services projects identify and increase the high-quality STEM educational content known to NSDL; and * Usage development workshops engage new communities of learners in the use of NSDL and its resources.
    These three elements reflect a refinement of NSDL's initial emphasis on collecting educational resources, materials, and other digital learning objects, towards enabling learners to "connect" or otherwise find pathways to resources appropriate to their needs. Projects are also developing both the capacities of individual users and the capacity of larger communities of learners to use and contribute to NSDL. For the FY2004 funding cycle, one hundred forty-four proposals sought approximately $126.5 million in total funding. Twenty-four new awards were made with a cumulative budget of approximately $10.2 million. These include four in the Pathways track, twelve in the Services track, and eight in the Targeted Research track. As in the earlier years of the program, sister directorates to the NSF Directorate for Education and Human Resources (EHR) are providing significant co-funding of projects. Participating directorates for FY2004 are GEO and MPS. Within EHR, the Advanced Technological Education program and the Experimental Program to Stimulate Competitive Research are also co-funding projects. Complete information on the technical and organizational progress of NSDL including links to current Standing Committees and community workspaces may be found at <http://nsdl.org/community/nsdlgroups.php>. All workspaces are open to the public, and interested organizations and individuals are encouraged to learn more about NSDL and join in its development. Following is a list of the new FY04 awards displaying the official NSF award number, the project title, the grantee institution, and the name of the Principal Investigator (PI). A condensed description of the project is also included. Full abstracts are available from the NSDL program site (under Related URLs, see the link to Abstracts of Recent Awards Made Through This Program). The projects are displayed by track and are listed by award number. In addition, seven of these projects have explicit relevance to applications to pre-K to 12 education (indicated with a * below). Four others have clear potential for application to the pre-K to 12 arena (indicated with a ** below).
    Type
    a
  8. Park, S.: Usability, user preferences, effectiveness, and user behaviors when searching individual and integrated full-text databases : implications for digital libraries (2000) 0.00
    Score: 0.0011202524 (_text_:a in doc 4591: freq=8.0, fieldNorm=0.0390625)
    
    Abstract
    This article addresses a crucial issue in the digital library environment: how to support effective interaction of users with heterogeneous and distributed information resources. In particular, this study compared usability, user preference, effectiveness, and searching behaviors in systems that implement interaction with multiple databases as if they were one (integrated interaction) in an experiment in the TREC environment. Twenty-eight volunteers were recruited from the graduate students of the School of Communication, Information & Library Studies at Rutgers University. Significantly more subjects preferred the common interface to the integrated interface, mainly because they could have more control over database selection. Subjects were also more satisfied with the results from the common interface, and performed better with the common interface than with the integrated interface. Overall, it appears that for this population, interacting with databases through a common interface is preferable on all grounds to interacting with databases through an integrated interface. These results suggest that: (1) the general assumption of the information retrieval (IR) literature that an integrated interaction is best needs to be revisited; (2) it is important to allow for more user control in the distributed environment; (3) for digital library purposes, it is important to characterize different databases to support user choice for integration; and (4) certain users prefer control over database selection while still opting for results to be merged.
    Type
    a
  9. Kuberek, M.: ¬Der Kooperative Bibliotheksverbund Berlin-Brandenburg (KOBV) : Ein innovatives Verbundkonzept für die Region (2000) 0.00
    Score: 0.0011202524 (_text_:a in doc 5473: freq=2.0, fieldNorm=0.078125)
    
    Type
    a
  10. Holm, L.A.: ONE project : results and experiences (1999) 0.00
    Score: 0.0011202524 (_text_:a in doc 6460: freq=2.0, fieldNorm=0.078125)
    
    Type
    a
  11. Krause, J.: Sacherschließung in virtuellen Bibliotheken : Standardisierung versus Heterogenität (2000) 0.00
    Score: 0.0011202524 (_text_:a in doc 6070: freq=2.0, fieldNorm=0.078125)
    
    Type
    a
  12. Laegreid, J.A.: ¬The Nordic SR-net project : implementation of the SR/Z39.50 standards in the Nordic countries (1994) 0.00
    Score: 0.0011089934 (_text_:a in doc 3196: freq=4.0, fieldNorm=0.0546875)
    
    Source
    Resource sharing: new technologies as a must for Universal Availability of Information. Proceedings of the 16th International Essen Symposium, 18-21 Oct 1993. Ed.: A.H. Helal u. J.W. Weiss
    Type
    a
  13. Krause, J.: Heterogenität und Integration : Zur Weiterentwicklung von Inhaltserschließung und Retrieval in sich veränderten Kontexten (2001) 0.00
    Score: 0.0009701671 (_text_:a in doc 6071: freq=6.0, fieldNorm=0.0390625)
    
    Abstract
    As an important support tool in science research, specialized information systems are rapidly changing their character. The potential for improvement compared with today's usual systems is enormous. This fact will be demonstrated by means of two problem complexes: - WWW search engines, which were developed without any government grants, are increasingly dominating the scene. Does the WWW displace information centers with their high quality databases? What are the results we can get nowadays using general WWW search engines? - In addition to the WWW and specialized databases, scientists now use WWW library catalogues of digital libraries, which combine the catalogues from an entire region or a country. At the same time, however, they are faced with highly decentralized heterogeneous databases which contain the widest range of textual sources and data, e.g. from surveys. One consequence is the presence of serious inconsistencies in quality, relevance and content analysis. Thus, the main problem to be solved is as follows: users must be supplied with heterogeneous data from different sources, modalities and content development processes via a single visual user interface, without inconsistencies in content analysis seriously impairing the quality of the search results, e.g. when they phrase their search inquiry in the terminology to which they are accustomed.
    Type
    a
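One practical response to the heterogeneity Krause describes is a cross-concordance that maps the user's accustomed vocabulary onto the indexing vocabulary of each individual database before the query is distributed. The mapping table and database names below are invented for illustration only.

```python
# Hypothetical cross-concordance: user vocabulary -> per-database descriptors.
CROSSWALK = {
    "digital libraries": {
        "library_db": ["Digitale Bibliothek"],
        "cs_db":      ["electronic libraries", "digital library systems"],
    },
    "metadata": {
        "library_db": ["Metadaten"],
        "cs_db":      ["metadata", "resource description"],
    },
}

def expand_query(term, database):
    """Translate a user term into the target database's own descriptors."""
    mapped = CROSSWALK.get(term.lower(), {}).get(database)
    return mapped if mapped else [term]   # fall back to the literal term

print(expand_query("Digital Libraries", "library_db"))
print(expand_query("ontologies", "cs_db"))   # unmapped terms pass through
```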
  14. Vikor, D.L.; Gaumond, G.; Heath, F.M.: Building electronic cooperation in the 1990s : the Maryland, Georgia, and Texas experiences (1997) 0.00
    Score: 0.00095056574 (_text_:a in doc 1680: freq=4.0, fieldNorm=0.046875)
    
    Abstract
    During the 1990s statewide cooperative use of networks in the USA has moved towards providing mainly access to bibliographic and full-text resources not held locally and usually provided by commercial vendors for use by libraries. Describes 3 academic library networks: the University System of Maryland's Library Information Management System serving the information needs of users throughout the state; Georgia's GALILEO (Georgia Library Learning On-Line) which provides a set of electronic resources and services for the 34 colleges and universities of the University System of Georgia; and TexShare in which all 52 libraries from the public educational institutions in Texas participate. Although the development of funding sources, the technical implementations and support, and the management organization differ from state to state, all three reflect an incremental shift towards the electronic library
    Type
    a
  15. Hakala, J.: Z39.50-1995: information retrieval protocol : an introduction to the standard and it's usage (1996) 0.00
    Score: 0.00089620193 (_text_:a in doc 3340: freq=2.0, fieldNorm=0.0625)
    
    Abstract
    This article describes the Internet information retrieval protocol, Z39.50, and its usage. The services of Z39.50 are depicted, as are some important terms related to the standard. A description of the OPAC Network in Europe (ONE), an important Z39.50 implementation project, is included.
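A Z39.50 search of the kind Hakala introduces looks roughly like the following client sketch. It assumes the old third-party PyZ3950 package and a reachable Z39.50 target; the host, database name and record syntax are placeholders, not taken from the article.

```python
# Assumes the third-party PyZ3950 package (ZOOM-style API); host, port,
# database name and record syntax below are placeholders.
from PyZ3950 import zoom

conn = zoom.Connection("z3950.example.org", 210)   # 210 is the usual Z39.50 port
conn.databaseName = "Default"
conn.preferredRecordSyntax = "USMARC"

query = zoom.Query("CCL", 'ti="information retrieval"')
results = conn.search(query)

for i in range(min(5, len(results))):              # show the first few hits
    print(results[i])

conn.close()
```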
  16. Duda, L.E.; Rioux, M.A.: ¬One library, one bib record : two opacs, two systems (1998) 0.00
    Score: 0.00089620193 (_text_:a in doc 2229: freq=2.0, fieldNorm=0.0625)
    
    Type
    a
  17. Strötgen, R.; Kokkelink, S.: Metadatenextraktion aus Internetquellen : Heterogenitätsbehandlung im Projekt CARMEN (2001) 0.00
    Score: 0.00079213816 (_text_:a in doc 5808: freq=4.0, fieldNorm=0.0390625)
    
    Abstract
    The special funding measure CARMEN (Content Analysis, Retrieval and Metadata: Effective Networking), part of the GLOBAL INFO programme funded by the BMB+F, aims to create suitable information systems for the distributed holdings of libraries, specialised information centres and the Internet in today's decentralised information world. Bringing these holdings together is problematic less in technical than in content-related and conceptual terms. Heterogeneity arises, for example, when different collections use different thesauri or classifications for subject indexing, when metadata are recorded differently or not at all, or when intellectually curated sources meet Internet documents that are, as a rule, completely unindexed. The CARMEN project tackles this problem with several methods: deductive-heuristic procedures generate metadata automatically from documents; statistical-quantitative methods map the differing uses of terms in the various collections onto one another; and intellectually created cross-concordances provide reliable transitions from one documentation language to another. For the extraction of metadata according to Dublin Core (above all author, title, institution, abstract, keywords), heuristics are developed on the basis of typical documents (dissertations from Math-Net in PostScript format and a wide variety of HTML files from the WWW servers of German social science institutions). The probability that the metadata obtained in this way are correct and trustworthy is attached to the individual data as weights. The heuristics are implemented iteratively in an extraction tool, tested and improved in order to increase the reliability of the procedures. First prototypes of such transfer modules are currently being built at the Universität Osnabrück and at the InformationsZentrum Sozialwissenschaften Bonn on the basis of mathematical and social science collections.
    Type
    a
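The weighted, heuristic extraction of Dublin Core fields described in the CARMEN abstract can be sketched as follows: each rule returns a candidate value together with a confidence weight, so downstream components know how trustworthy the automatically generated metadata are. The rules and weights below are invented placeholders, not the project's actual heuristics.

```python
import re

def extract_title(html):
    """Return (title, confidence) using simple, ordered heuristics."""
    m = re.search(r"<meta\s+name=[\"']DC\.Title[\"']\s+content=[\"'](.*?)[\"']",
                  html, re.I)
    if m:
        return m.group(1), 0.9          # explicit Dublin Core tag: high trust
    m = re.search(r"<title>(.*?)</title>", html, re.I | re.S)
    if m:
        return m.group(1).strip(), 0.6  # <title> often, but not always, fits
    m = re.search(r"<h1[^>]*>(.*?)</h1>", html, re.I | re.S)
    if m:
        return re.sub(r"<[^>]+>", "", m.group(1)).strip(), 0.4
    return None, 0.0

page = "<html><head><title>Heterogeneity in digital libraries</title></head></html>"
print(extract_title(page))   # ('Heterogeneity in digital libraries', 0.6)
```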

Languages

  • e 59
  • d 36
  • f 1

Types

  • a 92
  • el 10
  • m 2
  • r 1
  • s 1
  • x 1
