Search (121 results, page 1 of 7)

  • theme_ss:"Information Gateway"
  1. Kirriemuir, J.; Brickley, D.; Welsh, S.; Knight, J.; Hamilton, M.: Cross-searching subject gateways : the query routing and forward knowledge approach (1998) 0.08
    0.07624563 = product of:
      0.12707604 = sum of:
        0.040348392 = weight(_text_:context in 1252) [ClassicSimilarity], result of:
          0.040348392 = score(doc=1252,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 1252, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1252)
        0.06342807 = weight(_text_:index in 1252) [ClassicSimilarity], result of:
          0.06342807 = score(doc=1252,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.3413878 = fieldWeight in 1252, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1252)
        0.023299592 = weight(_text_:system in 1252) [ClassicSimilarity], result of:
          0.023299592 = score(doc=1252,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 1252, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1252)
      0.6 = coord(3/5)
    
    Abstract
    A subject gateway, in the context of network-based resource access, can be defined as some facility that allows easier access to network-based resources in a defined subject area. The simplest types of subject gateways are sets of Web pages containing lists of links to resources. Some gateways index their lists of links and provide a simple search facility. More advanced gateways offer a much enhanced service via a system consisting of a resource database and various indexes, which can be searched and/or browsed through a Web-based interface. Each entry in the database contains information about a network-based resource, such as a Web page, Web site, mailing list or document. Entries are usually created by a cataloguer manually identifying a suitable resource, describing the resource using a template, and submitting the template to the database for indexing. Subject gateways are also known as subject-based information gateways (SBIGs), subject-based gateways, subject index gateways, virtual libraries, clearing houses, subject trees, pathfinders and other variations thereof. This paper describes the characteristics of some of the subject gateways currently accessible through the Web, and compares them to automatic "vacuum cleaner" type search engines, such as AltaVista. The application of WHOIS++, centroids, query routing, and forward knowledge to searching several of these subject gateways simultaneously is outlined. The paper concludes with looking at some of the issues facing subject gateway development in the near future. The paper touches on many of the issues mentioned in a previous paper in D-Lib Magazine, especially regarding resource-discovery related initiatives and services.
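The relevance figure shown for each hit is Lucene's ClassicSimilarity (TF-IDF) explanation: every matching term contributes queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm (with tf = √termFreq), and the summed term contributions are scaled by the coordination factor coord(matching terms / query terms). The following minimal Python sketch (not Lucene's API; function and variable names are illustrative) reproduces the numbers of the first hit from the docFreq, termFreq, queryNorm and fieldNorm values printed above:

```python
import math

def term_score(freq, doc_freq, max_docs, query_norm, field_norm):
    """Recompute one weight(_text_:term) contribution from the explain output."""
    idf = 1.0 + math.log(max_docs / (doc_freq + 1.0))  # e.g. idf(docFreq=1904, maxDocs=44218) = 4.14465
    tf = math.sqrt(freq)                               # tf(freq=2.0) = 1.4142135
    query_weight = idf * query_norm                    # "queryWeight" line
    field_weight = tf * idf * field_norm               # "fieldWeight" line
    return query_weight * field_weight

QUERY_NORM = 0.04251826   # queryNorm, shared by all terms of the query
FIELD_NORM = 0.0390625    # fieldNorm(doc=1252)

terms = [                 # (termFreq in doc 1252, docFreq, maxDocs)
    (2.0, 1904, 44218),   # _text_:context
    (4.0, 1520, 44218),   # _text_:index
    (2.0, 5152, 44218),   # _text_:system
]

summed = sum(term_score(*t, QUERY_NORM, FIELD_NORM) for t in terms)
score = summed * 3 / 5    # coord(3/5): three of the five query terms matched
# Prints roughly 0.12707604 and 0.07624563; tiny deviations in the last digits
# come from recomputing idf instead of reusing the rounded values printed above.
print(round(summed, 8), round(score, 8))
```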
  2. Blandford, A.; Adams, A.; Attfield, S.; Buchanan, G.; Gow, J.; Makri, S.; Rimmer, J.; Warwick, C.: The PRET A Rapporter framework : evaluating digital libraries from the perspective of information work (2008) 0.04
    0.038738146 = product of:
      0.096845366 = sum of:
        0.04841807 = weight(_text_:context in 2021) [ClassicSimilarity], result of:
          0.04841807 = score(doc=2021,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.27475408 = fieldWeight in 2021, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
        0.048427295 = weight(_text_:system in 2021) [ClassicSimilarity], result of:
          0.048427295 = score(doc=2021,freq=6.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.36163113 = fieldWeight in 2021, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2021)
      0.4 = coord(2/5)
    
    Abstract
    The strongest tradition of IR systems evaluation has focused on system effectiveness; more recently, there has been a growing interest in evaluation of Interactive IR systems, balancing system and user-oriented evaluation criteria. In this paper we shift the focus to considering how IR systems, and particularly digital libraries, can be evaluated to assess (and improve) their fit with users' broader work activities. Taking this focus, we answer a different set of evaluation questions that reveal more about the design of interfaces, user-system interactions and how systems may be deployed in the information working context. The planning and conduct of such evaluation studies share some features with the established methods for conducting IR evaluation studies, but come with a shift in emphasis; for example, a greater range of ethical considerations may be pertinent. We present the PRET A Rapporter framework for structuring user-centred evaluation studies and illustrate its application to three evaluation studies of digital library systems.
  3. Zeeman, D.; Turner, G.: Resource discovery in the Government of Canada using the Dewey Decimal Classification (2006) 0.04
    0.035642873 = product of:
      0.089107186 = sum of:
        0.05648775 = weight(_text_:context in 5782) [ClassicSimilarity], result of:
          0.05648775 = score(doc=5782,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32054642 = fieldWeight in 5782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5782)
        0.03261943 = weight(_text_:system in 5782) [ClassicSimilarity], result of:
          0.03261943 = score(doc=5782,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 5782, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5782)
      0.4 = coord(2/5)
    
    Footnote
     Contribution to a theme issue "Moving beyond the presentation layer: content and context in the Dewey Decimal Classification (DDC) System"
  4. Janée, G.; Frew, J.; Hill, L.L.: Issues in georeferenced digital libraries (2004) 0.04
    0.035642873 = product of:
      0.089107186 = sum of:
        0.05648775 = weight(_text_:context in 1165) [ClassicSimilarity], result of:
          0.05648775 = score(doc=1165,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32054642 = fieldWeight in 1165, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1165)
        0.03261943 = weight(_text_:system in 1165) [ClassicSimilarity], result of:
          0.03261943 = score(doc=1165,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.2435858 = fieldWeight in 1165, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1165)
      0.4 = coord(2/5)
    
    Abstract
    Based on a decade's experience with the Alexandria Digital Library Project, seven issues are presented that arise in creating georeferenced digital libraries, and that appear to be intrinsic to the problem of creating any library-like information system that operates on georeferenced and geospatial resources. The first and foremost issue is providing discovery of georeferenced resources. Related to discovery are the issues of gazetteer integration and specialized ranking of search results. Strong data typing and scalability are implementation issues. Providing spatial context is a critical user interface issue. Finally, sophisticated resource access mechanisms are necessary to operate on geospatial resources.
  5. Choi, Y.; Syn, S.Y.: Characteristics of tagging behavior in digitized humanities online collections (2016) 0.03
    0.026664922 = product of:
      0.066662304 = sum of:
        0.057061244 = weight(_text_:context in 2891) [ClassicSimilarity], result of:
          0.057061244 = score(doc=2891,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32380077 = fieldWeight in 2891, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2891)
        0.009601062 = product of:
          0.028803186 = sum of:
            0.028803186 = weight(_text_:22 in 2891) [ClassicSimilarity], result of:
              0.028803186 = score(doc=2891,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19345059 = fieldWeight in 2891, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2891)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    The purpose of this study was to examine user tags that describe digitized archival collections in the field of humanities. A collection of 8,310 tags from a digital portal (Nineteenth-Century Electronic Scholarship, NINES) was analyzed to find out what attributes of primary historical resources users described with tags. Tags were categorized to identify which tags describe the content of the resource, the resource itself, and subjective aspects (e.g., usage or emotion). The study's findings revealed that over half were content-related; tags representing opinion, usage context, or self-reference, however, reflected only a small percentage. The study further found that terms related to genre or physical format of a resource were frequently used in describing primary archival resources. It was also learned that nontextual resources had lower numbers of content-related tags and higher numbers of document-related tags than textual resources and bibliographic materials; moreover, textual resources tended to have more user-context-related tags than other resources. These findings help explain users' tagging behavior and resource interpretation in primary resources in the humanities. Such information provided through tags helps information professionals decide to what extent indexing archival and cultural resources should be done for resource description and discovery, and understand users' terminology.
    Date
    21. 4.2016 11:23:22
  6. Tudhope, D.; Binding, C.; Blocks, D.; Cunliffe, D.: Compound descriptors in context : a matching function for classifications and thesauri (2002) 0.03
    0.025459195 = product of:
      0.063647985 = sum of:
        0.040348392 = weight(_text_:context in 3179) [ClassicSimilarity], result of:
          0.040348392 = score(doc=3179,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.22896172 = fieldWeight in 3179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3179)
        0.023299592 = weight(_text_:system in 3179) [ClassicSimilarity], result of:
          0.023299592 = score(doc=3179,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 3179, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3179)
      0.4 = coord(2/5)
    
    Abstract
     There are many advantages for Digital Libraries in indexing with classifications or thesauri, but some current disincentive in the lack of flexible retrieval tools that deal with compound descriptors. This paper discusses a matching function for compound descriptors, or multi-concept subject headings, that does not rely on exact matching but incorporates term expansion via thesaurus semantic relationships to produce ranked results that take account of missing and partially matching terms. The matching function is based on a measure of semantic closeness between terms, which has the potential to help with recall problems. The work reported is part of the ongoing FACET project in collaboration with the National Museum of Science and Industry and its collections database. The architecture of the prototype system and its interface are outlined. The matching problem for compound descriptors is reviewed and the FACET implementation described. Results are discussed from scenarios using the faceted Getty Art and Architecture Thesaurus. We argue that automatic traversal of thesaurus relationships can augment the user's browsing possibilities. The techniques can be applied both to unstructured multi-concept subject headings and potentially to more syntactically structured strings. The notion of a focus term is used by the matching function to model AAT modified descriptors (noun phrases). The relevance of the approach to precoordinated indexing and matching faceted strings is discussed.
  7. Woldering, B.: Aufbau einer virtuellen europäischen Nationalbibliothek : Von Gabriel zu The European Library (2004) 0.02
    0.023397211 = product of:
      0.058493026 = sum of:
        0.050742455 = weight(_text_:index in 4950) [ClassicSimilarity], result of:
          0.050742455 = score(doc=4950,freq=4.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.27311024 = fieldWeight in 4950, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.03125 = fieldNorm(doc=4950)
        0.0077505717 = product of:
          0.023251714 = sum of:
            0.023251714 = weight(_text_:29 in 4950) [ClassicSimilarity], result of:
              0.023251714 = score(doc=4950,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.15546128 = fieldWeight in 4950, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.03125 = fieldNorm(doc=4950)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
     In January 2004 the EU project "The European Library (TEL)" was successfully concluded: building a virtual European national library has proven to be feasible. Work on the TEL project began in February 2001 and concentrated on the following priorities: - investigating the possibilities of reaching agreements with publishers on the Europe-wide provision of electronic publications via the national libraries, - establishing a consensus among the participating partners on the intended joint service offering, as well as drawing up a business model acceptable to all for the development, management and financing of this service, - creating an agreed metadata model, open to further development, for the planned service, - developing and testing a technical environment that provides integrated access to the partners' data both via Z39.50 and via a central, XML-based index. The results of the TEL project are a business model, a metadata model and a technical solution for the integration of data that can be accessed either via Z39.50 or via a central, XML-based index. On the basis of these results, the TEL partners (the national libraries of Germany, Finland, Great Britain, Italy, the Netherlands, Portugal, Slovenia and Switzerland, together with the Istituto Centrale per il Catalogo Unico delle Biblioteche Italiane e per le Informazioni Bibliografiche, ICCU) decided, after the end of the project phase, to build up TEL as a free web service of the European national libraries. All project partners agreed to take part in its implementation and start-up funding. In the long term, the participation of all national libraries represented in the Conference of European National Librarians (CENL) is planned. The TEL project grew out of the idea of further developing Gabriel, the web service of the CENL libraries. In 1994 the directors of the European national libraries decided to set up a joint online forum in order to allow a faster and simpler exchange about new developments and activities in European libraries. The idea was soon extended, so that not only a forum for all CENL members but also an information service about CENL, its member libraries and their online services was planned as a "single point of access". The national libraries of Germany, Finland, France, Great Britain and the Netherlands took over the development of a prototype web service, which they called "Gabriel - Gateway and Bridge to Europe's National Libraries". Gabriel provides information on three levels: at the European level on cooperation projects and international events, at the national level descriptions of the libraries, their functions and their collections, and finally at the individual level the services offered by the individual libraries.
    Date
    15. 2.2006 11:25:29
  8. Hjoerland, B.: The methodology of constructing classification schemes : a discussion of the state-of-the-art (2003) 0.02
    0.021808002 = product of:
      0.054520003 = sum of:
        0.03588033 = weight(_text_:index in 2760) [ClassicSimilarity], result of:
          0.03588033 = score(doc=2760,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.1931181 = fieldWeight in 2760, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.03125 = fieldNorm(doc=2760)
        0.018639674 = weight(_text_:system in 2760) [ClassicSimilarity], result of:
          0.018639674 = score(doc=2760,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.13919188 = fieldWeight in 2760, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=2760)
      0.4 = coord(2/5)
    
    Abstract
     Special classifications have been somewhat neglected in KO compared to general classifications. The methodology of constructing special classifications is important, however, also for the methodology of constructing general classification schemes. The methodology of constructing special classifications can be regarded as one among about a dozen approaches to domain analysis. The methodology of (special) classification in LIS has been dominated by the rationalistic facet-analytic tradition, which, however, neglects the question of the empirical basis of classification. The empirical basis is much better grasped by, for example, bibliometric methods. Even the combination of rational and empirical methods is insufficient. This presentation will provide evidence for the necessity of historical and pragmatic methods for the methodology of classification and will point to the necessity of analyzing "paradigms". The presentation covers the methods of constructing classifications from Ranganathan to the design of ontologies in computer science and further to the recent "paradigm shift" in classification research. 1. Introduction Classification of a subject field is one among about eleven approaches to analyzing a domain that are specific for information science and in my opinion define the special competencies of information specialists (Hjoerland, 2002a). Classification and knowledge organization are commonly regarded as core qualifications of librarians and information specialists. Seen from this perspective one expects a firm methodological basis for the field. This paper tries to explore the state-of-the-art concerning the methodology of classification. 2. Classification: Science or non-science? As it is part of the curriculum at universities and a subject in scientific journals and conferences like ISKO, one expects classification/knowledge organization to be a scientific or scholarly activity and a scientific field. However, very often when information specialists classify or index documents and when they revise classification systems, the methods seem to be rather ad hoc. Research libraries or scientific databases may employ people with adequate subject knowledge. When information scientists construct or evaluate systems, they very often elicit the knowledge from "experts" (Hjoerland, 2002b, p. 260). Mostly no specific arguments are provided for the specific decisions in these processes.
  9. Ohly, H.P.: The organization of Internet links in a social science clearing house (2004) 0.02
    0.020466631 = product of:
      0.05116658 = sum of:
        0.03954072 = weight(_text_:system in 2641) [ClassicSimilarity], result of:
          0.03954072 = score(doc=2641,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.29527056 = fieldWeight in 2641, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=2641)
        0.011625858 = product of:
          0.034877572 = sum of:
            0.034877572 = weight(_text_:29 in 2641) [ClassicSimilarity], result of:
              0.034877572 = score(doc=2641,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 2641, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2641)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    The German Internet Clearinghouse SocioGuide has changed to a database management system. Accordingly the metadata description scheme has become more detailed. The main information types are: institutions, persons, literature, tools, data sets, objects, topics, processes and services. Some of the description elements, such as title, resource identifier, and creator are universal, whereas others, such as primary/secondary information, and availability are specific to information type and cannot be generalized by referring to Dublin Core elements. The quality of Internet sources is indicated implicitly by characteristics, such as extent, restriction, or status. The SocioGuide is managed in DBClear, a generic system that can be adapted to different source types. It makes distributed input possible and contains workflow components.
    Date
    29. 8.2004 10:51:14
  10. Hellweg, H.; Hermes, B.; Stempfhuber, M.; Enderle, W.; Fischer, T.: DBClear : a generic system for clearinghouses (2002) 0.02
    0.020466631 = product of:
      0.05116658 = sum of:
        0.03954072 = weight(_text_:system in 3605) [ClassicSimilarity], result of:
          0.03954072 = score(doc=3605,freq=4.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.29527056 = fieldWeight in 3605, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3605)
        0.011625858 = product of:
          0.034877572 = sum of:
            0.034877572 = weight(_text_:29 in 3605) [ClassicSimilarity], result of:
              0.034877572 = score(doc=3605,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 3605, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3605)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
     Clearinghouses - or subject gateways - are domain-specific collections of links to resources on the Internet. The links are described with metadata and structured according to a domain-specific subject hierarchy. Users access the information by searching in the metadata or by browsing the subject hierarchy. The standards for metadata vary across existing Clearinghouses and different technologies for storing and accessing the metadata are used. This makes it difficult to distribute the editorial or administrative work involved in maintaining a clearinghouse, or to exchange information with other systems. DBClear is a generic, platform-independent clearinghouse system, whose metadata schema can be adapted to different standards. The data is stored in a relational database. It includes a workflow component to support distributed maintenance and automation modules for link checking and metadata extraction. The presentation of the clearinghouse on the Web can be modified to allow seamless integration into existing web sites.
    Source
     Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29 - 31, 2002. Eds: W. Adamczak and A. Nase
  11. Lim, E.: Southeast Asian subject gateways : an examination of their classification practices (2000) 0.02
    0.018517707 = product of:
      0.09258853 = sum of:
        0.09258853 = product of:
          0.13888279 = sum of:
            0.069755144 = weight(_text_:29 in 6040) [ClassicSimilarity], result of:
              0.069755144 = score(doc=6040,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.46638384 = fieldWeight in 6040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6040)
            0.06912764 = weight(_text_:22 in 6040) [ClassicSimilarity], result of:
              0.06912764 = score(doc=6040,freq=2.0), product of:
                0.1488917 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04251826 = queryNorm
                0.46428138 = fieldWeight in 6040, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6040)
          0.6666667 = coord(2/3)
      0.2 = coord(1/5)
    
    Date
    22. 6.2002 19:42:47
    Source
    International cataloguing and bibliographic control. 29(2000) no.3, S.45-48
  12. Stoklasova, B.; Balikova, M.; Celbová, L.: Relationship between subject gateways and national bibliographies in international context (English version) (2003) 0.02
    0.016139356 = product of:
      0.080696784 = sum of:
        0.080696784 = weight(_text_:context in 1938) [ClassicSimilarity], result of:
          0.080696784 = score(doc=1938,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.45792344 = fieldWeight in 1938, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.078125 = fieldNorm(doc=1938)
      0.2 = coord(1/5)
    
  13. Dühlmeyer, K.; Maier, S.; Rüter, C.: Neue Informationsdienste für die Ethnologie : Das Sondersammelgebiet Volks- und Völkerkunde (2005) 0.02
    0.015834149 = product of:
      0.03958537 = sum of:
        0.027959513 = weight(_text_:system in 3866) [ClassicSimilarity], result of:
          0.027959513 = score(doc=3866,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.20878783 = fieldWeight in 3866, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3866)
        0.011625858 = product of:
          0.034877572 = sum of:
            0.034877572 = weight(_text_:29 in 3866) [ClassicSimilarity], result of:
              0.034877572 = score(doc=3866,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.23319192 = fieldWeight in 3866, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=3866)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
     New demands on the special subject collection libraries: two developments have massively changed the work of the special subject collection (Sondersammelgebiet) libraries in recent years. On the one hand, the "electronic revolution" brought not only new media formats into the libraries but also changed working practices, services and user expectations. On the other hand, since the publication of the 1998 memorandum on supra-regional literature provision, the special subject collection libraries have been confronted with demands that go far beyond their previous mandate. These demands, which were subsequently extended further and are to be developed continuously, include above all - an expanded collecting and indexing mandate through the inclusion of digital documents and new forms of subject access, - the creation of digital documents to improve provision for supra-regional users, - a sharper profile and stronger orientation towards target groups, - the further development of services. In 1998 the University Library of the Humboldt-Universität zu Berlin (UB der HU Berlin) took over responsibility for the special subject collections (SSG) 7,13: "Allgemeine und vergleichende Volkskunde", 10: "Allgemeine und vergleichende Völkerkunde" and 24,2: "Hochschulwesen. Organisation der Wissenschaften und ihrer Einrichtungen". It is thus one of the libraries in the new federal states, named in the first part of the memorandum, that were newly incorporated into the SSG system of supra-regional literature provision.
    Date
    29. 5.2007 11:38:46
  14. Shiri, A.; Molberg, K.: Interfaces to knowledge organization systems in Canadian digital library collections (2005) 0.01
    0.013195123 = product of:
      0.032987807 = sum of:
        0.023299592 = weight(_text_:system in 2559) [ClassicSimilarity], result of:
          0.023299592 = score(doc=2559,freq=2.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.17398985 = fieldWeight in 2559, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2559)
        0.009688215 = product of:
          0.029064644 = sum of:
            0.029064644 = weight(_text_:29 in 2559) [ClassicSimilarity], result of:
              0.029064644 = score(doc=2559,freq=2.0), product of:
                0.14956595 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.04251826 = queryNorm
                0.19432661 = fieldWeight in 2559, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2559)
          0.33333334 = coord(1/3)
      0.4 = coord(2/5)
    
    Abstract
    Purpose - The purpose of this paper is to report an investigation into the ways in which Canadian digital library collections have incorporated knowledge organization systems into their search interfaces. Design/methodology/approach - A combination of data-gathering techniques was used. These were as follows: a review of the literature related to the application of knowledge organization systems, deep scanning of Canadian governmental and academic institutions web sites on the web, identify and contact researchers in the area of knowledge organization, and identify and contact people in the governmental organizations who are involved in knowledge organization and information management. Findings - A total of 33 digital collections were identified that have made use of some type of knowledge organization system. Thesauri, subject heading lists and classification schemes were the widely used knowledge organization systems in the surveyed Canadian digital library collections. Research limitations/implications - The target population for this research was limited to governmental and academic digital library collections. Practical implications - An evaluation of the knowledge organization systems interfaces showed that searching, browsing and navigation facilities as well as bilingual features call for improvements. Originality/value - This research contributes to the following areas: digital libraries, knowledge organization systems and services and search interface design.
    Source
    Online information review. 29(2005) no.6, S.604-620
  15. Buchanan, S.; Salako, A.: Evaluating the usability and usefulness of a digital library (2009) 0.01
    0.011788766 = product of:
      0.05894383 = sum of:
        0.05894383 = weight(_text_:system in 3632) [ClassicSimilarity], result of:
          0.05894383 = score(doc=3632,freq=20.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.4401634 = fieldWeight in 3632, product of:
              4.472136 = tf(freq=20.0), with freq of:
                20.0 = termFreq=20.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.03125 = fieldNorm(doc=3632)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - System usability and system usefulness are interdependent properties of system interaction, which in combination, determine system satisfaction and usage. Often approached separately, or in the case of digital libraries, often focused upon usability, there is emerging consensus among the research community for their unified treatment and research attention. However, a key challenge is to identify, both respectively and relatively, what to measure and how, compounded by concerns regarding common understanding of usability measures, and associated calls for more valid and complete measures within integrated and comprehensive models. The purpose of this paper is to address this challenge. Design/methodology/approach - Identified key usability and usefulness attributes and associated measures, compiled an integrated measurement framework, identified a suitable methodological approach for application of the framework, and conducted a pilot study on an interactive search system developed by a Health Service as part of their e-library service. Findings - Effectiveness, efficiency, aesthetic appearance, terminology, navigation, and learnability are key attributes of system usability; and relevance, reliability, and currency key attributes of system usefulness. There are shared aspects to several of these attributes, but each is also sufficiently unique to preserve its respective validity. They can be combined as part of a multi-method approach to system evaluation. Research limitations/implications - Pilot study has demonstrated that usability and usefulness can be readily combined, and that questionnaire and observation are valid multi-method approaches, but further research is called for under a variety of conditions, with further combinations of methods, and larger samples. Originality/value - This paper provides an integrated measurement framework, derived from the goal, question, metric paradigm, which provides a relatively comprehensive and representative set of system usability and system usefulness attributes and associated measures, which could be adapted and further refined on a case-by-case basis.
  16. Prasad, A.R.D.; Madalli, D.P.: Faceted infrastructure for semantic digital libraries (2008) 0.01
    0.011412249 = product of:
      0.057061244 = sum of:
        0.057061244 = weight(_text_:context in 1905) [ClassicSimilarity], result of:
          0.057061244 = score(doc=1905,freq=4.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32380077 = fieldWeight in 1905, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0390625 = fieldNorm(doc=1905)
      0.2 = coord(1/5)
    
    Abstract
    Purpose - The paper aims to argue that digital library retrieval should be based on semantic representations and propose a semantic infrastructure for digital libraries. Design/methodology/approach - The approach taken is formal model based on subject representation for digital libraries. Findings - Search engines and search techniques have fallen short of user expectations as they do not give context based retrieval. Deploying semantic web technologies would lead to efficient and more precise representation of digital library content and hence better retrieval. Though digital libraries often have metadata of information resources which can be accessed through OAI-PMH, much remains to be accomplished in making digital libraries semantic web compliant. This paper presents a semantic infrastructure for digital libraries, that will go a long way in providing them and web based information services with products highly customised to users needs. Research limitations/implications - Here only a model for semantic infrastructure is proposed. This model is proposed after studying current user-centric, top-down models adopted in digital library service architectures. Originality/value - This paper gives a generic model for building semantic infrastructure for digital libraries. Faceted ontologies for digital libraries is just one approach. But the same may be adopted by groups working with different approaches in building ontologies to realise efficient retrieval in digital libraries.
    Footnote
     Contribution to a theme issue "Digital libraries and the semantic web: context, applications and research".
  17. Hickey, T.R.: CORC : a system for gateway creation (2000) 0.01
    0.011299702 = product of:
      0.056498513 = sum of:
        0.056498513 = weight(_text_:system in 4870) [ClassicSimilarity], result of:
          0.056498513 = score(doc=4870,freq=6.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.42190298 = fieldWeight in 4870, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4870)
      0.2 = coord(1/5)
    
    Abstract
     CORC is an OCLC project that is developing tools and systems to enable libraries to provide enhanced access to Internet resources. By adapting and extending library techniques and procedures, we are developing a self-supporting system capable of describing a large and useful subset of the Web. CORC is more a system for hosting and supporting subject gateways than a gateway itself and relies on large-scale cooperation among libraries to maintain a centralized database. By supporting emerging metadata standards such as Dublin Core and other standards such as Unicode and RDF, CORC broadens the range of libraries and librarians able to participate. Current plans are for OCLC to launch CORC as a full service in July 2000.
  18. Cristán, A.L.: SACO and subject gateways (2004) 0.01
    0.01129755 = product of:
      0.05648775 = sum of:
        0.05648775 = weight(_text_:context in 5679) [ClassicSimilarity], result of:
          0.05648775 = score(doc=5679,freq=2.0), product of:
            0.17622331 = queryWeight, product of:
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.04251826 = queryNorm
            0.32054642 = fieldWeight in 5679, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.14465 = idf(docFreq=1904, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5679)
      0.2 = coord(1/5)
    
    Abstract
    This presentation attempts to fit the subject contribution mechanism used in the Program for Cooperative Cataloging's SACO Program into the context of subject gateways. The discussion points to several subject gateways and concludes that there is no similarity between the two. Subject gateways are a mechanism for facilitating searching, while the SACO Program is a cooperative venture that provides a "gateway" for the development of LCSH (Library of Congress Subject Heading list) into an international authority file for subject headings.
  19. Kruk, S.R.; Westerki, A.; Kruk, E.: Architecture of semantic digital libraries (2009) 0.01
    0.011183805 = product of:
      0.055919025 = sum of:
        0.055919025 = weight(_text_:system in 3379) [ClassicSimilarity], result of:
          0.055919025 = score(doc=3379,freq=8.0), product of:
            0.13391352 = queryWeight, product of:
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.04251826 = queryNorm
            0.41757566 = fieldWeight in 3379, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.1495528 = idf(docFreq=5152, maxDocs=44218)
              0.046875 = fieldNorm(doc=3379)
      0.2 = coord(1/5)
    
    Abstract
    The main motivation of this chapter was to gather existing requirements and solutions, and to present a generic architectural design of semantic digital libraries. This design is meant to answer a number of requirements, such as interoperability or ability to exchange resources and solutions, and set up the foundations for the best practices in the new domain of semantic digital libraries. We start by presenting the library from different high-level perspectives, i.e., user (see Sect. 2) and metadata (see Sect. 1) perspective; this overview narrows the scope and puts emphasis on certain aspects related to the system perspective, i.e., the architecture of the actual digital library management system. We conclude by presenting the system architecture from three perspectives: top-down layered architecture (see Sect. 3), vertical architecture of core services (see Sect. 4), and stack of enabling infrastructures (see Sect. 5); based upon the observations and evaluation of the contemporary state of the art presented in the previous sections, these last three subsections will describe an in-depth model of the digital library management system.
  20. Wolf, S.: Neuer Meilenstein für BASE : 90 Millionen Dokumente (2016) 0.01
    0.0107641 = product of:
      0.0538205 = sum of:
        0.0538205 = weight(_text_:index in 2872) [ClassicSimilarity], result of:
          0.0538205 = score(doc=2872,freq=2.0), product of:
            0.18579477 = queryWeight, product of:
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.04251826 = queryNorm
            0.28967714 = fieldWeight in 2872, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.369764 = idf(docFreq=1520, maxDocs=44218)
              0.046875 = fieldNorm(doc=2872)
      0.2 = coord(1/5)
    
    Abstract
     Since the beginning of April, BASE (https://www.base-search.net) has offered searching across more than 90 million documents whose metadata are provided by more than 4,200 document servers (repositories) of academic institutions worldwide. This makes BASE, after Google Scholar, the largest search engine for scholarly documents freely available on the Internet. For more than 30 million documents found in BASE we can indicate an open access status on the basis of information in the metadata; overall, we currently estimate the open access share at 60%. A boosting procedure gives preference to records of open access documents in the result display, and targeted searching that takes various licence and rights statements into account is likewise possible. The BASE index is available for reuse, via various interfaces, to numerous other commercial and non-commercial discovery systems, search engines, database providers, special libraries and developers. BASE thus contributes substantially to the use of content held on document servers. Further information: https://www.base-search.net/

Languages

  • e 78
  • d 43

Types

  • a 108
  • el 23
  • m 3
  • s 3
  • x 1