Search (537 results, page 1 of 27)

  • type_ss:"el"
  1. Arenas, M.; Cuenca Grau, B.; Kharlamov, E.; Marciuska, S.; Zheleznyakov, D.: Faceted search over ontology-enhanced RDF data (2014) 0.06
    Abstract
    An increasing number of applications rely on RDF, OWL2, and SPARQL for storing and querying data. SPARQL, however, is not targeted towards end-users, and suitable query interfaces are needed. Faceted search is a prominent approach for end-user data access, and several RDF-based faceted search systems have been developed. There is, however, a lack of rigorous theoretical underpinning for faceted search in the context of RDF and OWL2. In this paper, we provide such solid foundations. We formalise faceted interfaces for this context, identify a fragment of first-order logic capturing the underlying queries, and study the complexity of answering such queries for RDF and OWL2 profiles. We then study interface generation and update, and devise efficiently implementable algorithms. Finally, we have implemented and tested our faceted search algorithms for scalability, with encouraging results.
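The facet formalism the paper studies can be illustrated with a toy example (a hedged sketch, not the authors' formal definitions): a faceted interface state is a set of (property, value) selections, each state corresponds to a conjunctive query over the RDF triples, and the counts shown next to facet values come from the current result set. All data and names below are invented for illustration.

```python
# Toy faceted search over RDF-style triples (illustrative only; the paper
# gives a formal first-order-logic treatment this sketch does not reproduce).
from collections import Counter

# (subject, property, value) triples -- invented sample data
TRIPLES = [
    ("paper1", "type", "Article"), ("paper1", "topic", "RDF"),
    ("paper2", "type", "Article"), ("paper2", "topic", "OWL2"),
    ("paper3", "type", "Book"),    ("paper3", "topic", "RDF"),
]

def matches(subject, selections):
    """A subject matches when every selected (property, value) pair holds:
    the conjunctive query behind one faceted-interface state."""
    return all((subject, p, v) in TRIPLES for p, v in selections)

def search(selections):
    subjects = {s for s, _, _ in TRIPLES}
    return sorted(s for s in subjects if matches(s, selections))

def facet_counts(selections, prop):
    """For one facet, count how many current results carry each value --
    the numbers a faceted UI displays beside each value."""
    hits = set(search(selections))
    return Counter(v for s, p, v in TRIPLES if p == prop and s in hits)

print(search([("type", "Article")]))                 # ['paper1', 'paper2']
print(facet_counts([("type", "Article")], "topic"))
```

Refining the selection to `[("type", "Article"), ("topic", "RDF")]` narrows the result set to `['paper1']`, which is exactly the interface-update step the paper analyses for efficiency.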
  2. Degkwitz, A.: "Next Generation Library Systems (NGLS) in Germany" (ALMA,WMS) (2016) 0.06
    Abstract
    Conclusion: "But in the end of the day: All the synchronisation procedures, which have been considered, failed or are too sophisticated. The project recommended cataloging in the World Cat, what includes a number of conditions and prerequisites like interfaces, data formats, working procedures etc."
  3. Yee, M.M.: Guidelines for OPAC displays : prepared for the IFLA Task Force on Guidelines for OPAC Displays (1998) 0.06
    Abstract
    Several studies on OPACs have been made since the early 1980s. However, OPAC development has been governed by systems designers, bibliographic networks and technical services librarians, but not necessarily according to user needs. Existing OPACs demonstrate differences, for example, in the range and complexity of their functional features, terminology and help facilities. While many libraries have already established their own OPACs, there is a need to bring together in the form of guidelines or recommendations a corpus of good practice to assist libraries to design or re-design their OPACs.
    As mentioned above, the guidelines are intended to apply to all types of catalogue, including Web-based catalogues, GUI-based interfaces, and Z39.50-web interfaces. The focus of the guidelines is on the display of cataloguing information (as opposed to circulation, serials check-in, fund accounting, acquisitions, or bindery information). However, some general statements are made concerning the value of displaying to users information that is drawn from these other types of records. The guidelines do not attempt to cover HELP screens, searching methods, or command names and functions. Thus, the guidelines do not directly address the difference between menu-mode access (so common now in GUI and Web interfaces) vs. command-mode access (often completely unavailable in GUI and Web interfaces). However, note that in menu-mode access, the user often has to go through many more screens to attain results than in command-mode access, and each of these screens constitutes a display. The intent is to recommend a standard set of display defaults, defined as features that should be provided for users who have not selected other options, including users who want to begin searching right away without much instruction. It is not the intent to restrict the creativity of system designers who want to build in further options to offer to advanced users (beyond the defaults), advanced users being those people who are willing to put some time into learning how to use the system in more sophisticated and complex ways. The Task Force is aware of the fact that many existing systems are not capable of following all of the recommendations in this document. We hope that existing systems will attempt to work toward the implementation of the guidelines as they develop new versions of their software in the future.
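The "display defaults plus advanced options" principle described above can be sketched as a simple configuration merge (the settings below are hypothetical, not the Task Force's actual recommendations): the system ships defaults suitable for users who start searching right away, and an advanced user's explicit choices override them without touching the defaults for anyone else.

```python
# Sketch of "display defaults with user overrides" (settings are invented).
OPAC_DISPLAY_DEFAULTS = {
    "records_per_screen": 10,
    "sort_order": "relevance",
    "show_holdings": True,
    "label_fields": True,       # show field labels next to values
}

def effective_display(user_options=None):
    """Defaults apply unless the user has explicitly chosen otherwise."""
    settings = dict(OPAC_DISPLAY_DEFAULTS)   # never mutate the shared defaults
    settings.update(user_options or {})
    return settings

# A user who never opens the options gets the defaults...
print(effective_display())
# ...while an advanced user overrides only what they care about.
print(effective_display({"sort_order": "date", "records_per_screen": 50}))
```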
  4. Fagan, J.C.: Usability studies of faceted browsing : a literature review (2010) 0.05
    Abstract
    Faceted browsing is a common feature of new library catalog interfaces. But to what extent does it improve user performance in searching within today's library catalog systems? This article reviews the literature for user studies involving faceted browsing and user studies of "next-generation" library catalogs that incorporate faceted browsing. Both the results and the methods of these studies are analyzed by asking, What do we currently know about faceted browsing? How can we design better studies of faceted browsing in library catalogs? The article proposes methodological considerations for practicing librarians and provides examples of goals, tasks, and measurements for user studies of faceted browsing in library catalogs.
  5. Mimno, D.; Crane, G.; Jones, A.: Hierarchical catalog records : implementing a FRBR catalog (2005) 0.05
    Abstract
    IFLA's Functional Requirements for Bibliographic Records (FRBR) lay the foundation for a new generation of cataloging systems that recognize the difference between a particular work (e.g., Moby Dick), diverse expressions of that work (e.g., translations into German, Japanese and other languages), different versions of the same basic text (e.g., the Modern Library Classics vs. Penguin editions), and particular items (a copy of Moby Dick on the shelf). Much work has gone into finding ways to infer FRBR relationships between existing catalog records and modifying catalog interfaces to display those relationships. Relatively little work, however, has gone into exploring the creation of catalog records that are inherently based on the FRBR hierarchy of works, expressions, manifestations, and items. The Perseus Digital Library has created a new catalog that implements such a system for a small collection that includes many works with multiple versions. We have used this catalog to explore some of the implications of hierarchical catalog records for searching and browsing. Current online library catalog interfaces present many problems for searching. One commonly cited failure is the inability to find and collocate all versions of a distinct intellectual work that exist in a collection and the inability to take into account known variations in titles and personal names (Yee 2005). The IFLA Functional Requirements for Bibliographic Records (FRBR) attempts to address some of these failings by introducing the concept of multiple interrelated bibliographic entities (IFLA 1998). 
    In particular, relationships between abstract intellectual works and the various published instances of those works are divided into a four-level hierarchy of works (such as the Aeneid), expressions (Robert Fitzgerald's translation of the Aeneid), manifestations (a particular paperback edition of Robert Fitzgerald's translation of the Aeneid), and items (my copy of a particular paperback edition of Robert Fitzgerald's translation of the Aeneid). In this formulation, each level in the hierarchy "inherits" information from the preceding level. Much of the work on FRBRized catalogs so far has focused on organizing existing records that describe individual physical books. Relatively little work has gone into rethinking what information should be in catalog records, or how the records should relate to each other. It is clear, however, that a more "native" FRBR catalog would include separate records for works, expressions, manifestations, and items. In this way, all information about a work would be centralized in one record. Records for subsequent expressions of that work would add only the information specific to each expression: Samuel Butler's translation of the Iliad does not need to repeat the fact that the work was written by Homer. This approach has certain inherent advantages for collections with many versions of the same works: new publications can be cataloged more quickly, and records can be stored and updated more efficiently.
    Date
    26.12.2011 14:08:29
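The work/expression/manifestation/item hierarchy described in the abstract, with each level "inheriting" from its parent, can be sketched with linked records (a minimal illustration; the field names and the inheritance rule are assumptions, not the Perseus catalog's actual schema):

```python
# Minimal FRBR-style hierarchical records (illustrative schema, not Perseus's).
from dataclasses import dataclass

@dataclass
class Record:
    fields: dict
    parent: "Record | None" = None

    def full(self):
        """A record's complete description: its own fields layered over
        everything inherited from the levels above it."""
        inherited = self.parent.full() if self.parent else {}
        return {**inherited, **self.fields}

work = Record({"title": "Iliad", "author": "Homer"})
expression = Record({"language": "English", "translator": "Samuel Butler"},
                    parent=work)
manifestation = Record({"publisher": "ExamplePress", "year": 1898},
                       parent=expression)   # publisher/year invented

# The manifestation record stores only two fields of its own, yet describes
# all six: the translation need not repeat that the work is by Homer.
print(manifestation.full())
```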
  6. Reiner, U.: DDC-basierte Suche in heterogenen digitalen Bibliotheks- und Wissensbeständen (2005) 0.04
    Abstract
    While general subject searching was unified some time ago, a unified subject-specific search for end users is still largely lacking. As a member of the DFG-funded consortium DDC Deutsch, the GBV is establishing the prerequisites for thematic searching by means of DDC notations. The VZG project Colibri (COntext Generation and LInguistic Tools for Bibliographic Retrieval Interfaces) pursues the goal of assigning DDC notations to all title records of the union catalogue (GVK) and to the more than 20 million article titles of the Online Contents database that have not yet been intellectually classified with DDC, either on the basis of concordances to other subject indexing systems or automatically. To this end, a DDC base is built from GVK title records; both molecular and atomic DDC notations are part of this base. The state of research is discussed, in particular Songqiao Liu's dissertation on the automatic decomposition of DDC notations (Univ. of California, Los Angeles, Calif., 1993). Examples show that Liu's results can be reproduced. Furthermore, the state of the VZG Colibri work on modelling and concept formation, classification, and implementation is presented. Finally, it is shown how DDC notations can be used for the systematic exploration of heterogeneous digital library and knowledge holdings.
    Date
    19. 1.2006 19:15:29
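The idea behind decomposing a built ("molecular") DDC notation into known ("atomic") components can be illustrated with a toy greedy matcher against a small base of known notations. This is emphatically not Liu's published procedure nor the VZG's actual DDC base; both the base and the example notation below are invented.

```python
# Toy decomposition of a built DDC notation into known components by greedy
# longest-prefix matching against a tiny "DDC base" (all data invented; a
# rough illustration of the idea only, not Liu's algorithm).
DDC_BASE = {"330", "3305", "9", "943"}   # pretend: known atomic notations

def decompose(notation, base=DDC_BASE):
    """Split a notation into the longest known prefix plus remainder parts."""
    digits = notation.replace(".", "")
    parts = []
    while digits:
        for end in range(len(digits), 0, -1):   # try longest match first
            if digits[:end] in base:
                parts.append(digits[:end])
                digits = digits[end:]
                break
        else:
            return None   # no known component matches -> give up
    return parts

print(decompose("330.59"))   # ['3305', '9'] with the toy base above
```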
  7. Mäkelä, E.; Hyvönen, E.; Ruotsalo, T.: How to deal with massively heterogeneous cultural heritage data : lessons learned in CultureSampo (2012) 0.04
    Abstract
    This paper presents the CultureSampo system for publishing heterogeneous linked data as a service. Discussed are the problems of converting legacy data into linked data, as well as the challenge of making the massively heterogeneous yet interlinked cultural heritage content interoperable on a semantic level. Novel user interface concepts for then utilizing the content are also presented. In the approach described, the data is published not only for human use, but also as intelligent services for other computer systems that can then provide interfaces of their own for the linked data. As a concrete use case of using CultureSampo as a service, the BookSampo system for publishing Finnish fiction literature on the semantic web is presented.
  8. Fang, L.: ¬A developing search service : heterogeneous resources integration and retrieval system (2004) 0.04
    Abstract
    This article describes two approaches for searching heterogeneous resources, which are explained as they are used in two corresponding existing systems-RIRS (Resource Integration Retrieval System) and HRUSP (Heterogeneous Resource Union Search Platform). On analyzing the existing systems, a possible framework-the MUSP (Multimetadata-Based Union Search Platform) is presented. Libraries now face a dilemma. On one hand, libraries subscribe to many types of database retrieval systems that are produced by various providers. The libraries build their data and information systems independently. This results in highly heterogeneous and distributed systems at the technical level (e.g., different operating systems and user interfaces) and at the conceptual level (e.g., the same objects are named using different terms). On the other hand, end users want to access all these heterogeneous data via a union interface, without having to know the structure of each information system or the different retrieval methods used by the systems. Libraries must achieve a harmony between information providers and users. In order to bridge the gap between the service providers and the users, it would seem that all source databases would need to be rebuilt according to a uniform data structure and query language, but this seems impossible. Fortunately, however, libraries and information and technology providers are now making an effort to find a middle course that meets the requirements of both data providers and users. They are doing this through resource integration.
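The "union interface over heterogeneous systems" idea behind a platform like MUSP can be sketched as adapters that map each source's native record schema onto one shared metadata schema before searching (the schemas and field names here are invented for illustration, not the MUSP specification):

```python
# Sketch of a union search over heterogeneous sources: each adapter maps a
# source's native fields onto one shared schema (all schemas invented).
SOURCE_A = [{"ti": "Metadata basics", "au": "Lee"}]           # one native format
SOURCE_B = [{"title_main": "RDF stores", "creator": "Chen"}]  # a different one

def adapt_a(rec):
    return {"title": rec["ti"], "author": rec["au"]}

def adapt_b(rec):
    return {"title": rec["title_main"], "author": rec["creator"]}

def union_search(term):
    """One query against all sources, after normalising every record to the
    union schema -- the user never sees the source-specific structures."""
    unified = [adapt_a(r) for r in SOURCE_A] + [adapt_b(r) for r in SOURCE_B]
    return [r for r in unified if term.lower() in r["title"].lower()]

print(union_search("rdf"))   # [{'title': 'RDF stores', 'author': 'Chen'}]
```

The design point is that only the adapters know a source's internals, so adding a new heterogeneous source means writing one more adapter, not rebuilding the databases around a uniform structure.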
  9. Bastos Vieira, S.; DeBrito, M.; Mustafa El Hadi, W.; Zumer, M.: Developing imaged KOS with the FRSAD Model : a conceptual methodology (2016) 0.04
    Abstract
    This proposal presents the methodology of indexing with images suggested by De Brito and Caribé (2015). The imagetic model is used as a compatible mechanism with FRSAD for a global information share and use of subject data, both within the library sector and beyond. The conceptual model of imagetic indexing shows how images are related to topics and 'key-images' are interpreted as nomens to implement the FRSAD model. Indexing with images consists of using images instead of key-words or descriptors, to represent and organize information. Implementing the imaged navigation in OPACs denotes multiple advantages derived from rethinking the OPAC anew, since we are looking forward to sharing concepts within the subject authority data. Images, carrying linguistic objects, permeate inter-social and cultural concepts. In practice it includes translated metadata, symmetrical multilingual thesauri, or any traditional indexing tools. iOPAC embodies efforts focused on conceptual levels as expected from librarians. Imaged interfaces are more intuitive since users do not need specific training for information retrieval, offering easier comprehension of indexing codes, larger conceptual portability of descriptors (as images), and a better interoperability between discourse codes and indexing competences, positively affecting social and cultural interoperability. The imagetic methodology deploys R&D fields for more suitable interfaces taking into consideration users with specific needs such as deafness and illiteracy. This methodology raises questions about the paradigms of the primacy of orality in information systems and paves the way to a legitimacy of multiple perspectives in document indexing by suggesting a more universal communication system based on images. Interdisciplinarity in neurosciences, linguistics and information sciences would be desirable competencies for further investigations about the nature of cognitive processes in information organization and classification while developing assistive KOS for individuals with communication problems, such as autism and deafness.
    Source
    Proceedings of the 15th European Networked Knowledge Organization Systems Workshop (NKOS 2016) co-located with the 20th International Conference on Theory and Practice of Digital Libraries 2016 (TPDL 2016), Hannover, Germany, September 9, 2016. Ed. by Philipp Mayr et al. [http://ceur-ws.org/Vol-1676/=urn:nbn:de:0074-1676-5]
  10. Teets, M.; Murray, P.: Metasearch authentication and access management (2006) 0.04
    Abstract
    Metasearch - also called parallel search, federated search, broadcast search, and cross-database search - has become commonplace in the information community's vocabulary. All speak to a common theme of searching and retrieving from multiple databases, sources, platforms, protocols, and vendors at the point of the user's request. Metasearch services rely on a variety of approaches including open standards (such as NISO's Z39.50 and SRU/SRW), proprietary programming interfaces, and "screen scraping." However, the absence of widely supported standards, best practices, and tools makes the metasearch environment less efficient for the metasearch provider, the content provider, and ultimately the end-user. To spur the development of widely supported standards and best practices, the National Information Standards Organization (NISO) sponsored a Metasearch Initiative in 2003 to enable: * metasearch service providers to offer more effective and responsive services, * content providers to deliver enhanced content and protect their intellectual property, and * libraries to deliver a simple search (a.k.a. "Google") that covers the breadth of their vetted commercial and free resources. The Access Management Task Group was one of three groups chartered by NISO as part of the Metasearch Initiative. The focus of the group was on gathering requirements for Metasearch authentication and access needs, inventorying existing processes, developing a series of formal use cases describing the access needs, recommending best practices given today's processes, and recommending and pursuing changes to current solutions to better support metasearch applications. In September 2005, the group issued their final report and recommendation. This article summarizes the group's work and final recommendation.
    Date
    26.12.2011 16:29:10
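Of the open standards the article names, SRU is the simplest to show concretely: a searchRetrieve request is an ordinary HTTP URL carrying a CQL query. The sketch below only builds such a URL using the standard SRU parameter names; the endpoint is a placeholder, and no request is actually sent.

```python
# Building an SRU searchRetrieve URL (parameter names per the SRU spec;
# the endpoint below is a placeholder, not a real service).
from urllib.parse import urlencode

def sru_url(base, cql, max_records=10):
    params = {
        "operation": "searchRetrieve",   # the SRU operation name
        "version": "1.2",
        "query": cql,                    # a CQL query string
        "maximumRecords": max_records,
    }
    return base + "?" + urlencode(params)

url = sru_url("http://example.org/sru", 'dc.title = "metasearch"')
print(url)
```

Because the whole request is a plain URL, a metasearch provider can broadcast the same CQL query to every SRU-speaking source without per-vendor screen scraping, which is precisely the efficiency argument the initiative makes for open standards.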
  11. Spink, A.; Wilson, T.; Ellis, D.; Ford, N.: Modeling users' successive searches in digital environments : a National Science Foundation/British Library funded study (1998) 0.04
    Abstract
    As digital libraries become a major source of information for many people, we need to know more about how people seek and retrieve information in digital environments. Quite commonly, users with a problem-at-hand and associated question-in-mind repeatedly search a literature for answers, and seek information in stages over extended periods from a variety of digital information resources. The process of repeatedly searching over time in relation to a specific, but possibly an evolving information problem (including changes or shifts in a variety of variables), is called the successive search phenomenon. The study outlined in this paper is currently investigating this new and little explored line of inquiry for information retrieval, Web searching, and digital libraries. The purpose of the research project is to investigate the nature, manifestations, and behavior of successive searching by users in digital environments, and to derive criteria for use in the design of information retrieval interfaces and systems supporting successive searching behavior. This study includes two related projects. The first project is based in the School of Library and Information Sciences at the University of North Texas and is funded by a National Science Foundation POWRE Grant <http://www.nsf.gov/cgi-bin/show?award=9753277>. The second project is based at the Department of Information Studies at the University of Sheffield (UK) and is funded by a grant from the British Library <http://www.shef.ac.uk/~is/research/imrg/uncerty.html> Research and Innovation Center. The broad objectives of each project are to examine the nature and extent of successive search episodes in digital environments by real users over time.
The specific aim of the current project is twofold: * To characterize progressive changes and shifts that occur in: user situational context; user information problem; uncertainty reduction; user cognitive styles; cognitive and affective states of the user, and consequently in their queries; and * To characterize related changes over time in the type and use of information resources and search strategies particularly related to given capabilities of IR systems, and IR search engines, and examine changes in users' relevance judgments and criteria, and characterize their differences. The study is an observational, longitudinal data collection in the U.S. and U.K. Three questionnaires are used to collect data: reference, client post search and searcher post search questionnaires. Each successive search episode with a search intermediary for textual materials on the DIALOG Information Service is audiotaped and search transaction logs are recorded. Quantitative analysis includes statistical analysis using Likert scale data from the questionnaires and log-linear analysis of sequential data. Qualitative methods include: content analysis, structuring taxonomies; and diagrams to describe shifts and transitions within and between each search episode. Outcomes of the study are the development of appropriate model(s) for IR interactions in successive search episodes and the derivation of a set of design criteria for interfaces and systems supporting successive searching.
  12. Harlow, C.: Data munging tools in Preparation for RDF : Catmandu and LODRefine (2015) 0.04
    Abstract
    Data munging, or the work of remediating, enhancing and transforming library datasets for new or improved uses, has become more important and staff-inclusive in many library technology discussions and projects. Many times we know how we want our data to look, as well as how we want our data to act in discovery interfaces or when exposed, but we are uncertain how to make the data we have into the data we want. This article introduces and compares two library data munging tools that can help: LODRefine (OpenRefine with the DERI RDF Extension) and Catmandu. The strengths and best practices of each tool are discussed in the context of metadata munging use cases for an institution's metadata migration workflow. There is a focus on Linked Open Data modeling and transformation applications of each tool, in particular how metadataists, catalogers, and programmers can create metadata quality reports, enhance existing data with LOD sets, and transform that data to an RDF model. Integration of these tools with other systems and projects, the use of domain specific transformation languages, and the expansion of vocabulary reconciliation services are mentioned.
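The core transformation the article discusses - taking a flat metadata record and emitting RDF - can be sketched in a few lines. The snippet below is a language-neutral illustration in Python, not actual Catmandu or LODRefine usage; the subject URI is hypothetical, and the predicates are standard DCMI terms.

```python
# Illustrative sketch: map a flat field->value record to RDF triples,
# serialized as N-Triples. The subject URI is made up for the example;
# the predicate namespace is DCMI Metadata Terms.
DC = "http://purl.org/dc/terms/"

def record_to_ntriples(subject_uri, record):
    """Return one N-Triples line per field of a flat metadata record."""
    triples = []
    for field, value in record.items():
        # N-Triples requires backslashes and quotes in literals escaped.
        literal = value.replace('\\', '\\\\').replace('"', '\\"')
        triples.append(f'<{subject_uri}> <{DC}{field}> "{literal}" .')
    return triples

record = {"title": "Data munging tools", "creator": "Harlow, C."}
for line in record_to_ntriples("http://example.org/item/1", record):
    print(line)
```

In Catmandu this mapping would be expressed declaratively in its Fix language and in LODRefine via facet/transform operations plus the RDF extension's skeleton; the point here is only the shape of the input and output.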
  13. Huurdeman, H.C.; Kamps, J.: Designing multistage search systems to support the information seeking process (2020) 0.04
    Abstract
    Due to the advances in information retrieval in the past decades, search engines have become extremely efficient at acquiring useful sources in response to a user's query. However, for more prolonged and complex information seeking tasks, these search engines are not as well suited. During complex information seeking tasks, various stages may occur, which imply varying support needs for users. However, the implications of theoretical information seeking models for concrete search user interfaces (SUI) design are unclear, both at the level of the individual features and of the whole interface. Guidelines and design patterns for concrete SUIs, on the other hand, provide recommendations for feature design, but these are separated from their role in the information seeking process. This chapter addresses the question of how to design SUIs with enhanced support for the macro-level process, first by reviewing previous research. Subsequently, we outline a framework for complex task support, which explicitly connects the temporal development of complex tasks with different levels of support by SUI features. This is followed by a discussion of concrete system examples which include elements of the three dimensions of our framework in an exploratory search and sensemaking context. Moreover, we discuss the connection of navigation with the search-oriented framework. In our final discussion and conclusion, we provide recommendations for designing more holistic SUIs which potentially evolve along with a user's information seeking process.
  14. Dietz, K.: en.wikipedia.org > 6 Mio. Artikel (2020) 0.03
    Content
    "Die Englischsprachige Wikipedia verfügt jetzt über mehr als 6 Millionen Artikel. An zweiter Stelle kommt die deutschsprachige Wikipedia mit 2.3 Millionen Artikeln, an dritter Stelle steht die französischsprachige Wikipedia mit 2.1 Millionen Artikeln (via Researchbuzz: Firehose <https://rbfirehose.com/2020/01/24/techcrunch-wikipedia-now-has-more-than-6-million-articles-in-english/> und Techcrunch <https://techcrunch.com/2020/01/23/wikipedia-english-six-million-articles/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+Techcrunch+%28TechCrunch%29&guccounter=1&guce_referrer=aHR0cHM6Ly9yYmZpcmVob3NlLmNvbS8yMDIwLzAxLzI0L3RlY2hjcnVuY2gtd2lraXBlZGlhLW5vdy1oYXMtbW9yZS10aGFuLTYtbWlsbGlvbi1hcnRpY2xlcy1pbi1lbmdsaXNoLw&guce_referrer_sig=AQAAAK0zHfjdDZ_spFZBF_z-zDjtL5iWvuKDumFTzm4HvQzkUfE2pLXQzGS6FGB_y-VISdMEsUSvkNsg2U_NWQ4lwWSvOo3jvXo1I3GtgHpP8exukVxYAnn5mJspqX50VHIWFADHhs5AerkRn3hMRtf_R3F1qmEbo8EROZXp328HMC-o>). 250120 via digithek ch = #fineBlog s.a.: Angesichts der Veröffentlichung des 6-millionsten Artikels vergangene Woche in der englischsprachigen Wikipedia hat die Community-Zeitungsseite "Wikipedia Signpost" ein Moratorium bei der Veröffentlichung von Unternehmensartikeln gefordert. Das sei kein Vorwurf gegen die Wikimedia Foundation, aber die derzeitigen Maßnahmen, um die Enzyklopädie gegen missbräuchliches undeklariertes Paid Editing zu schützen, funktionierten ganz klar nicht. *"Da die ehrenamtlichen Autoren derzeit von Werbung in Gestalt von Wikipedia-Artikeln überwältigt werden, und da die WMF nicht in der Lage zu sein scheint, dem irgendetwas entgegenzusetzen, wäre der einzige gangbare Weg für die Autoren, fürs erste die Neuanlage von Artikeln über Unternehmen zu untersagen"*, schreibt der Benutzer Smallbones in seinem Editorial <https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2020-01-27/From_the_editor> zur heutigen Ausgabe."
  15. Dobratz, S.; Neuroth, H.: nestor: Network of Expertise in long-term STOrage of digital Resources : a digital preservation initiative for Germany (2004) 0.03
    Abstract
    As a follow-up, in 2002 the nestor long-term archiving working group provided an initial spark towards planning and organising coordinated activities concerning the long-term preservation and long-term availability of digital documents in Germany. This resulted in a workshop, held 29 - 30 October 2002, where major tasks were discussed. Influenced by the demands and progress of the nestor network, the participants reached agreement to start work on application-oriented projects and to address the following topics: * Overlapping problems o Collection and preservation of digital objects (selection criteria, preservation policy) o Definition of criteria for trusted repositories o Creation of models of cooperation, etc. * Digital objects production process o Analysis of potential conflicts between production and long-term preservation o Documentation of existing document models and recommendations for standards models to be used for long-term preservation o Identification systems for digital objects, etc. * Transfer of digital objects o Object data and metadata o Transfer protocols and interoperability o Handling of different document types, e.g. dynamic publications, etc. * Long-term preservation of digital objects o Design and prototype implementation of depot systems for digital objects (OAIS was chosen to be the best functional model.) o Authenticity o Functional requirements on user interfaces of a depot system o Identification systems for digital objects, etc. At the end of the workshop, participants decided to establish a permanent distributed infrastructure for long-term preservation and long-term accessibility of digital resources in Germany comparable, e.g., to the Digital Preservation Coalition in the UK. The initial phase, nestor, is now being set up by the above-mentioned 3-year funding project.
  16. Kumar, V.; Furuta, R.; Allen, B.: Interactive interfaces for knowledge-rich domains (1996) 0.03
    Abstract
    Explores the use of interactive documents as interfaces to historical data starting with the basis of the well known representation of a timeline. When incorporated into the context of electronic documents, the timeline provides the basis for implementing an interface into an event space, relying particularly on hypertextual-style links. Generalizing timelines also permits the flexible representation of many different kinds of relationships beyond the temporal. Describes examples of such representations taken from prototype implementations
  17. Proceedings of the 2nd International Workshop on Semantic Digital Archives held in conjunction with the 16th Int. Conference on Theory and Practice of Digital Libraries (TPDL) on September 27, 2012 in Paphos, Cyprus (2012) 0.03
    Abstract
    Archival Information Systems (AIS) are becoming increasingly important. For decades, the amount of content created digitally is growing and its complete life cycle nowadays tends to remain digital. A selection of this content is expected to be of value for the future and can thus be considered being part of our cultural heritage. However, digital content poses many challenges for long-term or indefinite preservation, e.g. digital publications become increasingly complex by the embedding of different kinds of multimedia, data in arbitrary formats and software. As soon as these digital publications become obsolete, but are still deemed to be of value in the future, they have to be transferred smoothly into appropriate AIS where they need to be kept accessible even through changing technologies. The successful previous SDA workshop in 2011 showed: Both, the library and the archiving community have made valuable contributions to the management of huge amounts of knowledge and data. However, both are approaching this topic from different views which shall be brought together to cross-fertilize each other. There are promising combinations of pertinence and provenance models since those are traditionally the prevailing knowledge organization principles of the library and archiving community, respectively. Another scientific discipline providing promising technical solutions for knowledge representation and knowledge management is semantic technologies, which is supported by appropriate W3C recommendations and a large user community. At the forefront of making the semantic web a mature and applicable reality is the linked data initiative, which already has started to be adopted by the library community. It can be expected that using semantic (web) technologies in general and linked data in particular can mature the area of digital archiving as well as technologically tighten the natural bond between digital libraries and digital archives. 
Semantic representations of contextual knowledge about cultural heritage objects will enhance organization and access of data and knowledge. In order to achieve a comprehensive investigation, the information seeking and document triage behaviors of users (an area also classified under the field of Human Computer Interaction) will also be included in the research.
    * Semantic search & semantic information retrieval in digital archives and digital libraries
    * Semantic multimedia archives
    * Ontologies & linked data for digital archives and digital libraries
    * Ontologies & linked data for multimedia archives
    * Implementations and evaluations of semantic digital archives
    * Visualization and exploration of digital content
    * User interfaces for semantic digital libraries
    * User interfaces for intelligent multimedia information retrieval
    * User studies focusing on end-user needs and information seeking behavior of end-users
    * Theoretical and practical archiving frameworks using Semantic (Web) technologies
    * Logical theories for digital archives
    * Semantic (Web) services implementing the OAIS standard
    * Semantic or logical provenance models for digital archives or digital libraries
    * Information integration/semantic ingest (e.g. from digital libraries)
    * Trust for ingest and data security/integrity check for long-term storage of archival records
    * Semantic extensions of emulation/virtualization methodologies tailored for digital archives
    * Semantic long-term storage and hardware organization tailored for AIS
    * Migration strategies based on Semantic (Web) technologies
    * Knowledge evolution
    We expect new insights and results for sustainable technical solutions for digital archiving using knowledge management techniques based on semantic technologies. The workshop emphasizes interdisciplinarity and aims at an audience consisting of scientists and scholars from the digital library, digital archiving, multimedia technology and semantic web community, the information and library sciences, as well as from the social sciences and (digital) humanities, in particular people working on the mentioned topics. We encourage end-users, practitioners and policy-makers from cultural heritage institutions to participate as well.
  18. Payette, S.; Blanchi, C.; Lagoze, C.; Overly, E.A.: Interoperability for digital objects and repositories : the Cornell/CNRI experiments (1999) 0.03
    Abstract
    For several years the Digital Library Research Group at Cornell University and the Corporation for National Research Initiatives (CNRI) have been engaged in research focused on the design and development of infrastructures for open architecture, confederated digital libraries. The goal of this effort is to achieve interoperability and extensibility of digital library systems through the definition of key digital library services and their open interfaces, allowing flexible interaction of existing services and augmentation of the infrastructure with new services. Some aspects of this research have included the development and deployment of the Dienst software, the Handle System®, and the architecture of digital objects and repositories. In this paper, we describe the joint effort by Cornell and CNRI to prototype a rich and deployable architecture for interoperable digital objects and repositories. This effort has challenged us to move theories of interoperability closer to practice. The Cornell/CNRI collaboration builds on two existing projects focusing on the development of interoperable digital libraries. Details relating to the technology of these projects are described elsewhere. Both projects were strongly influenced by the fundamental abstractions of repositories and digital objects as articulated by Kahn and Wilensky in A Framework for Distributed Digital Object Services. Furthermore, both programs were influenced by the container architecture described in the Warwick Framework, and by the notions of distributed dynamic objects presented by Lagoze and Daniel in their Distributed Active Relationship work. With these common roots, one would expect that the CNRI and Cornell repositories would be at least theoretically interoperable. However, the actual test would be the extent to which our independently developed repositories were practically interoperable. 
This paper focuses on the definition of interoperability in the joint Cornell/CNRI work and the set of experiments conducted to formally test it. Our motivation for this work is the eventual deployment of formally tested reference implementations of the repository architecture for experimentation and development by fellow digital library researchers. In Section 2, we summarize the digital object and repository approach that was the focus of our interoperability experiments. In Section 3, we describe the set of experiments that progressively tested interoperability at increasing levels of functionality. In Section 4, we discuss general conclusions, and in Section 5, we give a preview of our future work, including our plans to evolve our experimentation to the point of defining a set of formal metrics for measuring interoperability for repositories and digital objects. This is still a work in progress that is expected to undergo additional refinements during its development.
  19. Kleineberg, M.: Context analysis and context indexing : formal pragmatics in knowledge organization (2014) 0.03
    Source
    http://digbib.ubka.uni-karlsruhe.de/volltexte/documents/3131107
  20. Lightle, K.S.; Ridgway, J.S.: Generation of XML records across multiple metadata standards (2003) 0.03
    Abstract
    This paper describes the process that Eisenhower National Clearinghouse (ENC) staff went through to develop crosswalks between metadata based on three different standards and the generation of the corresponding XML records. ENC needed to generate different flavors of XML records so that metadata would be displayed correctly in catalog records generated through different digital library interfaces. The crosswalk between USMARC, IEEE LOM, and DC-ED is included, as well as examples of the XML records.
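A crosswalk of the kind described - mapping fields of one metadata standard onto another and serializing the result as XML - can be sketched as follows. The tag-to-element mappings below are invented for illustration and are not ENC's actual USMARC/IEEE LOM/DC-ED crosswalk.

```python
import xml.etree.ElementTree as ET

# Toy crosswalk from MARC-style subfield tags to Dublin Core element
# names. ENC's real crosswalk is far richer; these three mappings are
# made up for the example.
CROSSWALK = {"245a": "title", "100a": "creator", "260c": "date"}

def to_dc_xml(marc_fields):
    """Apply the crosswalk to a flat tag->value dict and return XML."""
    root = ET.Element("record")
    for tag, value in marc_fields.items():
        dc_name = CROSSWALK.get(tag)
        if dc_name is None:
            continue  # field has no target element in this crosswalk
        ET.SubElement(root, dc_name).text = value
    return ET.tostring(root, encoding="unicode")

print(to_dc_xml({"245a": "Generation of XML records", "100a": "Lightle, K.S."}))
```

Generating a different "flavor" of XML record for another discovery interface then amounts to swapping in a different crosswalk table and target element names, which is the design point the abstract makes.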
