Search (117 results, page 1 of 6)

  • theme_ss:"Datenformate"
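
  Note on the relevance figures: the number printed after each entry is the ranking score from the underlying Lucene index, which uses the classic TF-IDF similarity (ClassicSimilarity). As a rough sketch of how a single query term contributes to such a score, the Python fragment below reproduces the per-term weighting; the constants are illustrative values of the kind the engine reports for each hit (document frequency, index size, query and field norms), and this is not the search application's own code.

    import math

    # Illustrative sketch of Lucene ClassicSimilarity's per-term contribution.
    # Example constants: a term occurring in 5,836 of 44,218 documents,
    # twice in the field, with queryNorm 0.0463 and fieldNorm 0.0469.

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def term_contribution(freq: float, doc_freq: int, max_docs: int,
                          query_norm: float, field_norm: float) -> float:
        tf = math.sqrt(freq)                        # tf(freq)
        term_idf = idf(doc_freq, max_docs)          # idf(docFreq, maxDocs)
        query_weight = term_idf * query_norm        # queryWeight
        field_weight = tf * term_idf * field_norm   # fieldWeight
        return query_weight * field_weight

    print(round(term_contribution(2.0, 5836, 44218, 0.04628742, 0.046875), 6))
    # ~0.028077; a document's displayed score sums such term contributions and
    # scales the sum by a coordination factor (matched terms / query terms).
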
  1. Martin, P.: Conventions and notations for knowledge representation and retrieval (2000) 0.06
    Abstract
    Much research has focused on the problem of knowledge accessibility, sharing and reuse. Specific languages (e.g. KIF, CG, RDF) and ontologies have been proposed. Common characteristics, conventions or ontological distinctions are beginning to emerge. Since knowledge providers (humans and software agents) must follow common conventions for the knowledge to be widely accessed and re-used, we propose lexical, structural, semantic and ontological conventions based on various knowledge representation projects and our own research. These are minimal conventions that can be followed by most knowledge providers and cover the most common knowledge representation cases. However, agreement and refinements are still required. We also show that a notation can be both readable and expressive by briefly presenting two new notations - Formalized English (FE) and Frame-CG (FCG) - derived from the CG linear form [9] and Frame-Logics [4]. These notations support the above conventions and are implemented in our Web-based knowledge representation and document indexation tool, WebKB¹ [7].
  2. Manguinhas, H.; Freire, N.; Machado, J.; Borbinha, J.: Supporting multilingual bibliographic resource discovery with Functional Requirements for Bibliographic Records (2012) 0.04
    Abstract
    This paper describes an experiment exploring the hypothesis that innovative application of the Functional Requirements for Bibliographic Records (FRBR) principles can complement traditional bibliographic resource discovery systems in order to improve the user experience. A specialized service was implemented that, when given a plain list of results from a regular online catalogue, was able to process, enrich and present that list in a more relevant way for the user. This service pre-processes the records of a traditional online catalogue in order to build a semantic structure following the FRBR model. The service also explores web search features that have been revolutionizing the way users conceptualize resource discovery, such as relevance ranking and metasearching. This work was developed in the context of the TELPlus project. We processed nearly one hundred thousand bibliographic and authority records, in multiple languages, and originating from twelve European national libraries. This paper describes the architecture of the service and the main challenges faced, especially concerning the extraction and linking of the relevant FRBR entities from the bibliographic metadata produced by the libraries. The service was evaluated by end users, who filled out a questionnaire after using a traditional online catalogue and the new service, both with the same bibliographic collection. The analysis of the results supports the hypothesis that FRBR can be implemented for resource discovery in a non-intrusive way, reusing the data of any existing traditional bibliographic system.
    Content
    Contribution to a special issue on Semantic Web and Reasoning for Cultural Heritage and Digital Libraries. Cf.: http://www.semantic-web-journal.net/content/supporting-multilingual-bibliographic-resource-discovery-functional-requirements-bibliograph http://www.semantic-web-journal.net/sites/default/files/swj145_2.pdf.
    Source
    Semantic Web journal. 3(2012) no.1, S.3-21
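
    The work-level grouping described in entry 2 can be pictured with a small sketch: manifestation-level records are clustered under a key derived from normalized creator and title, which is roughly the spirit of FRBR work collocation. The field names and keying rule are hypothetical and are not those of the TELPlus service.

      from collections import defaultdict

      # Toy FRBR-style grouping: cluster manifestation records under a work key.
      # Field names and the keying rule are hypothetical.

      def _norm(s: str) -> str:
          return " ".join(s.lower().split())

      def work_key(record: dict) -> tuple[str, str]:
          return (_norm(record.get("creator", "")), _norm(record.get("title", "")))

      def group_by_work(records: list[dict]) -> dict:
          works = defaultdict(list)
          for rec in records:
              works[work_key(rec)].append(rec)
          return works

      records = [
          {"creator": "Cervantes, Miguel de", "title": "Don Quijote", "lang": "spa"},
          {"creator": "Cervantes, Miguel de", "title": "Don  Quijote", "lang": "ger"},
      ]
      for key, manifestations in group_by_work(records).items():
          print(key, [m["lang"] for m in manifestations])
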
  3. Doerr, M.; Gradmann, S.; Hennicke, S.; Isaac, A.; Meghini, C.; Van de Sompel, H.: ¬The Europeana Data Model (EDM) (2010) 0.04
    Abstract
    The Europeana Data Model (EDM) is a new approach towards structuring and representing data delivered to Europeana by the various contributing cultural heritage institutions. The model aims at greater expressivity and flexibility in comparison to the current Europeana Semantic Elements (ESE), which it is destined to replace. The design principles underlying the EDM are based on the core principles and best practices of the Semantic Web and Linked Data efforts to which Europeana wants to contribute. The model itself builds upon established standards like RDF(S), OAI-ORE, SKOS, and Dublin Core. It acts as a common top-level ontology which retains original data models and information perspectives while at the same time enabling interoperability. The paper elaborates on the aforementioned aspects and the design principles which drove the development of the EDM.
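
    As a rough illustration of the modelling pattern summarized above, the sketch below (assuming the rdflib library) builds one EDM-style aggregation: an edm:ProvidedCHO described with Dublin Core properties and an ore:Aggregation linking it to its provider and a web resource. The URIs and values are invented.

      from rdflib import Graph, Literal, Namespace, URIRef
      from rdflib.namespace import DC, RDF

      # Minimal EDM-style aggregation; URIs and values are invented.
      EDM = Namespace("http://www.europeana.eu/schemas/edm/")
      ORE = Namespace("http://www.openarchives.org/ore/terms/")

      g = Graph()
      cho = URIRef("http://example.org/item/1")
      agg = URIRef("http://example.org/aggregation/1")

      g.add((cho, RDF.type, EDM.ProvidedCHO))
      g.add((cho, DC.title, Literal("Sample painting")))
      g.add((agg, RDF.type, ORE.Aggregation))
      g.add((agg, EDM.aggregatedCHO, cho))
      g.add((agg, EDM.dataProvider, Literal("Example Museum")))
      g.add((agg, EDM.isShownAt, URIRef("http://example.org/view/1")))

      print(g.serialize(format="turtle"))
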
  4. Carvalho, J.R. de; Cordeiro, M.I.; Lopes, A.; Vieira, M.: Meta-information about MARC : an XML framework for validation, explanation and help systems (2004) 0.03
    Abstract
    This article proposes a schema for meta-information about MARC that can express at a fairly comprehensive level the syntactic and semantic aspects of MARC formats in XML, including not only rules but also all texts and examples that are conveyed by MARC documentation. It can be thought of as an XML version of the MARC or UNIMARC manuals, for both machine and human usage. The article explains how such a schema can be the central piece of a more complete framework, to be used in conjunction with "slim" record formats, providing a rich environment for the automated processing of bibliographic data.
    Source
    Library hi tech. 22(2004) no.2, S.131-137
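
    To make the idea concrete, here is a minimal sketch of meta-information driving validation: field rules are kept as data (a Python dict standing in for the proposed XML schema), and a single routine uses them both to check a record and to phrase its messages. The two rules shown are illustrative, not a complete rule set.

      # MARC field rules as data; the dict stands in for the proposed XML schema.
      FIELD_RULES = {
          "245": {"name": "Title Statement", "mandatory": True, "repeatable": False},
          "650": {"name": "Subject Added Entry", "mandatory": False, "repeatable": True},
      }

      def validate(record: dict[str, list[str]]) -> list[str]:
          problems = []
          for tag, rule in FIELD_RULES.items():
              values = record.get(tag, [])
              if rule["mandatory"] and not values:
                  problems.append(f"{tag} ({rule['name']}) is missing")
              if not rule["repeatable"] and len(values) > 1:
                  problems.append(f"{tag} ({rule['name']}) is not repeatable")
          return problems

      print(validate({"650": ["Metadata", "MARC formats"]}))  # flags the missing 245
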
  5. Galvão, R.M.: UNIMARC format relevance : maintenance or replacement? (2018) 0.03
    Abstract
    This article presents an empirical study focused on a qualitative analysis of the UNIMARC format. The structural quality of the data provided by the format is evaluated to determine its current suitability for meeting the requirements and trends in data architecture for the information network and the Semantic Web. Driven by a set of quality characteristics that identify weaknesses in the data schema that cannot be bridged by simply converting data to MARC XML or RDF/XML, we conclude that the UNIMARC format is not compliant with the current metadata schema desiderata and must be replaced.
  6. Nix, M.: ¬Die praktische Einsetzbarkeit des CIDOC CRM in Informationssystemen im Bereich des Kulturerbes (2004) 0.03
    Abstract
    A practically unlimited amount of information is available to us via the World Wide Web. The problem that grows out of this is coping with that volume and getting at the information that is needed at any given moment. The overwhelming supply forces professional users and laypersons alike to search, whatever their requirements for the desired information. To make this searching more efficient, one option is to develop more powerful search engines. Another is to structure data better so that the information contained in it can be reached. Highly structured data can be processed by machines, so that part of the search work can be automated. The Semantic Web is the vision of a further developed World Wide Web in which data structured in this way is processed by so-called software agents. The progressive structuring of data by content is called semantization. The first part of this thesis sketches some important methods of structuring data by content in order to clarify the position of ontologies within semantization. The third chapter presents the structure and purpose of the CIDOC Conceptual Reference Model (CRM), a domain ontology for the cultural heritage sector. The practical part that follows discusses and implements various approaches to using the CRM. A proposal for implementing the model in XML is developed, as one option serving data transport. In addition, the design of a class library in Java is presented, on which the processing and use of the model within an information system can build.
  7. Bales, K.: ¬The USMARC formats and visual materials (1989) 0.03
    Abstract
    Paper presented at a symposium on 'Implementing the Art and Architecture Thesaurus (AAT): Controlled Vocabulary in the Extended MARC format', held at the 1989 Annual Conference of the Art Libraries Society of North America. Describes how changes are effected in MARC and the role of the various groups in the library community that are involved in implementing these changes. Discusses the expansion of the formats to accommodate cataloguing and retrieval for visual materials. Expanded capabilities for coding visual materials offer greater opportunity for user access.
    Date
    4.12.1995 22:40:20
  8. Johnson, B.C.: XML and MARC : which is "right"? (2001) 0.02
    Abstract
    This article explores recent discussions about appropriate mark-up conventions for library information intended for use on the World Wide Web. In particular, the question of whether the MARC 21 format will continue to be useful and whether the time is right for a full-fledged conversion effort to XML is explored. The author concludes that the MARC format will be relevant well into the future, and its use will not hamper access to bibliographic information via the Web. Early XML exploratory efforts carried out at Stanford University's Lane Medical Library are reported on. Although these efforts are a promising start, much more consultation and investigation are needed to arrive at broadly acceptable standards for XML library information encoding and retrieval.
  9. Tennant, R.: ¬A bibliographic metadata infrastructure for the twenty-first century (2004) 0.02
    Abstract
    The current library bibliographic infrastructure was constructed in the early days of computers - before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.
    Date
    9.12.2005 19:22:38
    Source
    Library hi tech. 22(2004) no.2, S.175-181
  10. Aliprand, J.M.: Linkage in USMARC bibliographic records (1993) 0.02
    Abstract
    USMARC records that contain non-Roman scripts exhibit 2 types of linkage between the Latin script fields and their alternate graphic representation (the non-Roman text): linkage based on systematic romanization, and linkage between names for the same person, place or thing. The lack of rules for linkage inhibits copy cataloging and causes inconsistency in record displays. To determine an unequivocal basis for linkage, 4 types of field association in bibliographic records are examined: hierarchy of components; functional equivalence; semantic equivalence; and systematic romanization. Concludes that semantic equivalence is the ideal basis for linkage and can be accommodated by the current structure of the USMARC format for bibliographic data.
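
    In MARC 21/USMARC the mechanism behind this linkage is field 880 (alternate graphic representation), paired with the regular field through occurrence numbers in subfield $6. The sketch below resolves such pairs over an invented, simplified record structure.

      # Toy resolution of $6 linkage: a regular field with $6 "880-01" is paired
      # with the 880 field whose $6 begins with "<tag>-01". Structure is invented.
      record = [
          {"tag": "245", "sub6": "880-01", "value": "Chugoku no rekishi"},
          {"tag": "880", "sub6": "245-01/$1", "value": "中国の歴史"},
      ]

      def linked_pairs(fields: list[dict]) -> list[tuple[str, str]]:
          pairs = []
          for f in fields:
              if f["tag"] == "880":
                  continue
              occurrence = f["sub6"].split("-")[1]  # e.g. "01"
              for alt in fields:
                  if alt["tag"] == "880" and alt["sub6"].startswith(f"{f['tag']}-{occurrence}"):
                      pairs.append((f["value"], alt["value"]))
          return pairs

      print(linked_pairs(record))  # [('Chugoku no rekishi', '中国の歴史')]
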
  11. Guenther, R.S.: Using the Metadata Object Description Schema (MODS) for resource description : guidelines and applications (2004) 0.02
    Abstract
    This paper describes the Metadata Object Description Schema (MODS), its accompanying documentation and some of its applications. It reviews the MODS user guidelines provided by the Library of Congress and how they enable a user of the schema to consistently apply MODS as a metadata scheme. Because the schema itself could not fully document appropriate usage, the guidelines provide element definitions, history, relationships with other elements, usage conventions, and examples. Short descriptions of some MODS applications are given, along with a more detailed discussion of its use in the Library of Congress's Minerva project for Web archiving.
    Source
    Library hi tech. 22(2004) no.1, S.89-98
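
    For orientation, a minimal MODS-flavoured record can be assembled with the standard library as shown below; the elements used (titleInfo/title, name/namePart, typeOfResource) are only a small, illustrative subset of the schema, and the values are invented rather than taken from Library of Congress documentation.

      import xml.etree.ElementTree as ET

      # Minimal, illustrative MODS-style description; values are invented.
      MODS_NS = "http://www.loc.gov/mods/v3"
      ET.register_namespace("", MODS_NS)

      mods = ET.Element(f"{{{MODS_NS}}}mods")
      title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
      ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = "Using MODS for resource description"
      name = ET.SubElement(mods, f"{{{MODS_NS}}}name", type="personal")
      ET.SubElement(name, f"{{{MODS_NS}}}namePart").text = "Guenther, R.S."
      ET.SubElement(mods, f"{{{MODS_NS}}}typeOfResource").text = "text"

      print(ET.tostring(mods, encoding="unicode"))
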
  12. Concise UNIMARC Classification Format : Draft 5 (20000125) (2000) 0.01
    Theme
    Klassifikationssysteme im Online-Retrieval
  13. McCallum, S.H.: Machine Readable Cataloging (MARC): 1975-2007 (2009) 0.01
    Abstract
    This entry describes the development of the MARC Communications format. After a brief overview of the initial 10 years it describes the succeeding phases of development up to the present. This takes the reader through the expansion of the format for all types of bibliographic data and for multiple character scripts. At the same time a large business community was developing that offered products based on the format to the library community. The introduction of the Internet in the 1990s and of Web technology brought new opportunities and challenges, and the format was adapted to this new environment. There has been a great deal of international adoption of the format that has continued into the 2000s. More recently, new syntaxes and models for MARC 21 are being explored.
    Date
    27. 8.2011 14:22:38
  14. Oehlschläger, S.: Aus der 52. Sitzung der Arbeitsgemeinschaft der Verbundsysteme am 24. und 25. April 2007 in Berlin (2007) 0.01
    Content
    kim - Kompetenzzentrum Interoperable Metadaten: The aim of the project is to build a competence centre for interoperable metadata (KIM) that is to promote expertise on interoperable metadata, metadata exchange and formats in the German-speaking area. Structured descriptions (metadata) of different data holdings are called "interoperable" if they can be searched with uniform retrieval procedures or meaningfully integrated into data management. Interoperability is achieved by adhering to technical specifications within controlled institutional settings. A central task of the "Kompetenzzentrum Interoperable Metadaten", as the DCMI affiliate in Germany (after the end of the project, in the German-speaking area with partners from Austria and Switzerland), is to form a core working group with further specific working groups which, by reviewing existing application profiles, develop a shared understanding of good practice in applying both the Dublin Core model and the Semantic Web model. This understanding forms the basis for developing certification procedures, training offerings and consulting services in the field of interoperable metadata.
  15. Woods, E.W.; IFLA Section on Classification and Indexing and Information Technology; Joint Working Group on a Classification Format: Requirements for a format of classification data : Final report, July 1996 (1996) 0.01
    Theme
    Klassifikationssysteme im Online-Retrieval
  16. Jimenez, V.O.R.: Nuevas perspectivas para la catalogacion : metadatos ver MARC (1999) 0.01
    Date
    30. 3.2002 19:45:22
    Source
    Revista Española de Documentaçion Cientifica. 22(1999) no.2, S.198-219
  17. Zapounidou, S.; Sfakakis, M.; Papatheodorou, C.: Library data integration : towards BIBFRAME mapping to EDM (2014) 0.01
    Abstract
    Integration of library data into the Linked Data environment is a key issue in libraries and is approached on the basis of interoperability between library data conceptual models. Achieving interoperability for different representations of the same or related entities between the library and cultural heritage domains shall enhance rich bibliographic data reusability and support the development of new data-driven information services. This paper aims to contribute to the desired interoperability by attempting to map core semantic paths between the BIBFRAME and EDM conceptual models. BIBFRAME is developed by the Library of Congress to support transformation of legacy library data in MARC format into linked data. EDM is the model developed for and used in the Europeana Cultural Heritage aggregation portal.
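
    One way to picture the class-level correspondence this entry aims at is a small lookup from BIBFRAME core classes to candidate EDM/ORE counterparts, applied to typed nodes. The particular pairings below are a plausible illustration only, not necessarily the mapping proposed in the paper.

      # Illustrative BIBFRAME-to-EDM class mapping; pairings are one plausible
      # reading, not the paper's authoritative mapping.
      BF_TO_EDM = {
          "bf:Work": "edm:ProvidedCHO",      # the intellectual content described
          "bf:Instance": "ore:Aggregation",  # the published carrier grouping resources
          "bf:Item": "edm:WebResource",      # a concrete (digital) exemplar
      }

      def map_node(node: dict) -> dict:
          mapped = dict(node)
          mapped["type"] = BF_TO_EDM.get(node["type"], node["type"])
          return mapped

      bibframe_nodes = [
          {"id": "ex:work1", "type": "bf:Work", "label": "Don Quijote"},
          {"id": "ex:inst1", "type": "bf:Instance", "label": "Madrid : Catedra, 2005"},
      ]
      print([map_node(n) for n in bibframe_nodes])
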
  18. Oehlschläger, S.: Aus der 48. Sitzung der Arbeitsgemeinschaft der Verbundsysteme am 12. und 13. November 2004 in Göttingen (2005) 0.01
    Content
    Die Deutsche Bibliothek - content retrieval: The project aims to develop and introduce procedures that provide, automatically and without intellectual processing, sufficient search entry points for content retrieval. This may involve searching the content of full texts, digital images, audio files, video files, etc. of digital resources archived at Die Deutsche Bibliothek, or of digital surrogates of archived analogue resources (e.g. OCR results). Content that exists in electronic form but has so far been unavailable, or only partially available, to Internet users of Die Deutsche Bibliothek is to be made usable as comprehensively and as conveniently as possible. In addition, content that has a descriptive character for an object catalogued in ILTIS is to be used to point to the object described. The highest priority is on opening up content in text formats. As a first step, the full text of all journals digitized in the "Exilpresse digital" project was made available for an extended search. As a next step, the PSI software is to be evaluated for the full-text indexing of abstracts. MILOS: Using MILOS opens up the possibility of automatically enriching holdings with little or no subject indexing with supplementary subject information; the focus is on free-text indexing. The system, already in use in several libraries and meanwhile licensed for Germany by Die Deutsche Bibliothek, was ported to a UNIX version and adapted. Nearly the entire collection has since been processed retrospectively, and the data will be available for searching in the complete OPAC. The index entries, stored in an XML structure, are fully indexed and made accessible. A further development step will be the use of MILOS in online operation.
    Hessisches BibliotheksinformationsSystem (HEBIS) - Personennamendatei (PND): Against the background of efforts to harmonize the authority files, the HeBIS network council has, after renewed discussion, decided by majority vote to use the PND, alongside SWD and GKD, as an obligatory authority file integrated into HeBIS. As the regional union catalogue systems become increasingly interconnected, the homogeneity of records becomes ever more important. For HeBIS this becomes concrete with the start of production of the HeBIS portal and the integrated cross-network interlibrary loan. Only if author searches in the individual union databases encounter largely uniform records, including reference forms, can users expect good results and thus improve their chances of being able to order the desired literature via interlibrary loan. The overall concept is designed for a pragmatic approach with reduced effort. Implementation has begun. Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen (HBZ) - FAST search engine: The HBZ has licensed the search engine technology of the Norwegian vendor FAST. The aim is to present the HBZ's products in a new way with the help of innovative search engine technologies. The presentation is to offer fast search access to the NRW union catalogue data by means of FAST search engine technology, with the following characteristics: a web interface offering laypersons a quick literature search; a web interface offering experts a quick literature search; presentation of additional functions not found in common library catalogues; and access for the KVK to the union data with very short response times. Digitale Bibliothek: Most libraries have meanwhile migrated to release 5. Some are still in processing status. Migration requests from the last three libraries have now been received. With the restructuring of the RLB Koblenz into the LBZ Rheinland-Pfalz, the individual views of the RLB Koblenz, PLB Speyer and the Bipontina in Zweibrücken are being merged with the Büchereistellen Koblenz and Neustadt into a single view.
  19. Kaiser, M.; Lieder, H.J.; Majcen, K.; Vallant, H.: New ways of sharing and using authority information : the LEAF project (2003) 0.01
    Abstract
    This article presents an overview of the LEAF project (Linking and Exploring Authority Files)1, which has set out to provide a framework for international, collaborative work in the sector of authority data with respect to authority control. Elaborating the virtues of authority control in today's Web environment is an almost futile exercise, since so much has been said and written about it in the last few years.2 The World Wide Web is generally understood to be poorly structured - both with regard to content and to locating required information. Highly structured databases might be viewed as small islands of precision within this chaotic environment. Though the Web in general or any particular structured database would greatly benefit from increased authority control, it should be noted that our following considerations only refer to authority control with regard to databases of "memory institutions" (i.e., libraries, archives, and museums). Moreover, when talking about authority records, we exclusively refer to personal name authority records that describe a specific person. Although different types of authority records could indeed be used in similar ways to the ones presented in this article, discussing those different types is outside the scope of both the LEAF project and this article. Personal name authority records - as are all other "authorities" - are maintained as separate records and linked to various kinds of descriptive records. Name authority records are usually either kept in independent databases or in separate tables in the database containing the descriptive records. This practice points at a crucial benefit: by linking any number of descriptive records to an authorized name record, the records related to this entity are collocated in the database. Variant forms of the authorized name are referenced in the authority records and thus ensure the consistency of the database while enabling search and retrieval operations that produce accurate results. On the one hand, authority control may be viewed as a positive prerequisite of a consistent catalogue; on the other, the creation of new authority records is a very time consuming and expensive undertaking. As a consequence, various models of providing access to existing authority records have emerged: the Library of Congress and the French National Library (Bibliothèque nationale de France), for example, make their authority records available to all via a web-based search service.3 In Germany, the Personal Name Authority File (PND, Personennamendatei4) maintained by the German National Library (Die Deutsche Bibliothek, Frankfurt/Main) offers a different approach to shared access: within a closed network, participating institutions have online access to their pooled data. The number of recent projects and initiatives that have addressed the issue of authority control in one way or another is considerable.5 Two important current initiatives should be mentioned here: The Name Authority Cooperative (NACO) and Virtual International Authority File (VIAF).
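
    The collocation benefit described above can be pictured with a small sketch: any variant form recorded in an authority record resolves to the authorized heading, so a search under any variant retrieves the same set of linked bibliographic records. The names and records below are invented for illustration.

      # Toy authority-controlled search; authority and bibliographic data invented.
      AUTHORITY = {
          "authorized": "Tolstoy, Leo, 1828-1910",
          "variants": {"Tolstoi, Lev", "Tolstoj, Lev Nikolaevic", "Толстой, Лев"},
      }
      BIB_RECORDS = [
          {"title": "War and peace", "creator_auth": "Tolstoy, Leo, 1828-1910"},
          {"title": "Anna Karenina", "creator_auth": "Tolstoy, Leo, 1828-1910"},
      ]

      def search_by_name(name: str) -> list[dict]:
          if name == AUTHORITY["authorized"] or name in AUTHORITY["variants"]:
              heading = AUTHORITY["authorized"]
              return [r for r in BIB_RECORDS if r["creator_auth"] == heading]
          return []

      print([r["title"] for r in search_by_name("Tolstoi, Lev")])
      # ['War and peace', 'Anna Karenina'] - both records collocate under one heading
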
  20. MARC and metadata : METS, MODS, and MARCXML: current and future implications (2004) 0.01
    Source
    Library hi tech. 22(2004) no.1

Languages

  • e 86
  • d 26
  • f 2
  • pl 1
  • sp 1

Types

  • a 99
  • el 9
  • s 5
  • m 4
  • n 3
  • b 2
  • l 1
  • x 1