Search (167 results, page 1 of 9)

  • Filter: theme_ss:"Metadaten"
  1. Desconnets, J.-C.; Chahdi, H.; Mougenot, I.: Application profile for earth observation images (2014) 0.03
    0.030029628 = product of:
      0.09008888 = sum of:
        0.077900484 = weight(_text_:propose in 1573) [ClassicSimilarity], result of:
          0.077900484 = score(doc=1573,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.3970968 = fieldWeight in 1573, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1573)
        0.012188397 = product of:
          0.036565192 = sum of:
            0.036565192 = weight(_text_:29 in 1573) [ClassicSimilarity], result of:
              0.036565192 = score(doc=1573,freq=2.0), product of:
                0.13440257 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038207654 = queryNorm
                0.27205724 = fieldWeight in 1573, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=1573)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
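    The score breakdown under each hit is Lucene's ClassicSimilarity (TF-IDF) explain output. A minimal sketch reproducing the first breakdown from its displayed inputs, assuming the standard ClassicSimilarity formulas (tf = sqrt(freq), idf = 1 + ln(maxDocs/(docFreq+1)), queryWeight = idf * queryNorm, fieldWeight = tf * idf * fieldNorm), which match the numbers shown above:

    ```python
    import math

    def idf(doc_freq: int, max_docs: int) -> float:
        # ClassicSimilarity inverse document frequency
        return 1.0 + math.log(max_docs / (doc_freq + 1))

    def clause(freq: float, doc_freq: int, max_docs: int,
               query_norm: float, field_norm: float) -> float:
        tf = math.sqrt(freq)                 # 1.4142135 for freq=2.0
        i = idf(doc_freq, max_docs)          # 5.1344433 for docFreq=707
        query_weight = i * query_norm        # 0.19617504
        field_weight = tf * i * field_norm   # 0.3970968
        return query_weight * field_weight   # 0.077900484

    # Hit 1 ("propose" and "29" in doc 1573); coord(n/m) scales by matched/total clauses
    propose = clause(2.0, 707, 44218, 0.038207654, 0.0546875)
    term_29 = clause(2.0, 3565, 44218, 0.038207654, 0.0546875) * (1 / 3)  # coord(1/3)
    print(round((propose + term_29) * (2 / 6), 9))  # coord(2/6) -> ~0.030029628
    ```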
    
    Abstract
    Based on the concept of an application profile as proposed by the Dublin Core initiative, this paper proposes an application profile for Earth Observation images. The approach aims to provide an open and extensible model that facilitates the sharing and management of distributed images within decentralized architectures, and it is intended to eventually cover the needs of discovery, localization, consultation, preservation and processing of data for decision support. We use the Singapore Framework recommendations to build the application profile, with a particular focus on the formalization and representation of the Description Set Profile (DSP) in RDF.
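    In the Singapore Framework, an application profile's structural constraints are expressed as a Description Set Profile (DSP). A hypothetical sketch of one DSP statement template serialized in RDF with Python's rdflib; the dsp: namespace URI and the choice of dcterms:spatial as a mandatory property are illustrative assumptions, not taken from the paper:

    ```python
    from rdflib import Graph, Namespace, Literal, BNode
    from rdflib.namespace import RDF

    DSP = Namespace("http://dublincore.org/dc-dsp#")  # assumed DSP vocabulary URI
    DCT = Namespace("http://purl.org/dc/terms/")

    g = Graph()
    g.bind("dsp", DSP)
    g.bind("dcterms", DCT)

    template, stmt = BNode(), BNode()
    g.add((template, RDF.type, DSP.DescriptionTemplate))
    g.add((template, DSP.statementTemplate, stmt))
    g.add((stmt, RDF.type, DSP.StatementTemplate))
    g.add((stmt, DSP.property, DCT.spatial))   # e.g. footprint of the EO image
    g.add((stmt, DSP.minOccur, Literal(1)))    # constraint: spatial coverage required
    g.add((stmt, DSP.maxOccur, Literal(1)))

    print(g.serialize(format="turtle"))
    ```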
    Source
    Metadata and semantics research: 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings. Eds.: S. Closs et al.
  2. Kurth, M.; Ruddy, D.; Rupp, N.: Repurposing MARC metadata : using digital project experience to develop a metadata management design (2004) 0.03
    0.025708353 = product of:
      0.07712506 = sum of:
        0.06677184 = weight(_text_:propose in 4748) [ClassicSimilarity], result of:
          0.06677184 = score(doc=4748,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.3403687 = fieldWeight in 4748, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.046875 = fieldNorm(doc=4748)
        0.010353219 = product of:
          0.031059656 = sum of:
            0.031059656 = weight(_text_:22 in 4748) [ClassicSimilarity], result of:
              0.031059656 = score(doc=4748,freq=2.0), product of:
                0.13379669 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038207654 = queryNorm
                0.23214069 = fieldWeight in 4748, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4748)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Metadata and information technology staff in libraries that are building digital collections typically extract and manipulate MARC metadata sets to provide access to digital content via non-MARC schemes. Metadata processing in these libraries involves defining the relationships between metadata schemes, moving metadata between schemes, and coordinating the intellectual activity and physical resources required to create and manipulate metadata. Actively managing the non-MARC metadata resources used to build digital collections is something most of these libraries have only begun to do. This article proposes strategies for managing MARC metadata repurposing efforts as the first step in a coordinated approach to library metadata management. Guided by lessons learned from Cornell University library mapping and transformation activities, the authors apply the literature of data resource management to library metadata management and propose a model for managing MARC metadata repurposing processes through the implementation of a metadata management design.
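    Repurposing efforts of the kind described here start from a field-level crosswalk between schemes. A minimal sketch of that step, assuming a simplified dictionary representation of a MARC record (real records would be parsed with a MARC library); the target element names follow unqualified Dublin Core:

    ```python
    # Crosswalk from MARC (tag, subfield code) to a non-MARC (Dublin Core) element
    CROSSWALK = {
        ("245", "a"): "title",
        ("100", "a"): "creator",
        ("260", "c"): "date",
        ("650", "a"): "subject",
    }

    def repurpose(marc_record: dict) -> dict:
        """Map a simplified MARC record {(tag, code): [values]} to DC elements."""
        dc = {}
        for (tag, code), values in marc_record.items():
            element = CROSSWALK.get((tag, code))
            if element:
                dc.setdefault(element, []).extend(values)
        return dc

    record = {("245", "a"): ["Repurposing MARC metadata"],
              ("100", "a"): ["Kurth, M."],
              ("650", "a"): ["Metadata management"]}
    print(repurpose(record))
    ```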
    Source
    Library hi tech. 22(2004) no.2, S.144-152
  3. Rusch-Feja, D.: ¬Die Open Archives Initiative (OAI) : Neue Zugangsformen zu wissenschaftlichen Arbeiten? (2001) 0.02
    0.023433086 = product of:
      0.07029925 = sum of:
        0.059946038 = weight(_text_:forschung in 1133) [ClassicSimilarity], result of:
          0.059946038 = score(doc=1133,freq=2.0), product of:
            0.1858777 = queryWeight, product of:
              4.8649335 = idf(docFreq=926, maxDocs=44218)
              0.038207654 = queryNorm
            0.32250258 = fieldWeight in 1133, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8649335 = idf(docFreq=926, maxDocs=44218)
              0.046875 = fieldNorm(doc=1133)
        0.010353219 = product of:
          0.031059656 = sum of:
            0.031059656 = weight(_text_:22 in 1133) [ClassicSimilarity], result of:
              0.031059656 = score(doc=1133,freq=2.0), product of:
                0.13379669 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038207654 = queryNorm
                0.23214069 = fieldWeight in 1133, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1133)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Date
    22. 4.2002 12:23:54
    Source
    Bibliothek: Forschung und Praxis. 25(2001) H.3, S.291-300
  4. Catarino, M.E.; Baptista, A.A.: Relating folksonomies with Dublin Core (2008) 0.02
    0.022614865 = product of:
      0.06784459 = sum of:
        0.055643205 = weight(_text_:propose in 2652) [ClassicSimilarity], result of:
          0.055643205 = score(doc=2652,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.2836406 = fieldWeight in 2652, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2652)
        0.0122013865 = product of:
          0.03660416 = sum of:
            0.03660416 = weight(_text_:22 in 2652) [ClassicSimilarity], result of:
              0.03660416 = score(doc=2652,freq=4.0), product of:
                0.13379669 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038207654 = queryNorm
                0.27358043 = fieldWeight in 2652, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2652)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Folksonomy is the result of describing Web resources with tags created by Web users. Although it has become a popular approach to resource description, folksonomies are generally not being conveniently integrated into metadata. However, if the appropriate metadata elements are identified, then further work may be conducted to automatically assign tags to these elements (RDF properties) and use them in Semantic Web applications. This article presents research carried out to continue the project Kinds of Tags, which intends to identify elements required for metadata originating from folksonomies and to propose an application profile for DC Social Tagging. The work provides information that may be used by software applications to assign tags to metadata elements and, therefore, means for tags to be conveniently gathered by metadata interoperability tools. Despite the unquestionably high value of DC and the significance of the already existing properties in DC Terms, the pilot study revealed a significant number of tags for which no corresponding properties yet existed. A need for new properties, such as Action, Depth, Rate, and Utility, was identified. These potential new properties will have to be validated at a later stage by the DC Social Tagging Community.
    Pages
    S.14-22
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  5. Li, C.; Sugimoto, S.: Provenance description of metadata application profiles for long-term maintenance of metadata schemas (2018) 0.02
    0.021449735 = product of:
      0.064349204 = sum of:
        0.055643205 = weight(_text_:propose in 4048) [ClassicSimilarity], result of:
          0.055643205 = score(doc=4048,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.2836406 = fieldWeight in 4048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4048)
        0.008706 = product of:
          0.026117997 = sum of:
            0.026117997 = weight(_text_:29 in 4048) [ClassicSimilarity], result of:
              0.026117997 = score(doc=4048,freq=2.0), product of:
                0.13440257 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038207654 = queryNorm
                0.19432661 = fieldWeight in 4048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4048)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Purpose: Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas. Design/methodology/approach: The DSP-PROV model is developed by applying the general provenance description standard PROV of the World Wide Web Consortium to the Dublin Core Application Profile. The Metadata Application Profile of the Digital Public Library of America is selected as a case study for applying the DSP-PROV model. Finally, the paper evaluates the proposed model by comparing formal provenance description in DSP-PROV with semi-formal change-log description in English. Findings: Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English for keeping metadata schemas consistent over time. Research limitations/implications: The DSP-PROV model is applicable for tracking structural changes of a metadata schema over time; provenance description of other features of a metadata schema, such as vocabulary and encoding syntax, is not covered. Originality/value: This study proposes a simple model for provenance description of structural features of metadata schemas based on a few standards widely accepted on the Web and shows the advantage of the proposed model over conventional semi-formal provenance description.
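    The underlying idea of recording schema changes as W3C PROV derivations can be sketched with rdflib's built-in PROV namespace. The ex: resources and the exact modeling below are illustrative assumptions, not the paper's actual DSP-PROV vocabulary:

    ```python
    from rdflib import Graph, Namespace
    from rdflib.namespace import RDF, PROV

    EX = Namespace("http://example.org/")
    g = Graph()
    g.bind("prov", PROV)

    # Two versions of a metadata application profile, linked by a revision activity
    g.add((EX.dsp_v1, RDF.type, PROV.Entity))
    g.add((EX.dsp_v2, RDF.type, PROV.Entity))
    g.add((EX.revision, RDF.type, PROV.Activity))
    g.add((EX.dsp_v2, PROV.wasDerivedFrom, EX.dsp_v1))   # structural change tracked
    g.add((EX.dsp_v2, PROV.wasGeneratedBy, EX.revision))
    g.add((EX.revision, PROV.used, EX.dsp_v1))

    print(g.serialize(format="turtle"))
    ```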
    Date
    15. 1.2018 19:13:29
  6. Belém, F.M.; Almeida, J.M.; Gonçalves, M.A.: ¬A survey on tag recommendation methods : a review (2017) 0.02
    0.021423629 = product of:
      0.064270884 = sum of:
        0.055643205 = weight(_text_:propose in 3524) [ClassicSimilarity], result of:
          0.055643205 = score(doc=3524,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.2836406 = fieldWeight in 3524, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3524)
        0.008627683 = product of:
          0.025883049 = sum of:
            0.025883049 = weight(_text_:22 in 3524) [ClassicSimilarity], result of:
              0.025883049 = score(doc=3524,freq=2.0), product of:
                0.13379669 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038207654 = queryNorm
                0.19345059 = fieldWeight in 3524, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=3524)
          0.33333334 = coord(1/3)
      0.33333334 = coord(2/6)
    
    Abstract
    Tags (keywords freely assigned by users to describe web content) have become highly popular in Web 2.0 applications because they make it easy and attractive for users to create and describe their own content. This increase in tag popularity has led to a vast literature on tag recommendation methods. These methods aim at assisting users in the tagging process, possibly increasing the quality of the generated tags and, consequently, improving the quality of the information retrieval (IR) services that rely on tags as data sources. Despite the numerous and diverse previous studies on tag recommendation, to our knowledge no previous work has summarized and organized them into a single survey article. In this article, we propose a taxonomy for tag recommendation methods, classifying them according to the target of the recommendations, their objectives, exploited data sources, and underlying techniques. Moreover, we provide a critical overview of these methods, pointing out their advantages and disadvantages. Finally, we describe the main open challenges related to the field, such as tag ambiguity, cold start, and evaluation issues.
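    A common baseline among the data sources the survey covers is tag co-occurrence: recommend the tags that most often co-occur with those an object already carries. A minimal sketch of that baseline idea (not any specific method from the survey):

    ```python
    from collections import Counter
    from itertools import combinations

    # Toy tagged collection: object -> set of user-assigned tags
    collection = {
        "img1": {"metadata", "dublincore", "rdf"},
        "img2": {"metadata", "folksonomy", "tagging"},
        "img3": {"rdf", "semanticweb", "metadata"},
    }

    # Build symmetric co-occurrence counts over all objects
    cooc: dict[str, Counter] = {}
    for tags in collection.values():
        for a, b in combinations(sorted(tags), 2):
            cooc.setdefault(a, Counter())[b] += 1
            cooc.setdefault(b, Counter())[a] += 1

    def recommend(existing: set[str], k: int = 3) -> list[str]:
        scores = Counter()
        for t in existing:
            scores.update(cooc.get(t, Counter()))
        for t in existing:          # never re-suggest what is already there
            scores.pop(t, None)
        return [t for t, _ in scores.most_common(k)]

    print(recommend({"metadata"}))  # e.g. ['rdf', 'dublincore', 'folksonomy']
    ```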
    Date
    16.11.2017 13:30:22
  7. Biesenbender, S.; Tobias, R.: Rolle und Aufgaben von Bibliotheken im Umfeld des Kerndatensatz Forschung (2019) 0.02
    0.01730493 = product of:
      0.103829585 = sum of:
        0.103829585 = weight(_text_:forschung in 5350) [ClassicSimilarity], result of:
          0.103829585 = score(doc=5350,freq=6.0), product of:
            0.1858777 = queryWeight, product of:
              4.8649335 = idf(docFreq=926, maxDocs=44218)
              0.038207654 = queryNorm
            0.5585909 = fieldWeight in 5350, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              4.8649335 = idf(docFreq=926, maxDocs=44218)
              0.046875 = fieldNorm(doc=5350)
      0.16666667 = coord(1/6)
    
    Abstract
    This article takes up the question of what effects the Kerndatensatz Forschung (KDSF, the German research core dataset) may have on current library practice. It takes stock of the (potential) fields of activity for libraries around the KDSF and the introduction of research information systems (FIS). The aim is to highlight the challenges and the potential of the KDSF for day-to-day library practice within modern, integrated research reporting, and to provide impulses for the adaptation processes that will be required in the future. The article presents the structure and concept of the KDSF, focusing on its "Publications" section. Experiences to date and feedback submitted to the "Helpdesk für die Einführung des Kerndatensatz Forschung" are discussed from a library perspective. A further part presents exemplary activities and approaches that arise for libraries in the context of the introduction of FIS.
  8. Yang, T.-H.; Hsieh, Y.-L.; Liu, S.-H.; Chang, Y.-C.; Hsu, W.-L.: ¬A flexible template generation and matching method with applications for publication reference metadata extraction (2021) 0.01
    0.01311523 = product of:
      0.07869138 = sum of:
        0.07869138 = weight(_text_:propose in 63) [ClassicSimilarity], result of:
          0.07869138 = score(doc=63,freq=4.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.40112838 = fieldWeight in 63, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.0390625 = fieldNorm(doc=63)
      0.16666667 = coord(1/6)
    
    Abstract
    Conventional rule-based approaches use exact template matching to capture linguistic information and necessarily need to enumerate all variations. We propose a novel flexible template generation and matching scheme called the principle-based approach (PBA) based on sequence alignment, and employ it for reference metadata extraction (RME) to demonstrate its effectiveness. The main contributions of this research are threefold. First, we propose an automatic template generation that can capture prominent patterns using the dominating set algorithm. Second, we devise an alignment-based template-matching technique that uses a logistic regression model, which makes it more general and flexible than pure rule-based approaches. Last, we apply PBA to RME on extensive cross-domain corpora and demonstrate its robustness and generality. Experiments reveal that the same set of templates produced by the PBA framework not only deliver consistent performance on various unseen domains, but also surpass hand-crafted knowledge (templates). We use four independent journal style test sets and one conference style test set in the experiments. When compared to renowned machine learning methods, such as conditional random fields (CRF), as well as recent deep learning methods (i.e., bi-directional long short-term memory with a CRF layer, Bi-LSTM-CRF), PBA has the best performance for all datasets.
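    The contrast drawn here between exact template matching and alignment-based matching can be illustrated with Python's difflib. The token roles, the punctuation "skeleton", and the example reference below are invented for illustration and are not PBA's actual template format:

    ```python
    from difflib import SequenceMatcher

    # A toy "template" of token roles and a tokenized reference string
    template = ["AUTHOR", ",", "TITLE", ".", "JOURNAL", ",", "YEAR", "."]
    reference = ["Kurth, M.", ",", "Repurposing MARC metadata", ".",
                 "Library hi tech", ",", "2004", "."]

    # Align by the punctuation skeleton: alignment tolerates small variations,
    # unlike exact template matching, which would need one template per variant
    def skeleton(tokens):
        return ["PUNCT" if t in {",", ".", ";", ":"} else "FIELD" for t in tokens]

    m = SequenceMatcher(a=skeleton(template), b=skeleton(reference))
    for op, i1, i2, j1, j2 in m.get_opcodes():
        if op == "equal":
            for role, token in zip(template[i1:i2], reference[j1:j2]):
                if role not in {",", "."}:   # skip punctuation slots
                    print(f"{role:8} <- {token}")
    ```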
  9. Blanchi, C.; Petrone, J.: Distributed interoperable metadata registry (2001) 0.01
    0.012983414 = product of:
      0.077900484 = sum of:
        0.077900484 = weight(_text_:propose in 1228) [ClassicSimilarity], result of:
          0.077900484 = score(doc=1228,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.3970968 = fieldWeight in 1228, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1228)
      0.16666667 = coord(1/6)
    
    Abstract
    Interoperability between digital libraries depends on effective sharing of metadata. Successful sharing of metadata requires common standards for metadata exchange. Previous efforts have focused on either defining a single metadata standard, such as Dublin Core, or building digital library middleware, such as Z39.50 or Stanford's Digital Library Interoperability Protocol. In this article, we propose a distributed architecture for managing metadata and metadata schema. Instead of normalizing all metadata and schema to a single format, we have focused on building a middleware framework that tolerates heterogeneity. By providing facilities for typing and dynamic conversion of metadata, our system permits continual introduction of new forms of metadata with minimal impact on compatibility.
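    The key design move described, typed metadata plus dynamic conversion instead of normalization to one format, can be sketched in a few lines. The type names and the converter below are hypothetical, not the system's actual registry interface:

    ```python
    # Registry of converters between declared metadata types
    converters = {}

    def register(src: str, dst: str):
        def wrap(fn):
            converters[(src, dst)] = fn
            return fn
        return wrap

    @register("dc-1.1", "simple-citation")
    def dc_to_citation(record: dict) -> dict:
        return {"cite": f'{record.get("creator", "?")}: {record.get("title", "?")}'}

    def deliver(record: dict, declared_type: str, wanted_type: str) -> dict:
        """Return the record in the requested type, converting on the fly."""
        if declared_type == wanted_type:
            return record
        fn = converters.get((declared_type, wanted_type))
        if fn is None:
            raise LookupError(f"no converter {declared_type} -> {wanted_type}")
        return fn(record)

    rec = {"creator": "Blanchi, C.", "title": "Distributed interoperable metadata registry"}
    print(deliver(rec, "dc-1.1", "simple-citation"))
    ```

    New metadata types can then be introduced by registering another converter, leaving existing clients untouched, which is the compatibility property the abstract emphasizes.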
  10. Binz, V.; Rühle, S.: KIM - Das Kompetenzzentrum Interoperable Metadaten (2009) 0.01
    0.011656174 = product of:
      0.06993704 = sum of:
        0.06993704 = weight(_text_:forschung in 4559) [ClassicSimilarity], result of:
          0.06993704 = score(doc=4559,freq=2.0), product of:
            0.1858777 = queryWeight, product of:
              4.8649335 = idf(docFreq=926, maxDocs=44218)
              0.038207654 = queryNorm
            0.376253 = fieldWeight in 4559, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8649335 = idf(docFreq=926, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4559)
      0.16666667 = coord(1/6)
    
    Source
    Bibliothek: Forschung und Praxis. 33(2009) H.3, S.370-374
  11. Martin, P.: Conventions and notations for knowledge representation and retrieval (2000) 0.01
    0.011128641 = product of:
      0.06677184 = sum of:
        0.06677184 = weight(_text_:propose in 5070) [ClassicSimilarity], result of:
          0.06677184 = score(doc=5070,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.3403687 = fieldWeight in 5070, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.046875 = fieldNorm(doc=5070)
      0.16666667 = coord(1/6)
    
    Abstract
    Much research has focused on the problem of knowledge accessibility, sharing and reuse. Specific languages (e.g. KIF, CG, RDF) and ontologies have been proposed. Common characteristics, conventions or ontological distinctions are beginning to emerge. Since knowledge providers (humans and software agents) must follow common conventions for the knowledge to be widely accessed and re-used, we propose lexical, structural, semantic and ontological conventions based on various knowledge representation projects and our own research. These are minimal conventions that can be followed by most and cover the most common knowledge representation cases. However, agreement and refinements are still required. We also show that a notation can be both readable and expressive by briefly presenting two new notations, Formalized English (FE) and Frame-CG (FCG), derived from the CG linear form [9] and Frame-Logics [4]. These notations support the above conventions and are implemented in our Web-based knowledge representation and document indexation tool, WebKB [7].
  12. Masanès, J.; Lupovici, C.: Preservation metadata : the NEDLIB's proposal Bibliothèque Nationale de France (2001) 0.01
    0.011128641 = product of:
      0.06677184 = sum of:
        0.06677184 = weight(_text_:propose in 6013) [ClassicSimilarity], result of:
          0.06677184 = score(doc=6013,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.3403687 = fieldWeight in 6013, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.046875 = fieldNorm(doc=6013)
      0.16666667 = coord(1/6)
    
    Abstract
    Preservation of digital documents for the long term requires above all solving the problem of technological obsolescence. Accessing digital documents in 20 or 100 years will be impossible if we, or our successors, cannot process the bit stream underlying those documents. We can be sure that the modalities of data processing will be different in 20 or 100 years. It is therefore our task to collect key information about today's data processing to ensure future access to these documents. In this paper we present NEDLIB's proposal for a preservation metadata set. This set gathers core metadata that are mandatory for preservation management purposes. We propose to define 8 metadata elements and 38 sub-elements following the OAIS taxonomy of information objects. A layered information analysis of the digital document is proposed in order to list all information involved in the data processing of the bit stream. These metadata elements are intended to be populated, as far as possible, automatically, to make it feasible to handle large numbers of documents.
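    Such a fixed preservation element set lends itself to the automatic population the authors recommend. A hypothetical fragment (the element names below are illustrative and are not NEDLIB's actual 8 elements / 38 sub-elements):

    ```python
    import datetime
    import hashlib
    from dataclasses import dataclass

    @dataclass
    class PreservationMetadata:
        # Illustrative fragment of a preservation element set
        format_name: str       # e.g. "PDF"
        format_version: str    # key to fighting technological obsolescence
        ingest_date: str
        fixity_sha256: str

    def auto_populate(bitstream: bytes, fmt: str, version: str) -> PreservationMetadata:
        """Populate technical elements automatically, as the proposal recommends."""
        return PreservationMetadata(
            format_name=fmt,
            format_version=version,
            ingest_date=datetime.date.today().isoformat(),
            fixity_sha256=hashlib.sha256(bitstream).hexdigest(),
        )

    print(auto_populate(b"%PDF-1.4 ...", "PDF", "1.4"))
    ```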
  13. Özel, S.A.; Altingövde, I.S.; Ulusoy, Ö.; Özsoyoglu, G.; Özsoyoglu, Z.M.: Metadata-Based Modeling of Information Resources on the Web (2004) 0.01
    0.009273868 = product of:
      0.055643205 = sum of:
        0.055643205 = weight(_text_:propose in 2093) [ClassicSimilarity], result of:
          0.055643205 = score(doc=2093,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.2836406 = fieldWeight in 2093, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2093)
      0.16666667 = coord(1/6)
    
    Abstract
    This paper deals with the problem of modeling Web information resources using expert knowledge and personalized user information for improved Web searching capabilities. We propose a "Web information space" model, which is composed of Web-based information resources (HTML/XML [Hypertext Markup Language/Extensible Markup Language] documents on the Web), expert advice repositories (domain-expert-specified metadata for information resources), and personalized information about users (captured as user profiles that indicate users' preferences about experts as well as users' knowledge about topics). Expert advice, the heart of the Web information space model, is specified using topics and relationships among topics (called metalinks), along the lines of the recently proposed topic maps. Topics and metalinks constitute metadata that describe the contents of the underlying HTML/XML Web resources. The metadata specification process is semiautomated, and it exploits XML DTDs (Document Type Definitions) to allow domain-expert-guided mapping of DTD elements to topics and metalinks. The expert advice is stored in an object-relational database management system (DBMS). To demonstrate the practicality and usability of the proposed Web information space model, we created a prototype expert advice repository of more than one million topics/metalinks for the DBLP (Database and Logic Programming) Bibliography data set. We also present a query interface that provides sophisticated querying facilities for DBLP Bibliography resources using the expert advice repository.
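    The expert-advice layer described here is essentially a typed graph of topics connected by metalinks over Web resources, consulted together with user profiles at query time. A minimal sketch of those structures (all names are illustrative assumptions):

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Topic:
        name: str
        resources: set[str] = field(default_factory=set)  # URLs the topic describes

    @dataclass
    class Metalink:
        kind: str      # e.g. "prerequisite", "relatedTo"
        source: str
        target: str

    topics = {t: Topic(t) for t in ("databases", "logic-programming")}
    topics["databases"].resources.add("https://dblp.org/db/about.html")
    advice = [Metalink("relatedTo", "databases", "logic-programming")]

    # A user profile weights topics; retrieval can expand a query along
    # metalinks before matching topics to the underlying resources
    profile = {"databases": 1.0}
    expanded = {m.target for m in advice if m.source in profile}
    print(expanded)   # {'logic-programming'}
    ```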
  14. White, H.: Examining scientific vocabulary : mapping controlled vocabularies with free text keywords (2013) 0.01
    0.00924463 = product of:
      0.05546778 = sum of:
        0.05546778 = product of:
          0.08320167 = sum of:
            0.041788794 = weight(_text_:29 in 1953) [ClassicSimilarity], result of:
              0.041788794 = score(doc=1953,freq=2.0), product of:
                0.13440257 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038207654 = queryNorm
                0.31092256 = fieldWeight in 1953, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1953)
            0.041412875 = weight(_text_:22 in 1953) [ClassicSimilarity], result of:
              0.041412875 = score(doc=1953,freq=2.0), product of:
                0.13379669 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038207654 = queryNorm
                0.30952093 = fieldWeight in 1953, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1953)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
    
    Date
    29. 5.2015 19:09:22
  15. Neudecker, C.; Zaczynska, K.; Baierer, K.; Rehm, G.; Gerber, M.; Moreno Schneider, J.: Methoden und Metriken zur Messung von OCR-Qualität für die Kuratierung von Daten und Metadaten (2021) 0.01
    0.008325839 = product of:
      0.049955033 = sum of:
        0.049955033 = weight(_text_:forschung in 369) [ClassicSimilarity], result of:
          0.049955033 = score(doc=369,freq=2.0), product of:
            0.1858777 = queryWeight, product of:
              4.8649335 = idf(docFreq=926, maxDocs=44218)
              0.038207654 = queryNorm
            0.26875216 = fieldWeight in 369, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              4.8649335 = idf(docFreq=926, maxDocs=44218)
              0.0390625 = fieldNorm(doc=369)
      0.16666667 = coord(1/6)
    
    Abstract
    The systematic digitization of holdings in libraries and archives has led to a rapid increase in the availability of digitized images of historical documents. The reasons are initially conservational: digitized documents can be reproduced and preserved in high quality practically at will. Moreover, a digitized collection can reach a far wider audience than the physical holdings alone ever could. With the growing availability of digital library and archive holdings, however, expectations regarding their presentation and reusability also rise. In addition to searching on the basis of bibliographic metadata, users expect to be able to search the contents of documents as well. In the scholarly domain, great expectations for new research possibilities are attached to machine-based, quantitative analyses of textual material. Alongside image digitization, capture of the full text is therefore demanded more and more often. This can be done either manually by transcription or automatically with methods of Optical Character Recognition (OCR) (Engl et al. 2020). Manual capture is generally credited with higher character accuracy. In mass digitization, however, the choice usually falls on automatic OCR methods for cost reasons.
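    The standard metric behind such OCR-quality measurements is the character error rate (CER): the edit distance between OCR output and ground truth, divided by the length of the ground truth. A minimal sketch with a plain dynamic-programming Levenshtein distance; production evaluations would add normalization rules for whitespace and historical glyphs:

    ```python
    def levenshtein(a: str, b: str) -> int:
        # Classic dynamic-programming edit distance
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    def cer(ocr: str, ground_truth: str) -> float:
        return levenshtein(ocr, ground_truth) / max(len(ground_truth), 1)

    print(cer("Metadaten und Qnalität", "Metadaten und Qualität"))  # 1 edit / 22 chars
    ```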
  16. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web page categories (1997) 0.01
    0.008089051 = product of:
      0.048534304 = sum of:
        0.048534304 = product of:
          0.072801456 = sum of:
            0.036565192 = weight(_text_:29 in 2673) [ClassicSimilarity], result of:
              0.036565192 = score(doc=2673,freq=2.0), product of:
                0.13440257 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.038207654 = queryNorm
                0.27205724 = fieldWeight in 2673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2673)
            0.036236268 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
              0.036236268 = score(doc=2673,freq=2.0), product of:
                0.13379669 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038207654 = queryNorm
                0.2708308 = fieldWeight in 2673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2673)
          0.6666667 = coord(2/3)
      0.16666667 = coord(1/6)
    
    Date
    1. 8.1996 22:08:06
    Source
    Computer networks and ISDN systems. 29(1997) no.8, S.1147-1156
  17. Aldana, J.F.; Gómez, A.C.; Moreno, N.; Nebro, A.J.; Roldán, M.M.: Metadata functionality for semantic Web integration (2003) 0.01
    0.007419094 = product of:
      0.044514563 = sum of:
        0.044514563 = weight(_text_:propose in 2731) [ClassicSimilarity], result of:
          0.044514563 = score(doc=2731,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.22691247 = fieldWeight in 2731, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
      0.16666667 = coord(1/6)
    
    Abstract
    We propose an extension of a mediator architecture oriented to ontology-driven data integration. In our architecture, ontologies are not managed by an external component or service but are integrated in the mediation layer. This approach implies rethinking the mediator design, but at the same time provides advantages from a database perspective, including the application of optimization and evaluation techniques that use and combine information from all abstraction levels (physical schema, logical schema, and semantic information defined by ontology). Although the Web is probably the richest information repository in human history, users cannot specify what they want from it. Two major problems that arise in current search engines (Heflin, 2001) are: a) polysemy, when the same word is used with different meanings; and b) synonymy, when two different words have the same meaning. Polysemy causes irrelevant information to be retrieved, while synonymy causes useful documents to be missed. The inability to understand the context of words and the relationships among required terms explains many of the missed and false results produced by search engines. The Semantic Web will bring structure to the meaningful content of Web pages, giving semantic relationships among terms and possibly avoiding these problems. Various proposals have appeared for metadata representation and communication standards, and other services and tools that may eventually merge into the global Semantic Web (Berners-Lee, 2001). Hopefully, in the next few years we will see the universal adoption of open standards for the representation and sharing of meta-information; in such an environment, software agents roaming from page to page can readily carry out sophisticated tasks for users (Berners-Lee, 2001). In this context, ontologies can be seen as metadata that represent the semantics of data, providing a standard vocabulary for a knowledge domain, much as DTDs and XML Schema do. If its pages were so structured, the Web could be seen as a heterogeneous collection of autonomous databases, which suggests that techniques developed in the database area could be useful. Database research mainly deals with efficient storage and retrieval and with powerful query languages.
  18. Gracy, K.F.: Enriching and enhancing moving images with Linked Data : an exploration in the alignment of metadata models (2018) 0.01
    0.007419094 = product of:
      0.044514563 = sum of:
        0.044514563 = weight(_text_:propose in 4200) [ClassicSimilarity], result of:
          0.044514563 = score(doc=4200,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.22691247 = fieldWeight in 4200, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.03125 = fieldNorm(doc=4200)
      0.16666667 = coord(1/6)
    
    Abstract
    The purpose of this paper is to examine the current state of Linked Data (LD) in archival moving image description, and to propose ways in which current metadata records can be enriched and enhanced by interlinking such metadata with relevant information found in other data sets. Design/methodology/approach: Several possible metadata models for moving image production and archiving are considered, including models from records management, digital curation, and the recent BIBFRAME AV Modeling Study. This research also explores how mappings between archival moving image records and relevant external data sources might be drawn, and what gaps exist between current vocabularies and what is needed to record and make accessible the full lifecycle of archiving through production, use, and reuse. Findings: The author notes several major impediments to implementation of LD for archival moving images. The various pieces of information about creators, places, and events found in moving image records are not easily connected to relevant information in other sources, because they are often not semantically defined within the record and can be hidden in unstructured fields. Libraries, archives, and museums must work on aligning the various vocabularies and schemas of potential value for archival moving image description to enable interlinking between vocabularies currently in use and those used by external data sets. Alignment of vocabularies is often complicated by mismatches in granularity between vocabularies. Research limitations/implications: The focus is on how these models inform functional requirements for access and other archival activities, and how the field might benefit from a common metadata model for critical archival descriptive activities. Practical implications: With a shared model, archivists may more easily align current vocabularies and develop new vocabularies and schemas to address the needs of moving image data creators and scholars. Originality/value: Moving image archives, like other cultural institutions with significant heritage holdings, can benefit tremendously from investing in the semantic definition of information found in their information databases. While commercial entities such as search engines and data providers have already embraced the opportunities that semantic search provides for resource discovery, most non-commercial entities are just beginning to do so. This research therefore addresses the benefits and challenges of enriching and enhancing archival moving image records with semantically defined information via LD.
  19. Jimenez, V.O.R.: Nuevas perspectivas para la catalogación : metadatos versus MARC (1999) 0.00
    0.004880555 = product of:
      0.029283328 = sum of:
        0.029283328 = product of:
          0.08784998 = sum of:
            0.08784998 = weight(_text_:22 in 5743) [ClassicSimilarity], result of:
              0.08784998 = score(doc=5743,freq=4.0), product of:
                0.13379669 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.038207654 = queryNorm
                0.6565931 = fieldWeight in 5743, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5743)
          0.33333334 = coord(1/3)
      0.16666667 = coord(1/6)
    
    Date
    30. 3.2002 19:45:22
    Source
    Revista Española de Documentación Científica. 22(1999) no.2, S.198-219
  20. Haslhofer, B.: ¬A Web-based mapping technique for establishing metadata interoperability (2008) 0.00
    0.004636934 = product of:
      0.027821602 = sum of:
        0.027821602 = weight(_text_:propose in 3173) [ClassicSimilarity], result of:
          0.027821602 = score(doc=3173,freq=2.0), product of:
            0.19617504 = queryWeight, product of:
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.038207654 = queryNorm
            0.1418203 = fieldWeight in 3173, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.1344433 = idf(docFreq=707, maxDocs=44218)
              0.01953125 = fieldNorm(doc=3173)
      0.16666667 = coord(1/6)
    
    Abstract
    The integration of metadata from distinct, heterogeneous data sources requires metadata interoperability, which is a qualitative property of metadata information objects that is not given by default. The technique of metadata mapping allows domain experts to establish metadata interoperability in a given integration scenario. Mapping solutions, as a technical manifestation of this technique, are already available for the intensively studied domain of database system interoperability, but they rarely exist for the Web. If we consider the steadily increasing amount of structured metadata and corresponding metadata schemes on the Web, we can observe a clear need for a mapping solution that can operate in a Web-based environment. To achieve that, we first need to build its technical core, which is a mapping model that provides the language primitives to define mapping relationships. Existing Semantic Web languages such as RDFS and OWL define some basic mapping elements (e.g., owl:equivalentProperty, owl:sameAs), but do not address the full spectrum of semantic and structural heterogeneities that can occur among distinct, incompatible metadata information objects. Furthermore, it is still unclear how to process defined mapping relationships at run-time in order to deliver metadata to the client in a uniform way. As the main contribution of this thesis, we present an abstract mapping model, which reflects the mapping problem on a generic level and provides the means for reconciling incompatible metadata. Instance transformation functions and URIs take a central role in that model. The former cover a broad spectrum of possible structural and semantic heterogeneities, while the latter bind the complete mapping model to the architecture of the World Wide Web. On the concrete, language-specific level we present a binding of the abstract mapping model for the RDF Vocabulary Description Language (RDFS), which allows us to create mapping specifications among incompatible metadata schemes expressed in RDFS. The mapping model is embedded in a cyclic process that categorises the requirements a mapping solution should fulfil into four subsequent phases: mapping discovery, mapping representation, mapping execution, and mapping maintenance. In this thesis, we mainly focus on mapping representation and on the transformation of mapping specifications into executable SPARQL queries. For mapping discovery support, the model provides an interface for plugging in schema and ontology matching algorithms. For mapping maintenance we introduce the concept of a simple but effective mapping registry. Based on the mapping model, we propose a Web-based mediator-wrapper architecture that allows domain experts to set up mediation endpoints that provide a uniform SPARQL query interface to a set of distributed metadata sources. The involved data sources are encapsulated by wrapper components that expose the contained metadata and the schema definitions on the Web and provide a SPARQL query interface to these metadata. In this thesis, we present the OAI2LOD Server, a wrapper component for integrating metadata that are accessible via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH).
In a case study, we demonstrate how mappings can be created in a Web environment and how our mediator-wrapper architecture can easily be configured to integrate metadata from various heterogeneous data sources without the need to install any mapping or metadata integration solution in a local system environment.
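The thesis's step of transforming mapping specifications into executable SPARQL queries can be illustrated with a simple property-renaming mapping. A hypothetical sketch (the mapping format and namespaces are invented for illustration; the actual model additionally covers instance transformation functions):

```python
# A toy mapping spec: source property -> target property
mapping = {
    "http://example.org/source#heading": "http://purl.org/dc/terms/title",
    "http://example.org/source#writer":  "http://purl.org/dc/terms/creator",
}

def mapping_to_sparql(spec: dict) -> str:
    """Compile a property mapping into one SPARQL CONSTRUCT query."""
    construct, where = [], []
    for i, (src, dst) in enumerate(spec.items()):
        construct.append(f"?s <{dst}> ?o{i} .")
        where.append(f"OPTIONAL {{ ?s <{src}> ?o{i} . }}")
    return ("CONSTRUCT {\n  " + "\n  ".join(construct) +
            "\n} WHERE {\n  " + "\n  ".join(where) + "\n}")

print(mapping_to_sparql(mapping))
```

A mediation endpoint can then run the generated CONSTRUCT query against a wrapped source to deliver the metadata in the target scheme.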

Languages

  • e 145
  • d 18
  • f 1
  • i 1
  • sp 1

Types

  • a 154
  • s 8
  • el 7
  • m 5
  • b 2
  • x 1