Search (209 results, page 1 of 11)

  • theme_ss:"Metadaten"
  1. Peereboom, M.: DutchESS : Dutch Electronic Subject Service - a Dutch national collaborative effort (2000) 0.02
    0.0155305695 = product of:
      0.054356992 = sum of:
        0.04130835 = weight(_text_:retrieval in 4869) [ClassicSimilarity], result of:
          0.04130835 = score(doc=4869,freq=4.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.37811437 = fieldWeight in 4869, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=4869)
        0.01304864 = product of:
          0.03914592 = sum of:
            0.03914592 = weight(_text_:22 in 4869) [ClassicSimilarity], result of:
              0.03914592 = score(doc=4869,freq=2.0), product of:
                0.1264726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036116153 = queryNorm
                0.30952093 = fieldWeight in 4869, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4869)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
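The explain tree above is standard Lucene ClassicSimilarity (TF-IDF) output. As a minimal sketch, assuming the classic formulas tf = sqrt(freq) and idf = 1 + ln(maxDocs / (docFreq + 1)), the listed score of 0.0155305695 for result 1 can be reproduced from the reported docFreq, freq, fieldNorm and queryNorm values:

```python
import math

# Lucene ClassicSimilarity building blocks, reconstructed from the explain
# tree of result 1 (doc 4869). docFreq, maxDocs, freq, fieldNorm and
# queryNorm are taken verbatim from the listing above.

def tf(freq):
    """Term-frequency factor: square root of the raw frequency."""
    return math.sqrt(freq)

def idf(doc_freq, max_docs):
    """Inverse document frequency: 1 + ln(maxDocs / (docFreq + 1))."""
    return 1.0 + math.log(max_docs / (doc_freq + 1))

query_norm = 0.036116153  # queryNorm reported in the explain output

# Clause 1 -- weight(_text_:retrieval): freq=4, docFreq=5836, fieldNorm=0.0625
idf_retrieval = idf(5836, 44218)                 # ~ 3.024915
query_weight = idf_retrieval * query_norm        # ~ 0.109248295 (queryWeight)
field_weight = tf(4.0) * idf_retrieval * 0.0625  # ~ 0.37811437 (fieldWeight)
w_retrieval = query_weight * field_weight        # ~ 0.04130835

# Clause 2 -- weight(_text_:22): freq=2, docFreq=3622, fieldNorm=0.0625,
# wrapped in an inner coord(1/3)
idf_22 = idf(3622, 44218)                        # ~ 3.5018296
w_22 = (idf_22 * query_norm) * (tf(2.0) * idf_22 * 0.0625) * (1 / 3)  # ~ 0.01304864

# Final score: sum of the clause weights, scaled by the outer coord(2/7)
score = (w_retrieval + w_22) * (2 / 7)           # ~ 0.0155306
print(f"{score:.10f}")
```

Each clause weight is queryWeight (idf x queryNorm) times fieldWeight (tf x idf x fieldNorm); the coord factors scale by the fraction of query clauses a document matches. The same arithmetic applies to every score tree in this listing.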
    
    Abstract
    This article gives an overview of the design and organisation of DutchESS, a Dutch information subject gateway created as a national collaborative effort of the National Library and a number of academic libraries. The combined centralised and distributed model of DutchESS is discussed, as well as its selection policy, metadata format, classification scheme and retrieval options. Some options for future collaboration at an international level are also explored.
    Date
    22. 6.2002 19:39:23
    Theme
    Klassifikationssysteme im Online-Retrieval
  2. Kopácsi, S. et al.: Development of a classification server to support metadata harmonization in a long term preservation system (2016) 0.02
    0.015092164 = product of:
      0.05282257 = sum of:
        0.03651177 = weight(_text_:retrieval in 3280) [ClassicSimilarity], result of:
          0.03651177 = score(doc=3280,freq=2.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.33420905 = fieldWeight in 3280, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.078125 = fieldNorm(doc=3280)
        0.0163108 = product of:
          0.0489324 = sum of:
            0.0489324 = weight(_text_:22 in 3280) [ClassicSimilarity], result of:
              0.0489324 = score(doc=3280,freq=2.0), product of:
                0.1264726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036116153 = queryNorm
                0.38690117 = fieldWeight in 3280, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3280)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Source
    Metadata and semantics research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings. Eds.: E. Garoufallou
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
  3. Handbook of metadata, semantics and ontologies (2014) 0.01
    0.013881464 = product of:
      0.09717024 = sum of:
        0.09717024 = weight(_text_:aufsatzsammlung in 5134) [ClassicSimilarity], result of:
          0.09717024 = score(doc=5134,freq=4.0), product of:
            0.23696128 = queryWeight, product of:
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.036116153 = queryNorm
            0.41006804 = fieldWeight in 5134, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              6.5610886 = idf(docFreq=169, maxDocs=44218)
              0.03125 = fieldNorm(doc=5134)
      0.14285715 = coord(1/7)
    
    RSWK
    Metadaten / Ontologie <Wissensverarbeitung> / Aufsatzsammlung
    Subject
    Metadaten / Ontologie <Wissensverarbeitung> / Aufsatzsammlung
  4. Strötgen, R.: Treatment of semantic heterogeneity using meta-data extraction and query translation (2002) 0.01
    0.013618859 = product of:
      0.047666006 = sum of:
        0.036144804 = weight(_text_:retrieval in 3595) [ClassicSimilarity], result of:
          0.036144804 = score(doc=3595,freq=4.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.33085006 = fieldWeight in 3595, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3595)
        0.011521201 = product of:
          0.0345636 = sum of:
            0.0345636 = weight(_text_:29 in 3595) [ClassicSimilarity], result of:
              0.0345636 = score(doc=3595,freq=2.0), product of:
                0.12704533 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036116153 = queryNorm
                0.27205724 = fieldWeight in 3595, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3595)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    The project CARMEN ("Content Analysis, Retrieval and Metadata: Effective Networking") aimed, among other goals, at improving the expansion of searches in bibliographic databases into Internet searches. We pursued a set of different approaches to the treatment of semantic heterogeneity (metadata extraction, query translation using statistical relations, and cross-concordances). This paper describes the concepts and implementation of these approaches and evaluates their impact on the retrieval results.
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29-31, 2002. Eds: W. Adamczak and A. Nase
  5. Kent, R.E.: Organizing conceptual knowledge online : metadata interoperability and faceted classification (1998) 0.01
    0.013589248 = product of:
      0.047562364 = sum of:
        0.036144804 = weight(_text_:retrieval in 57) [ClassicSimilarity], result of:
          0.036144804 = score(doc=57,freq=4.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.33085006 = fieldWeight in 57, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=57)
        0.011417559 = product of:
          0.034252677 = sum of:
            0.034252677 = weight(_text_:22 in 57) [ClassicSimilarity], result of:
              0.034252677 = score(doc=57,freq=2.0), product of:
                0.1264726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036116153 = queryNorm
                0.2708308 = fieldWeight in 57, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=57)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    Conceptual Knowledge Markup Language (CKML), an application of XML, is a new standard being promoted for the specification of online conceptual knowledge (Kent and Shrivastava, 1998). CKML follows the philosophy of Conceptual Knowledge Processing (Wille, 1982), a principled approach to knowledge representation and data analysis, which advocates the development of methodologies and techniques to support people in their rational thinking, judgement and actions. CKML was developed and is being used in the WAVE networked information discovery and retrieval system (Kent and Neuss, 1994) as a standard for the specification of conceptual knowledge
    Date
    30.12.2001 16:22:41
    Theme
    Klassifikationssysteme im Online-Retrieval
  6. Gardner, T.; Iannella, R.: Architecture and software solutions (2000) 0.01
    0.012073731 = product of:
      0.042258058 = sum of:
        0.029209416 = weight(_text_:retrieval in 4867) [ClassicSimilarity], result of:
          0.029209416 = score(doc=4867,freq=2.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.26736724 = fieldWeight in 4867, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0625 = fieldNorm(doc=4867)
        0.01304864 = product of:
          0.03914592 = sum of:
            0.03914592 = weight(_text_:22 in 4867) [ClassicSimilarity], result of:
              0.03914592 = score(doc=4867,freq=2.0), product of:
                0.1264726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036116153 = queryNorm
                0.30952093 = fieldWeight in 4867, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4867)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    The current subject gateways evolved at a time when the discipline of Internet resource discovery was in its infancy. This is reflected in the lack of well-established, lightweight, deployable, easy-to-use standards for metadata and information retrieval. We provide an introduction to the architecture, standards and software solutions in use by subject gateways, and to the issues that must be addressed to support future subject gateways.
    Date
    22. 6.2002 19:38:24
  7. Smiraglia, R.P.: Content metadata : an analysis of Etruscan artifacts in a museum of archeology (2005) 0.01
    0.011673308 = product of:
      0.040856577 = sum of:
        0.030981263 = weight(_text_:retrieval in 176) [ClassicSimilarity], result of:
          0.030981263 = score(doc=176,freq=4.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.2835858 = fieldWeight in 176, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=176)
        0.009875315 = product of:
          0.029625945 = sum of:
            0.029625945 = weight(_text_:29 in 176) [ClassicSimilarity], result of:
              0.029625945 = score(doc=176,freq=2.0), product of:
                0.12704533 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036116153 = queryNorm
                0.23319192 = fieldWeight in 176, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.046875 = fieldNorm(doc=176)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    Metadata schemes target resources as information packages, without attention to the distinction between content and carrier. Most schemas are derived without an empirical understanding of the concepts that need to be represented, the ways in which terms representing the central concepts might best be derived, and how metadata descriptions will be used for retrieval. Research is required to resolve this dilemma, and much research will be required if the plethora of schemes that already exist is to be made efficacious for resource description and retrieval. Here I report the results of a preliminary study, which was designed to see whether the bibliographic concept of "the work" could be of any relevance among artifacts held by a museum. I extend the "works metaphor" from the bibliographic to the artifactual domain by altering the terms of the definition slightly, thus: 1) instantiation is understood as content genealogy. Case studies of Etruscan artifacts from the University of Pennsylvania Museum of Archaeology and Anthropology are used to demonstrate the inherence of the work in non-documentary artifacts.
    Date
    29. 9.2008 19:14:41
  8. Renear, A.H.; Wickett, K.M.; Urban, R.J.; Dubin, D.; Shreeves, S.L.: Collection/item metadata relationships (2008) 0.01
    0.011647928 = product of:
      0.040767744 = sum of:
        0.030981263 = weight(_text_:retrieval in 2623) [ClassicSimilarity], result of:
          0.030981263 = score(doc=2623,freq=4.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.2835858 = fieldWeight in 2623, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=2623)
        0.009786479 = product of:
          0.029359438 = sum of:
            0.029359438 = weight(_text_:22 in 2623) [ClassicSimilarity], result of:
              0.029359438 = score(doc=2623,freq=2.0), product of:
                0.1264726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036116153 = queryNorm
                0.23214069 = fieldWeight in 2623, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2623)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    Contemporary retrieval systems, which search across collections, usually ignore collection-level metadata. Alternative approaches, exploiting collection-level information, will require an understanding of the various kinds of relationships that can obtain between collection-level and item-level metadata. This paper outlines the problem and describes a project that is developing a logic-based framework for classifying collection/item metadata relationships. This framework will support (i) metadata specification developers defining metadata elements, (ii) metadata creators describing objects, and (iii) system designers implementing systems that take advantage of collection-level metadata. We present three examples of collection/item metadata relationship categories: attribute/value-propagation, value-propagation, and value-constraint, and show that even in these simple cases a precise formulation requires modal notions in addition to first-order logic. These formulations are related to recent work in information retrieval and ontology evaluation.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  9. Guerrini, M.: Metadata: the dimension of cataloging in the digital age (2022) 0.01
    0.010594126 = product of:
      0.03707944 = sum of:
        0.025558239 = weight(_text_:retrieval in 735) [ClassicSimilarity], result of:
          0.025558239 = score(doc=735,freq=2.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.23394634 = fieldWeight in 735, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=735)
        0.011521201 = product of:
          0.0345636 = sum of:
            0.0345636 = weight(_text_:29 in 735) [ClassicSimilarity], result of:
              0.0345636 = score(doc=735,freq=2.0), product of:
                0.12704533 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036116153 = queryNorm
                0.27205724 = fieldWeight in 735, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=735)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    Metadata creation is the process of recording metadata, that is, data essential to the identification and retrieval of any type of resource, including bibliographic resources. Metadata capable of identifying the characteristics of an entity have always existed. However, the triggering event that has rewritten and enhanced their value is the digital revolution. Cataloging is configured as an action of creating metadata. While cataloging produces a catalog, that is, a list of records relating to various types of resources, ordered and searchable according to a defined criterion, the metadata process produces the metadata of the resources.
    Date
    29. 9.2022 18:11:09
  10. Hakala, J.: Dublin core in 1997 : a report from Dublin Core metadata workshops 4 & 5 (1998) 0.01
    0.010564514 = product of:
      0.036975797 = sum of:
        0.025558239 = weight(_text_:retrieval in 2220) [ClassicSimilarity], result of:
          0.025558239 = score(doc=2220,freq=2.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.23394634 = fieldWeight in 2220, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2220)
        0.011417559 = product of:
          0.034252677 = sum of:
            0.034252677 = weight(_text_:22 in 2220) [ClassicSimilarity], result of:
              0.034252677 = score(doc=2220,freq=2.0), product of:
                0.1264726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036116153 = queryNorm
                0.2708308 = fieldWeight in 2220, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2220)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    Creation of more and better metadata, or resource descriptions, is the best means of solving the problems of massive recall and lack of precision associated with Internet information retrieval. The Dublin Core Metadata workshops aim to develop such resource descriptions. Describes the 4th workshop, held in Canberra in March 1997, and the 5th, held in October 1997 in Helsinki. DC-4 dealt with element structure with the qualifiers language, scheme and type; extensibility issues; and element refinement. DC-5 dealt with element refinement and stability; definition of sub-elements and resource types; and sharing of Dublin Core implementation experiences, one of which is the Nordic Metadata project. The Nordic countries are now well prepared to implement useful new tools built by the Internet metadata community.
    Source
    Nordinfo Nytt. 1997, nos.3/4, S.10-22
  11. Brasethvik, T.: ¬A semantic modeling approach to metadata (1998) 0.01
    0.010564514 = product of:
      0.036975797 = sum of:
        0.025558239 = weight(_text_:retrieval in 5165) [ClassicSimilarity], result of:
          0.025558239 = score(doc=5165,freq=2.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.23394634 = fieldWeight in 5165, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5165)
        0.011417559 = product of:
          0.034252677 = sum of:
            0.034252677 = weight(_text_:22 in 5165) [ClassicSimilarity], result of:
              0.034252677 = score(doc=5165,freq=2.0), product of:
                0.1264726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036116153 = queryNorm
                0.2708308 = fieldWeight in 5165, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5165)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    States that heterogeneous project groups today may be expected to use the mechanisms of the Web for sharing information. Metadata has been proposed as a mechanism for expressing the semantics of information and, hence, facilitate information retrieval, understanding and use. Presents an approach to sharing information which aims to use a semantic modeling language as the basis for expressing the semantics of information and designing metadata schemes. Functioning on the borderline between human and computer understandability, the modeling language would be able to express the semantics of published Web documents. Reporting on work in progress, presents the overall framework and ideas
    Date
    9. 9.2000 17:22:23
  12. Metadata and semantics research : 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings (2016) 0.01
    0.010564514 = product of:
      0.036975797 = sum of:
        0.025558239 = weight(_text_:retrieval in 3283) [ClassicSimilarity], result of:
          0.025558239 = score(doc=3283,freq=2.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.23394634 = fieldWeight in 3283, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3283)
        0.011417559 = product of:
          0.034252677 = sum of:
            0.034252677 = weight(_text_:22 in 3283) [ClassicSimilarity], result of:
              0.034252677 = score(doc=3283,freq=2.0), product of:
                0.1264726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036116153 = queryNorm
                0.2708308 = fieldWeight in 3283, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3283)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    This book constitutes the refereed proceedings of the 10th Metadata and Semantics Research Conference, MTSR 2016, held in Göttingen, Germany, in November 2016. The 26 full papers and 6 short papers presented were carefully reviewed and selected from 67 submissions. The papers are organized in several sessions and tracks: Digital Libraries, Information Retrieval, Linked and Social Data, Metadata and Semantics for Open Repositories, Research Information Systems and Data Infrastructures, Metadata and Semantics for Agriculture, Food and Environment, Metadata and Semantics for Cultural Collections and Applications, European and National Projects.
  13. Metadata and semantics research : 8th Research Conference, MTSR 2014, Karlsruhe, Germany, November 27-29, 2014, Proceedings (2014) 0.01
    0.009727757 = product of:
      0.03404715 = sum of:
        0.025817718 = weight(_text_:retrieval in 2192) [ClassicSimilarity], result of:
          0.025817718 = score(doc=2192,freq=4.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.23632148 = fieldWeight in 2192, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2192)
        0.00822943 = product of:
          0.024688289 = sum of:
            0.024688289 = weight(_text_:29 in 2192) [ClassicSimilarity], result of:
              0.024688289 = score(doc=2192,freq=2.0), product of:
                0.12704533 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036116153 = queryNorm
                0.19432661 = fieldWeight in 2192, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2192)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    LCSH
    Information storage and retrieval systems
    Subject
    Information storage and retrieval systems
  14. Gartner, R.: Metadata : shaping knowledge from antiquity to the semantic web (2016) 0.01
    0.009727757 = product of:
      0.03404715 = sum of:
        0.025817718 = weight(_text_:retrieval in 731) [ClassicSimilarity], result of:
          0.025817718 = score(doc=731,freq=4.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.23632148 = fieldWeight in 731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=731)
        0.00822943 = product of:
          0.024688289 = sum of:
            0.024688289 = weight(_text_:29 in 731) [ClassicSimilarity], result of:
              0.024688289 = score(doc=731,freq=2.0), product of:
                0.12704533 = queryWeight, product of:
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.036116153 = queryNorm
                0.19432661 = fieldWeight in 731, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5176873 = idf(docFreq=3565, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=731)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Date
    29. 9.2022 17:43:42
    LCSH
    Information storage and retrieval
    Subject
    Information storage and retrieval
  15. Cho, H.; Donovan, A.; Lee, J.H.: Art in an algorithm : a taxonomy for describing video game visual styles (2018) 0.01
    0.009706606 = product of:
      0.03397312 = sum of:
        0.025817718 = weight(_text_:retrieval in 4218) [ClassicSimilarity], result of:
          0.025817718 = score(doc=4218,freq=4.0), product of:
            0.109248295 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.036116153 = queryNorm
            0.23632148 = fieldWeight in 4218, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0390625 = fieldNorm(doc=4218)
        0.0081554 = product of:
          0.0244662 = sum of:
            0.0244662 = weight(_text_:22 in 4218) [ClassicSimilarity], result of:
              0.0244662 = score(doc=4218,freq=2.0), product of:
                0.1264726 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.036116153 = queryNorm
                0.19345059 = fieldWeight in 4218, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=4218)
          0.33333334 = coord(1/3)
      0.2857143 = coord(2/7)
    
    Abstract
    The discovery and retrieval of video games in library and information systems is, by and large, dependent on a limited set of descriptive metadata. Noticeably missing from this metadata are classifications of visual style, despite the overwhelmingly visual nature of most video games and the interest in visual style among video game users. One explanation for this paucity is the difficulty of eliciting consistent judgements about visual style, likely due to subjective interpretations of terminology and a lack of demonstrable testing for coinciding judgements. This study presents a taxonomy of video game visual styles constructed from the findings of a 22-participant cataloging user study of visual styles. A detailed description of the study, and of its value and shortcomings, is presented along with reflections on the challenges of cultivating consensus about visual style in video games. The high degree of overall agreement in the user study demonstrates the potential value of a descriptor like visual style and the use of a cataloging study in developing visual style taxonomies. The resulting visual style taxonomy and the methods and analysis described herein may help improve the organization and retrieval of video games, and possibly of other visual materials like graphic designs, illustrations, and animations.
  16. Tappenbeck, I.; Wessel, C.: CARMEN : Content Analysis, Retrieval and Metadata: Effective Networking. Ein Halbzeitbericht (2001) 0.01
    Abstract
    The CARMEN project started as a special funding measure within the Global Info programme in October 1999, with a planned duration of 29 months. The project focuses on the further development of concepts and methods for document indexing that are intended to enable access to heterogeneous, decentrally distributed information holdings and their management according to common principles. In doing so, CARMEN deliberately takes a different path from most previous approaches in this field, which attempt to establish homogeneity and consistency in a decentralized information landscape in a technology-oriented way, by developing procedures that allow simultaneous physical access to different document spaces. A purely technical parallelization of access options is not sufficient, however, because it does not solve the core problem: the differences in content, structure, and concepts between the individual data holdings. To compensate for these differences, the CARMEN project develops solutions and enhancements in three areas: (1) metadata (document description, retrieval, management, archiving); (2) methods for handling the remaining heterogeneity of the data holdings; (3) retrieval for structured documents with metadata and heterogeneous data types. These three areas are closely interrelated. The developments in the metadata area are intended, on the one hand, to partially restore the lost consistency and to place it on a basis appropriate to the new media. On the other hand, heterogeneity-handling procedures are to relate documents with differing data relevance and subject indexing to one another, complemented on the retrieval side by a search procedure suited to the different data types. Within the overall CARMEN project, these aspects are addressed through a division of labour: eight work packages (APs) deal with different focal points in coordination with one another. To support the coordination of the work of the various APs, the roughly 40 project staff met on 1 and 2 February 2001 for the "CARMEN middleOfTheRoad Workshop" in Bonn. At this workshop, the substantive and technical results achieved by the individual APs in the first half of the project period were presented in a total of 17 presentations.
  17. Tappenbeck, I.; Wessel, C.: CARMEN : Content Analysis, Retrieval and Metadata: Effective Networking. Bericht über den middleOfTheRoad Workshop (2001)
    Abstract
    The CARMEN project started as a special funding measure within the Global Info programme in October 1999, with a planned duration of 29 months. The project focuses on the further development of concepts and methods for document indexing that are intended to enable access to heterogeneous, decentrally distributed information holdings and their management according to common principles. In doing so, CARMEN deliberately takes a different path from most previous approaches in this field, which attempt to establish homogeneity and consistency in a decentralized information landscape in a technology-oriented way, by developing procedures that allow simultaneous physical access to different document spaces. A purely technical parallelization of access options is not sufficient, however, because it does not solve the core problem: the differences in content, structure, and concepts between the individual data holdings. To compensate for these differences, the CARMEN project develops solutions and enhancements in three areas: (1) metadata (document description, retrieval, management, archiving); (2) methods for handling the remaining heterogeneity of the data holdings; (3) retrieval for structured documents with metadata and heterogeneous data types. These three areas are closely interrelated. The developments in the metadata area are intended, on the one hand, to partially restore the lost consistency and to place it on a basis appropriate to the new media. On the other hand, heterogeneity-handling procedures are to relate documents with differing data relevance and subject indexing to one another, complemented on the retrieval side by a search procedure suited to the different data types. Within the overall CARMEN project, these aspects are addressed through a division of labour: eight work packages (APs) deal with different focal points in coordination with one another. To support the coordination of the work of the various APs, the roughly 40 project staff met on 1 and 2 February 2001 for the "CARMEN middleOfTheRoad Workshop" in Bonn. At this workshop, the substantive and technical results achieved by the individual APs in the first half of the project period were presented in a total of 17 presentations.
  18. Jun, W.: ¬A knowledge network constructed by integrating classification, thesaurus and metadata in a digital library (2003)
    Abstract
    Knowledge management in digital libraries is a universal problem. Keyword-based searching is applied everywhere, no matter whether the resources are indexed databases or full-text Web pages. In keyword matching, the valuable content description and indexing in the metadata, such as subject descriptors and classification notations, are merely treated as common keywords to be matched against the user query. Without the support of vocabulary control tools, such as classification systems and thesauri, the intellectual labor of content analysis, description and indexing in metadata production is seriously wasted. New retrieval paradigms are needed to exploit the potential of metadata resources. Could classifications and thesauri, which contain the condensed intelligence of generations of librarians, be used in a digital library to organize networked information, especially metadata, to facilitate its usability and turn the digital library into a knowledge management environment? To examine that question, we designed and implemented a new paradigm that incorporates a classification system, a thesaurus and metadata. The classification and the thesaurus are merged into a concept network, and the metadata are distributed into the nodes of the concept network according to their subjects. An abstract concept node, instantiated with its related metadata records, becomes a knowledge node. A coherent and consistent knowledge network is thus formed. It is not only a framework for resource organization but also a structure for knowledge navigation, retrieval and learning. We have built an experimental system based on the Chinese Classification and Thesaurus, the most comprehensive and authoritative in China, and have incorporated more than 5000 bibliographic records in the computing domain from the Peking University Library. The result is encouraging. In this article, we review the tools, the architecture and the implementation of our experimental system, which is called Vision.
    Source
    Bulletin of the American Society for Information Science. 29(2003) no.2, S.24-28
    Theme
    Semantisches Umfeld in Indexierung u. Retrieval
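    The concept-network design described in this abstract can be illustrated with a small data structure: classification notations and thesaurus terms become nodes of one graph, and metadata records are attached to the node matching their subject, turning it into a knowledge node. This is a hypothetical sketch, not code from the Vision system; all class names, notations, and sample records are invented for illustration.

```python
# Hypothetical sketch of a concept network merging a classification scheme and
# a thesaurus, with metadata records distributed into the matching nodes.
from dataclasses import dataclass, field

@dataclass
class ConceptNode:
    notation: str                                  # classification notation, e.g. "TP3"
    terms: set = field(default_factory=set)        # thesaurus terms merged into this node
    broader: list = field(default_factory=list)    # hierarchical links from the classification
    records: list = field(default_factory=list)    # attached metadata records ("knowledge node")

class ConceptNetwork:
    def __init__(self):
        self.nodes = {}

    def add_node(self, notation, terms=(), broader=()):
        self.nodes[notation] = ConceptNode(notation, set(terms), list(broader))

    def attach_record(self, notation, record):
        """Distribute a metadata record into the node matching its subject."""
        self.nodes[notation].records.append(record)

    def search(self, term):
        """Vocabulary-based entry point: find nodes whose merged terms match."""
        return [n for n in self.nodes.values() if term in n.terms]

net = ConceptNetwork()
net.add_node("TP3", terms={"computing", "data processing"})
net.add_node("TP391", terms={"information retrieval"}, broader=["TP3"])
net.attach_record("TP391", {"title": "Vector-space retrieval", "type": "bib"})

hits = net.search("information retrieval")
print(hits[0].notation)   # → TP391
```

    The point of the structure is that a query entering through thesaurus vocabulary lands on a node that simultaneously carries the classification's hierarchy (for navigation) and the instantiating records (for retrieval).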
  19. Bartolo, L.M.; Lowe, C.S.; Melton, A.C.; Strahl, M.; Feng, L.; Woolverton, C.J.: Effectiveness of tagging laboratory data using Dublin Core in an electronic scientific notebook (2002)
    Abstract
    As a form of grey literature, scientific laboratory notebooks are intended to meet two broad functions: to record daily in-house activities and to manage research results. A major goal of this scientific electronic notebook project is to provide high-quality resource discovery and retrieval capabilities for primary data objects produced in a multidisciplinary biotechnology research laboratory study. This paper discusses a prototype modified relational database that incorporates Dublin Core metadata to organize and describe laboratory data early in the scientific process. The study investigates the effectiveness of this approach to support daily in-house tasks as well as to capture, integrate, and exchange research results.
    Source
    Gaining insight from research information (CRIS2002): Proceedings of the 6th International Conference on Current Research Information Systems, University of Kassel, August 29-31, 2002. Eds: W. Adamczak u. A. Nase
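    The tagging approach the abstract describes can be sketched roughly as a mapping from raw notebook fields onto Dublin Core elements at record-creation time. This is an illustrative sketch only; the field names, mapping, and sample entry are invented for the example, not taken from the project's database.

```python
# Illustrative only: tagging a laboratory notebook entry with Dublin Core
# elements early in the scientific process. All field names are invented.
DC_ELEMENTS = {
    "title", "creator", "subject", "description", "date",
    "type", "format", "identifier", "relation", "coverage",
}

def tag_lab_entry(raw_entry):
    """Map raw notebook fields onto Dublin Core, keeping only known elements."""
    mapping = {
        "experiment_name": "title",
        "scientist": "creator",
        "keywords": "subject",
        "observations": "description",
        "recorded_on": "date",
    }
    tagged = {dc: raw_entry[src] for src, dc in mapping.items() if src in raw_entry}
    assert set(tagged) <= DC_ELEMENTS   # every tag is a valid DC element
    return tagged

entry = {
    "experiment_name": "Spore viability assay, run 12",
    "scientist": "J. Doe",
    "keywords": "biotechnology; spores",
    "observations": "Viability dropped 40% after UV exposure.",
    "recorded_on": "2002-03-14",
}
print(tag_lab_entry(entry)["creator"])   # → J. Doe
```

    Tagging at capture time, rather than retrospectively, is what lets the same record serve both in-house daily use and later discovery and exchange.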
  20. Franklin, R.A.: Re-inventing subject access for the semantic web (2003)
    Abstract
    First-generation scholarly research on the Web lacked a firm system of authority control. Second-generation Web research is beginning to model subject access on library science principles of bibliographic control and cataloguing. Harnessing the Web and organising its intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and making efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a structure based on a system of faceted classification, which allows semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned not in a hierarchical structure but as descriptive facets of related concepts. Web design features such as these add value to discovery and filter out data that lack authority. The system design allows for scalability and extensibility, two technical features integral to the future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
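    The faceted approach described in this last abstract can be sketched as assigning several controlled descriptors to each resource, independently of any hierarchy, and intersecting them at query time. This is a minimal sketch under invented data; the URLs and facet terms are placeholders, not actual LCSH records.

```python
# Minimal sketch: controlled terms assigned as independent facets of a Web
# resource, combined at query time for precise retrieval. Data is invented.
resources = [
    {"url": "http://example.org/a", "facets": {"Semantic Web", "Metadata", "Cataloging"}},
    {"url": "http://example.org/b", "facets": {"Semantic Web", "Ontologies"}},
    {"url": "http://example.org/c", "facets": {"Metadata", "Dublin Core"}},
]

def faceted_search(wanted):
    """Return resources carrying every requested facet descriptor."""
    wanted = set(wanted)
    return [r["url"] for r in resources if wanted <= r["facets"]]

print(faceted_search({"Semantic Web", "Metadata"}))   # → ['http://example.org/a']
```

    Because facets are not nested, new descriptors can be added to a resource without reworking a hierarchy, which is the scalability and extensibility the abstract points to.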
