Search (123 results, page 1 of 7)

  • theme_ss:"Metadaten"
  • year_i:[2000 TO 2010}
  1. Zhang, J.; Jastram, I.: A study of the metadata creation behavior of different user groups on the Internet (2006) 0.13
    0.12857777 = product of:
      0.17143703 = sum of:
        0.07053544 = weight(_text_:web in 982) [ClassicSimilarity], result of:
          0.07053544 = score(doc=982,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43716836 = fieldWeight in 982, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=982)
        0.046190813 = weight(_text_:search in 982) [ClassicSimilarity], result of:
          0.046190813 = score(doc=982,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 982, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=982)
        0.05471077 = product of:
          0.10942154 = sum of:
            0.10942154 = weight(_text_:engine in 982) [ClassicSimilarity], result of:
              0.10942154 = score(doc=982,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.41372913 = fieldWeight in 982, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=982)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Metadata is designed to improve information organization and information retrieval effectiveness and efficiency on the Internet. The way web publishers respond to metadata and the way they use it when publishing their web pages, however, are still a mystery. The authors of this paper aim to solve this mystery by defining different professional publisher groups, examining the behaviors of these user groups, and identifying the characteristics of their metadata use. This study will enhance the current understanding of metadata application behavior and provide evidence useful to researchers, web publishers, and search engine designers.
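    The relevance figures above are Lucene ClassicSimilarity "explain" output. As a minimal sketch, assuming the standard ClassicSimilarity decomposition (per-clause score = queryWeight x fieldWeight, with queryWeight = idf x queryNorm and fieldWeight = sqrt(freq) x idf x fieldNorm, summed over matching clauses and scaled by the coordination factor), the 0.13 shown for this record can be reproduced from the printed factors:

      import math

      query_norm = 0.049439456   # queryNorm from the explain tree
      field_norm = 0.0546875     # fieldNorm(doc=982), identical for all three clauses here

      def clause(freq, idf, nested_coord=1.0):
          # one term clause: queryWeight * fieldWeight, optionally scaled by a nested coord
          return (idf * query_norm) * (math.sqrt(freq) * idf * field_norm) * nested_coord

      web    = clause(6.0, 3.2635105)        # ~0.07053544
      search = clause(2.0, 3.475677)         # ~0.04619081
      engine = clause(2.0, 5.349498, 0.5)    # ~0.05471077, coord(1/2) on the nested clause

      total = (web + search + engine) * 0.75  # coord(3/4): three of four query clauses matched
      print(round(total, 7))                  # ~0.1285778; the listing shows 0.12857777, the gap is rounding of the printed factors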
  2. Franklin, R.A.: Re-inventing subject access for the semantic web (2003) 0.11
    0.11402985 = product of:
      0.1520398 = sum of:
        0.09235258 = weight(_text_:web in 2556) [ClassicSimilarity], result of:
          0.09235258 = score(doc=2556,freq=14.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.57238775 = fieldWeight in 2556, product of:
              3.7416575 = tf(freq=14.0), with freq of:
                14.0 = termFreq=14.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.03959212 = weight(_text_:search in 2556) [ClassicSimilarity], result of:
          0.03959212 = score(doc=2556,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 2556, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=2556)
        0.02009509 = product of:
          0.04019018 = sum of:
            0.04019018 = weight(_text_:22 in 2556) [ClassicSimilarity], result of:
              0.04019018 = score(doc=2556,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23214069 = fieldWeight in 2556, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.046875 = fieldNorm(doc=2556)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    First generation scholarly research on the Web lacked a firm system of authority control. Second generation Web research is beginning to model subject access with library science principles of bibliographic control and cataloguing. Harnessing the Web and organising the intellectual content with standards and controlled vocabulary provides precise search and retrieval capability, increasing relevance and efficient use of technology. Dublin Core metadata standards permit a full evaluation and cataloguing of Web resources appropriate to highly specific research needs and discovery. Current research points to a type of structure based on a system of faceted classification. This system allows the semantic and syntactic relationships to be defined. Controlled vocabulary, such as the Library of Congress Subject Headings, can be assigned, not in a hierarchical structure, but rather as descriptive facets of relating concepts. Web design features such as this are adding value to discovery and filtering out data that lack authority. The system design allows for scalability and extensibility, two technical features that are integral to future development of the digital library and resource discovery.
    Date
    30.12.2008 18:22:46
    Theme
    Semantic Web
  3. Craven, T.C.: Variations in use of meta tag descriptions by Web pages in different languages (2004) 0.11
    0.106218934 = product of:
      0.14162524 = sum of:
        0.04072366 = weight(_text_:web in 2569) [ClassicSimilarity], result of:
          0.04072366 = score(doc=2569,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 2569, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2569)
        0.046190813 = weight(_text_:search in 2569) [ClassicSimilarity], result of:
          0.046190813 = score(doc=2569,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 2569, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2569)
        0.05471077 = product of:
          0.10942154 = sum of:
            0.10942154 = weight(_text_:engine in 2569) [ClassicSimilarity], result of:
              0.10942154 = score(doc=2569,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.41372913 = fieldWeight in 2569, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2569)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Sets of top-ranking pages in 20 languages returned by the Google search engine were downloaded and analyzed for presence of meta tag descriptions and lengths of descriptions. Results showed significant differences in proportion of pages with descriptions and in lengths of descriptions depending on language; specifically, pages in major Western European languages showed higher proportions with descriptions, while pages in Chinese showed the lowest proportions. Descriptions were mostly in the languages of the pages, though English descriptions were provided on some non-English pages. With few exceptions, coding schemes adopted for diacritics and non-Roman characters were standard.
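    A rough outline of the measurement described above (tally the presence and length of meta tag descriptions in downloaded pages, grouped by language) might look like this in Python; it is an illustrative sketch, not the paper's tooling, and the pages_by_language input structure is hypothetical:

      from html.parser import HTMLParser

      class MetaDescriptionParser(HTMLParser):
          # Collects the content of a <meta name="description"> tag, if present.
          def __init__(self):
              super().__init__()
              self.description = None

          def handle_starttag(self, tag, attrs):
              if tag == "meta":
                  a = dict(attrs)
                  if (a.get("name") or "").lower() == "description":
                      self.description = a.get("content") or ""

      def describe(pages_by_language):
          # pages_by_language: {"de": [html_text, ...], ...}  (hypothetical input)
          stats = {}
          for lang, pages in pages_by_language.items():
              descriptions = []
              for html in pages:
                  parser = MetaDescriptionParser()
                  parser.feed(html)
                  if parser.description:
                      descriptions.append(parser.description)
              stats[lang] = {
                  "share_with_description": len(descriptions) / max(len(pages), 1),
                  "mean_length": sum(map(len, descriptions)) / max(len(descriptions), 1),
              }
          return stats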
  4. Mehler, A.; Waltinger, U.: Automatic enrichment of metadata (2009) 0.06
    0.05930443 = product of:
      0.11860886 = sum of:
        0.06581937 = weight(_text_:web in 4840) [ClassicSimilarity], result of:
          0.06581937 = score(doc=4840,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.4079388 = fieldWeight in 4840, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4840)
        0.052789498 = weight(_text_:search in 4840) [ClassicSimilarity], result of:
          0.052789498 = score(doc=4840,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 4840, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=4840)
      0.5 = coord(2/4)
    
    Abstract
    In this talk we present a retrieval model based on social ontologies. More specifically, we utilize the Wikipedia category system in order to perform semantic searches. That is, textual input is used to build queries by means of which documents are retrieved which do not necessarily contain any query term but are semantically related to the input text by virtue of their content. We present a desktop which utilizes this search facility in a web-based environment - the so-called eHumanities Desktop.
    Theme
    Semantic Web
  5. Heidorn, P.B.; Wei, Q.: Automatic metadata extraction from museum specimen labels (2008) 0.06
    0.059120752 = product of:
      0.07882767 = sum of:
        0.029088326 = weight(_text_:web in 2624) [ClassicSimilarity], result of:
          0.029088326 = score(doc=2624,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 2624, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2624)
        0.032993436 = weight(_text_:search in 2624) [ClassicSimilarity], result of:
          0.032993436 = score(doc=2624,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 2624, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2624)
        0.01674591 = product of:
          0.03349182 = sum of:
            0.03349182 = weight(_text_:22 in 2624) [ClassicSimilarity], result of:
              0.03349182 = score(doc=2624,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.19345059 = fieldWeight in 2624, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2624)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    This paper describes the information properties of museum specimen labels and machine learning tools to automatically extract Darwin Core (DwC) and other metadata from these labels processed through Optical Character Recognition (OCR). The DwC is a metadata profile describing the core set of access points for search and retrieval of natural history collections and observation databases. Using the HERBIS Learning System (HLS) we extract 74 independent elements from these labels. The automated text extraction tools are provided as a web service so that users can reference digital images of specimens and receive back an extended Darwin Core XML representation of the content of the label. This automated extraction task is made more difficult by the high variability of museum label formats, OCR errors, and the open-class nature of some elements. In this paper we introduce our overall system architecture and variability-robust solutions, including the application of Hidden Markov and Naïve Bayes machine learning models, data cleaning, use of field element identifiers, and specialist learning models. The techniques developed here could be adapted to any metadata extraction situation with noisy text and weakly ordered elements.
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
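    The Naive Bayes field labelling mentioned in the abstract of item 5 can be sketched roughly as follows, assuming scikit-learn; the training lines, the Darwin Core terms chosen, and the n-gram settings are illustrative only, not the HERBIS Learning System itself:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Tiny illustrative training set: OCR'd label lines paired with Darwin Core terms.
      train_lines = [
          "Quercus alba L.",
          "Coll. J. Smith, 12 June 1932",
          "Champaign County, Illinois",
      ]
      train_labels = ["scientificName", "recordedBy", "locality"]

      model = make_pipeline(
          CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams tolerate OCR noise
          MultinomialNB(),
      )
      model.fit(train_lines, train_labels)

      print(model.predict(["Coles County, Illinois"]))  # expected to map to 'locality'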
  6. Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany (2008) 0.06
    0.0566559 = product of:
      0.0755412 = sum of:
        0.04072366 = weight(_text_:web in 2668) [ClassicSimilarity], result of:
          0.04072366 = score(doc=2668,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 2668, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2668)
        0.023095407 = weight(_text_:search in 2668) [ClassicSimilarity], result of:
          0.023095407 = score(doc=2668,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.1344041 = fieldWeight in 2668, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.02734375 = fieldNorm(doc=2668)
        0.011722136 = product of:
          0.023444273 = sum of:
            0.023444273 = weight(_text_:22 in 2668) [ClassicSimilarity], result of:
              0.023444273 = score(doc=2668,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.1354154 = fieldWeight in 2668, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=2668)
          0.5 = coord(1/2)
      0.75 = coord(3/4)
    
    Abstract
    Metadata is a key aspect of our evolving infrastructure for information management, social computing, and scientific collaboration. DC-2008 will focus on metadata challenges, solutions, and innovation in initiatives and activities underlying semantic and social applications. Metadata is part of the fabric of social computing, which includes the use of wikis, blogs, and tagging for collaboration and participation. Metadata also underlies the development of semantic applications, and the Semantic Web - the representation and integration of multimedia knowledge structures on the basis of semantic models. These two trends flow together in applications such as Wikipedia, where authors collectively create structured information that can be extracted and used to enhance access to and use of information sources. Recent discussion has focused on how existing bibliographic standards can be expressed as Semantic Web vocabularies to facilitate the integration of library and cultural heritage data with other types of data. Harnessing the efforts of content providers and end-users to link, tag, edit, and describe their information in interoperable ways ("participatory metadata") is a key step towards providing knowledge environments that are scalable, self-correcting, and evolvable. DC-2008 will explore conceptual and practical issues in the development and deployment of semantic and social applications to meet the needs of specific communities of practice.
    Content
    Carol Jean Godby, Devon Smith, Eric Childress: Encoding Application Profiles in a Computational Model of the Crosswalk. - Maria Elisabete Catarino, Ana Alice Baptista: Relating Folksonomies with Dublin Core. - Ed Summers, Antoine Isaac, Clay Redding, Dan Krech: LCSH, SKOS and Linked Data. - Xia Lin, Jiexun Li, Xiaohua Zhou: Theme Creation for Digital Collections. - Boris Lauser, Gudrun Johannsen, Caterina Caracciolo, Willem Robert van Hage, Johannes Keizer, Philipp Mayr: Comparing Human and Automatic Thesaurus Mapping Approaches in the Agricultural Domain. - P. Bryan Heidorn, Qin Wei: Automatic Metadata Extraction From Museum Specimen Labels. - Stuart Allen Sutton, Diny Golder: Achievement Standards Network (ASN): An Application Profile for Mapping K-12 Educational Resources to Achievement Standards. - Allen H. Renear, Karen M. Wickett, Richard J. Urban, David Dubin, Sarah L. Shreeves: Collection/Item Metadata Relationships. - Seth van Hooland, Yves Bontemps, Seth Kaufman: Answering the Call for more Accountability: Applying Data Profiling to Museum Metadata. - Thomas Margaritopoulos, Merkourios Margaritopoulos, Ioannis Mavridis, Athanasios Manitsaris: A Conceptual Framework for Metadata Quality Assessment. - Miao Chen, Xiaozhong Liu, Jian Qin: Semantic Relation Extraction from Socially-Generated Tags: A Methodology for Metadata Generation. - Hak Lae Kim, Simon Scerri, John G. Breslin, Stefan Decker, Hong Gee Kim: The State of the Art in Tag Ontologies: A Semantic Model for Tagging and Folksonomies. - Martin Malmsten: Making a Library Catalogue Part of the Semantic Web. - Philipp Mayr, Vivien Petras: Building a Terminology Network for Search: The KoMoHe Project. - Michael Panzer: Cool URIs for the DDC: Towards Web-scale Accessibility of a Large Classification System. - Barbara Levergood, Stefan Farrenkopf, Elisabeth Frasnelli: The Specification of the Language of the Field and Interoperability: Cross-language Access to Catalogues and Online Libraries (CACAO)
  7. Dawson, A.; Hamilton, V.: Optimising metadata to make high-value content more accessible to Google users (2006) 0.06
    0.056427345 = product of:
      0.11285469 = sum of:
        0.07377557 = weight(_text_:search in 5598) [ClassicSimilarity], result of:
          0.07377557 = score(doc=5598,freq=10.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.4293381 = fieldWeight in 5598, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5598)
        0.03907912 = product of:
          0.07815824 = sum of:
            0.07815824 = weight(_text_:engine in 5598) [ClassicSimilarity], result of:
              0.07815824 = score(doc=5598,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.29552078 = fieldWeight in 5598, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=5598)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Purpose - This paper aims to show how information in digital collections that have been catalogued using high-quality metadata can be retrieved more easily by users of search engines such as Google. Design/methodology/approach - The research and proposals described arose from an investigation into the observed phenomenon that pages from the Glasgow Digital Library (gdl.cdlr.strath.ac.uk) were regularly appearing near the top of Google search results shortly after publication, without any deliberate effort to achieve this. The reasons for this phenomenon are now well understood and are described in the second part of the paper. The first part provides context with a review of the impact of Google and a summary of recent initiatives by commercial publishers to make their content more visible to search engines. Findings - The literature research provides firm evidence of a trend amongst publishers to ensure that their online content is indexed by Google, in recognition of its popularity with internet users. The practical research demonstrates how search engine accessibility can be compatible with use of established collection management principles and high-quality metadata. Originality/value - The concept of data shoogling is introduced, involving some simple techniques for metadata optimisation. Details of its practical application are given, to illustrate how those working in academic, cultural and public-sector organisations could make their digital collections more easily accessible via search engines, without compromising any existing standards and practices.
  8. Broughton, V.: Automatic metadata generation : Digital resource description without human intervention (2007) 0.06
    0.05500108 = product of:
      0.11000216 = sum of:
        0.06981198 = weight(_text_:web in 6048) [ClassicSimilarity], result of:
          0.06981198 = score(doc=6048,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43268442 = fieldWeight in 6048, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.09375 = fieldNorm(doc=6048)
        0.04019018 = product of:
          0.08038036 = sum of:
            0.08038036 = weight(_text_:22 in 6048) [ClassicSimilarity], result of:
              0.08038036 = score(doc=6048,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.46428138 = fieldWeight in 6048, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=6048)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    22. 9.2007 15:41:14
    Theme
    Semantic Web
  9. Corby, O.; Dieng, R.; Hébert, C.: A conceptual graph model for W3C resource description framework (2000) 0.05
    0.05189138 = product of:
      0.10378276 = sum of:
        0.05759195 = weight(_text_:web in 5086) [ClassicSimilarity], result of:
          0.05759195 = score(doc=5086,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.35694647 = fieldWeight in 5086, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5086)
        0.046190813 = weight(_text_:search in 5086) [ClassicSimilarity], result of:
          0.046190813 = score(doc=5086,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 5086, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5086)
      0.5 = coord(2/4)
    
    Abstract
    With the aim of building a "Semantic Web", the content of the documents must be explicitly represented through metadata in order to enable content-guided search. Our approach is to exploit a standard language (RDF, recommended by W3C) for expressing such metadata and to interpret these metadata in conceptual graphs (CG) in order to exploit querying and inferencing capabilities enabled by the CG formalism. The paper presents our mapping of RDF into CG and its interest in the context of the semantic Web.
  10. Aldana, J.F.; Gómez, A.C.; Moreno, N.; Nebro, A.J.; Roldán, M.M.: Metadata functionality for semantic Web integration (2003) 0.05
    0.04716453 = product of:
      0.09432906 = sum of:
        0.057001244 = weight(_text_:web in 2731) [ClassicSimilarity], result of:
          0.057001244 = score(doc=2731,freq=12.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.35328537 = fieldWeight in 2731, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
        0.03732781 = weight(_text_:search in 2731) [ClassicSimilarity], result of:
          0.03732781 = score(doc=2731,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.21722981 = fieldWeight in 2731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
      0.5 = coord(2/4)
    
    Abstract
    We propose an extension of a mediator architecture. This extension is oriented to ontology-driven data integration. In our architecture ontologies are not managed by an external component or service, but are integrated in the mediation layer. This approach implies rethinking the mediator design, but at the same time provides advantages from a database perspective. Some of these advantages include the application of optimization and evaluation techniques that use and combine information from all abstraction levels (physical schema, logical schema and semantic information defined by ontology). Although the Web is probably the richest information repository in human history, users cannot specify what they want from it. Two major problems that arise in current search engines (Heflin, 2001) are: a) polysemy, when the same word is used with different meanings; b) synonymy, when two different words have the same meaning. Polysemy causes irrelevant information retrieval. On the other hand, synonymy produces the loss of useful documents. The lack of a capability to understand the context of the words and the relationships among required terms explains many of the lost and false results produced by search engines. The Semantic Web will bring structure to the meaningful content of Web pages, giving semantic relationships among terms and possibly avoiding the previous problems. Various proposals have appeared for meta-data representation and communication standards, and other services and tools that may eventually merge into the global Semantic Web (Berners-Lee, 2001). Hopefully, in the next few years we will see the universal adoption of open standards for representation and sharing of meta-information. In this environment, software agents roaming from page to page can readily carry out sophisticated tasks for users (Berners-Lee, 2001). In this context, ontologies can be seen as metadata that represent the semantics of data, providing a knowledge domain standard vocabulary, like DTDs and XML Schema do. If its pages were so structured, the Web could be seen as a heterogeneous collection of autonomous databases. This suggests that techniques developed in the Database area could be useful. Database research mainly deals with efficient storage and retrieval and with powerful query languages.
  11. Hawking, D.; Zobel, J.: Does topic metadata help with Web search? (2007) 0.05
    0.045585044 = product of:
      0.09117009 = sum of:
        0.05817665 = weight(_text_:web in 204) [ClassicSimilarity], result of:
          0.05817665 = score(doc=204,freq=8.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.36057037 = fieldWeight in 204, product of:
              2.828427 = tf(freq=8.0), with freq of:
                8.0 = termFreq=8.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=204)
        0.032993436 = weight(_text_:search in 204) [ClassicSimilarity], result of:
          0.032993436 = score(doc=204,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.19200584 = fieldWeight in 204, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=204)
      0.5 = coord(2/4)
    
    Abstract
    It has been claimed that topic metadata can be used to improve the accuracy of text searches. Here, we test this claim by examining the contribution of metadata to effective searching within Web sites published by a university with a strong commitment to and substantial investment in metadata. The authors use four sets of queries, a total of 463, extracted from the university's official query logs and from the university's site map. The results are clear: The available metadata is of little value in ranking answers to those queries. A follow-up experiment with the Web sites published in a particular government jurisdiction confirms that this conclusion is not specific to the particular university. Examination of the metadata present at the university reveals that, in addition to implementation deficiencies, there are inherent problems in trying to use subject and description metadata to enhance the searchability of Web sites. Our experiments show that link anchor text, which can be regarded as metadata created by others, is much more effective in identifying best answers to queries than other textual evidence. Furthermore, query-independent evidence such as link counts and uniform resource locator (URL) length, unlike subject and description metadata, can substantially improve baseline performance.
  12. Dawson, A.: Creating metadata that work for digital libraries and Google (2004) 0.04
    0.043457236 = product of:
      0.08691447 = sum of:
        0.04072366 = weight(_text_:web in 4762) [ClassicSimilarity], result of:
          0.04072366 = score(doc=4762,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 4762, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4762)
        0.046190813 = weight(_text_:search in 4762) [ClassicSimilarity], result of:
          0.046190813 = score(doc=4762,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 4762, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4762)
      0.5 = coord(2/4)
    
    Abstract
    For many years metadata has been recognised as a significant component of the digital information environment. Substantial work has gone into creating complex metadata schemes for describing digital content. Yet increasingly Web search engines, and Google in particular, are the primary means of discovering and selecting digital resources, although they make little use of metadata. This article considers how digital libraries can gain more value from their metadata by adapting it for Google users, while still following well-established principles and standards for cataloguing and digital preservation.
  13. Godby, C.J.; Young, J.A.; Childress, E.: ¬A repository of metadata crosswalks (2004) 0.04
    0.043457236 = product of:
      0.08691447 = sum of:
        0.04072366 = weight(_text_:web in 1155) [ClassicSimilarity], result of:
          0.04072366 = score(doc=1155,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 1155, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1155)
        0.046190813 = weight(_text_:search in 1155) [ClassicSimilarity], result of:
          0.046190813 = score(doc=1155,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 1155, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1155)
      0.5 = coord(2/4)
    
    Abstract
    This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community.
  14. Hagedorn, K.: OAIster: a "no dead ends" OAI service provider (2003) 0.04
    0.04324353 = product of:
      0.08648706 = sum of:
        0.03959212 = weight(_text_:search in 4776) [ClassicSimilarity], result of:
          0.03959212 = score(doc=4776,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 4776, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=4776)
        0.04689494 = product of:
          0.09378988 = sum of:
            0.09378988 = weight(_text_:engine in 4776) [ClassicSimilarity], result of:
              0.09378988 = score(doc=4776,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.35462496 = fieldWeight in 4776, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4776)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    OAIster, at the University of Michigan, University Libraries, Digital Library Production Service (DLPS), is an Andrew W. Mellon Foundation grant-funded project designed to test the feasibility of using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to harvest digital object metadata from multiple and varied digital object repositories and develop a service to allow end-users to access that metadata. This article describes in-depth the development of our system to harvest, store, transform the metadata into Digital Library eXtension Service (DLXS) Bibliographic Class format, build indexes and make the metadata searchable through an interface using the XPAT search engine. Results of the testing of our service and statistics on usage are reported, as well as the issues that we have encountered during our harvesting and transformation operations. The article closes by discussing the future improvements and potential of OAIster and the OAI-PMH protocol.
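    OAIster's harvesting step relies on the OAI-PMH protocol named above. A minimal harvesting sketch in Python (not OAIster's own code, and ignoring resumptionToken paging and error handling) could look like the following; the example endpoint is hypothetical:

      import urllib.request
      import xml.etree.ElementTree as ET

      OAI = "{http://www.openarchives.org/OAI/2.0/}"
      DC = "{http://purl.org/dc/elements/1.1/}"

      def harvest_dc_titles(base_url):
          # Issue a single ListRecords request for unqualified Dublin Core records.
          url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
          with urllib.request.urlopen(url) as response:
              tree = ET.parse(response)
          for record in tree.iter(OAI + "record"):
              title = record.find(".//" + DC + "title")
              if title is not None and title.text:
                  yield title.text

      # Example usage (hypothetical endpoint):
      # for t in harvest_dc_titles("https://example.org/oai"):
      #     print(t)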
  15. Crowston, K.; Kwasnik, B.H.: Can document-genre metadata improve information access to large digital collections? (2004) 0.04
    0.043117315 = product of:
      0.08623463 = sum of:
        0.029088326 = weight(_text_:web in 824) [ClassicSimilarity], result of:
          0.029088326 = score(doc=824,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 824, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=824)
        0.057146307 = weight(_text_:search in 824) [ClassicSimilarity], result of:
          0.057146307 = score(doc=824,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.33256388 = fieldWeight in 824, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=824)
      0.5 = coord(2/4)
    
    Abstract
    We discuss the issues of resolving the information-retrieval problem in large digital collections through the identification and use of document genres. Explicit identification of genre seems particularly important for such collections because any search usually retrieves documents with a diversity of genres that are undifferentiated by obvious clues as to their identity. Also, because most genres are characterized by both form and purpose, identifying the genre of a document provides information as to the document's purpose and its fit to the user's situation, which can be otherwise difficult to assess. We begin by outlining the possible role of genre identification in the information-retrieval process. Our assumption is that genre identification would enhance searching, first because we know that topic alone is not enough to define an information problem and, second, because search results containing genre information would be more easily understandable. Next, we discuss how information professionals have traditionally tackled the issues of representing genre in settings where topical representation is the norm. Finally, we address the issues of studying the efficacy of identifying genre in large digital collections. Because genre is often an implicit notion, studying it in a systematic way presents many problems. We outline a research protocol that would provide guidance for identifying Web document genres, for observing how genre is used in searching and evaluating search results, and finally for representing and visualizing genres.
  16. Tennant, R.: A bibliographic metadata infrastructure for the twenty-first century (2004) 0.04
    0.042216495 = product of:
      0.08443299 = sum of:
        0.046541322 = weight(_text_:web in 2845) [ClassicSimilarity], result of:
          0.046541322 = score(doc=2845,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2884563 = fieldWeight in 2845, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=2845)
        0.037891667 = product of:
          0.075783335 = sum of:
            0.075783335 = weight(_text_:22 in 2845) [ClassicSimilarity], result of:
              0.075783335 = score(doc=2845,freq=4.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.4377287 = fieldWeight in 2845, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=2845)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    The current library bibliographic infrastructure was constructed in the early days of computers - before the Web, XML, and a variety of other technological advances that now offer new opportunities. General requirements of a modern metadata infrastructure for libraries are identified, including such qualities as versatility, extensibility, granularity, and openness. A new kind of metadata infrastructure is then proposed that exhibits at least some of those qualities. Some key challenges that must be overcome to implement a change of this magnitude are identified.
    Date
    9.12.2005 19:22:38
    Source
    Library hi tech. 22(2004) no.2, S.175-181
  17. Lagoze, C.: Keeping Dublin Core simple : Cross-domain discovery or resource description? (2001) 0.04
    0.040772825 = product of:
      0.08154565 = sum of:
        0.041137107 = weight(_text_:web in 1216) [ClassicSimilarity], result of:
          0.041137107 = score(doc=1216,freq=16.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25496176 = fieldWeight in 1216, product of:
              4.0 = tf(freq=16.0), with freq of:
                16.0 = termFreq=16.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1216)
        0.040408544 = weight(_text_:search in 1216) [ClassicSimilarity], result of:
          0.040408544 = score(doc=1216,freq=12.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.23515818 = fieldWeight in 1216, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.01953125 = fieldNorm(doc=1216)
      0.5 = coord(2/4)
    
    Abstract
    Reality is messy. Individuals perceive or define objects differently. Objects may change over time, morphing into new versions of their former selves or into things altogether different. A book can give rise to a translation, derivation, or edition, and these resulting objects are related in complex ways to each other and to the people and contexts in which they were created or transformed. Providing a normalized view of such a messy reality is a precondition for managing information. From the first library catalogs, through Melvil Dewey's Decimal Classification system in the nineteenth century, to today's MARC encoding of AACR2 cataloging rules, libraries have epitomized the process of what David Levy calls "order making", whereby catalogers impose a veneer of regularity on the natural disorder of the artifacts they encounter. The pre-digital library within which the Catalog and its standards evolved was relatively self-contained and controlled. Creating and maintaining catalog records was, and still is, the task of professionals. Today's Web, in contrast, has brought together a diversity of information management communities, with a variety of order-making standards, into what Stuart Weibel has called the Internet Commons. The sheer scale of this context has motivated a search for new ways to describe and index information. Second-generation search engines such as Google can yield astonishingly good search results, while tools such as ResearchIndex for automatic citation indexing and techniques for inferring "Web communities" from constellations of hyperlinks promise even better methods for focusing queries on information from authoritative sources. Such "automated digital libraries," according to Bill Arms, promise to radically reduce the cost of managing information. Alongside the development of such automated methods, there is increasing interest in metadata as a means of imposing pre-defined order on Web content. While the size and changeability of the Web makes professional cataloging impractical, a minimal amount of information ordering, such as that represented by the Dublin Core (DC), may vastly improve the quality of an automatic index at low cost; indeed, recent work suggests that some types of simple description may be generated with little or no human intervention.
    Metadata is not monolithic. Instead, it is helpful to think of metadata as multiple views that can be projected from a single information object. Such views can form the basis of customized information services, such as search engines. Multiple views -- different types of metadata associated with a Web resource -- can facilitate a "drill-down" search paradigm, whereby people start their searches at a high level and later narrow their focus using domain-specific search categories. In Figure 1, for example, Mona Lisa may be viewed from the perspective of non-specialized searchers, with categories that are valid across domains (who painted it and when?); in the context of a museum (when and how was it acquired?); in the geo-spatial context of a walking tour using mobile devices (where is it in the gallery?); and in a legal framework (who owns the rights to its reproduction?). Multiple descriptive views imply a modular approach to metadata. Modularity is the basis of metadata architectures such as the Resource Description Framework (RDF), which permit different communities of expertise to associate and maintain multiple metadata packages for Web resources. As noted elsewhere, static association of multiple metadata packages with resources is but one way of achieving modularity. Another method is to computationally derive order-making views customized to the current needs of a client. This paper examines the evolution and scope of the Dublin Core from this perspective of metadata modularization. Dublin Core began in 1995 with a specific goal and scope -- as an easy-to-create and maintain descriptive format to facilitate cross-domain resource discovery on the Web. Over the years, this goal of "simple metadata for coarse-granularity discovery" came to mix with another goal -- that of community and domain-specific resource description and its attendant complexity. A notion of "qualified Dublin Core" evolved whereby the model for simple resource discovery -- a set of simple metadata elements in a flat, document-centric model -- would form the basis of more complex descriptions by treating the values of its elements as entities with properties ("component elements") in their own right.
    At the time of writing, the Dublin Core Metadata Initiative (DCMI) has clarified its commitment to the simple approach. The qualification principles announced in early 2000 support the use of DC elements as the basis for simple statements about resources, rather than as the foundation for more descriptive clauses. This paper takes a critical look at some of the issues that led up to this renewed commitment to simplicity. We argue that: * There remains a compelling need for simple, "pidgin" metadata. From a technical and economic perspective, document-centric metadata, where simple string values are associated with a finite set of properties, is most appropriate for generic, cross-domain discovery queries in the Internet Commons. Such metadata is not necessarily fixed in physical records, but may be projected algorithmically from more complex metadata or from content itself. * The Dublin Core, while far from perfect from an engineering perspective, is an acceptable standard for such simple metadata. Agreements in the global information space are as much social as technical, and the process by which the Dublin Core has been developed, involving a broad cross-section of international participants, is a model for such "socially developed" standards. * Efforts to introduce complexity into Dublin Core are misguided. Complex descriptions may be necessary for some Web resources and for some purposes, such as administration, preservation, and reference linking. However, complex descriptions require more expressive data models that differentiate between agents, documents, contexts, events, and the like. An attempt to intermix simplicity and complexity, and the data models most appropriate for them, defeats the equally noble goals of cross-domain description and extensive resource description. * The principle of modularity suggests that metadata formats tailored for simplicity be used alongside others tailored for complexity.
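    As a toy illustration of the "multiple views" idea sketched above (not an example taken from the paper), the same object can carry a simple cross-domain Dublin Core view alongside narrower domain-specific views; the museum, tour, and rights field names and placeholder values below are hypothetical:

      mona_lisa_views = {
          "dublin_core": {                 # simple, document-centric "pidgin" metadata
              "title": "Mona Lisa",
              "creator": "Leonardo da Vinci",
              "date": "<date painted>",
          },
          "museum": {                      # when and how was it acquired?
              "acquired": "<acquisition year>",
              "acquisition_method": "<how it entered the collection>",
          },
          "walking_tour": {                # where is it in the gallery?
              "gallery_location": "<room / wall reference>",
          },
          "rights": {                      # who owns the rights to its reproduction?
              "reproduction_rights": "<rights holder>",
          },
      }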
  18. Howarth, L.C.: Designing a "Human Understandable" metalevel ontology for enhancing resource discovery in knowledge bases (2000) 0.04
    0.037874047 = product of:
      0.07574809 = sum of:
        0.029088326 = weight(_text_:web in 114) [ClassicSimilarity], result of:
          0.029088326 = score(doc=114,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.18028519 = fieldWeight in 114, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=114)
        0.046659768 = weight(_text_:search in 114) [ClassicSimilarity], result of:
          0.046659768 = score(doc=114,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.27153727 = fieldWeight in 114, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0390625 = fieldNorm(doc=114)
      0.5 = coord(2/4)
    
    Abstract
    With the explosion of digitized resources accessible via networked information systems, and the corresponding proliferation of general purpose and domain-specific schemes, metadata have assumed a special prominence. While recent work emanating from the World Wide Web Consortium (W3C) has focused on the Resource Description Framework (RDF) to support the interoperability of metadata standards - thus converting metatags from diverse domains from merely "machine-readable" to "machine-understandable" - the next iteration, to "human-understandable," remains a challenge. This apparent gap provides a framework for three-phase research (Howarth, 1999) to develop a tool which will provide a "human-understandable" front-end search assist to any XML-compliant metadata scheme. Findings from phase one, the analyses and mapping of seven metadata schemes, identify the particular challenges of designing a common "namespace", populated with element tags which are appropriately descriptive, yet readily understood by a lay searcher, when there is little congruence within, and a high degree of variability across, the metadata schemes under study. Implications for the subsequent design and testing of both the proposed "metalevel ontology" (phase two), and the prototype search assist tool (phase three) are examined
  19. Catarino, M.E.; Baptista, A.A.: Relating folksonomies with Dublin Core (2008) 0.04
    0.037032373 = product of:
      0.07406475 = sum of:
        0.050382458 = weight(_text_:web in 2652) [ClassicSimilarity], result of:
          0.050382458 = score(doc=2652,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3122631 = fieldWeight in 2652, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0390625 = fieldNorm(doc=2652)
        0.02368229 = product of:
          0.04736458 = sum of:
            0.04736458 = weight(_text_:22 in 2652) [ClassicSimilarity], result of:
              0.04736458 = score(doc=2652,freq=4.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.27358043 = fieldWeight in 2652, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2652)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Folksonomy is the result of describing Web resources with tags created by Web users. Although it has become a popular application for the description of resources, in general terms folksonomies are not being conveniently integrated in metadata. However, if the appropriate metadata elements are identified, then further work may be conducted to automatically assign tags to these elements (RDF properties) and use them in Semantic Web applications. This article presents research carried out to continue the project Kinds of Tags, which intends to identify elements required for metadata originating from folksonomies and to propose an application profile for DC Social Tagging. The work provides information that may be used by software applications to assign tags to metadata elements and, therefore, means for tags to be conveniently gathered by metadata interoperability tools. Despite the unquestionably high value of DC and the significance of the already existing properties in DC Terms, the pilot study revealed a significant number of tags for which no corresponding properties yet existed. A need for new properties, such as Action, Depth, Rate, and Utility was determined. Those potential new properties will have to be validated in a later stage by the DC Social Tagging Community.
    Pages
    S.14-22
    Source
    Metadata for semantic and social applications : proceedings of the International Conference on Dublin Core and Metadata Applications, Berlin, 22 - 26 September 2008, DC 2008: Berlin, Germany / ed. by Jane Greenberg and Wolfgang Klas
  20. Wusteman, J.: Whither HTML? (2004) 0.04
    0.036667388 = product of:
      0.073334776 = sum of:
        0.046541322 = weight(_text_:web in 1001) [ClassicSimilarity], result of:
          0.046541322 = score(doc=1001,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2884563 = fieldWeight in 1001, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1001)
        0.026793454 = product of:
          0.053586908 = sum of:
            0.053586908 = weight(_text_:22 in 1001) [ClassicSimilarity], result of:
              0.053586908 = score(doc=1001,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.30952093 = fieldWeight in 1001, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=1001)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    HTML has reinvented itself as an XML application. The working draft of the latest version, XHTML 2.0, is causing controversy due to its lack of backward compatibility and the deprecation - and in some cases disappearance - of some popular tags. But is this commotion distracting us from the big picture of what XHTML has to offer? Where is HTML going? And is it taking the Web community with it?
    Source
    Library hi tech. 22(2004) no.1, S.99-105

Languages

  • e 108
  • d 14

Types

  • a 105
  • el 18
  • s 6
  • m 5
  • b 2
  • x 1