Search (56 results, page 1 of 3)

  • × theme_ss:"Metadaten"
  • × year_i:[1990 TO 2000}
  1. Thornely, J.: ¬The road to meta : the implementation of Dublin Core metadata in the State Library of Queensland website (1998) 0.07
    0.06670072 = product of:
      0.13340144 = sum of:
        0.08061194 = weight(_text_:web in 2585) [ClassicSimilarity], result of:
          0.08061194 = score(doc=2585,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.49962097 = fieldWeight in 2585, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=2585)
        0.052789498 = weight(_text_:search in 2585) [ClassicSimilarity], result of:
          0.052789498 = score(doc=2585,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.30720934 = fieldWeight in 2585, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=2585)
      0.5 = coord(2/4)
    
    Abstract
    The goal of the State Library of Queensland's Metadata Project is the deployment of metadata, using the Dublin Core Metadata Element Set, in the State Library's Web pages. The deployment of metadata is expected to improve resource discovery by Internet users through the provision of index information (metadata) in State Library Web pages, which is then available to search engines for indexing. The project is also an initial attempt to set standards for metadata deployment in Queensland libraries' Web pages
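The score breakdown shown for entry 1 is Lucene's ClassicSimilarity explain output. As a sanity check, the reported weight for `_text_:web` can be reproduced from the listed factors; this is a minimal sketch that takes queryNorm and fieldNorm as given from the tree above:

```python
import math

# Constants read off the explain tree for doc 2585 above.
FREQ = 6.0              # termFreq of "web" in the field
DOC_FREQ = 4597         # docFreq of "web"
MAX_DOCS = 44218        # maxDocs in the index
QUERY_NORM = 0.049439456
FIELD_NORM = 0.0625

tf = math.sqrt(FREQ)                              # ~2.4494898
idf = 1.0 + math.log(MAX_DOCS / (DOC_FREQ + 1))   # ~3.2635105

query_weight = idf * QUERY_NORM                   # ~0.16134618
field_weight = tf * idf * FIELD_NORM              # ~0.49962097
score = query_weight * field_weight               # ~0.08061194

print(score)
```

The two weights of entry 1 are then summed and multiplied by coord(2/4) = 0.5, giving the document score at the top of the tree.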
  2. Liechti, O.; Sifer, M.J.; Ichikawa, T.: Structured graph format : XML metadata for describing Web site structure (1998) 0.06
    0.061598226 = product of:
      0.12319645 = sum of:
        0.09975218 = weight(_text_:web in 3597) [ClassicSimilarity], result of:
          0.09975218 = score(doc=3597,freq=12.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.6182494 = fieldWeight in 3597, product of:
              3.4641016 = tf(freq=12.0), with freq of:
                12.0 = termFreq=12.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3597)
        0.023444273 = product of:
          0.046888545 = sum of:
            0.046888545 = weight(_text_:22 in 3597) [ClassicSimilarity], result of:
              0.046888545 = score(doc=3597,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.2708308 = fieldWeight in 3597, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3597)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    To improve searching, filtering and processing of information on the Web, a common effort is being made in the direction of metadata, defined as machine-understandable information about Web resources or other things. In particular, the eXtensible Markup Language (XML) aims at providing a common syntax for emerging metadata formats. Proposes the Structured Graph Format (SGF), an XML-compliant markup language based on structured graphs, for capturing Web sites' structure. Presents SGMapper, a client-side tool which aims to facilitate navigation in large Web sites by generating highly interactive site maps using SGF metadata
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
  3. Turner, T.P.; Brackbill, L.: Rising to the top : evaluating the use of HTML META tag to improve retrieval of World Wide Web documents through Internet search engines (1998) 0.06
    0.05897005 = product of:
      0.1179401 = sum of:
        0.049364526 = weight(_text_:web in 5230) [ClassicSimilarity], result of:
          0.049364526 = score(doc=5230,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.3059541 = fieldWeight in 5230, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=5230)
        0.068575576 = weight(_text_:search in 5230) [ClassicSimilarity], result of:
          0.068575576 = score(doc=5230,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.39907667 = fieldWeight in 5230, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=5230)
      0.5 = coord(2/4)
    
    Abstract
    Reports results of a study to evaluate the effectiveness of using the HTML META tag to improve retrieval of World Wide Web documents through Internet search engines. 20 documents were created in 5 subject areas: agricultural trade; farm business statistics; poultry statistics; vegetable statistics; and cotton statistics. 4 pages were created in each subject area: one with no META tags, one with a META tag using the keywords attribute, one with a META tag using the description attribute, and one with META tags using both the keywords and description attributes. Searches were performed in AltaVista and Infoseek to find terms common to all pages as well as each keyword term contained in the META tag. Analysis of the searches suggests that use of the keywords attribute in a META tag substantially improves accessibility, while use of the description attribute alone does not. Concludes that HTML document authors should consider using keywords attribute META tags and suggests that more search engines index the META tag to improve resource discovery
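The four page variants in the study above can be illustrated with a small sketch. The keyword and description values here are invented examples, not the study's actual test pages:

```python
# Sketch of the four test-page variants: no META tags, keywords only,
# description only, and both attributes. Content values are hypothetical.
def meta_tags(keywords=None, description=None):
    tags = []
    if keywords:
        tags.append('<meta name="keywords" content="%s">' % ", ".join(keywords))
    if description:
        tags.append('<meta name="description" content="%s">' % description)
    return "\n".join(tags)

variants = {
    "none": meta_tags(),
    "keywords": meta_tags(keywords=["agricultural trade", "exports"]),
    "description": meta_tags(description="Statistics on agricultural trade."),
    "both": meta_tags(keywords=["agricultural trade", "exports"],
                      description="Statistics on agricultural trade."),
}
```

Per the study's finding, a page like `variants["keywords"]` would be the minimal variant expected to improve retrieval.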
  4. Qin, J.; Wesley, K.: Web indexing with meta fields : a survey of Web objects in polymer chemistry (1998) 0.06
    0.058822148 = product of:
      0.117644295 = sum of:
        0.07805218 = weight(_text_:web in 3589) [ClassicSimilarity], result of:
          0.07805218 = score(doc=3589,freq=10.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.48375595 = fieldWeight in 3589, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.046875 = fieldNorm(doc=3589)
        0.03959212 = weight(_text_:search in 3589) [ClassicSimilarity], result of:
          0.03959212 = score(doc=3589,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.230407 = fieldWeight in 3589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.046875 = fieldNorm(doc=3589)
      0.5 = coord(2/4)
    
    Abstract
    Reports results of a study of 4 WWW search engines, AltaVista, Lycos, Excite and WebCrawler, to collect data on Web objects on polymer chemistry. 1,037 Web objects were examined for data in 4 categories: document information; use of meta fields; use of images; and use of chemical names. Issues raised included: whether to provide metadata elements for parts of entities or whole entities only; the use of metasyntax; problems in the representation of special types of objects; and whether links should be considered when encoding metadata. Use of meta fields was not widespread in the sample, and knowledge of meta fields in HTML varied greatly among Web object creators. The study formed part of a metadata project funded by the OCLC Library and Information Science Research Grant Program
  5. Marchiori, M.: ¬The limits of Web metadata, and beyond (1998) 0.06
    0.057252567 = product of:
      0.114505135 = sum of:
        0.09106086 = weight(_text_:web in 3383) [ClassicSimilarity], result of:
          0.09106086 = score(doc=3383,freq=10.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.5643819 = fieldWeight in 3383, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=3383)
        0.023444273 = product of:
          0.046888545 = sum of:
            0.046888545 = weight(_text_:22 in 3383) [ClassicSimilarity], result of:
              0.046888545 = score(doc=3383,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.2708308 = fieldWeight in 3383, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=3383)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Highlights 2 major problems of WWW metadata: it will take some time before a reasonable number of people start using metadata to provide a better Web classification, and no one can guarantee that a majority of Web objects will ever be properly classified via metadata. Addresses the problem of how to cope with the intrinsic limits of Web metadata, proposes a method to solve these problems, and shows evidence of its effectiveness. Examines the important problem of the critical mass required in the WWW for metadata to be really useful
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
  6. Wolfekuhler, M.R.; Punch, W.F.: Finding salient features for personal Web pages categories (1997) 0.05
    0.04698986 = product of:
      0.09397972 = sum of:
        0.07053544 = weight(_text_:web in 2673) [ClassicSimilarity], result of:
          0.07053544 = score(doc=2673,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43716836 = fieldWeight in 2673, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2673)
        0.023444273 = product of:
          0.046888545 = sum of:
            0.046888545 = weight(_text_:22 in 2673) [ClassicSimilarity], result of:
              0.046888545 = score(doc=2673,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.2708308 = fieldWeight in 2673, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=2673)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    Examines techniques that discover features in sets of pre-categorized documents, such that similar documents can be found on the WWW. Examines techniques which will classify training examples with high accuracy, then explains why this is not necessarily useful. Describes a method for extracting word clusters from the raw document features. Results show that the clustering technique is successful in discovering word groups in personal Web pages which can be used to find similar information on the WWW
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue of papers from the 6th International World Wide Web conference, held 7-11 Apr 1997, Santa Clara, California
  7. Tammaro, A.M.: Catalogando, catalogando ... metacatalogando (1997) 0.04
    0.043457236 = product of:
      0.08691447 = sum of:
        0.04072366 = weight(_text_:web in 902) [ClassicSimilarity], result of:
          0.04072366 = score(doc=902,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=902)
        0.046190813 = weight(_text_:search in 902) [ClassicSimilarity], result of:
          0.046190813 = score(doc=902,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=902)
      0.5 = coord(2/4)
    
    Abstract
    A crucial question for librarians is whether to catalogue Internet information sources, and electronic sources in general, which may contain metainformation of the texts of articles. Librarians can help researchers with data identification and access in 4 ways: making OPACs available on the Internet; providing a complete selection of Gopher, FTP, WWW, etc. site lists; maintaining a Web site, coordinated by the library, that functions as an Internet access point; and organising access to existing search engines that do automatic indexing. Briefly reviews several metadata formats, including USMARC field 856, IAFA templates, SOIF (Harvest), TEI Headers, Capcas Head and URC
  8. Wu, C.-J.: Experiments on using the Dublin Core to reduce the retrieval error ratio (1998) 0.04
    0.043457236 = product of:
      0.08691447 = sum of:
        0.04072366 = weight(_text_:web in 5201) [ClassicSimilarity], result of:
          0.04072366 = score(doc=5201,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.25239927 = fieldWeight in 5201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5201)
        0.046190813 = weight(_text_:search in 5201) [ClassicSimilarity], result of:
          0.046190813 = score(doc=5201,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.2688082 = fieldWeight in 5201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5201)
      0.5 = coord(2/4)
    
    Abstract
    In order to test the power of metadata on information retrieval, an experiment was designed and conducted on a group of 7 graduate students using the Dublin Core as the cataloguing metadata. Results show that, on average, the retrieval error rate is only 2.9 per cent for the MES system (http://140.136.85.194), which utilizes the Dublin Core to describe the documents on the World Wide Web, in contrast to 20.7 per cent for the 7 famous search engines including HOTBOT, GAIS, LYCOS, EXCITE, INFOSEEK, YAHOO, and OCTOPUS. The very low error rate indicates that the users can use the information of the Dublin Core to decide whether to retrieve the documents or not
  9. Brasethvik, T.: ¬A semantic modeling approach to metadata (1998) 0.04
    0.040518112 = product of:
      0.081036225 = sum of:
        0.05759195 = weight(_text_:web in 5165) [ClassicSimilarity], result of:
          0.05759195 = score(doc=5165,freq=4.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.35694647 = fieldWeight in 5165, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5165)
        0.023444273 = product of:
          0.046888545 = sum of:
            0.046888545 = weight(_text_:22 in 5165) [ClassicSimilarity], result of:
              0.046888545 = score(doc=5165,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.2708308 = fieldWeight in 5165, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=5165)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    States that heterogeneous project groups today may be expected to use the mechanisms of the Web for sharing information. Metadata has been proposed as a mechanism for expressing the semantics of information and, hence, facilitate information retrieval, understanding and use. Presents an approach to sharing information which aims to use a semantic modeling language as the basis for expressing the semantics of information and designing metadata schemes. Functioning on the borderline between human and computer understandability, the modeling language would be able to express the semantics of published Web documents. Reporting on work in progress, presents the overall framework and ideas
    Date
    9. 9.2000 17:22:23
  10. Waugh, A.: Specifying metadata standards for metadata tool configuration (1998) 0.04
    0.036667388 = product of:
      0.073334776 = sum of:
        0.046541322 = weight(_text_:web in 3596) [ClassicSimilarity], result of:
          0.046541322 = score(doc=3596,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.2884563 = fieldWeight in 3596, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=3596)
        0.026793454 = product of:
          0.053586908 = sum of:
            0.053586908 = weight(_text_:22 in 3596) [ClassicSimilarity], result of:
              0.053586908 = score(doc=3596,freq=2.0), product of:
                0.17312855 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.049439456 = queryNorm
                0.30952093 = fieldWeight in 3596, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.0625 = fieldNorm(doc=3596)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Date
    1. 8.1996 22:08:06
    Footnote
    Contribution to a special issue devoted to the Proceedings of the 7th International World Wide Web Conference, held 14-18 April 1998, Brisbane, Australia
  11. Roszkowski, M.; Lukas, C.: ¬A distributed architecture for resource discovery using metadata (1998) 0.03
    0.03429555 = product of:
      0.0685911 = sum of:
        0.03732781 = weight(_text_:search in 1256) [ClassicSimilarity], result of:
          0.03732781 = score(doc=1256,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.21722981 = fieldWeight in 1256, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.03125 = fieldNorm(doc=1256)
        0.031263296 = product of:
          0.06252659 = sum of:
            0.06252659 = weight(_text_:engine in 1256) [ClassicSimilarity], result of:
              0.06252659 = score(doc=1256,freq=2.0), product of:
                0.26447627 = queryWeight, product of:
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.049439456 = queryNorm
                0.23641664 = fieldWeight in 1256, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.349498 = idf(docFreq=570, maxDocs=44218)
                  0.03125 = fieldNorm(doc=1256)
          0.5 = coord(1/2)
      0.5 = coord(2/4)
    
    Abstract
    This article describes an approach for linking geographically distributed collections of metadata so that they are searchable as a single collection. We describe the infrastructure, which uses standard Internet protocols such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP), to distribute queries, return results, and exchange index information. We discuss the advantages of using linked collections of authoritative metadata as an alternative to using a keyword indexing search-engine for resource discovery. We examine other architectures that use metadata for resource discovery, such as Dienst/NCSTRL, the AHDS HTTP/Z39.50 Gateway, and the ROADS initiative. Finally, we discuss research issues and future directions of the project. The Internet Scout Project, which is funded by the National Science Foundation and is located in the Computer Sciences Department at the University of Wisconsin-Madison, is charged with assisting the higher education community in resource discovery on the Internet. To that end, the Scout Report and subsequent subject-specific Scout Reports were developed to guide the U.S. higher education community to research-quality resources. The Scout Report Signpost utilizes the content from the Scout Reports as the basis of a metadata collection. Signpost consists of more than 2000 cataloged Internet sites using established standards such as Library of Congress subject headings and abbreviated call letters, and emerging standards such as the Dublin Core (DC). This searchable and browseable collection is free and freely accessible, as are all of the Internet Scout Project's services.
    As well developed as both the Scout Reports and Signpost are, they cannot capture the wealth of high-quality content that is available on the Internet. An obvious next step toward increasing the usefulness of our own collection and its value to our customer base is to partner with other high-quality content providers who have developed similar collections and to develop a single, virtual collection. Project Isaac (working title) is the Internet Scout Project's latest resource discovery effort. Project Isaac involves the development of a research testbed that allows experimentation with protocols and algorithms for creating, maintaining, indexing and searching distributed collections of metadata. Project Isaac's infrastructure uses standard Internet protocols, such as the Lightweight Directory Access Protocol (LDAP) and the Common Indexing Protocol (CIP) to distribute queries, return results, and exchange index or centroid information. The overall goal is to support a single-search interface to geographically distributed and independently maintained metadata collections.
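The single-search interface over distributed collections described above can be sketched generically: a query is fanned out to several independently maintained metadata collections and the result sets are merged. This is a simplified stand-in for the LDAP/CIP machinery, with invented collection contents:

```python
# Simplified sketch: each "collection" is searched independently and the
# results are merged, mimicking a single virtual collection.
def search_collection(records, query):
    return [rec for rec in records if query.lower() in rec["title"].lower()]

def distributed_search(collections, query):
    merged = []
    for site, records in collections.items():
        for rec in search_collection(records, query):
            merged.append(dict(rec, source=site))  # tag hit with its origin
    return merged

# Hypothetical partner collections, standing in for Signpost and a peer site.
collections = {
    "signpost": [{"title": "Dublin Core primer"}, {"title": "LDAP basics"}],
    "partner": [{"title": "Metadata for chemists"}],
}
hits = distributed_search(collections, "metadata")
```

In the real architecture the per-collection search and the index/centroid exchange run over LDAP and CIP rather than in-process calls, but the fan-out-and-merge shape is the same.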
  12. Rhyno, A.: RDF and metadata : adding value to the Web (1998) 0.02
    0.023270661 = product of:
      0.093082644 = sum of:
        0.093082644 = weight(_text_:web in 6457) [ClassicSimilarity], result of:
          0.093082644 = score(doc=6457,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.5769126 = fieldWeight in 6457, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.125 = fieldNorm(doc=6457)
      0.25 = coord(1/4)
    
  13. Baker, T.: Languages for Dublin Core (1998) 0.02
    0.021728618 = product of:
      0.043457236 = sum of:
        0.02036183 = weight(_text_:web in 1257) [ClassicSimilarity], result of:
          0.02036183 = score(doc=1257,freq=2.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.12619963 = fieldWeight in 1257, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1257)
        0.023095407 = weight(_text_:search in 1257) [ClassicSimilarity], result of:
          0.023095407 = score(doc=1257,freq=2.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.1344041 = fieldWeight in 1257, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.02734375 = fieldNorm(doc=1257)
      0.5 = coord(2/4)
    
    Abstract
    Over the past three years, the Dublin Core Metadata Initiative has achieved a broad international consensus on the semantics of a simple element set for describing electronic resources. Since the first workshop in March 1995, which was reported in the very first issue of D-Lib Magazine, Dublin Core has been the topic of perhaps a dozen articles here. Originally intended to be simple and intuitive enough for authors to tag Web pages without special training, Dublin Core is being adapted now for more specialized uses, from government information and legal deposit to museum informatics and electronic commerce. To meet such specialized requirements, Dublin Core can be customized with additional elements or qualifiers. However, these refinements can compromise interoperability across applications. There are tradeoffs between using specific terms that precisely meet local needs versus general terms that are understood more widely. We can better understand this inevitable tension between simplicity and complexity if we recognize that metadata is a form of human language. With Dublin Core, as with a natural language, people are inclined to stretch definitions, make general terms more specific, specific terms more general, misunderstand intended meanings, and coin new terms. One goal of this paper, therefore, will be to examine the experience of some related ways to seek semantic interoperability through simplicity: planned languages, interlingua constructs, and pidgins. The problem of semantic interoperability is compounded when we consider Dublin Core in translation. All of the workshops, documents, mailing lists, user guides, and working group outputs of the Dublin Core Initiative have been in English. But in many countries and for many applications, people need a metadata standard in their own language. In principle, the broad elements of Dublin Core can be defined equally well in Bulgarian or Hindi.
    Since Dublin Core is a controlled standard, however, any parallel definitions need to be kept in sync as the standard evolves. Another goal of the paper, then, will be to define the conceptual and organizational problem of maintaining a metadata standard in multiple languages. In addition to a name and definition, which are meant for human consumption, each Dublin Core element has a label, or indexing token, meant for harvesting by search engines. For practical reasons, these machine-readable tokens are English-looking strings such as Creator and Subject (just as HTML tags are called HEAD, BODY, or TITLE). These tokens, which are shared by Dublin Cores in every language, ensure that metadata fields created in any particular language are indexed together across repositories. As symbols of underlying universal semantics, these tokens form the basis of semantic interoperability among the multiple Dublin Cores. As long as we limit ourselves to sharing these indexing tokens among exact translations of a simple set of fifteen broad elements, the definitions of which fit easily onto two pages, the problem of Dublin Core in multiple languages is straightforward. But nothing having to do with human language is ever so simple. Just as speakers of various languages must learn the language of Dublin Core in their own tongues, we must find the right words to talk about a metadata language that is expressible in many discipline-specific jargons and natural languages and that inevitably will evolve and change over time.
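The role of the shared, language-independent indexing tokens described above can be sketched as a mapping from localized element labels to one common token. The labels here are illustrative translations, not official Dublin Core vocabularies:

```python
# Localized Dublin Core element labels map to one shared indexing token,
# so fields created in any language are indexed together across repositories.
# The (language, label) pairs below are illustrative, not normative.
LABEL_TO_TOKEN = {
    ("en", "Creator"): "Creator",
    ("de", "Verfasser"): "Creator",
    ("fr", "Createur"): "Creator",
    ("en", "Subject"): "Subject",
    ("de", "Thema"): "Subject",
}

def index_token(lang, label):
    return LABEL_TO_TOKEN[(lang, label)]

# A German record and an English record land in the same index field.
assert index_token("de", "Verfasser") == index_token("en", "Creator")
```

Keeping such a table in sync as the standard evolves is exactly the maintenance problem the paper sets out to define.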
  14. Eichmann, D.; McGregor, T.; Danley, D.: Integrating structured databases into the Web : the MORE system (1994) 0.02
    0.020152984 = product of:
      0.08061194 = sum of:
        0.08061194 = weight(_text_:web in 1501) [ClassicSimilarity], result of:
          0.08061194 = score(doc=1501,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.49962097 = fieldWeight in 1501, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=1501)
      0.25 = coord(1/4)
    
    Abstract
    Administering large quantities of information will be an increasing problem as the WWW grows in size and popularity. The MORE system is a metadatabase repository employing Mosaic and the Web as its sole user interface. Describes the design and implementation experience in migrating a repository system onto the Web
  15. Schweibenz, W.: Proactive Web design : Maßnahmen zur Verbesserung der Auffindbarkeit von Webseiten durch Suchmaschinen (1999) 0.02
    0.020152984 = product of:
      0.08061194 = sum of:
        0.08061194 = weight(_text_:web in 4065) [ClassicSimilarity], result of:
          0.08061194 = score(doc=4065,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.49962097 = fieldWeight in 4065, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0625 = fieldNorm(doc=4065)
      0.25 = coord(1/4)
    
    Abstract
    Proactive Web design comprises all measures that improve the findability of Web pages by search engines and that can be taken in advance of, or at the moment of, publication on the WWW. These measures range from registering a Web page with search engines, through linking it with related Web pages and the meaningful wording of Web page titles, to the use of metadata
  16. Perkins, M.: Why don't search engines work better? (1997) 0.02
    0.02000121 = product of:
      0.08000484 = sum of:
        0.08000484 = weight(_text_:search in 753) [ClassicSimilarity], result of:
          0.08000484 = score(doc=753,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.46558946 = fieldWeight in 753, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=753)
      0.25 = coord(1/4)
    
    Abstract
    Despite the proliferation of new search engines and improvements to existing ones, their use with the WWW continues to produce innumerable false hits. The reason is that HTML is mainly a presentation tool and does a fairly poor job of describing the contents of a document, while search engines remain a long way from artificial intelligence. The use of SGML would ease the problem considerably, but it is too complex and time-consuming to learn to be of general use. The alternative 'metadata' approach is proving slow to get off the ground. Researchers are investigating these and various other lines of enquiry
  17. Revelli, C.: Integrare o sostituire? : Un dilemma per la norme catalografiche (1997) 0.02
    0.02000121 = product of:
      0.08000484 = sum of:
        0.08000484 = weight(_text_:search in 1624) [ClassicSimilarity], result of:
          0.08000484 = score(doc=1624,freq=6.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.46558946 = fieldWeight in 1624, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1624)
      0.25 = coord(1/4)
    
    Abstract
    Discusses a range of professional librarians' opinions on the urgent need either to adapt or to replace the current cataloguing rules, a theme closely linked to the identity crisis facing libraries and librarians in the online electronic era. Topics examined include: Gorman and Oddy's views on restructuring AACR principles; the 13 metadata elements contained in the Dublin Core document (1995); catalogue search by known item; keyword search versus subject search; and the US Library of Congress's Program for Cooperative Cataloging
  18. Desai, B.C.: Supporting discovery in virtual libraries (1997) 0.02
    0.018663906 = product of:
      0.07465562 = sum of:
        0.07465562 = weight(_text_:search in 543) [ClassicSimilarity], result of:
          0.07465562 = score(doc=543,freq=4.0), product of:
            0.17183559 = queryWeight, product of:
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.049439456 = queryNorm
            0.43445963 = fieldWeight in 543, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.475677 = idf(docFreq=3718, maxDocs=44218)
              0.0625 = fieldNorm(doc=543)
      0.25 = coord(1/4)
    
    Abstract
    Discusses the development and implementation of models for indexing and searching information resources on the Internet. Examines briefly the results of a simple query on a number of existing search systems and discusses 2 proposed index metadata structures for indexing and supporting search and discovery: The Dublin Core Elements List and the Semantic Header. Presents an indexing and discovery system based on the Semantic Header
  19. Minas, M.; Shklar, L.: Visualizing information repositories on the World-Wide Web (1996) 0.02
    0.01763386 = product of:
      0.07053544 = sum of:
        0.07053544 = weight(_text_:web in 6267) [ClassicSimilarity], result of:
          0.07053544 = score(doc=6267,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43716836 = fieldWeight in 6267, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=6267)
      0.25 = coord(1/4)
    
    Abstract
    The main objective of the proposed high-level 'Visual Repository Definition Language' is to enable advanced Web presentation of large amounts of existing heterogeneous information. Statements of the language describe the desired structure of information repositories, which are composed of metadata entities encapsulating the original data. Such an approach helps to avoid the usual relocation and restructuring of data that occurs when providing Web access to it. The language has been designed to be useful even for inexperienced programmers. Its applicability is demonstrated by a real example: creating a repository of judicial opinions from publicly available raw data
  20. Weibel, S.: ¬The Dublin Core : a simple content description model for electronic resources (1997) 0.02
    0.01763386 = product of:
      0.07053544 = sum of:
        0.07053544 = weight(_text_:web in 2563) [ClassicSimilarity], result of:
          0.07053544 = score(doc=2563,freq=6.0), product of:
            0.16134618 = queryWeight, product of:
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.049439456 = queryNorm
            0.43716836 = fieldWeight in 2563, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.2635105 = idf(docFreq=4597, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2563)
      0.25 = coord(1/4)
    
    Abstract
    The Dublin Core is a 15-element set intended to facilitate discovery of electronic resources. Its characteristics are: simplicity, semantic interoperability, international consensus, flexibility, metadata modularity on the Web and a metadata architecture for the Web. The WWW Consortium is developing the Resource Description Framework to support different metadata needs. It will support 3 resource description models: embedded metadata, third party metadata, and view filter. Development continues on: refinement of elements, user education and application guides, metadata registries, tools and standardization. Includes a list of related Web sites and details of the core elements