Search (107 results, page 1 of 6)

  • theme_ss:"Metadaten"
  1. Ilik, V.; Storlien, J.; Olivarez, J.: Metadata makeover (2014) 0.05
    0.047834687 = product of:
      0.19133875 = sum of:
        0.19133875 = sum of:
          0.14885838 = weight(_text_:programming in 2606) [ClassicSimilarity], result of:
            0.14885838 = score(doc=2606,freq=2.0), product of:
              0.29361802 = queryWeight, product of:
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.04479146 = queryNorm
              0.5069797 = fieldWeight in 2606, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                6.5552235 = idf(docFreq=170, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2606)
          0.042480372 = weight(_text_:22 in 2606) [ClassicSimilarity], result of:
            0.042480372 = score(doc=2606,freq=2.0), product of:
              0.15685207 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.04479146 = queryNorm
              0.2708308 = fieldWeight in 2606, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=2606)
      0.25 = coord(1/4)
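The relevance figures shown for each result follow Lucene's ClassicSimilarity (TF-IDF) formula. As a minimal sketch, the Python below recomputes the score of result 1 purely from the factors printed in the explanation tree above; the idf, queryNorm and fieldNorm values are copied verbatim, nothing is re-indexed.

```python
import math

def term_score(freq, idf, query_norm, field_norm):
    """One clause's contribution under Lucene ClassicSimilarity: queryWeight * fieldWeight."""
    tf = math.sqrt(freq)                  # tf(freq) = sqrt(termFreq)
    query_weight = idf * query_norm       # queryWeight = idf * queryNorm
    field_weight = tf * idf * field_norm  # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# Factors copied from the explanation tree for doc 2606 above.
w_programming = term_score(2.0, 6.5552235, 0.04479146, 0.0546875)
w_22          = term_score(2.0, 3.5018296, 0.04479146, 0.0546875)

score = (w_programming + w_22) * 0.25     # coord(1/4): one of four query clauses matched
print(w_programming, w_22, score)         # -> approx. 0.148858, 0.042480 and 0.047835
```

The same arithmetic explains the remaining entries; only the idf, freq and fieldNorm factors change from result to result.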
    
    Abstract
    Catalogers have become fluent in information technologies such as web design, HyperText Markup Language (HTML), Cascading Style Sheets (CSS), eXtensible Markup Language (XML), and programming languages. The knowledge gained from learning information technology can be used to experiment with methods of transforming one metadata schema into another using various software solutions. This paper will discuss the use of eXtensible Stylesheet Language Transformations (XSLT) for repurposing, editing, and reformatting metadata. Catalogers have the requisite skills for working with any metadata schema, and if they are excluded from metadata work, libraries are wasting a valuable human resource.
    Date
    10. 9.2000 17:38:22
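The abstract of result 1 describes using XSLT to transform one metadata schema into another. A hedged sketch of such a transformation with Python's lxml follows; the source record, element names and stylesheet are invented for illustration and are not the transformations discussed in the paper.

```python
from lxml import etree

# Hypothetical source record (not a real schema from the paper).
SOURCE = b"""<record>
  <headline>Metadata makeover</headline>
  <byline>Ilik, V.</byline>
</record>"""

# Illustrative XSLT mapping the source elements onto simple (OAI) Dublin Core.
XSLT = b"""<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/">
  <xsl:template match="/record">
    <oai_dc:dc>
      <dc:title><xsl:value-of select="headline"/></dc:title>
      <dc:creator><xsl:value-of select="byline"/></dc:creator>
    </oai_dc:dc>
  </xsl:template>
</xsl:stylesheet>"""

transform = etree.XSLT(etree.fromstring(XSLT))
doc = etree.fromstring(SOURCE)
print(str(transform(doc)))   # serialized Dublin Core output
```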
  2. Zhang, J.; Dimitroff, A.: Internet search engines' response to Metadata Dublin Core implementation (2005) 0.04
    0.044713248 = product of:
      0.17885299 = sum of:
        0.17885299 = weight(_text_:engines in 4652) [ClassicSimilarity], result of:
          0.17885299 = score(doc=4652,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.7858995 = fieldWeight in 4652, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.109375 = fieldNorm(doc=4652)
      0.25 = coord(1/4)
    
  3. Perkins, M.: Why don't search engines work better? (1997) 0.04
    0.038722813 = product of:
      0.15489125 = sum of:
        0.15489125 = weight(_text_:engines in 753) [ClassicSimilarity], result of:
          0.15489125 = score(doc=753,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.68060905 = fieldWeight in 753, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0546875 = fieldNorm(doc=753)
      0.25 = coord(1/4)
    
    Abstract
    Despite the proliferation of new search engines and improvements to existing ones, their use with the WWW continues to produce innumerable false hits. The reason for this is that HTML is mainly a presentation tool and does a fairly poor job of describing the contents of a document, while search engines are a long way from artificial intelligence. The use of SGML would ease the problem considerably, but it is too complex and time consuming to learn to be of general use. The alternative 'metadata' approach is proving slow to get off the ground. Researchers are investigating these and various other lines of enquiry.
  4. Henshaw, R.; Valauskas, E.J.: Metadata as a catalyst : experiments with metadata and search engines in the Internet journal, First Monday (2001) 0.04
    0.03832564 = product of:
      0.15330257 = sum of:
        0.15330257 = weight(_text_:engines in 7098) [ClassicSimilarity], result of:
          0.15330257 = score(doc=7098,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.67362815 = fieldWeight in 7098, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.09375 = fieldNorm(doc=7098)
      0.25 = coord(1/4)
    
  5. Wallis, R.; Isaac, A.; Charles, V.; Manguinhas, H.: Recommendations for the application of Schema.org to aggregated cultural heritage metadata to increase relevance and visibility to search engines : the case of Europeana (2017) 0.04
    0.035707813 = product of:
      0.14283125 = sum of:
        0.14283125 = weight(_text_:engines in 3372) [ClassicSimilarity], result of:
          0.14283125 = score(doc=3372,freq=10.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.62761605 = fieldWeight in 3372, product of:
              3.1622777 = tf(freq=10.0), with freq of:
                10.0 = termFreq=10.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=3372)
      0.25 = coord(1/4)
    
    Abstract
    Europeana provides access to more than 54 million cultural heritage objects through its portal Europeana Collections. It is crucial for Europeana to be recognized by search engines as a trusted authoritative repository of cultural heritage objects. Indeed, even though its portal is the main entry point, most Europeana users come to it via search engines. Europeana Collections is fuelled by metadata describing cultural objects, represented in the Europeana Data Model (EDM). This paper presents the research and consequent recommendations for publishing Europeana metadata using the Schema.org vocabulary and best practices. Schema.org metadata embedded in HTML is consumed by search engines to power rich services (such as the Google Knowledge Graph). Schema.org is an open and widely adopted initiative (used by over 12 million domains), backed by Google, Bing, Yahoo!, and Yandex, for sharing metadata across the web. It underpins the emergence of new web techniques, such as so-called Semantic SEO. Our research addressed the representation of the embedded metadata as part of the Europeana HTML pages and sitemaps so that the re-use of this data can be optimized. The practical objective of our work is to produce a Schema.org representation of Europeana resources described in EDM that is as rich as possible and tailored to Europeana's realities and user needs as well as to the search engines and their users.
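The recommendations concern expressing EDM-described objects in Schema.org markup embedded in HTML for search engines to consume. As an illustrative sketch only (the type and property choices below are assumptions, not the paper's actual mapping), JSON-LD of this kind can be generated and embedded in an object's page:

```python
import json

# Hypothetical cultural heritage object; the mapping below is illustrative only.
item = {
    "@context": "http://schema.org",
    "@type": "VisualArtwork",
    "name": "Example painting",
    "creator": {"@type": "Person", "name": "Example Artist"},
    "provider": {"@type": "Organization", "name": "Europeana"},
    "inLanguage": "nl",
}

# Embedded as a script element in the object's HTML page, which is how
# search engines typically pick up Schema.org JSON-LD.
html_snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(item, indent=2)
    + "\n</script>"
)
print(html_snippet)
```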
  6. Turner, T.P.; Brackbill, L.: Rising to the top : evaluating the use of HTML META tag to improve retrieval of World Wide Web documents through Internet search engines (1998) 0.03
    0.03319098 = product of:
      0.13276392 = sum of:
        0.13276392 = weight(_text_:engines in 5230) [ClassicSimilarity], result of:
          0.13276392 = score(doc=5230,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.58337915 = fieldWeight in 5230, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.046875 = fieldNorm(doc=5230)
      0.25 = coord(1/4)
    
    Abstract
    Reports results of a study to evaluate the effectiveness of using the HTML META tag to improve retrieval of World Wide Web documents through Internet search engines. 20 documents were created in 5 subject areas: agricultural trade; farm business statistics; poultry statistics; vegetable statistics; and cotton statistics. 4 pages were created in each subject area: one with no META tags, one with a META tag using the keywords attribute, one with a META tag using the description attribute, and one with META tags using both the keywords and description attributes. Searches were performed in Alta Vista and Infoseek to find terms common to all pages as well as for each keyword term contained in the META tag. Analysis of the searches suggests that use of the keywords attribute in a META tag substantially improves accessibility while use of the description attribute alone does not. Concludes that HTML document authors should consider using keywords attribute META tags and suggests that more search engines index the META tag to improve resource discovery
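The study's design, four variants of each test page (no META tags, keywords only, description only, both), can be sketched as follows; the page titles and terms are placeholders, not the pages actually used in the experiment.

```python
def test_page(title, keywords=None, description=None):
    """Build a minimal HTML page with optional keywords/description META tags."""
    metas = []
    if keywords:
        metas.append(f'<meta name="keywords" content="{", ".join(keywords)}">')
    if description:
        metas.append(f'<meta name="description" content="{description}">')
    head = "\n  ".join([f"<title>{title}</title>", *metas])
    return f"<html>\n<head>\n  {head}\n</head>\n<body>...</body>\n</html>"

# The four variants for one subject area (terms are placeholders).
variants = {
    "none":        test_page("Cotton statistics"),
    "keywords":    test_page("Cotton statistics", keywords=["cotton", "statistics"]),
    "description": test_page("Cotton statistics", description="Annual cotton statistics."),
    "both":        test_page("Cotton statistics",
                             keywords=["cotton", "statistics"],
                             description="Annual cotton statistics."),
}
print(variants["both"])
```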
  7. What is Schema.org? (2011) 0.03
    0.03319098 = product of:
      0.13276392 = sum of:
        0.13276392 = weight(_text_:engines in 4437) [ClassicSimilarity], result of:
          0.13276392 = score(doc=4437,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.58337915 = fieldWeight in 4437, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.046875 = fieldNorm(doc=4437)
      0.25 = coord(1/4)
    
    Abstract
    This site provides a collection of schemas, i.e., HTML tags, that webmasters can use to markup their pages in ways recognized by major search providers. Search engines including Bing, Google and Yahoo! rely on this markup to improve the display of search results, making it easier for people to find the right web pages. Many sites are generated from structured data, which is often stored in databases. When this data is formatted into HTML, it becomes very difficult to recover the original structured data. Many applications, especially search engines, can benefit greatly from direct access to this structured data. On-page markup enables search engines to understand the information on web pages and provide richer search results in order to make it easier for users to find relevant information on the web. Markup can also enable new tools and applications that make use of the structure. A shared markup vocabulary makes it easier for webmasters to decide on a markup schema and get the maximum benefit for their efforts. So, in the spirit of sitemaps.org, Bing, Google and Yahoo! have come together to provide a shared collection of schemas that webmasters can use.
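The point about structured data being lost once it is rendered as plain HTML, and remaining recoverable when on-page markup is used, can be illustrated with a small hypothetical example (the record, type and properties below are invented):

```python
# A structured record as it might sit in a site's database (values are made up).
record = {"name": "Example Film", "director": "A. Director", "genre": "Documentary"}

# Plain presentation HTML: the structure is lost once rendered.
plain = f"<p>{record['name']} by {record['director']} ({record['genre']})</p>"

# The same content with schema.org microdata: the structure stays machine-readable.
marked_up = (
    f'<p itemscope itemtype="http://schema.org/Movie">'
    f'<span itemprop="name">{record["name"]}</span> by '
    f'<span itemprop="director">{record["director"]}</span> '
    f'(<span itemprop="genre">{record["genre"]}</span>)</p>'
)
print(plain)
print(marked_up)
```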
  8. Roux, M.: Metadata for search engines : what can be learned from e-Sciences? (2012) 0.03
    0.03319098 = product of:
      0.13276392 = sum of:
        0.13276392 = weight(_text_:engines in 96) [ClassicSimilarity], result of:
          0.13276392 = score(doc=96,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.58337915 = fieldWeight in 96, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.046875 = fieldNorm(doc=96)
      0.25 = coord(1/4)
    
    Footnote
    Cf.: http://www.igi-global.com/book/next-generation-search-engines/64420.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.
  9. Dawson, A.; Hamilton, V.: Optimising metadata to make high-value content more accessible to Google users (2006) 0.03
    0.027659154 = product of:
      0.110636614 = sum of:
        0.110636614 = weight(_text_:engines in 5598) [ClassicSimilarity], result of:
          0.110636614 = score(doc=5598,freq=6.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.4861493 = fieldWeight in 5598, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0390625 = fieldNorm(doc=5598)
      0.25 = coord(1/4)
    
    Abstract
    Purpose - This paper aims to show how information in digital collections that have been catalogued using high-quality metadata can be retrieved more easily by users of search engines such as Google. Design/methodology/approach - The research and proposals described arose from an investigation into the observed phenomenon that pages from the Glasgow Digital Library (gdl.cdlr.strath.ac.uk) were regularly appearing near the top of Google search results shortly after publication, without any deliberate effort to achieve this. The reasons for this phenomenon are now well understood and are described in the second part of the paper. The first part provides context with a review of the impact of Google and a summary of recent initiatives by commercial publishers to make their content more visible to search engines. Findings - The literature research provides firm evidence of a trend amongst publishers to ensure that their online content is indexed by Google, in recognition of its popularity with internet users. The practical research demonstrates how search engine accessibility can be compatible with use of established collection management principles and high-quality metadata. Originality/value - The concept of data shoogling is introduced, involving some simple techniques for metadata optimisation. Details of its practical application are given, to illustrate how those working in academic, cultural and public-sector organisations could make their digital collections more easily accessible via search engines, without compromising any existing standards and practices.
  10. Thonely, J.: ¬The road to meta : the implementation of Dublin Core metadata in the State Library of Queensland website (1998) 0.03
    0.025550429 = product of:
      0.102201715 = sum of:
        0.102201715 = weight(_text_:engines in 2585) [ClassicSimilarity], result of:
          0.102201715 = score(doc=2585,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.44908544 = fieldWeight in 2585, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0625 = fieldNorm(doc=2585)
      0.25 = coord(1/4)
    
    Abstract
    The goal of the State Library of Queensland's Metadata Project is the deployment of metadata using the Dublin Core Metadata Element Set in the State Library's WWW pages. The deployment of metadata is expected to improve resource discovery by Internet users, through provision of index information (metadata) in State Library Web pages which is then available to search engines for indexing. The project is also an initial attempt to set standards for metadata deployment in Queensland libraries' Web pages.
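Dublin Core deployment of this kind is typically done with DC-named META elements in each page's head, which search engines can then index. A hedged sketch follows; the field values are placeholders rather than the State Library's actual metadata.

```python
# Placeholder Dublin Core record; the real project used the library's own values.
record = {
    "DC.title": "State Library of Queensland home page",
    "DC.creator": "State Library of Queensland",
    "DC.subject": "Libraries; Queensland",
    "DC.date": "1998",
}

meta_tags = "\n".join(
    f'<meta name="{name}" content="{value}">' for name, value in record.items()
)
print(meta_tags)  # pasted into the <head> of each Web page
```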
  11. Tammaro, A.M.: Catalogando, catalogando ... metacatalogando (1997) 0.02
    0.022356624 = product of:
      0.089426495 = sum of:
        0.089426495 = weight(_text_:engines in 902) [ClassicSimilarity], result of:
          0.089426495 = score(doc=902,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.39294976 = fieldWeight in 902, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0546875 = fieldNorm(doc=902)
      0.25 = coord(1/4)
    
    Abstract
    A crucial question for librarians is whether to catalogue Internet information sources, and electronic sources in general, which may contain metainformation of the texts of articles. Librarians can help researchers with data identification and access in 4 ways: making OPACs available on the Internet; providing a complete selection of Gopher, Ftp, WWW, etc. site lists; maintaining a Web site, coordinated by the library, that functions as an Internet access point; and organising access to existing search engines that do automatic indexing. Briefly reviews several metadata formats, including USMARC field 856, IAFA templates, SOIP (Harvest), TEI Headers, Capcas Head and URC.
  12. Carroll, D.J.; Lele, P.: Human intervention in the networked environment : metadata alternatives (1998) 0.02
    0.022356624 = product of:
      0.089426495 = sum of:
        0.089426495 = weight(_text_:engines in 2221) [ClassicSimilarity], result of:
          0.089426495 = score(doc=2221,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.39294976 = fieldWeight in 2221, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0546875 = fieldNorm(doc=2221)
      0.25 = coord(1/4)
    
    Abstract
    Emphasizes the increased importance of the role of the librarian as a 'human' interface in the organization and retrieval of resources in the networked environment. Comments on the recent increase in metadata and compares the long established MARC format and adaptations of MARC with several other alternative metadata systems. Outlines the use of embedded META tag information in HTML documents and describes how existing search engines find and index resources on the WWW, with their pros and cons. Discusses the implications for effective research of the inherent limitations of these automated indexing schemes
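On the indexing side, the embedded META tag information the abstract refers to is extracted from fetched pages roughly as below; this is a minimal standard-library sketch with a made-up page, not how any particular engine is implemented.

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collect name/content pairs from <meta> elements, as a crawler's indexer might."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if "name" in attrs and "content" in attrs:
                self.meta[attrs["name"].lower()] = attrs["content"]

page = """<html><head>
<meta name="keywords" content="metadata, MARC, networked environment">
<meta name="description" content="Hypothetical example page.">
</head><body>...</body></html>"""

parser = MetaExtractor()
parser.feed(page)
print(parser.meta)
```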
  13. Wu, C.-J.: Experiments on using the Dublin Core to reduce the retrieval error ratio (1998) 0.02
    0.022356624 = product of:
      0.089426495 = sum of:
        0.089426495 = weight(_text_:engines in 5201) [ClassicSimilarity], result of:
          0.089426495 = score(doc=5201,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.39294976 = fieldWeight in 5201, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0546875 = fieldNorm(doc=5201)
      0.25 = coord(1/4)
    
    Abstract
    In order to test the power of metadata on information retrieval, an experiment was designed and conducted on a group of 7 graduate students using the Dublin Core as the cataloguing metadata. Results show that, on average, the retrieval error rate is only 2.9 per cent for the MES system (http://140.136.85.194), which utilizes the Dublin Core to describe the documents on the World Wide Web, in contrast to 20.7 per cent for the 7 famous search engines including HOTBOT, GAIS, LYCOS, EXCITE, INFOSEEK, YAHOO, and OCTOPUS. The very low error rate indicates that the users can use the information of the Dublin Core to decide whether to retrieve the documents or not
  14. Dawson, A.: Creating metadata that work for digital libraries and Google (2004) 0.02
    0.022356624 = product of:
      0.089426495 = sum of:
        0.089426495 = weight(_text_:engines in 4762) [ClassicSimilarity], result of:
          0.089426495 = score(doc=4762,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.39294976 = fieldWeight in 4762, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0546875 = fieldNorm(doc=4762)
      0.25 = coord(1/4)
    
    Abstract
    For many years metadata has been recognised as a significant component of the digital information environment. Substantial work has gone into creating complex metadata schemes for describing digital content. Yet increasingly Web search engines, and Google in particular, are the primary means of discovering and selecting digital resources, although they make little use of metadata. This article considers how digital libraries can gain more value from their metadata by adapting it for Google users, while still following well-established principles and standards for cataloguing and digital preservation.
  15. Godby, C.J.; Young, J.A.; Childress, E.: ¬A repository of metadata crosswalks (2004) 0.02
    0.022356624 = product of:
      0.089426495 = sum of:
        0.089426495 = weight(_text_:engines in 1155) [ClassicSimilarity], result of:
          0.089426495 = score(doc=1155,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.39294976 = fieldWeight in 1155, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.0546875 = fieldNorm(doc=1155)
      0.25 = coord(1/4)
    
    Abstract
    This paper proposes a model for metadata crosswalks that associates three pieces of information: the crosswalk, the source metadata standard, and the target metadata standard, each of which may have a machine-readable encoding and human-readable description. The crosswalks are encoded as METS records that are made available to a repository for processing by search engines, OAI harvesters, and custom-designed Web services. The METS object brings together all of the information required to access and interpret crosswalks and represents a significant improvement over previously available formats. But it raises questions about how best to describe these complex objects and exposes gaps that must eventually be filled in by the digital library community.
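The model ties together three pieces of information: the crosswalk, the source standard and the target standard, each with a machine-readable encoding and a human-readable description. A rough data-structure sketch of that triple (field names and URLs are assumptions, not the METS encoding used in the paper):

```python
from dataclasses import dataclass

@dataclass
class Standard:
    name: str          # human-readable description of the metadata standard
    schema_url: str    # machine-readable encoding, e.g. an XML schema (placeholder URLs)

@dataclass
class Crosswalk:
    source: Standard
    target: Standard
    mapping: str       # machine-readable encoding of the mapping, e.g. an XSLT file

mods_to_dc = Crosswalk(
    source=Standard("MODS", "https://example.org/schemas/mods.xsd"),
    target=Standard("Dublin Core", "https://example.org/schemas/simpledc.xsd"),
    mapping="mods_to_dc.xsl",   # hypothetical stylesheet name
)
print(f"{mods_to_dc.source.name} -> {mods_to_dc.target.name} via {mods_to_dc.mapping}")
```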
  16. Rusch-Feja, D.: Subject oriented collection of information resources from the Internet (1997) 0.02
    0.01916282 = product of:
      0.07665128 = sum of:
        0.07665128 = weight(_text_:engines in 528) [ClassicSimilarity], result of:
          0.07665128 = score(doc=528,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.33681408 = fieldWeight in 528, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.046875 = fieldNorm(doc=528)
      0.25 = coord(1/4)
    
    Abstract
    Subject oriented information sources on the Internet remain relatively unstructured despite attempts at indexing them and despite the use of search engines to index sources in a collective database and to retrieve relevant information sources. Describes the rationale for developing a means to capture and structure Internet resources for scientific research use in a clearinghouse, and methods for retrieval, information filtering, and structuring subject oriented information sources from the Internet for specific user groups. Discusses the issues of design, maintenance, implementation of metadata, and obtaining use feedback. Cooperation among several institutions involved in the German national subject special collections (SSG) library support programme of the DFG has led to recommendations to expand this programme to include coordination of collective Internet subject information sites. In addition to the compilation of subject oriented information sites on the Internet by library and information staff, connection to other value added services serves to make processes of information searching, retrieval, acquisition, and evaluation more effective for researchers.
  17. Qin, J.; Wesley, K.: Web indexing with meta fields : a survey of Web objects in polymer chemistry (1998) 0.02
    0.01916282 = product of:
      0.07665128 = sum of:
        0.07665128 = weight(_text_:engines in 3589) [ClassicSimilarity], result of:
          0.07665128 = score(doc=3589,freq=2.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.33681408 = fieldWeight in 3589, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.046875 = fieldNorm(doc=3589)
      0.25 = coord(1/4)
    
    Abstract
    Reports results of a study of 4 WWW search engines: AltaVista, Lycos, Excite and WebCrawler, to collect data on Web objects in polymer chemistry. 1,037 Web objects were examined for data in 4 categories: document information; use of meta fields; use of images; and use of chemical names. Issues raised included: whether to provide metadata elements for parts of entities or whole entities only, the use of metasyntax, problems in representation of special types of objects, and whether links should be considered when encoding metadata. Use of meta fields was not widespread in the sample and knowledge of meta fields in HTML varied greatly among Web object creators. The study formed part of a metadata project funded by the OCLC Library and Information Science Research Grant Program.
  18. Aldana, J.F.; Gómez, A.C.; Moreno, N.; Nebro, A.J.; Roldán, M.M.: Metadata functionality for semantic Web integration (2003) 0.02
    0.018066881 = product of:
      0.072267525 = sum of:
        0.072267525 = weight(_text_:engines in 2731) [ClassicSimilarity], result of:
          0.072267525 = score(doc=2731,freq=4.0), product of:
            0.22757743 = queryWeight, product of:
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.04479146 = queryNorm
            0.31755137 = fieldWeight in 2731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              5.080822 = idf(docFreq=746, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
      0.25 = coord(1/4)
    
    Abstract
    We propose an extension of a mediator architecture. This extension is oriented to ontology-driven data integration. In our architecture ontologies are not managed by an external component or service, but are integrated in the mediation layer. This approach implies rethinking the mediator design, but at the same time provides advantages from a database perspective. Some of these advantages include the application of optimization and evaluation techniques that use and combine information from all abstraction levels (physical schema, logical schema and semantic information defined by ontology). 1. Introduction. Although the Web is probably the richest information repository in human history, users cannot specify what they want from it. Two major problems that arise in current search engines (Heflin, 2001) are: a) polysemy, when the same word is used with different meanings; b) synonymy, when two different words have the same meaning. Polysemy causes irrelevant information retrieval. On the other hand, synonymy produces loss of useful documents. The lack of a capability to understand the context of the words and the relationships among required terms explains many of the missed and false results produced by search engines. The Semantic Web will bring structure to the meaningful content of Web pages, giving semantic relationships among terms and possibly avoiding the previous problems. Various proposals have appeared for metadata representation and communication standards, and other services and tools that may eventually merge into the global Semantic Web (Berners-Lee, 2001). Hopefully, in the next few years we will see the universal adoption of open standards for representation and sharing of meta-information. In this environment, software agents roaming from page to page can readily carry out sophisticated tasks for users (Berners-Lee, 2001). In this context, ontologies can be seen as metadata that represent the semantics of data, providing a knowledge domain standard vocabulary, like DTDs and XML Schema do. If its pages were so structured, the Web could be seen as a heterogeneous collection of autonomous databases. This suggests that techniques developed in the Database area could be useful. Database research mainly deals with efficient storage and retrieval and with powerful query languages.
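The polysemy and synonymy problems described above are what an ontology in the mediation layer is meant to address. As a toy illustration only (the ontology and disambiguation logic are invented, not the authors' architecture), synonym expansion with context-based sense selection can look like this:

```python
# Toy ontology fragment: concepts with synonym sets (all terms invented).
ONTOLOGY = {
    "jaguar_animal": {"jaguar", "panthera onca", "big cat"},
    "jaguar_car":    {"jaguar", "jaguar cars"},
}

def expand(term, context_terms):
    """Pick the concept whose synonyms overlap the query context, then expand the query."""
    concept, synonyms = max(
        ONTOLOGY.items(),
        key=lambda kv: len(kv[1] & context_terms),
    )
    return concept, synonyms | {term}

concept, terms = expand("jaguar", {"big cat", "habitat"})
print(concept, terms)   # resolves the polysemous term and adds synonyms to the query
```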
  19. Özel, S.A.; Altingövde, I.S.; Ulusoy, Ö.; Özsoyoglu, G.; Özsoyoglu, Z.M.: Metadata-Based Modeling of Information Resources on the Web (2004) 0.01
    0.013290926 = product of:
      0.053163704 = sum of:
        0.053163704 = product of:
          0.10632741 = sum of:
            0.10632741 = weight(_text_:programming in 2093) [ClassicSimilarity], result of:
              0.10632741 = score(doc=2093,freq=2.0), product of:
                0.29361802 = queryWeight, product of:
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.04479146 = queryNorm
                0.36212835 = fieldWeight in 2093, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  6.5552235 = idf(docFreq=170, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=2093)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Abstract
    This paper deals with the problem of modeling Web information resources using expert knowledge and personalized user information for improved Web searching capabilities. We propose a "Web information space" model, which is composed of Web-based information resources (HTML/XML [Hypertext Markup Language/Extensible Markup Language] documents on the Web), expert advice repositories (domain-expert-specified metadata for information resources), and personalized information about users (captured as user profiles that indicate users' preferences about experts as well as users' knowledge about topics). Expert advice, the heart of the Web information space model, is specified using topics and relationships among topics (called metalinks), along the lines of the recently proposed topic maps. Topics and metalinks constitute metadata that describe the contents of the underlying HTML/XML Web resources. The metadata specification process is semiautomated, and it exploits XML DTDs (Document Type Definition) to allow domain-expert guided mapping of DTD elements to topics and metalinks. The expert advice is stored in an object-relational database management system (DBMS). To demonstrate the practicality and usability of the proposed Web information space model, we created a prototype expert advice repository of more than one million topics/metalinks for the DBLP (Database and Logic Programming) Bibliography data set. We also present a query interface that provides sophisticated querying facilities for DBLP Bibliography resources using the expert advice repository.
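A rough sketch of the model's ingredients, topics and metalinks as expert advice plus user profiles holding preferences about experts and knowledge about topics, as plain data structures; all names, fields and example values are assumptions for illustration, not the paper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    name: str
    resources: list[str] = field(default_factory=list)   # URLs of HTML/XML documents

@dataclass
class Metalink:
    relation: str        # e.g. "prerequisite-of", "related-to" (illustrative labels)
    source: str          # topic name
    target: str          # topic name

@dataclass
class UserProfile:
    trusted_experts: dict[str, float] = field(default_factory=dict)  # expert -> preference
    known_topics: set[str] = field(default_factory=set)              # topics the user knows

# Tiny expert-advice repository (all content invented for illustration).
topics = {
    "query optimization": Topic("query optimization", ["http://example.org/paper1.html"]),
    "relational algebra": Topic("relational algebra"),
}
metalinks = [Metalink("prerequisite-of", "relational algebra", "query optimization")]
user = UserProfile(trusted_experts={"expert-A": 0.9}, known_topics={"relational algebra"})

# A search could skip topics the user already knows and follow metalinks onward.
suggestions = [m.target for m in metalinks if m.source in user.known_topics]
print(suggestions)   # -> ['query optimization']
```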
  20. Jimenez, V.O.R.: Nuevas perspectivas para la catalogacion : metadatos ver MARC (1999) 0.01
    0.012873498 = product of:
      0.05149399 = sum of:
        0.05149399 = product of:
          0.10298798 = sum of:
            0.10298798 = weight(_text_:22 in 5743) [ClassicSimilarity], result of:
              0.10298798 = score(doc=5743,freq=4.0), product of:
                0.15685207 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.04479146 = queryNorm
                0.6565931 = fieldWeight in 5743, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=5743)
          0.5 = coord(1/2)
      0.25 = coord(1/4)
    
    Date
    30. 3.2002 19:45:22
    Source
    Revista Española de Documentación Científica. 22(1999) no.2, S.198-219

Languages

  • e 94
  • d 9
  • chi 1
  • i 1
  • sp 1

Types

  • a 98
  • el 10
  • s 5
  • m 3
  • b 2