Search (3 results, page 1 of 1)

  • language_ss:"e"
  • theme_ss:"Metadaten"
  • type_ss:"a"
  1. Rossiter, B.N.; Sillitoe, T.J.; Heather, M.A.: Database support for very large hypertexts (1990) 0.16
    0.16014546 = product of:
      0.24021818 = sum of:
        0.101930246 = weight(_text_:storage in 48) [ClassicSimilarity], result of:
          0.101930246 = score(doc=48,freq=2.0), product of:
            0.24187757 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.04439062 = queryNorm
            0.42141256 = fieldWeight in 48, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.0546875 = fieldNorm(doc=48)
        0.031413812 = weight(_text_:retrieval in 48) [ClassicSimilarity], result of:
          0.031413812 = score(doc=48,freq=2.0), product of:
            0.13427785 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04439062 = queryNorm
            0.23394634 = fieldWeight in 48, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.0546875 = fieldNorm(doc=48)
        0.05616028 = weight(_text_:systems in 48) [ClassicSimilarity], result of:
          0.05616028 = score(doc=48,freq=6.0), product of:
            0.1364201 = queryWeight, product of:
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.04439062 = queryNorm
            0.41167158 = fieldWeight in 48, product of:
              2.4494898 = tf(freq=6.0), with freq of:
                6.0 = termFreq=6.0
              3.0731742 = idf(docFreq=5561, maxDocs=44218)
              0.0546875 = fieldNorm(doc=48)
        0.050713852 = product of:
          0.101427704 = sum of:
            0.101427704 = weight(_text_:architecture in 48) [ClassicSimilarity], result of:
              0.101427704 = score(doc=48,freq=2.0), product of:
                0.24128059 = queryWeight, product of:
                  5.4353957 = idf(docFreq=523, maxDocs=44218)
                  0.04439062 = queryNorm
                0.42037243 = fieldWeight in 48, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  5.4353957 = idf(docFreq=523, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=48)
          0.5 = coord(1/2)
      0.6666667 = coord(4/6)
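The breakdown above is standard Lucene "explain" output for ClassicSimilarity (tf-idf): each leaf weight is queryWeight × fieldWeight, where queryWeight = idf × queryNorm and fieldWeight = tf × idf × fieldNorm, and coord(n/m) scales the sum by the fraction of query clauses that matched. A minimal sketch, assuming the standard ClassicSimilarity formulas (the function names are mine, not Lucene's), that reproduces the 0.16 score for result 1 from the constants shown:

```python
import math

# Reproduce the ClassicSimilarity breakdown for result 1 (doc 48).
# All constants are read directly from the explain tree above.

def idf(doc_freq, max_docs):
    # ClassicSimilarity: idf = 1 + ln(maxDocs / (docFreq + 1))
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def term_weight(freq, doc_freq, field_norm,
                query_norm=0.04439062, max_docs=44218):
    tf = math.sqrt(freq)                 # tf = sqrt(termFreq)
    i = idf(doc_freq, max_docs)
    query_weight = i * query_norm        # queryWeight = idf * queryNorm
    field_weight = tf * i * field_norm   # fieldWeight = tf * idf * fieldNorm
    return query_weight * field_weight

# Four matching clauses of the six-clause query (hence coord = 4/6).
# The "architecture" clause sits inside a two-way nested group and
# contributes coord(1/2) = 0.5 of its weight.
w_storage      = term_weight(2.0, 516, 0.0546875)
w_retrieval    = term_weight(2.0, 5836, 0.0546875)
w_systems      = term_weight(6.0, 5561, 0.0546875)
w_architecture = term_weight(2.0, 523, 0.0546875) * 0.5

score = (w_storage + w_retrieval + w_systems + w_architecture) * (4 / 6)
print(round(score, 8))  # ≈ 0.16014546
```

The same arithmetic, with the fieldNorm and termFreq values of results 2 and 3, reproduces their 0.06 and 0.04 scores; the lower fieldNorms (0.03125, 0.046875) reflect longer indexed fields.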
    
    Abstract
    Current hypertext systems have been widely and effectively used on relatively small data volumes. Explores the potential of database technology for aiding the implementation of hypertext systems holding very large amounts of complex data. Databases meet many requirements of the hypermedium: persistent data management, large volumes, data modelling, multi-level architecture with abstractions and views, metadata integrated with operational data, short-term transaction processing, and high-level end-user languages for searching and updating data. Describes a system implementing the storage, retrieval and recall of trails through hypertext comprising textual complex objects (to illustrate the potential for the use of databases). Discusses weaknesses in current database systems for handling the complex modelling required.
  2. Aldana, J.F.; Gómez, A.C.; Moreno, N.; Nebro, A.J.; Roldán, M.M.: Metadata functionality for semantic Web integration (2003) 0.06
    0.062307518 = product of:
      0.124615036 = sum of:
        0.058245856 = weight(_text_:storage in 2731) [ClassicSimilarity], result of:
          0.058245856 = score(doc=2731,freq=2.0), product of:
            0.24187757 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.04439062 = queryNorm
            0.24080718 = fieldWeight in 2731, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
        0.025386192 = weight(_text_:retrieval in 2731) [ClassicSimilarity], result of:
          0.025386192 = score(doc=2731,freq=4.0), product of:
            0.13427785 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04439062 = queryNorm
            0.18905719 = fieldWeight in 2731, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.03125 = fieldNorm(doc=2731)
        0.040982984 = product of:
          0.08196597 = sum of:
            0.08196597 = weight(_text_:architecture in 2731) [ClassicSimilarity], result of:
              0.08196597 = score(doc=2731,freq=4.0), product of:
                0.24128059 = queryWeight, product of:
                  5.4353957 = idf(docFreq=523, maxDocs=44218)
                  0.04439062 = queryNorm
                0.33971223 = fieldWeight in 2731, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  5.4353957 = idf(docFreq=523, maxDocs=44218)
                  0.03125 = fieldNorm(doc=2731)
          0.5 = coord(1/2)
      0.5 = coord(3/6)
    
    Abstract
    We propose an extension of a mediator architecture. This extension is oriented to ontology-driven data integration. In our architecture, ontologies are not managed by an external component or service, but are integrated in the mediation layer. This approach implies rethinking the mediator design, but at the same time provides advantages from a database perspective. Some of these advantages include the application of optimization and evaluation techniques that use and combine information from all abstraction levels (physical schema, logical schema and semantic information defined by the ontology).
    1. Introduction
    Although the Web is probably the richest information repository in human history, users cannot specify what they want from it. Two major problems that arise in current search engines (Heflin, 2001) are: a) polysemy, when the same word is used with different meanings; b) synonymy, when two different words have the same meaning. Polysemy causes the retrieval of irrelevant information; synonymy causes the loss of useful documents. The lack of a capability to understand the context of words and the relationships among the required terms explains many of the lost and false results produced by search engines. The Semantic Web will bring structure to the meaningful content of Web pages, giving semantic relationships among terms and possibly avoiding these problems. Various proposals have appeared for metadata representation and communication standards, and other services and tools that may eventually merge into the global Semantic Web (Berners-Lee, 2001). Hopefully, in the next few years we will see the universal adoption of open standards for the representation and sharing of meta-information. In this environment, software agents roaming from page to page can readily carry out sophisticated tasks for users (Berners-Lee, 2001).
    In this context, ontologies can be seen as metadata that represent the semantics of data, providing a standard vocabulary for a knowledge domain, much as DTDs and XML Schemas do. If its pages were so structured, the Web could be seen as a heterogeneous collection of autonomous databases. This suggests that techniques developed in the database field could be useful. Database research mainly deals with efficient storage and retrieval and with powerful query languages.
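The synonymy problem the abstract describes can be made concrete with a toy sketch (the documents and the synonym table are hypothetical): a plain keyword match misses a document that uses a different word for the same concept, while an ontology-style expansion of the query vocabulary recovers it.

```python
# Two toy documents: doc 1 says "automobile", doc 2 says "car".
docs = {
    1: "the automobile industry adopted new metadata standards",
    2: "car makers publish structured data on the semantic web",
}

def keyword_search(term, docs):
    # Plain substring keyword match, as in a naive search engine.
    return [d for d, text in docs.items() if term in text]

# Toy "ontology": maps a term to the set of words sharing its meaning.
synonyms = {"car": {"car", "automobile"}}

def expanded_search(term, docs):
    # Expand the query with synonyms before matching.
    terms = synonyms.get(term, {term})
    return [d for d, text in docs.items() if any(t in text for t in terms)]

print(keyword_search("car", docs))   # [2]    - doc 1 is lost to synonymy
print(expanded_search("car", docs))  # [1, 2] - expansion recovers it
```

A mediator that keeps the ontology inside the mediation layer, as the paper proposes, can apply this kind of expansion during query rewriting rather than as an external post-processing step.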
  3. Roux, M.: Metadata for search engines : what can be learned from e-Sciences? (2012) 0.04
    0.041816026 = product of:
      0.12544808 = sum of:
        0.08736878 = weight(_text_:storage in 96) [ClassicSimilarity], result of:
          0.08736878 = score(doc=96,freq=2.0), product of:
            0.24187757 = queryWeight, product of:
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.04439062 = queryNorm
            0.36121076 = fieldWeight in 96, product of:
              1.4142135 = tf(freq=2.0), with freq of:
                2.0 = termFreq=2.0
              5.4488444 = idf(docFreq=516, maxDocs=44218)
              0.046875 = fieldNorm(doc=96)
        0.03807929 = weight(_text_:retrieval in 96) [ClassicSimilarity], result of:
          0.03807929 = score(doc=96,freq=4.0), product of:
            0.13427785 = queryWeight, product of:
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.04439062 = queryNorm
            0.2835858 = fieldWeight in 96, product of:
              2.0 = tf(freq=4.0), with freq of:
                4.0 = termFreq=4.0
              3.024915 = idf(docFreq=5836, maxDocs=44218)
              0.046875 = fieldNorm(doc=96)
      0.33333334 = coord(2/6)
    
    Abstract
    E-sciences are data-intensive sciences that make extensive use of the Web to share, collect, and process data. In this context, primary scientific data is becoming a new and challenging issue, as data must be extensively described (1) to account for the empirical conditions and results that allow interpretation and/or analysis, and (2) to be understandable by the computers used for data storage and information retrieval. In this respect, metadata is a focal point, whether considered from the point of view of the user, who visualizes and exploits data, or from that of the search tools that find and retrieve information. Numerous disciplines are concerned with the issues of describing complex observations and addressing pertinent knowledge. In this paper, similarities and differences in data description and exploration strategies among disciplines in e-sciences are examined.
    Source
    Next generation search engines: advanced models for information retrieval. Eds.: C. Jouis et al.