Search (75 results, page 1 of 4)

  • type_ss:"a"
  • year_i:[2000 TO 2010}
  • type_ss:"el"
  1. Heflin, J.; Hendler, J.: Semantic interoperability on the Web (2000) 0.06
    0.061869845 = product of:
      0.12373969 = sum of:
        0.12373969 = sum of:
          0.07432922 = weight(_text_:web in 759) [ClassicSimilarity], result of:
            0.07432922 = score(doc=759,freq=6.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.43716836 = fieldWeight in 759, product of:
                2.4494898 = tf(freq=6.0), with freq of:
                  6.0 = termFreq=6.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
          0.049410466 = weight(_text_:22 in 759) [ClassicSimilarity], result of:
            0.049410466 = score(doc=759,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.2708308 = fieldWeight in 759, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0546875 = fieldNorm(doc=759)
      0.5 = coord(1/2)
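    The indented trees beneath each hit are Lucene "explain" output for ClassicSimilarity (TF-IDF) scoring. As a cross-check, a minimal Python sketch reproduces the score of result 1 from the constants shown in its tree:
    ```python
    import math

    def classic_similarity(freq, doc_freq, max_docs, query_norm, field_norm):
        """One term of Lucene's ClassicSimilarity (TF-IDF) explain tree."""
        tf = math.sqrt(freq)                             # tf(freq) = sqrt(freq)
        idf = 1.0 + math.log(max_docs / (doc_freq + 1))  # idf(docFreq, maxDocs)
        query_weight = idf * query_norm                  # queryWeight
        field_weight = tf * idf * field_norm             # fieldWeight
        return query_weight * field_weight               # weight(term in doc)

    # Constants copied from the explain tree of result 1 (doc 759):
    w_web = classic_similarity(6.0, 4597, 44218, 0.052098576, 0.0546875)
    w_22  = classic_similarity(2.0, 3622, 44218, 0.052098576, 0.0546875)

    # coord(1/2): one of the two top-level query clauses matched.
    score = (w_web + w_22) * 0.5
    print(score)  # ~0.061869845, matching the explain output up to float rounding
    ```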
    
    Abstract
    XML will have a profound impact on the way data is exchanged on the Internet. An important feature of this language is the separation of content from presentation, which makes it easier to select and/or reformat the data. However, due to the likelihood of numerous industry- and domain-specific DTDs, those who wish to integrate information will still be faced with the problem of semantic interoperability. In this paper we discuss why XML does not solve this problem and why the Resource Description Framework is only a partial solution. We then present the SHOE language, which we feel has many of the features necessary to enable a semantic web, and describe an existing set of tools that make it easy to use the language.
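    The interoperability problem the abstract describes can be seen in miniature below: two invented tag vocabularies encode the same fact, and nothing in the XML itself relates them.
    ```python
    import xml.etree.ElementTree as ET

    # Two DTD-era vocabularies for the same fact; tag names are invented.
    doc_a = "<book><author>Heflin, J.</author></book>"
    doc_b = "<publication><writer>Heflin, J.</writer></publication>"

    # A purely syntactic consumer sees disjoint element names:
    for doc in (doc_a, doc_b):
        root = ET.fromstring(doc)
        print(root.tag, [child.tag for child in root])
    # Mapping <author> to <writer> requires shared semantics, which is
    # what SHOE and later Semantic Web languages are meant to supply.
    ```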
    Date
    11. 5.2013 19:22:18
    Theme
    Semantic Web
  2. Boldi, P.; Santini, M.; Vigna, S.: PageRank as a function of the damping factor (2005) 0.04
    0.039321437 = product of:
      0.078642875 = sum of:
        0.078642875 = sum of:
          0.04334968 = weight(_text_:web in 2564) [ClassicSimilarity], result of:
            0.04334968 = score(doc=2564,freq=4.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.25496176 = fieldWeight in 2564, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2564)
          0.03529319 = weight(_text_:22 in 2564) [ClassicSimilarity], result of:
            0.03529319 = score(doc=2564,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 2564, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2564)
      0.5 = coord(1/2)
    
    Abstract
    PageRank is defined as the stationary state of a Markov chain. The chain is obtained by perturbing the transition matrix induced by a web graph with a damping factor alpha that spreads uniformly part of the rank. The choice of alpha is eminently empirical, and in most cases the original suggestion alpha=0.85 by Brin and Page is still used. Recently, however, the behaviour of PageRank with respect to changes in alpha was discovered to be useful in link-spam detection. Moreover, an analytical justification of the value chosen for alpha is still missing. In this paper, we give the first mathematical analysis of PageRank when alpha changes. In particular, we show that, contrary to popular belief, for real-world graphs values of alpha close to 1 do not give a more meaningful ranking. Then, we give closed-form formulae for PageRank derivatives of any order, and an extension of the Power Method that approximates them with convergence O(t^k alpha^t) for the k-th derivative. Finally, we show a tight connection between iterated computation and analytical behaviour by proving that the k-th iteration of the Power Method gives exactly the PageRank value obtained using a Maclaurin polynomial of degree k. The latter result paves the way towards the application of analytical methods to the study of PageRank.
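    To make the construction concrete, here is a minimal power-method sketch on a toy graph (not the paper's experimental setup or its derivative machinery); it computes the stationary state of the alpha-damped chain and lets one compare rankings as alpha varies:
    ```python
    import numpy as np

    def pagerank(adjacency, alpha=0.85, tol=1e-10, max_iter=1000):
        """Power-method PageRank: stationary state of the alpha-damped chain."""
        n = adjacency.shape[0]
        # Row-normalize; dangling nodes (no out-links) spread rank uniformly.
        out = adjacency.sum(axis=1, keepdims=True)
        P = np.where(out > 0, adjacency / np.maximum(out, 1), 1.0 / n)
        x = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            x_new = alpha * (x @ P) + (1 - alpha) / n
            if np.abs(x_new - x).sum() < tol:
                break
            x = x_new
        return x

    # Toy 4-page graph; compare the induced orderings as alpha grows.
    A = np.array([[0,1,1,0],[0,0,1,0],[1,0,0,1],[0,0,1,0]], dtype=float)
    for a in (0.5, 0.85, 0.99):
        print(a, np.argsort(-pagerank(A, a)))
    ```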
    Date
    16. 1.2016 10:22:28
    Source
    http://vigna.di.unimi.it/ftp/papers/PageRankAsFunction.pdf [Proceedings of the ACM World Wide Web Conference (WWW), 2005]
  3. Baeza-Yates, R.; Boldi, P.; Castillo, C.: Generalizing PageRank : damping functions for link-based ranking algorithms (2006) 0.03
    0.03297302 = product of:
      0.06594604 = sum of:
        0.06594604 = sum of:
          0.030652853 = weight(_text_:web in 2565) [ClassicSimilarity], result of:
            0.030652853 = score(doc=2565,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.18028519 = fieldWeight in 2565, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2565)
          0.03529319 = weight(_text_:22 in 2565) [ClassicSimilarity], result of:
            0.03529319 = score(doc=2565,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.19345059 = fieldWeight in 2565, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.0390625 = fieldNorm(doc=2565)
      0.5 = coord(1/2)
    
    Abstract
    This paper introduces a family of link-based ranking algorithms that propagate page importance through links. In these algorithms there is a damping function that decreases with distance, so a direct link implies more endorsement than a link through a long path. PageRank is the most widely known ranking function of this family. The main objective of this paper is to determine whether this family of ranking techniques has some interest per se, and how different choices for the damping function impact rank quality and convergence speed. Even though our results suggest that PageRank can be approximated with other simpler forms of rankings that may be computed more efficiently, our focus is of a more speculative nature, in that it aims at separating the kernel of PageRank, that is, link-based importance propagation, from the way propagation decays over paths. We focus on three damping functions, having linear, exponential, and hyperbolic decay on the lengths of the paths. The exponential decay corresponds to PageRank, and the other functions are new. Our presentation includes algorithms, analysis, comparisons and experiments that study their behavior under different parameters in real Web graph data. Among other results, we show how to calculate a linear approximation that induces a page ordering that is almost identical to PageRank's using a fixed small number of iterations; comparisons were performed using Kendall's tau on large domain datasets.
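    A schematic of the family of rankings this abstract describes, assuming P is the row-normalized link matrix; the damping normalizations below are illustrative, not the paper's exact definitions:
    ```python
    import numpy as np

    def functional_rank(P, damping, t_max=100):
        """rank = sum over path lengths t of damping(t) * (uniform start) @ P^t,
        with P row-stochastic (dangling nodes handled upstream)."""
        n = P.shape[0]
        walk = np.full(n, 1.0 / n)      # paths of length 0
        rank = damping(0) * walk
        for t in range(1, t_max + 1):
            walk = walk @ P
            rank += damping(t) * walk
        return rank

    # Three decay shapes from the abstract (illustrative constants):
    exponential = lambda t, a=0.85: (1 - a) * a**t              # = PageRank
    linear      = lambda t, L=20: max(0.0, 2 * (L - t) / (L * L))
    hyperbolic  = lambda t, b=2.0: 0.0 if t == 0 else 1.0 / t**b

    P = np.array([[0, .5, .5], [0, 0, 1], [1, 0, 0]])  # already row-stochastic
    print(functional_rank(P, exponential), functional_rank(P, hyperbolic))
    ```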
    Date
    16. 1.2016 10:22:28
  4. Reiner, U.: Automatische DDC-Klassifizierung bibliografischer Titeldatensätze der Deutschen Nationalbibliografie [Automatic DDC classification of bibliographic title records of the German National Bibliography] (2009) 0.03
    0.03145715 = product of:
      0.0629143 = sum of:
        0.0629143 = sum of:
          0.034679744 = weight(_text_:web in 3284) [ClassicSimilarity], result of:
            0.034679744 = score(doc=3284,freq=4.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.2039694 = fieldWeight in 3284, product of:
                2.0 = tf(freq=4.0), with freq of:
                  4.0 = termFreq=4.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
          0.028234553 = weight(_text_:22 in 3284) [ClassicSimilarity], result of:
            0.028234553 = score(doc=3284,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.15476047 = fieldWeight in 3284, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=3284)
      0.5 = coord(1/2)
    
    Abstract
    The number of publications to be classified has been growing faster than they can be subject-indexed intellectually, at the latest since the World Wide Web came into existence. Methods are therefore sought to automate the classification of text objects, or at least to support intellectual classification. Methods for automatic document classification (information retrieval, IR) have existed since 1968, and methods for automatic text classification (ATC: Automated Text Categorization) since 1992. As more and more digital objects have become available on the World Wide Web, work on automatic text classification has increased markedly since about 1998. This includes, since 1996, work on the automatic DDC and RVK classification of bibliographic title records and full-text documents. To our knowledge, these developments have so far been experimental systems rather than systems in continuous production use. The VZG project Colibri/DDC has also been concerned with automatic DDC classification, among other things, since 2006. The related investigations and developments serve to answer the research question: "Is it possible to achieve a substantively coherent, automatic DDC classification of all GVK-PLUS title records?"
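    The classification task itself can be sketched as a toy naive-Bayes classifier over title words; the training titles and DDC labels below are invented, and production systems such as those surveyed train on full catalogue records:
    ```python
    from collections import Counter, defaultdict
    import math

    # Invented training data: (title tokens, DDC class).
    train = [
        ("einführung in die informatik".split(), "004"),
        ("datenbanken und informationssysteme".split(), "004"),
        ("geschichte des mittelalters".split(), "940"),
        ("europäische geschichte".split(), "940"),
    ]

    class_counts = Counter(c for _, c in train)
    word_counts = defaultdict(Counter)
    for tokens, c in train:
        word_counts[c].update(tokens)

    def classify(tokens):
        """Naive Bayes with add-one smoothing over title words."""
        vocab = {w for wc in word_counts.values() for w in wc}
        best, best_lp = None, -math.inf
        for c, n in class_counts.items():
            lp = math.log(n / len(train))          # class prior
            total = sum(word_counts[c].values())
            for w in tokens:
                lp += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

    print(classify("geschichte der informatik".split()))
    ```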
    Date
    22. 1.2010 14:41:24
  5. Auer, S.; Bizer, C.; Kobilarov, G.; Lehmann, J.; Cyganiak, R.; Ives, Z.: DBpedia: a nucleus for a Web of open data (2007) 0.03
    0.029079849 = product of:
      0.058159698 = sum of:
        0.058159698 = product of:
          0.116319396 = sum of:
            0.116319396 = weight(_text_:web in 4260) [ClassicSimilarity], result of:
              0.116319396 = score(doc=4260,freq=20.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.6841342 = fieldWeight in 4260, product of:
                  4.472136 = tf(freq=20.0), with freq of:
                    20.0 = termFreq=20.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4260)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.
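    Today, the sophisticated queries the abstract mentions can be posed against DBpedia's public SPARQL endpoint. A minimal sketch using the third-party SPARQLWrapper client (not part of the paper; the dbo:/dbr: prefixes are assumed to be predefined by the endpoint, and results depend on the live dataset):
    ```python
    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
        SELECT ?city ?population WHERE {
            ?city a dbo:City ;
                  dbo:country dbr:Germany ;
                  dbo:populationTotal ?population .
        }
        ORDER BY DESC(?population) LIMIT 5
    """)
    sparql.setReturnFormat(JSON)
    for row in sparql.query().convert()["results"]["bindings"]:
        print(row["city"]["value"], row["population"]["value"])
    ```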
    Source
    ¬The Semantic Web : 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007 : proceedings. Ed.: Karl Aberer et al
    Theme
    Semantic Web
  6. Scheir, P.; Pammer, V.; Lindstaedt, S.N.: Information retrieval on the Semantic Web : does it exist? (2007) 0.03
    0.02838494 = product of:
      0.05676988 = sum of:
        0.05676988 = product of:
          0.11353976 = sum of:
            0.11353976 = weight(_text_:web in 4329) [ClassicSimilarity], result of:
              0.11353976 = score(doc=4329,freq=14.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.6677857 = fieldWeight in 4329, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0546875 = fieldNorm(doc=4329)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Many contemporary search approaches are associated with the Semantic Web. But which of them qualify as information retrieval for the Semantic Web? Do such approaches exist at all? To answer these questions we look at the nature of the Semantic Web and the Semantic Desktop and at definitions of information and data retrieval. We survey current approaches that their authors describe as information retrieval for the Semantic Web or that use Semantic Web technology for search.
    Theme
    Semantic Web
  7. Baker, T.: ¬A grammar of Dublin Core (2000) 0.03
    0.026378417 = product of:
      0.052756835 = sum of:
        0.052756835 = sum of:
          0.024522282 = weight(_text_:web in 1236) [ClassicSimilarity], result of:
            0.024522282 = score(doc=1236,freq=2.0), product of:
              0.17002425 = queryWeight, product of:
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.052098576 = queryNorm
              0.14422815 = fieldWeight in 1236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.2635105 = idf(docFreq=4597, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
          0.028234553 = weight(_text_:22 in 1236) [ClassicSimilarity], result of:
            0.028234553 = score(doc=1236,freq=2.0), product of:
              0.18244034 = queryWeight, product of:
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.052098576 = queryNorm
              0.15476047 = fieldWeight in 1236, product of:
                1.4142135 = tf(freq=2.0), with freq of:
                  2.0 = termFreq=2.0
                3.5018296 = idf(docFreq=3622, maxDocs=44218)
                0.03125 = fieldNorm(doc=1236)
      0.5 = coord(1/2)
    
    Abstract
    Dublin Core is often presented as a modern form of catalog card -- a set of elements (and now qualifiers) that describe resources in a complete package. Sometimes it is proposed as an exchange format for sharing records among multiple collections. The founding principle that "every element is optional and repeatable" reinforces the notion that a Dublin Core description is to be taken as a whole. This paper, in contrast, is based on a much different premise: Dublin Core is a language. More precisely, it is a small language for making a particular class of statements about resources. Like natural languages, it has a vocabulary of word-like terms, the two classes of which -- elements and qualifiers -- function within statements like nouns and adjectives; and it has a syntax for arranging elements and qualifiers into statements according to a simple pattern. Whenever tourists order a meal or ask directions in an unfamiliar language, considerate native speakers will spontaneously limit themselves to basic words and simple sentence patterns along the lines of "I am so-and-so" or "This is such-and-such". Linguists call this pidginization. In such situations, a small phrase book or translated menu can be most helpful. By analogy, today's Web has been called an Internet Commons where users and information providers from a wide range of scientific, commercial, and social domains present their information in a variety of incompatible data models and description languages. In this context, Dublin Core presents itself as a metadata pidgin for digital tourists who must find their way in this linguistically diverse landscape. Its vocabulary is small enough to learn quickly, and its basic pattern is easily grasped. It is well-suited to serve as an auxiliary language for digital libraries. This grammar starts by defining terms. It then follows a 200-year-old tradition of English grammar teaching by focusing on the structure of single statements. It concludes by looking at the growing dictionary of Dublin Core vocabulary terms -- its registry, and at how statements can be used to build the metadata equivalent of paragraphs and compositions -- the application profile.
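    As a concrete instance of the grammar the abstract describes, here is a sketch rendering one Dublin Core description as element/qualifier/value statements; the values, the identifier URI, and the dotted rendering are illustrative only, not the article's own notation:
    ```python
    # One record as a list of (element, qualifier, value) statements,
    # where elements act like nouns and qualifiers like adjectives.
    record = [
        ("title",       None,       "A grammar of Dublin Core"),
        ("creator",     None,       "Baker, T."),
        ("date",        "issued",   "2000"),
        ("description", "abstract", "Dublin Core as a small language ..."),
        ("identifier",  "URI",      "https://example.org/grammar-of-dc"),  # hypothetical
    ]

    for element, qualifier, value in record:
        term = f"DC.{element}" + (f".{qualifier}" if qualifier else "")
        print(f'{term} = "{value}"')
    ```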
    Date
    26.12.2011 14:01:22
  8. Zhang, L.; Liu, Q.L.; Zhang, J.; Wang, H.F.; Pan, Y.; Yu, Y.: Semplore: an IR approach to scalable hybrid query of Semantic Web data (2007) 0.03
    0.025416005 = product of:
      0.05083201 = sum of:
        0.05083201 = product of:
          0.10166402 = sum of:
            0.10166402 = weight(_text_:web in 231) [ClassicSimilarity], result of:
              0.10166402 = score(doc=231,freq=22.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.59793836 = fieldWeight in 231, product of:
                  4.690416 = tf(freq=22.0), with freq of:
                    22.0 = termFreq=22.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0390625 = fieldNorm(doc=231)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    As an extension to the current Web, the Semantic Web will not only contain structured data with machine-understandable semantics but also textual information. While structured queries can be used to find information more precisely on the Semantic Web, keyword searches are still needed to help exploit textual information. It thus becomes very important that we can combine precise structured queries with imprecise keyword searches to have a hybrid query capability. In addition, due to the huge volume of information on the Semantic Web, the hybrid query must be processed in a very scalable way. In this paper, we define such a hybrid query capability that combines unary tree-shaped structured queries with keyword searches. We show how existing information retrieval (IR) index structures and functions can be reused to index Semantic Web data and its textual information, and how the hybrid query is evaluated on the index structure using IR engines in an efficient and scalable manner. We implemented this IR approach in an engine called Semplore. Comprehensive experiments on its performance show that it is a promising approach. It leads us to believe that it may be possible to evolve current web search engines to query and search the Semantic Web. Finally, we briefly describe how Semplore is used for searching Wikipedia and an IBM customer's product information.
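    The hybrid evaluation the abstract describes can be sketched as the intersection of a structured filter with keyword postings from an IR-style inverted index; the toy schema and data are invented, and Semplore's real index structures and operators are considerably richer:
    ```python
    docs = {
        1: {"type": "Professor", "text": "semantic web retrieval"},
        2: {"type": "Professor", "text": "database query optimization"},
        3: {"type": "Student",   "text": "semantic web services"},
    }

    postings = {}  # keyword -> set of matching doc ids
    for doc_id, d in docs.items():
        for term in d["text"].split():
            postings.setdefault(term, set()).add(doc_id)

    def hybrid_query(type_constraint, keywords):
        """AND a structured (type) filter with keyword postings."""
        hits = {i for i, d in docs.items() if d["type"] == type_constraint}
        for kw in keywords:
            hits &= postings.get(kw, set())
        return sorted(hits)

    print(hybrid_query("Professor", ["semantic", "web"]))  # -> [1]
    ```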
    Source
    Proceedings of ISWC'07/ASWC'07 : the 6th International Semantic Web Conference and 2nd Asian Semantic Web Conference. Ed.: K. Aberer et al
    Theme
    Semantic Web
  9. Van der Veer Martens, B.: Do citation systems represent theories of truth? (2001) 0.02
    0.024956053 = product of:
      0.049912106 = sum of:
        0.049912106 = product of:
          0.09982421 = sum of:
            0.09982421 = weight(_text_:22 in 3925) [ClassicSimilarity], result of:
              0.09982421 = score(doc=3925,freq=4.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.54716086 = fieldWeight in 3925, product of:
                  2.0 = tf(freq=4.0), with freq of:
                    4.0 = termFreq=4.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.078125 = fieldNorm(doc=3925)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    22. 7.2006 15:22:28
  10. Ding, L.; Finin, T.; Joshi, A.; Peng, Y.; Cost, R.S.; Sachs, J.; Pan, R.; Reddivari, P.; Doshi, V.: Swoogle : a Semantic Web search and metadata engine (2004) 0.02
    0.02432995 = product of:
      0.0486599 = sum of:
        0.0486599 = product of:
          0.0973198 = sum of:
            0.0973198 = weight(_text_:web in 4704) [ClassicSimilarity], result of:
              0.0973198 = score(doc=4704,freq=14.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.57238775 = fieldWeight in 4704, product of:
                  3.7416575 = tf(freq=14.0), with freq of:
                    14.0 = termFreq=14.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4704)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Swoogle is a crawler-based indexing and retrieval system for the Semantic Web, i.e., for Web documents in RDF or OWL. It extracts metadata for each discovered document and computes relations between documents. Discovered documents are also indexed by an information retrieval system which can use either character N-grams or URIrefs as keywords to find relevant documents and to compute the similarity among a set of documents. One of the interesting properties we compute is rank, a measure of the importance of a Semantic Web document.
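    A sketch of the character-N-gram option mentioned above: represent a document by the n-grams of its URIrefs and compare document profiles by Jaccard overlap. Swoogle's actual similarity and rank measures are not reproduced here.
    ```python
    def ngrams(s, n=4):
        """Character n-grams of a string."""
        return {s[i:i+n] for i in range(len(s) - n + 1)}

    def doc_profile(urirefs, n=4):
        """Union of the n-gram sets of all URIrefs in a document."""
        return set().union(*(ngrams(u, n) for u in urirefs))

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    d1 = doc_profile(["http://xmlns.com/foaf/0.1/Person",
                      "http://xmlns.com/foaf/0.1/knows"])
    d2 = doc_profile(["http://xmlns.com/foaf/0.1/Person",
                      "http://www.w3.org/2000/01/rdf-schema#label"])
    print(round(jaccard(d1, d2), 3))
    ```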
    Content
    Cf. http://www.dblab.ntua.gr/~bikakis/LD/5.pdf; see also http://swoogle.umbc.edu/ and http://ebiquity.umbc.edu/paper/html/id/183/; see also Radhakrishnan, A.: Swoogle : an engine for the Semantic Web, http://www.searchenginejournal.com/swoogle-an-engine-for-the-semantic-web/5469/
    Theme
    Semantic Web
  11. Wielinga, B.; Wielemaker, J.; Schreiber, G.; Assem, M. van: Methods for porting resources to the Semantic Web (2004) 0.02
    0.022525156 = product of:
      0.04505031 = sum of:
        0.04505031 = product of:
          0.09010062 = sum of:
            0.09010062 = weight(_text_:web in 4640) [ClassicSimilarity], result of:
              0.09010062 = score(doc=4640,freq=12.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.5299281 = fieldWeight in 4640, product of:
                  3.4641016 = tf(freq=12.0), with freq of:
                    12.0 = termFreq=12.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=4640)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Ontologies will play a central role in the development of the Semantic Web. It is unrealistic to assume that such ontologies will be developed from scratch. Rather, we assume that existing resources such as thesauri and lexical databases will be reused in the development of ontologies for the Semantic Web. In this paper we describe a method for converting existing source material to a representation that is compatible with Semantic Web languages such as RDF(S) and OWL. The method is illustrated with three case studies: converting Wordnet, AAT and MeSH to RDF(S) and OWL.
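    A minimal porting sketch in the spirit of the method, using the rdflib library with SKOS as the target vocabulary; the paper itself targets RDF(S)/OWL representations, and the thesaurus rows and base URI below are invented:
    ```python
    from rdflib import Graph, Literal, Namespace, RDF
    from rdflib.namespace import SKOS

    # Hypothetical source rows: (term id, preferred label, broader id or None).
    rows = [("t1", "information retrieval", None),
            ("t2", "web search", "t1")]

    EX = Namespace("http://example.org/thesaurus/")  # assumed base URI
    g = Graph()
    for term_id, label, broader in rows:
        concept = EX[term_id]
        g.add((concept, RDF.type, SKOS.Concept))
        g.add((concept, SKOS.prefLabel, Literal(label, lang="en")))
        if broader:
            g.add((concept, SKOS.broader, EX[broader]))

    print(g.serialize(format="turtle"))
    ```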
    Source
    Proceedings of the First European Semantic Web Symposium (ESWS2004), Eds.: C. Bussler, J. Davies, D. Fensel and R. Studer. 2004. S.299-311
    Theme
    Semantic Web
  12. Cross, P.: DESIRE: making the most of the Web (2000) 0.02
    0.021456998 = product of:
      0.042913996 = sum of:
        0.042913996 = product of:
          0.08582799 = sum of:
            0.08582799 = weight(_text_:web in 2146) [ClassicSimilarity], result of:
              0.08582799 = score(doc=2146,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.50479853 = fieldWeight in 2146, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.109375 = fieldNorm(doc=2146)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  13. Bradley, P.: ¬The relevance of underpants to searching the Web (2000) 0.02
    0.021456998 = product of:
      0.042913996 = sum of:
        0.042913996 = product of:
          0.08582799 = sum of:
            0.08582799 = weight(_text_:web in 3961) [ClassicSimilarity], result of:
              0.08582799 = score(doc=3961,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.50479853 = fieldWeight in 3961, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.109375 = fieldNorm(doc=3961)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  14. Smith, A.G.: Web links as analogues of citations (2004) 0.02
    0.021456998 = product of:
      0.042913996 = sum of:
        0.042913996 = product of:
          0.08582799 = sum of:
            0.08582799 = weight(_text_:web in 4205) [ClassicSimilarity], result of:
              0.08582799 = score(doc=4205,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.50479853 = fieldWeight in 4205, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.109375 = fieldNorm(doc=4205)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  15. Lindholm, J.; Schönthal, T.; Jansson, K.: Experiences of harvesting Web resources in engineering using automatic classification (2003) 0.02
    0.02123692 = product of:
      0.04247384 = sum of:
        0.04247384 = product of:
          0.08494768 = sum of:
            0.08494768 = weight(_text_:web in 4088) [ClassicSimilarity], result of:
              0.08494768 = score(doc=4088,freq=6.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.49962097 = fieldWeight in 4088, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.0625 = fieldNorm(doc=4088)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The authors describe the background and the work involved in setting up Engine-e, a Web index that uses automatic classification as a means of selecting resources in engineering. Considerations in offering a robot-generated Web index as a successor to a manually indexed, quality-controlled subject gateway are also discussed.
  16. Qin, J.; Paling, S.: Converting a controlled vocabulary into an ontology : the case of GEM (2001) 0.02
    0.021175914 = product of:
      0.042351827 = sum of:
        0.042351827 = product of:
          0.084703654 = sum of:
            0.084703654 = weight(_text_:22 in 3895) [ClassicSimilarity], result of:
              0.084703654 = score(doc=3895,freq=2.0), product of:
                0.18244034 = queryWeight, product of:
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.052098576 = queryNorm
                0.46428138 = fieldWeight in 3895, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.5018296 = idf(docFreq=3622, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3895)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Date
    24. 8.2005 19:20:22
  17. Bergman, M.K.: ¬The Deep Web : surfacing hidden value (2001) 0.02
    0.01839171 = product of:
      0.03678342 = sum of:
        0.03678342 = product of:
          0.07356684 = sum of:
            0.07356684 = weight(_text_:web in 39) [ClassicSimilarity], result of:
              0.07356684 = score(doc=39,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.43268442 = fieldWeight in 39, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.09375 = fieldNorm(doc=39)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  18. Brooks, T.A.: Where is meaning when form is gone? : Knowledge representation and the Web (2001) 0.02
    0.01839171 = product of:
      0.03678342 = sum of:
        0.03678342 = product of:
          0.07356684 = sum of:
            0.07356684 = weight(_text_:web in 3889) [ClassicSimilarity], result of:
              0.07356684 = score(doc=3889,freq=2.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.43268442 = fieldWeight in 3889, product of:
                  1.4142135 = tf(freq=2.0), with freq of:
                    2.0 = termFreq=2.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.09375 = fieldNorm(doc=3889)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
  19. Heery, R.; Wagner, H.: ¬A metadata registry for the Semantic Web (2002) 0.02
    0.016092747 = product of:
      0.032185495 = sum of:
        0.032185495 = product of:
          0.06437099 = sum of:
            0.06437099 = weight(_text_:web in 1210) [ClassicSimilarity], result of:
              0.06437099 = score(doc=1210,freq=18.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.37859887 = fieldWeight in 1210, product of:
                  4.2426405 = tf(freq=18.0), with freq of:
                    18.0 = termFreq=18.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.02734375 = fieldNorm(doc=1210)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    The Semantic Web activity is a W3C project whose goal is to enable a 'cooperative' Web where machines and humans can exchange electronic content that has clear-cut, unambiguous meaning. This vision is based on the automated sharing of metadata terms across Web applications. The declaration of schemas in metadata registries advances this vision by providing a common approach for the discovery, understanding, and exchange of semantics. However, many of the issues regarding registries are not clear, and ideas vary regarding their scope and purpose. Additionally, registry issues are often difficult to describe and comprehend without a working example. This article will explore the role of metadata registries and will describe three prototypes written by the Dublin Core Metadata Initiative. The article will outline how the prototypes are being used to demonstrate and evaluate application scope, functional requirements, and technology solutions for metadata registries. Metadata schema registries are, in effect, databases of schemas that can trace an historical line back to shared data dictionaries and the registration process encouraged by the ISO/IEC 11179 community. New impetus for the development of registries has come with the development activities surrounding the creation of the Semantic Web. The motivation for establishing registries arises from domain and standardization communities, and from the knowledge management community. Examples of current registry activity include:
    * Agencies maintaining directories of data elements in a domain area in accordance with ISO/IEC 11179. (This standard specifies good practice for data element definition as well as the registration process. Example implementations are the National Health Information Knowledgebase hosted by the Australian Institute of Health and Welfare and the Environmental Data Registry hosted by the US Environmental Protection Agency.)
    * The xml.org directory of Extensible Markup Language (XML) document specifications, facilitating re-use of Document Type Definitions (DTDs), hosted by the Organization for the Advancement of Structured Information Standards (OASIS).
    * The MetaForm database of Dublin Core usage and mappings maintained at the State and University Library in Goettingen.
    * The Semantic Web Agreement Group Dictionary, a database of terms for the Semantic Web that can be referred to by humans and software agents.
    * LEXML, a multi-lingual and multi-jurisdictional RDF dictionary for the legal world.
    * The SCHEMAS registry maintained by the European Commission-funded SCHEMAS project, which indexes several metadata element sets as well as a large number of activity reports describing metadata-related activities and initiatives.
    Metadata registries essentially provide an index of terms. Given the distributed nature of the Web, there are a number of ways this can be accomplished. For example, the registry could link to terms and definitions in schemas published by implementers and stored locally by the schema maintainer. Alternatively, the registry might harvest various metadata schemas from their maintainers. Registries provide 'added value' to users by indexing schemas relevant to a particular 'domain' or 'community of use' and by simplifying the navigation of terms by enabling multiple schemas to be accessed from one view. An important benefit of this approach is an increase in the reuse of existing terms, rather than users having to reinvent them. Merging schemas to one view leads to harmonization between applications and helps avoid duplication of effort. Additionally, the establishment of registries to index terms actively being used in local implementations facilitates the metadata standards activity by providing implementation experience transferable to the standards-making process.
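    In code, the core service described here, a registry as one index of terms drawn from several schemas, reduces to a lookup table like the following toy sketch; the schema names are real vocabularies, but the entries' wording is illustrative:
    ```python
    registry = {
        "dc:title":   {"schema": "Dublin Core", "definition": "A name given to the resource."},
        "dc:creator": {"schema": "Dublin Core", "definition": "An entity primarily responsible for making the resource."},
        "foaf:name":  {"schema": "FOAF",        "definition": "A name for some thing."},
    }

    def lookup(term):
        """One-view lookup across all registered schemas."""
        entry = registry.get(term)
        if entry is None:
            return f"{term}: not registered"
        return f'{term} ({entry["schema"]}): {entry["definition"]}'

    print(lookup("dc:title"))
    print(lookup("ex:unknown"))
    ```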
    Theme
    Semantic Web
  20. Matylonek, J.C.; Ottow, C.; Reese, T.: Organizing ready reference and administrative information with the reference desk manager (2001) 0.02
    0.015927691 = product of:
      0.031855382 = sum of:
        0.031855382 = product of:
          0.063710764 = sum of:
            0.063710764 = weight(_text_:web in 1156) [ClassicSimilarity], result of:
              0.063710764 = score(doc=1156,freq=6.0), product of:
                0.17002425 = queryWeight, product of:
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.052098576 = queryNorm
                0.37471575 = fieldWeight in 1156, product of:
                  2.4494898 = tf(freq=6.0), with freq of:
                    6.0 = termFreq=6.0
                  3.2635105 = idf(docFreq=4597, maxDocs=44218)
                  0.046875 = fieldNorm(doc=1156)
          0.5 = coord(1/2)
      0.5 = coord(1/2)
    
    Abstract
    Non-academic questions regarding special services, phone numbers, websites, library policies, current procedures, technical notices, and other pertinent local institutional information are often asked at the academic library reference desk. These frequent and urgent information requests require tools and resources to answer efficiently. Although ready reference collections at the desk provide a tool for academic information, specialized local information resources are more difficult to create and maintain. As reference desk responsibilities become increasingly complex and communication becomes more problematic, a web database that collects and manages this non-academic, local information can be very useful. At Oregon State University, librarians in the Reference Services Management group created a custom-designed weblog bulletin board to deal with this non-academic, local information. The resulting database provides reference librarians a one-stop location for the information and makes it easier for them to update it, via email, as conditions, procedures, and information needs change in their busy, highly computerized information commons.

Languages

  • e 69
  • d 5